Read on to learn more.
Who Needs to Comply with the Act
The Act applies to and contains obligations for both developers and deployers of high-risk AI systems. A developer is a person doing business in Colorado who develops or intentionally and substantially modifies an AI system. A deployer is a person doing business in Colorado who deploys or uses a high-risk AI system.
Unlike US state privacy laws, the Colorado AI Act contains no express applicability threshold, such as revenue or the volume of consumer data processed.
Definitions of Key Terms
Here are some key definitions from the official legal text of the Act:
Algorithmic Discrimination
“Algorithmic Discrimination” means any condition in which the use of an artificial intelligence system results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of Colorado or federal law.
However, algorithmic discrimination does not include:
- The offer, license, or use of a high-risk AI system by a developer or deployer for the sole purpose of:
- The developer’s or deployer’s self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state or federal law; or
- Expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination; or
- An act or omission by or on behalf of a private club or other establishment that is not in fact open to the public, as set forth in Title II of the Federal “Civil Rights Act of 1964”.
Artificial Intelligence System
“Artificial Intelligence System” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
Consequential Decision
“Consequential Decision” means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:
- Educational enrollment or an education opportunity;
- Employment or an employment opportunity;
- A financial or lending service;
- An essential government service;
- Health care services;
- Housing;
- Insurance; or
- A legal service.
Consumer
“Consumer” means an individual who is a Colorado resident.
Deploy
“Deploy” means to use a high-risk artificial intelligence system.
High-Risk AI System
“High-Risk Artificial Intelligence System” means any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision.
However, it does not include:
- An AI system if the AI system is intended to:
- Perform a narrow procedural task;
- Detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or
- The following technologies, unless their deployment or use leads to a consequential decision:
- Anti-fraud technology that does not use facial recognition technology;
- Anti-malware;
- Anti-virus;
- AI-enabled video games;
- Calculators;
- Cybersecurity;
- Databases;
- Data storage;
- Firewall;
- Internet domain registration;
- Internet website loading;
- Networking;
- Spam and robocall filtering;
- Spell-checking;
- Spreadsheets;
- Web caching;
- Web hosting or any similar technology; or
- Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions, and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
Intentional and Substantial Modification
“Intentional and Substantial Modification” or “Intentionally and Substantially Modifies” means a deliberate change made to an AI system that results in any new reasonably foreseeable risk of algorithmic discrimination.
It does not include a change made to a high-risk AI system or the performance of a high-risk AI system, if:
- The high-risk AI system continues to learn after the high-risk AI system is:
- Offered, sold, leased, licensed, given, or otherwise made available to a deployer; or
- Deployed;
- The change is made to the high-risk AI system as a result of the learning described above;
- The change was predetermined by the deployer, or a third party contracted by the deployer, when the deployer or third party completed an initial impact assessment of such high-risk AI system; and
- The change is included in technical documentation for the high-risk AI system.
Substantial Factor
“Substantial Factor” includes any use of an AI system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision concerning the consumer.
Duties of Developers
The Colorado AI Act imposes stringent obligations on developers to ensure transparency and protect the public interest. These obligations are discussed below in detail:
Duty of Reasonable Care to Protect Consumers Against Algorithmic Discrimination
Developers of a high-risk AI system must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of such a system.
There is a rebuttable presumption that the developer used reasonable care if the developer has complied with the relevant requirements of the Act as well as any additional requirements and obligations established in rules promulgated by the Attorney General’s office.
Required Documentation to be Provided to Deployers
Unless exempted from this requirement, the developer of a high-risk AI system must make the following available to the deployer or other developers of the high-risk AI system (an illustrative sketch of how such documentation might be organized follows this list):
- A general statement that describes the reasonably foreseeable uses and known harmful and inappropriate uses of the high-risk AI system;
- Documentation that provides details of:
- The types of data used to train the high-risk AI system;
- The known or reasonably foreseeable limitations of the high-risk AI system, as well as the known or reasonably foreseeable risk of algorithmic discrimination from the intended use of the high-risk AI system;
- The purposes of the high-risk AI system;
- The intended benefits and uses of the high-risk AI system;
- Any other relevant information required for the deployer to comply with the requirements of this Act;
- Documentation that describes:
- How the high-risk AI system was evaluated for performance as well as algorithmic discrimination mitigation measures undertaken before the system was offered, sold, leased, given, or made available to the general public;
- The data governance measures used to cover the training datasets and the measures used to evaluate the suitability of the data sources, potential biases, and mitigation measures;
- The intended outputs of the high-risk AI system;
- The measures undertaken by the developer to mitigate the foreseeable risks of algorithmic discrimination that may arise from the deployment of the high-risk AI system;
- How the high-risk AI system should be used and monitored when it is used to make or act as a substantial factor in making a consequential decision.
- Any additional documentation necessary to assist the deployer in understanding the outputs and overall performance of the high-risk AI system, including its potential risks of algorithmic discrimination.
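The Act does not prescribe a format for this documentation, but developers may find it practical to maintain it in machine-readable form so it can be versioned and shared with each deployer. Below is a minimal, purely illustrative Python sketch of such a documentation package; every field name and value is a hypothetical assumption, not a term defined by the Act.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DeveloperDocumentation:
    """Illustrative record of the disclosures a developer might assemble
    for deployers; field names are hypothetical, not statutory terms."""
    system_name: str
    reasonably_foreseeable_uses: list[str] = field(default_factory=list)
    known_harmful_or_inappropriate_uses: list[str] = field(default_factory=list)
    training_data_types: list[str] = field(default_factory=list)
    known_limitations_and_risks: list[str] = field(default_factory=list)
    purposes: list[str] = field(default_factory=list)
    intended_benefits_and_uses: list[str] = field(default_factory=list)
    performance_evaluation_summary: str = ""
    data_governance_measures: list[str] = field(default_factory=list)
    intended_outputs: list[str] = field(default_factory=list)
    risk_mitigation_measures: list[str] = field(default_factory=list)
    usage_and_monitoring_guidance: str = ""

# Serialize the record so it can travel with the system, for example
# alongside the model cards or dataset cards shared with deployers.
doc = DeveloperDocumentation(
    system_name="resume-screening-model-v2",  # hypothetical system
    purposes=["Rank job applications for recruiter review"],
    known_limitations_and_risks=["Not evaluated on non-English resumes"],
)
print(json.dumps(asdict(doc), indent=2))
```

Keeping the record serializable makes it straightforward to attach to the model cards and dataset cards that become relevant under the impact assessment requirements discussed next.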
Documentation for Impact Assessments
Unless exempted per this Act, a developer that offers, sells, leases, licenses, gives, or otherwise makes a high-risk AI system available to a deployer or other developers must make available to them, to the extent feasible, the relevant documentation and information, via model cards, dataset cards, or other artifacts, necessary for a deployer or any third party working with a deployer to complete an impact assessment.
A developer who also serves as the deployer for a high-risk AI system is only required to generate such documentation if the high-risk AI system is provided to an unaffiliated entity acting as a deployer.
High-risk AI System Details on Website
The developer must make a statement available on its website or public use case inventory that is clear and easily understandable and summarizes the following:
- The types of high-risk AI systems the developer has developed or intentionally and substantially modified that are currently available to a deployer or other developers;
- How the developer manages the known and reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk AI systems currently in use.
The aforementioned statement must be updated as necessary to remain accurate, and no later than ninety days after the developer makes any intentional and substantial modification to a high-risk AI system.
Notice to Attorney General & Others
A developer of a high-risk AI system must disclose to the Attorney General, as well as to all known deployers and other developers of the high-risk AI system, any known or reasonably foreseeable risks of algorithmic discrimination arising from its use. The disclosure must be made in a manner prescribed by the Attorney General, without unreasonable delay and no later than ninety days after the date on which:
- The developer discovers, through ongoing testing and analysis, that the high-risk AI system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or
- The developer receives a credible report from a deployer that the high-risk AI system has been deployed and has resulted in an instance of algorithmic discrimination.
None of the aforementioned documentation requirements require the developer to disclose a trade secret, information protected from disclosure by state or federal law, or information whose disclosure would create a security risk for the developer.
On or after February 1, 2026, the Attorney General may require a developer to disclose any statement or documentation described in this Act. The Attorney General may evaluate each statement or piece of documentation for compliance with the relevant requirements of the Act. The developer may designate any information or documentation submitted under this requirement as a trade secret or proprietary information. To the extent that any information submitted is subject to attorney-client privilege or work-product protection, its disclosure does not constitute a waiver of the privilege or protection.
Duties of Deployers
Duty of Reasonable Care to Protect Consumers Against Algorithmic Discrimination
A deployer of a high-risk AI system must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
There is a rebuttable presumption that a deployer of a high-risk AI system used reasonable care if the deployer has complied with the requirements or obligations set forth in this Act or the Rules promulgated by the Attorney General.
Risk Management Policy and Program
A deployer of a high-risk AI system must implement a risk management policy and program that governs the deployer’s use and deployment of the high-risk AI system. Such a policy and program must elaborate on the principles, processes, and personnel being used by the deployer to identify, document, and mitigate any known or foreseeable risks of algorithmic discrimination. Additionally, the risk management policy and program must be thoroughly planned and implemented, with regular updates and reviews throughout the lifecycle of the high-risk AI system.
More importantly, this risk management policy and program must be reasonable considering:
- The guidance and standards set forth in the latest version of the “Artificial Intelligence Risk Management Framework” published by the National Institute of Standards and Technology in the United States Department of Commerce, Standard ISO/IEC 42001 of the International Organization for Standardization, or any other nationally or internationally recognized risk management framework for AI systems or any other risk management framework for AI systems that the Attorney General deems appropriate;
- The size and complexity of the deployer;
- The nature and scope of the high-risk AI system deployed by the deployer, including its intended uses;
- The sensitivity and volume of data processed in connection with the high-risk AI system deployed by the deployer.
A risk management policy and program implemented by the deployer related to the aforementioned requirements may cover multiple high-risk AI systems deployed by a deployer.
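The Act does not mandate any particular framework, so the sketch below is only one hypothetical way a deployer might organize such a program in code: it tags each risk-management task with one of the four functions defined in NIST's AI Risk Management Framework (Govern, Map, Measure, Manage) so that uncovered functions are easy to spot. All task descriptions, owners, and dates are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date

# The NIST AI RMF organizes activities into four functions:
# Govern, Map, Measure, and Manage.
RMF_FUNCTIONS = {"govern", "map", "measure", "manage"}

@dataclass
class RiskTask:
    description: str
    rmf_function: str  # one of RMF_FUNCTIONS
    owner: str         # personnel responsible, per the Act's emphasis on personnel
    next_review: date  # supports regular reviews across the system's lifecycle

    def __post_init__(self) -> None:
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.rmf_function}")

# A hypothetical program that so far covers two of the four functions.
program = [
    RiskTask("Document known discrimination risks for each deployed system",
             "map", "ai-governance-team", date(2026, 1, 15)),
    RiskTask("Run quarterly disparate-impact metrics on system outputs",
             "measure", "data-science-team", date(2026, 3, 1)),
]

# Flag framework functions that have no assigned tasks yet.
uncovered = RMF_FUNCTIONS - {task.rmf_function for task in program}
print("Functions without tasks:", sorted(uncovered) or "none")
```

Because one policy and program may cover multiple high-risk AI systems, a structure like this can be maintained once and scoped per system as needed.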
Completion of an Impact Assessment
Unless exempted:
- A deployer or third party contracted by the deployer that deploys the high-risk AI system must complete an impact assessment for the high-risk AI system;
- The deployer or a third party contracted by the deployer must complete the impact assessment for the high-risk AI system at least annually and within ninety days after any intentional and substantial modification is made to the high-risk AI system.
The impact assessment conducted per the aforementioned requirement must include the following:
- A statement by the deployer that discloses the purpose, intended use cases, the context of the deployment, and benefits provided by the high-risk AI system;
- An analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the risks and mitigation measures in place;
- A description of the categories of data that the high-risk AI system processes as inputs and the subsequent output;
- An overview of the categories of data that the deployer uses to customize the high-risk AI system;
- Any metrics used to evaluate the performance and limitations of the high-risk AI system;
- A description of any transparency measures taken related to the high-risk AI system, including any steps taken to inform the consumer that the high-risk AI system is in use;
- A description of the post-deployment monitoring and user safeguards related to the high-risk AI system, such as the oversight, use, and learning processes established by the deployer to address any issues arising from its use.
In addition to any aforementioned information, an impact assessment carried out following an intentional and substantial modification to the high-risk AI system must include a statement that discloses the extent to which the high-risk AI system was used in a manner that was consistent with, or varied from, the developer’s intended uses of the high-risk AI system.
A single impact assessment may address a comparable set of high-risk AI systems deployed by the deployer.
If a deployer or a third party contracted by the deployer completed an impact assessment to comply with another applicable law or regulation, then such an assessment will satisfy the requirements of this Act, provided it is similar in the required scope and effect to the impact assessment that is otherwise conducted pursuant to the Act.
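To make the cadence concrete, here is a minimal Python sketch, under the assumption that an organization tracks its assessments as records: it captures the contents listed above as fields and computes when the next assessment falls due (at least annually, or within ninety days of an intentional and substantial modification, whichever comes first). Field and function names are illustrative, not statutory terms.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class ImpactAssessment:
    """Hypothetical record mirroring the contents the Act requires."""
    purpose_and_intended_use: str
    deployment_context_and_benefits: str
    discrimination_risk_analysis: str
    input_data_categories: list[str] = field(default_factory=list)
    output_data_categories: list[str] = field(default_factory=list)
    customization_data_categories: list[str] = field(default_factory=list)
    performance_metrics: list[str] = field(default_factory=list)
    transparency_measures: str = ""
    post_deployment_monitoring: str = ""
    completed_on: date = field(default_factory=date.today)

def next_assessment_due(last_completed: date,
                        modified_on: Optional[date] = None) -> date:
    """Assessments recur at least annually; a fresh one is also due within
    ninety days of an intentional and substantial modification."""
    due = last_completed + timedelta(days=365)
    if modified_on is not None:
        due = min(due, modified_on + timedelta(days=90))
    return due

# Example: assessment completed 2026-02-01, system modified 2026-06-01.
print(next_assessment_due(date(2026, 2, 1), date(2026, 6, 1)))  # 2026-08-30
```

A record in this shape also lends itself to the three-year retention requirement described next, since each assessment carries its own completion date.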
Record Keeping
A deployer must maintain all conducted impact assessments for a high-risk AI system for at least three years following the final deployment of the high-risk AI system.
Review of the High-Risk AI System
A deployer or a third party contracted by the deployer must review the deployment of the high-risk AI system at least annually to ensure it does not cause algorithmic discrimination.
Notification Requirement to the Consumer
A deployer that deploys a high-risk AI system to make, or be a substantial factor in making, a consequential decision concerning a consumer must:
- Notify the consumer that the deployer has deployed such a high-risk AI system to make, or be a substantial factor in making, a consequential decision before such a decision is made;
- Provide the consumer with a statement disclosing the purpose of the high-risk AI system and the nature of the consequential decision, along with the contact information of the deployer, a plain-language description of the high-risk AI system, and instructions on how to access the public statement required per this Act;
- Provide consumers with information about their right to opt out of the processing of their personal data for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.
Adverse Consequential Decision
A deployer that deploys a high-risk AI system to make, or be a substantial factor in making, a consequential decision concerning a consumer must, if the consequential decision is adverse to the consumer, provide to the consumer:
- A statement disclosing the principal reason for the consequential decision, including:
- The degree to which, and the manner in which, the high-risk AI system contributed to the consequential decision;
- The type of data that was processed by the high-risk AI system in making the consequential decision;
- The sources of the data used;
- An opportunity to correct any incorrect personal data that the high-risk AI system may have processed in making, or as a substantial factor in making, the consequential decision;
- An opportunity to appeal an adverse consequential decision concerning the consumer where the appeal allows for human review unless providing the opportunity for appeal is not in the best interest of the consumer.
Unless otherwise stated, a deployer must provide the notice, statement, contact information, and description required above:
- Directly to the consumer;
- In plain and easy-to-understand language;
- In all the languages in which the deployer provides contracts, disclaimers, sale announcements, and other information to consumers;
- In an easily accessible format for consumers with disabilities.
If the notice, statement, contact information, and description cannot be provided directly to the consumer, the deployer must make them available in a manner that is easily accessible to the consumer.
High-risk AI System Details on Website
Furthermore, the deployer must ensure the availability of a statement on its website that is clear and readily understandable and summarizes the following:
- The types of high-risk AI systems currently deployed by the deployer;
- How the deployer manages the known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of the high-risk AI system;
- In detail, the nature, source, and extent of the information collected and used by the deployer.
The aforementioned statement must be updated periodically.
However, this statement requirement does not apply to a deployer if, at the time the deployer deploys the high-risk AI system and at all times while the high-risk AI system is deployed:
- The deployer:
- Has fewer than fifty full-time employees; and
- Does not use its own data to train the high-risk AI system.
- The high-risk AI system:
- Is used for its disclosed intended uses; and
- Continues to learn from data derived from sources other than the deployer’s own data.
- The deployer makes available to consumers any impact assessment that:
- The developer of the high-risk AI system has completed and provided to the deployer; and
- Includes information substantially similar to the information required in an impact assessment per the provisions of this Act.
Notification to the Attorney General
If the deployer deploys a high-risk AI system on or after February 1, 2026, and subsequently discovers that the system has caused algorithmic discrimination, the deployer must send the Attorney General a notice disclosing this discovery without unreasonable delay and no later than ninety days after the discovery.
Nothing in this Act requires the deployer to disclose a trade secret or information protected from disclosure by state or federal law. If the deployer withholds information on this basis, it must notify the consumer and provide a justification for withholding it.
Documentation Required by the Attorney General
The Attorney General may require a deployer or a third party contracted by the deployer to disclose, within ninety days after the request, the risk management policy implemented, the impact assessment completed, and other documents maintained per the provisions of this Act. The Attorney General may then evaluate these documents for compliance with the requirements of this Act.
A deployer may designate a document as including proprietary information or a trade secret. If the risk management policy, impact assessment, or other documents shared with the Attorney General include information subject to attorney-client privilege or work-product protection, the disclosure will not constitute a waiver of the privilege or protection.
Disclosure of the AI System to Consumers
A deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an AI system intended to interact with consumers must ensure that each consumer who interacts with the AI system is informed that they are interacting with an AI system.
Such disclosures are not required in instances where it would be evident to a reasonable person that they’re interacting with an AI system.
Exemptions
While the Colorado AI Act contains robust obligations for the developers and deployers of high-risk AI systems, it also provides certain exemptions to balance regulatory oversight. These exemptions include:
Compliance With Other Legal Obligations
Nothing in the Act would restrict a developer’s, deployer’s, or any other entity’s ability to:
- Comply with relevant federal, state, or municipal laws, ordinances, or regulations;
- Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons issued by a federal, state, municipal authority, or other government authority;
- Cooperate with law enforcement agencies;
- Investigate, establish, exercise, defend, or prepare for legal claims;
- Take immediate action to protect the interests necessary for the life or physical safety of a consumer or other individual;
- Prevent, detect, respond to, or protect against security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or illegal activity by any means other than the use of facial recognition technology;
- Engage in public or peer-reviewed scientific research in the public interest;
- Conduct research, testing, and development activities related to AI systems other than tests conducted in real-world conditions before placing such AI systems on the market;
- Assist another developer, deployer, or other entity with any of the obligations imposed under the Act.
Furthermore, the obligations imposed on developers, deployers, or any other entities under the Act do not restrict their ability to:
- Initiate a product recall;
- Identify and repair technical errors in existing or future functionality.
Evidentiary Privilege
The obligations imposed under the Act on developers, deployers, and other entities do not apply where compliance would violate an evidentiary privilege under the laws of the relevant state.
Freedom of Speech and Press
Nothing in the Act imposes obligations on developers, deployers, or other entities that adversely affect the rights or freedoms of persons, such as the right to freedom of speech and freedom of the press granted under the First Amendment to the United States Constitution, and Section 10 of Article II of the state constitution.
Federal Exemptions
Nothing in the Act applies to a developer, deployer, or other entity insofar as it:
- Puts into service or substantially modifies a high-risk AI system:
- That has been approved, authorized, certified, cleared, developed, or granted by a federal agency, such as the Federal Food and Drug Administration (FDA) or the Federal Aviation Administration (FAA), acting within the scope of that agency’s authority; or
- That is in compliance with standards established by a federal agency, such as those established by the federal Office of the National Coordinator for Health Information Technology.
- Conducts research in support of an application seeking approval or certification from a federal agency;
- Performs work under or in connection with a contract with the United States Department of Commerce, the Department of Defense, or the National Aeronautics and Space Administration, unless the developer, deployer, or other entity is working on a high-risk AI system that makes consequential decisions related to employment or housing;
- Is a covered entity under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and provides healthcare recommendations that:
- Are AI-generated;
- Require a healthcare provider to take action to implement the recommendations; and
- Are not considered to be high-risk.
Industry Specific Exemptions
Nothing in the Act applies to any AI system acquired by or for the federal government or any federal agency or department, unless the AI system is a high-risk AI system that makes consequential decisions related to employment or housing.
The Act also does not apply to insurers, fraternal benefit societies, or developers of AI systems used by insurers, if they already comply with specific existing insurance regulations and rules established by the Commissioner of Insurance.
A bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or any affiliate of these will be considered in full compliance with the requirements of this Act if it complies with equivalent regulations that apply to the use of high-risk AI systems and such regulations:
- Impose similarly strict requirements; and
- Require the institution to:
- Regularly audit its use of high-risk AI systems for compliance with the state and federal anti-discrimination laws applicable to it; and
- Mitigate any algorithmic discrimination resulting from its use of high-risk AI systems.
Regulatory Authority
The Attorney General of Colorado will have the exclusive authority to enforce the provisions of the Act.
Affirmative Defense
In any action commenced by the Attorney General to enforce provisions of the Act, it is an affirmative defense that the developer, deployer, or other entity:
- Discovers and cures the violation as a result of:
- Feedback that the developer, deployer, or other entity encourages the consumer to share with the developer, deployer, or other entity;
- Adversarial testing or red teaming;
- An internal review process; and
- Is otherwise compliant with the following:
- The latest version of the Artificial Intelligence Risk Management Framework;
- Another nationally or internationally recognized risk management framework for AI systems if such standards are equivalent to the requirements per this Act;
- Any other risk management framework recommended by the Attorney General.
The developer, deployer, or other entity bears the burden of demonstrating to the Attorney General that it has complied with the above requirements.
However, nothing in this Act or the enforcement powers granted to the Attorney General will preempt or otherwise affect any right, claim, remedy, presumption, or defense available at law or in equity.
Similarly, the Act does not provide any basis for, and is not subject to, a private right of action for violations of its provisions.
Attorney General Rule Making
The Attorney General may promulgate rules as necessary to implement and enforce the provisions of the Colorado AI Act, including:
- The documentation and requirements for developers;
- The contents of and requirements related to notices and disclosures;
- The content of and requirements related to risk management policy and program;
- The content of and requirements related to impact assessments;
- The requirements for rebuttable presumptions;
- The requirements for affirmative defense.
Effective Date
The Colorado AI Act will come into effect on February 1, 2026.
How Can Securiti Help
Though Colorado is the first US state to enact a comprehensive AI regulation of this nature, other states and regulators will inevitably follow suit. While those future regulations may both resemble and diverge from the Colorado AI Act, one thing remains clear: organizations will need a reliable solution to help them address whatever obligations such regulations place on them.
Automation represents the best option for organizations to do so.
That is where Securiti can help. It is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. Additionally, it provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Globally renowned and reputable enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance, as well as the safe use of data and GenAI capabilities.
Request a demo today and learn more about how Securiti can help you comply with the Colorado AI Act.