On May 17, 2024, Colorado’s governor signed the Colorado AI Act (the “Act”), which imposes groundbreaking requirements on AI developers and deployers. Though narrower in scope, the Act bears many similarities to the EU AI Act, including a focus on documentation, notice, and risk assessment obligations for “high-risk” AI systems and minimal requirements for other AI systems. Companies covered by the Act should review their obligations and start implementing compliance measures before the Act goes into effect on February 1, 2026.
Key Definitions and Scope
The Act applies to “developers” and “deployers” of “artificial intelligence systems.”
- An “artificial intelligence system” (in this post, “AI system”) is “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
- A “developer” is a person doing business in Colorado that develops an AI system or “intentionally and substantially modifies” one, i.e., changes it in a way that creates a new reasonably foreseeable risk of “algorithmic discrimination” (as defined below).
- A “deployer” is a person doing business in Colorado that deploys (i.e., uses) a high-risk AI system.
Most of the Act’s provisions apply to “high-risk artificial intelligence systems” (in this post, “high-risk AI systems”), meaning those that make “consequential decisions” or assist in making such decisions in a way that could alter the outcome. A “consequential decision” is one that “has a material legal or similarly significant effect” on the provision or denial to consumers of, or the cost or terms of, education or employment opportunities; financial, lending, healthcare, or legal services; essential government services; housing; or insurance. The Act exempts some AI systems from the high-risk category, including AI systems that perform a narrow procedural task and chatbots that do not, when deployed, make or substantially assist in making a consequential decision and are subject to an acceptable use policy that prohibits generating discriminatory or “harmful” content. For purposes of the Act, a “consumer” means a Colorado resident.
Obligations For All AI Systems
For AI systems that are not high-risk, the Act imposes only one obligation: if an AI system is intended to interact with consumers, the developer or deployer must ensure that consumers are notified that they are interacting with an AI system (unless this would be obvious to a reasonable person).
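The Act does not prescribe the form this disclosure must take. As a purely illustrative sketch (the function, session handling, and message wording below are our own assumptions, not statutory requirements), a deployer of a consumer-facing chatbot might prepend the notice to the first reply in each session:

```python
# Illustrative only: the Colorado AI Act requires that consumers be told
# they are interacting with an AI system, but does not mandate any
# particular wording or delivery mechanism.

AI_DISCLOSURE = "You are chatting with an automated AI system, not a human."

def build_reply(is_first_message: bool, model_reply: str) -> str:
    """Prepend the AI-interaction disclosure to the first reply in a session.

    `is_first_message` and `model_reply` are hypothetical parameters; a real
    deployment would track sessions and generate replies elsewhere.
    """
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply

print(build_reply(True, "Hi! How can I help you today?"))
```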
Obligations For High-Risk AI Systems
Most of the Act’s obligations apply to high-risk AI systems and relate to “algorithmic discrimination,” meaning “any condition in which the use of an [AI] system results in an unlawful differential treatment or impact that disfavors an individual or group” on the basis of legally protected classifications (including race, ethnicity, color, national origin, religion, age, sex, disability, veteran status, or reproductive health). Unless otherwise stated, as used below, “risk(s)” means “known or reasonably foreseeable risk(s)” of algorithmic discrimination.
Developers and deployers of high-risk AI systems must use reasonable care to protect consumers from risks of algorithmic discrimination. Their other obligations differ slightly based on their roles.
A developer of a high-risk AI system must:
- Make available documentation to help deployers of the system understand the outputs of the system, monitor it for algorithmic discrimination, and protect users against such discrimination—including documentation about the system’s training data, limitations, intended uses and outputs, and foreseeable risks of, testing for, and mitigation of algorithmic discrimination.
- Make available documentation as necessary to help deployers complete required impact assessments.
- Provide a statement on its website summarizing the types of high-risk AI systems it provides and how it manages risks of algorithmic discrimination.
- Report any risk of algorithmic discrimination arising from intended use of the system to the Colorado Attorney General (“CO AG”) and deployers of the system within 90 days of discovering the risk.
Subject to certain exceptions, a deployer of a high-risk AI system must:
- Implement and regularly review a risk management policy and program governing use of the system, including measures to identify, document, and mitigate risks of algorithmic discrimination.
- Complete an annual impact assessment of the system that covers the purpose, intended use, and context of the system; algorithmic discrimination risks and any mitigation measures; the data processed and used to customize the system (if relevant); metrics used to evaluate performance; and any transparency measures, monitoring, or safeguards implemented by the deployer (one way such a record might be structured is sketched after this list).
- Provide a statement on its website summarizing the high-risk AI systems it uses, how it manages algorithmic discrimination risks, and the nature, source, and extent of information it collects and uses.
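The Act enumerates the topics an impact assessment must address but prescribes no format. One hypothetical way a deployer might structure an assessment record internally is sketched below; every field name and example value is our own assumption:

```python
# Hypothetical record mirroring the impact-assessment topics listed above.
# The Act specifies required content, not a data format.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                       # purpose and intended use of the system
    deployment_context: str            # context in which the system is used
    discrimination_risks: list[str]    # algorithmic discrimination risks
    mitigation_measures: list[str]     # steps taken to mitigate those risks
    customization_data: list[str]      # data used to customize the system, if any
    performance_metrics: list[str]     # metrics used to evaluate performance
    transparency_measures: list[str]   # transparency measures implemented
    monitoring_safeguards: list[str]   # post-deployment monitoring and safeguards
    assessment_year: int

# Illustrative values only:
assessment = ImpactAssessment(
    system_name="resume-screener-v2",
    purpose="Rank job applications for recruiter review",
    deployment_context="Initial screening of high-volume job requisitions",
    discrimination_risks=["Possible disparate impact by age or national origin"],
    mitigation_measures=["Quarterly disparate-impact testing",
                         "Human review of all rejections"],
    customization_data=["Historical hiring outcomes, 2019-2023"],
    performance_metrics=["Selection-rate parity", "Precision vs. human raters"],
    transparency_measures=["Applicant-facing notice of AI use"],
    monitoring_safeguards=["Monthly drift monitoring",
                           "Escalation channel for applicant complaints"],
    assessment_year=2026,
)
```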
Without exception, all deployers of high-risk AI systems must:
- If the system is used to make consequential decisions (as defined above) about consumers, notify consumers of this fact, the purpose and nature of the system, the nature of the decision, how to contact the deployer, and the right to opt out of personal data processing for profiling in furtherance of legally or similarly significant decisions. If a decision is adverse to the consumer, the deployer must also explain how the decision was made and offer an opportunity to appeal it (a hypothetical notice format is sketched after this list).
- Report instances of algorithmic discrimination caused by the system to the CO AG within 90 days of discovering the instance.
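The Act similarly lists what an adverse-decision notice must communicate without prescribing a template. A purely hypothetical sketch of assembling those elements (all function names, parameters, and wording are our own):

```python
# Hypothetical adverse consequential-decision notice. The Act requires the
# deployer to explain how the decision was made, provide contact information,
# and offer an opportunity to appeal; it does not dictate this format.

def adverse_decision_notice(decision: str, principal_reasons: list[str],
                            contact_email: str, appeal_url: str) -> str:
    reasons = "\n".join(f"  - {r}" for r in principal_reasons)
    return (
        f"Decision: {decision}\n"
        "This decision was made with the assistance of an AI system.\n"
        f"Principal reasons:\n{reasons}\n"
        f"To appeal this decision or correct your data, visit: {appeal_url}\n"
        f"Questions? Contact us at {contact_email}"
    )

print(adverse_decision_notice(
    "Application declined",
    ["Insufficient credit history", "Debt-to-income ratio above threshold"],
    "compliance@example.com",
    "https://example.com/appeal",
))
```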
Enforcement
There is no private right of action under the Act. The CO AG has exclusive authority to enforce the Act and may promulgate rules as necessary to implement it. Violations of the Act constitute unfair trade practices under the Colorado Consumer Protection Act, for which the CO AG may seek civil penalties of up to $20,000 per violation.