Colorado tightens rules for AI decisions about people
May 10, 2026

SB26-189 pushes developers and deployers of automated decision systems toward documentation, notices and human review in work, housing, credit and healthcare.
What this is about
Colorado has moved Senate Bill 26-189 through the legislature, rewriting the state’s rules for automated decision systems. According to the Colorado General Assembly, the bill repeals and reenacts the AI consumer protection provisions passed in 2024 and focuses more tightly on systems that affect real life chances: education, employment, housing, lending, insurance, healthcare and public benefits.
The timing matters because many businesses pushed back against the older version as too broad and unclear. The new text is not a general AI ban and not a chatbot law. It tries to force transparency and human review where software helps decide outcomes that can seriously affect a person.
What SB26-189 actually does
The bill defines automated decision-making technology as a system that processes personal data and generates predictions, recommendations, classifications, rankings, scores or similar outputs used to guide a decision about an individual. What matters is not whether a vendor markets the system as “AI”, but whether it materially influences a consequential decision.
Starting January 1, 2027, developers of covered systems would have to give deployers technical documentation: intended uses, known harmful or inappropriate uses, categories of training data, known limitations and instructions for monitoring and meaningful human review. Deployers must give clear consumer notice and, after an adverse outcome, provide within 30 days a plain-language explanation of the role the system played. Consumers can request the relevant data, correct factually wrong data and ask for human review.
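To make the documentation duty concrete, here is a minimal sketch of the kind of record a developer might hand to a deployer. The field names are illustrative only, they are not statutory language from SB26-189:

```python
# Illustrative sketch of developer-to-deployer documentation under
# SB26-189. Field names and example values are hypothetical.
from dataclasses import dataclass

@dataclass
class ADMTDocumentation:
    intended_uses: list[str]
    known_harmful_uses: list[str]
    training_data_categories: list[str]
    known_limitations: list[str]
    monitoring_instructions: str
    human_review_instructions: str

doc = ADMTDocumentation(
    intended_uses=["pre-sort rental applications for manual review"],
    known_harmful_uses=["sole basis for denial without human review"],
    training_data_categories=["credit history", "rental history"],
    known_limitations=["sparse data for applicants with no credit file"],
    monitoring_instructions="Review score drift quarterly.",
    human_review_instructions="A trained reviewer re-checks every denial.",
)
```

Each field maps to one of the bill's documentation categories; in practice a deployer would use the monitoring and review instructions to build its own notice and human-review process.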
Why it matters
This matters because automated decisions are often invisible. A job applicant sees only a rejection. A renter sees only a failed application. A patient sees only that a benefit was denied. If a score or model helped shape that outcome, errors are hard to challenge without disclosure rules.
PPC Land reports that Colorado’s House passed the bill on May 9, 2026 by a 57-6 vote and sent it to Governor Jared Polis. The bill is also a signal to other U.S. states: regulation can be tied to specific high-impact decisions instead of treating every AI use case the same way.
In plain language
Imagine you apply for an apartment and someone pre-sorts the paperwork. In the past, the letter might only say “rejected.” SB26-189 says, in effect: if a machine helped with that sorting, you should be able to learn that it was involved, what kind of data mattered and how a human can review the case again.
A practical example
An insurer reviews 10,000 claims per month. An ADMT flags 600 claims as unusual because combinations of claim size, history and policy data do not match normal patterns. Under SB26-189, the provider would need to document what the system is intended for, where it is known to fail and how humans should review its flags. If a customer receives a worse outcome because of that flag, the customer must receive a plain explanation and a path to meaningful human review.
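The flagging step in the example above can be pictured as a simple rule that fires when several signals co-occur. The thresholds and field names below are invented for illustration; they come from neither the bill nor any real insurer system:

```python
# Hypothetical claim-flagging rule. Thresholds and fields are invented
# for illustration and are not from SB26-189 or a real ADMT.

def flag_claim(amount: float, prior_claims: int, policy_age_years: float) -> bool:
    """Flag a claim as 'unusual' when simple heuristics co-occur."""
    large = amount > 25_000
    frequent = prior_claims >= 3
    new_policy = policy_age_years < 1.0
    # Flag only when at least two signals co-occur, mirroring the idea
    # that combinations of features, not single values, look abnormal.
    return sum([large, frequent, new_policy]) >= 2

claims = [
    {"amount": 30_000, "prior_claims": 4, "policy_age_years": 0.5},
    {"amount": 1_200, "prior_claims": 0, "policy_age_years": 6.0},
]
flags = [flag_claim(**c) for c in claims]
print(flags)  # -> [True, False]
```

Under the bill, what matters is less the rule itself than that its intended use, failure modes and review procedure are documented, and that a flagged customer can get a plain explanation and human review.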
Scope and limits
- The bill does not create a new private right of action. Enforcement mainly runs through the Attorney General and the Colorado Consumer Protection Act.
- Many systems remain outside the perimeter, including advertising, product recommendations, search, content moderation and purely administrative tools when they do not drive consequential decisions about people.
- The text does not guarantee fair outcomes. It creates documentation, notice and review duties. Bad data, weak audits or superficial human review can still cause harm.
💡 In plain English
Colorado is not trying to regulate every chatbot. The bill targets automated systems that help decide work, housing, credit, insurance, healthcare or public-benefit outcomes.
Key Takeaways
- SB26-189 replaces Colorado’s older AI consumer-protection rules with a narrower ADMT framework.
- The duties are set to apply from January 1, 2027 for high-impact decisions.
- Affected consumers can request explanations, data correction and human review.
- Advertising, search, product recommendations and content moderation are more explicitly carved out.
FAQ
Is SB26-189 already fully in force?
PPC Land reports that the House passed it on May 9, 2026 and sent it to Governor Jared Polis. The operational duties are scheduled for January 1, 2027.
Does this affect ordinary chatbots?
Only if they are intended or configured to influence consequential decisions about people. Purely assistive, translation and administrative tools generally stay outside the bill’s scope.
Can consumers sue directly under the bill?
The bill does not create a new private right of action. Enforcement mainly sits with the Attorney General under consumer-protection law.