Colorado Governor Jared Polis signed SB 24-205 on May 17, 2024, making Colorado the first U.S. state to enact comprehensive AI regulation targeting algorithmic discrimination, according to the Colorado General Assembly. The law takes effect June 30, 2026. SaaS companies whose products make or influence consequential decisions, from loan approvals to hiring recommendations to insurance pricing, must comply or face enforcement by the Colorado Attorney General. With over 1,100 state-level AI bills introduced across the U.S. in 2025 (Brennan Center for Justice, 2025), Colorado’s law is a template for what’s coming nationwide.
- Colorado AI Act (SB 24-205) takes effect June 30, 2026, targeting AI systems that make consequential decisions.
- Applies to any company whose AI product affects Colorado consumers, regardless of company headquarters location.
- Requires impact assessments, transparency notices, human oversight, and algorithmic discrimination risk mitigation.
- Violations treated as unfair trade practices under the Colorado Consumer Protection Act, enforced by the state AG.
- Over 1,100 state-level AI bills were introduced in 2025; Colorado is the compliance template for other states.
What the Colorado AI Act Requires
The Colorado AI Act creates obligations for two categories of entities: developers (companies that build AI systems) and deployers (companies that use AI systems in their products or operations). Most SaaS companies fall into both categories.
Developer obligations. If you build an AI system that could be used for consequential decisions, you must:
- Provide deployers with documentation describing the system’s capabilities, limitations, and intended uses
- Disclose known risks of algorithmic discrimination
- Publish a general statement on your website describing the types of significant-risk AI systems you develop and how you manage risks
- Provide deployers with the information they need to complete their own impact assessments
Deployer obligations. If you use a regulated AI system in your product or operations, you must:
- Implement a risk management policy and program
- Complete an AI impact assessment before deployment and annually thereafter
- Notify consumers when an AI system makes or substantially contributes to a consequential decision about them
- Provide consumers with an explanation of the decision, including the principal reasons behind it
- Allow consumers to appeal AI-driven consequential decisions and access human review
- Notify the Colorado AG within 90 days of discovering that your system has caused algorithmic discrimination
The definition of “consequential decision” is broad. It covers decisions related to education enrollment or opportunity, employment or employment opportunity, financial or lending services, essential government services, healthcare services, housing, insurance, and legal services. If your SaaS product touches any of these domains and serves Colorado consumers, the law likely applies.
McKinsey’s 2025 survey found that 72% of enterprises now use generative AI in at least one business function. Many of those functions fall within the Colorado AI Act’s definition of consequential decisions.
How to Determine If Your SaaS Product Qualifies as Regulated
The risk classification turns on two questions: does your AI system make or substantially factor into a decision, and is that decision consequential?
Misclassification carries real operational risk: PwC found that 52% of companies experienced increased crisis frequency over five years, with AI incidents growing fastest (PwC, 2023).
Step 1: Map your AI touchpoints. Catalog every point in your product where an AI model produces an output that influences a decision. This includes recommendation engines, scoring algorithms, chatbot responses that guide user decisions, automated approvals or denials, and AI-generated content that users rely on for decision-making.
Step 2: Classify decision types. For each AI touchpoint, determine whether the output affects a consumer’s access to any of the 8 consequential categories: education, employment, financial services, government services, healthcare, housing, insurance, or legal services.
Step 3: Assess substantiality. The law applies when AI is a “substantial factor” in the decision, not merely a minor input. An AI-powered resume screener that determines which candidates advance to interviews is clearly a substantial factor. An AI spell-checker used during a job application is not. The gray area in between requires legal judgment.
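The three steps above can be sketched as a simple classifier. The category names track the statute, but the data structure and helper below are illustrative only, not a legal test:

```python
# Illustrative sketch of the classification exercise; not legal advice.
from dataclasses import dataclass

# The eight consequential decision categories named in SB 24-205
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

@dataclass
class AITouchpoint:
    name: str                 # Step 1: a cataloged AI output point
    decision_domain: str      # Step 2: which decision domain it affects
    substantial_factor: bool  # Step 3: does the output drive the decision?

def is_regulated(tp: AITouchpoint) -> bool:
    """Regulated when the domain is consequential AND the AI is a substantial factor."""
    return tp.decision_domain in CONSEQUENTIAL_CATEGORIES and tp.substantial_factor

screener = AITouchpoint("resume_screener", "employment", substantial_factor=True)
spellcheck = AITouchpoint("spell_checker", "employment", substantial_factor=False)
print(is_regulated(screener))    # True  -- clearly a substantial factor
print(is_regulated(spellcheck))  # False -- minor input only
```

The gray-area cases the text mentions are exactly where a boolean flag like `substantial_factor` is insufficient and legal judgment takes over.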
Here are examples by SaaS category:
| SaaS Type | Likely Regulated | Rationale |
|---|---|---|
| HR/Recruiting platform | Yes | Employment decisions |
| Lending/Credit software | Yes | Financial services decisions |
| Insurance underwriting | Yes | Insurance decisions |
| Healthcare scheduling | Possible | Healthcare access decisions |
| Marketing automation | Unlikely | Not a consequential category |
| Project management | Unlikely | Internal operations |
| Customer support chatbot | Possible | If it makes binding service decisions |
The NIST AI Risk Management Framework provides a practical methodology for this classification exercise, according to NIST, 2023. NIST’s Map function was designed for exactly this kind of risk identification. For a detailed comparison of how NIST’s classification aligns with Colorado’s, see our guide on AI trust frameworks.
How to Complete an AI Impact Assessment
The Colorado AI Act requires deployers of regulated AI systems to complete impact assessments before deployment and update them annually. Here’s what the assessment must include:
1. System purpose and intended benefits. Document the specific purpose of the AI system, the problem it solves, and the benefits it provides to consumers and your organization. Be specific: “automates initial resume screening to reduce time-to-hire from 14 days to 3 days” is better than “improves recruitment efficiency.”
2. Categories of data processed. List every data category the AI system ingests, including personal data, demographic data, behavioral data, and third-party data. For each category, document the source, retention period, and how it influences system output.
3. Outputs and decision types. Describe what the AI system produces: scores, rankings, recommendations, approvals, denials, or classifications. Map each output to the consequential decision categories it could affect.
4. Algorithmic discrimination risk analysis. Assess the risk that the system produces discriminatory outcomes based on protected characteristics including race, color, national origin, sex, disability, and age. Document the testing methodology you use to measure disparate impact. The EEOC’s guidance on AI in hiring decisions applies directly here, according to EEOC, 2023.
5. Risk mitigation measures. Document the controls you have implemented to reduce algorithmic discrimination risk. These typically include:
- Bias testing and auditing procedures
- Training data quality controls
- Human oversight for edge cases and appeals
- Output verification against real-world sources
- Regular performance monitoring and retraining schedules
6. Performance metrics and monitoring. Define quantitative metrics for system performance, including accuracy rates, false positive and negative rates, and disparate impact ratios across protected groups. Establish thresholds that trigger review or system shutdown. Thresholds matter: Stanford HAI researchers found that AI hallucination rates range from 3% to 20% depending on the domain (Stanford HAI, 2025).
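One disparate-impact metric can be sketched directly. The check below compares group selection rates against the EEOC's "four-fifths" (0.8) rule of thumb; the function names and example numbers are illustrative, and 0.8 is an EEOC guideline, not a Colorado statutory threshold:

```python
# Illustrative adverse-impact check using the EEOC four-fifths guideline.
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who received the favorable outcome."""
    return selected / total

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return rate_group / rate_reference

rate_a = selection_rate(48, 100)   # reference group: 48% selected
rate_b = selection_rate(30, 100)   # comparison group: 30% selected
ratio = adverse_impact_ratio(rate_b, rate_a)
flagged = ratio < 0.8              # below four-fifths -> triggers review
print(f"ratio={ratio:.2f} flagged={flagged}")
```

In an impact assessment, the computed ratio and the threshold that triggered (or did not trigger) review both go into the documented evidence.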
A verification API adds a measurable accuracy layer to the assessment. By verifying AI outputs against independent sources, you document that your system’s factual claims are accurate, not just that its decisions are statistically unbiased. Each verification call produces a timestamped audit record that becomes part of your impact assessment evidence:
```python
import requests

def verify_ai_decision_basis(claim: str) -> dict:
    """Verify a factual claim against independent sources via the Webcite API."""
    response = requests.post(
        "https://api.webcite.co/api/v1/verify",
        headers={
            "x-api-key": "your-api-key",
            "Content-Type": "application/json",
        },
        json={
            "claim": claim,
            "include_stance": True,
            "include_verdict": True,
        },
    )
    response.raise_for_status()  # surface API errors instead of parsing bad JSON
    return response.json()

result = verify_ai_decision_basis(
    "Colorado requires AI impact assessments for regulated systems"
)
print(result["verdict"]["result"])      #=> "supported"
print(result["verdict"]["confidence"])  #=> 96
#=> Audit trail: claim, sources, confidence, timestamp
```
Consumer Transparency and Notification Requirements
The Colorado AI Act establishes specific transparency obligations that differ from GDPR and the EU AI Act in important ways.
Pre-decision notification. Before a regulated AI system makes a consequential decision, you must inform the consumer that AI will be used. The notification must be clear, conspicuous, and provided in a timely manner. “Your application will be evaluated using an automated decision system” is the minimum; best practice includes explaining what data is used and how the system works at a high level.
Post-decision explanation. After a consequential decision is made, consumers have the right to a statement of the principal reasons behind it. This doesn’t require revealing proprietary algorithms, but it does require explaining the key factors. For a loan denial: “Your application was denied based on credit history length (primary factor), debt-to-income ratio, and employment stability.” A 2024 Stanford Law study found that even RAG-based legal AI tools hallucinate in 17% to 33% of queries (Magesh et al., Stanford Law School, 2024). Providing accurate explanations requires verification infrastructure that catches errors before they reach consumers.
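One way to produce that statement is to map a model's top-weighted factors to plain-language phrases. A minimal sketch, where the factor names, weights, and templates are all hypothetical:

```python
# Hypothetical principal-reasons generator for a post-decision notice.
REASON_TEMPLATES = {
    "credit_history_length": "length of credit history",
    "debt_to_income": "debt-to-income ratio",
    "employment_stability": "employment stability",
}

def principal_reasons(factor_weights: dict, top_n: int = 3) -> str:
    """Rank factors by weight and render the top ones in plain language."""
    ranked = sorted(factor_weights, key=factor_weights.get, reverse=True)[:top_n]
    reasons = [REASON_TEMPLATES.get(f, f) for f in ranked]
    return "Principal reasons: " + ", ".join(reasons)

weights = {"credit_history_length": 0.41, "debt_to_income": 0.33,
           "employment_stability": 0.18}
print(principal_reasons(weights))
# Principal reasons: length of credit history, debt-to-income ratio, employment stability
```

Whatever generates these statements, the output must be accurate; a templated mapping like this only helps if the underlying factor attributions are themselves verified.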
Right to appeal and human review. Consumers must have the opportunity to appeal a consequential decision and receive human review. This means your product architecture must include an escalation path from automated decision to human reviewer. The human reviewer must have the authority and information needed to override the AI’s decision.
Right to opt out. When technically feasible, consumers must have the option to opt out of AI-driven consequential decisions and receive a human-only process instead.
These requirements create engineering obligations. Your product needs:
- A notification system that triggers before AI-driven decisions
- An explanation generator that produces plain-language rationales
- An appeal workflow with human review capabilities
- An opt-out mechanism with an alternative decision path
A Cisco survey found that 97% of enterprises have paused AI deployments over privacy and governance concerns, according to Cisco, 2024. For SaaS companies, these features must be built into the product, not bolted on as an afterthought. Products that serve multiple states will need to support Colorado’s requirements alongside emerging requirements from other states.
Enforcement: How the Colorado AG Will Act
The Colorado AI Act is enforced exclusively by the Colorado Attorney General through the Colorado Consumer Protection Act (CCPA). There is no private right of action: individual consumers cannot sue companies directly for violations.
Violations of the Colorado AI Act are treated as unfair or deceptive trade practices under the CCPA. The AG can pursue:
- Civil penalties for each violation
- Injunctive relief ordering companies to stop using non-compliant AI systems
- Corrective actions requiring companies to fix identified issues and report on compliance
- Consumer remedies including restitution for affected individuals
The enforcement model resembles GDPR’s approach more than traditional U.S. product liability law. Rather than waiting for harm and suing for damages, the AG can investigate proactively and enforce compliance requirements.
Governor Polis signed the bill despite expressing reservations about its breadth, positioning Colorado as a leader in AI governance, according to the Colorado Governor’s Office, 2024. His signing statement noted that the legislature should refine the law’s scope in future sessions, suggesting potential amendments before the June 2026 effective date.
The AG’s enforcement approach will likely follow the GDPR pattern: initial focus on egregious violations and large companies, followed by broader enforcement as the regulatory apparatus matures. The GDPR Enforcement Tracker records over 2,000 fines totaling more than 4.5 billion EUR since 2018, according to GDPR Enforcement Tracker. Colorado’s AI Act, while smaller in scope, uses the same gradual-escalation enforcement model.
How the Colorado AI Act Compares to Other AI Regulations
Colorado isn’t legislating in isolation. Understanding how SB 24-205 fits within the broader regulatory landscape helps organizations build compliance infrastructure that scales.
| Dimension | Colorado AI Act | EU AI Act | NIST AI RMF |
|---|---|---|---|
| Effective date | June 30, 2026 | August 2, 2026 (full) | Active since Jan 2023 |
| Legal status | Mandatory (state law) | Mandatory (EU regulation) | Voluntary |
| Scope | Colorado consumers | EU market (global reach) | U.S.-focused guidance |
| Risk approach | Consequential decisions | 4-tier risk classification | 4 functions (Govern, Map, Measure, Manage) |
| Impact assessment | Required annually | Required for critical AI | Recommended |
| Consumer notification | Required | Required under Article 50 | Recommended |
| Human oversight | Required (appeal right) | Required for critical AI | Recommended |
| Penalties | Consumer protection enforcement | Up to 35M EUR / 7% revenue | None |
| Private right of action | No | Limited | N/A |
The EU AI Act and Colorado AI Act have nearly identical enforcement timelines, creating a compressed compliance window for companies serving both markets. Building infrastructure for one framework significantly reduces the effort needed for the other.
California’s AB 2885, Illinois’s AI Video Interview Act, and New York City’s Local Law 144 on automated employment decision tools represent additional state and local AI requirements, according to Wilson Sonsini, 2026. SaaS companies serving multiple states should design their compliance architecture for the most restrictive requirements and apply them uniformly. For a detailed comparison of the EU AI Act’s requirements and deadlines, see our EU AI Act compliance guide.
Building a Compliance Roadmap for June 2026
With the June 30, 2026 effective date approaching, SaaS companies should follow this phased approach:
Phase 1: Classification (Now through March 2026). Complete the risk assessment described in section 2. Map every AI touchpoint in your product. Classify each as regulated or not. Document your methodology. This phase requires no engineering work, only analysis and documentation.
Phase 2: Impact Assessment (March through April 2026). Complete your first AI impact assessment for each regulated system. Document data categories, outputs, discrimination risks, and mitigation measures. Use NIST AI RMF’s Map and Measure functions as your assessment methodology.
Phase 3: Engineering Implementation (April through May 2026). Build the required product features: pre-decision notifications, post-decision explanations, appeal workflows, human review capabilities, and opt-out mechanisms. Integrate a verification API to create audit trails for AI output accuracy.
Phase 4: Testing and Documentation (May through June 2026). Test all compliance features end-to-end. Conduct bias audits on each regulated system. Finalize documentation. Train customer-facing teams on the new notification and appeal processes.
Phase 5: Ongoing Operations (July 2026 onward). Establish annual impact assessment review schedules. Monitor system performance against defined metrics. Maintain audit trails through verification API logs. Report any discovered algorithmic discrimination to the AG within 90 days.
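The ongoing deadlines in Phase 5 reduce to simple date arithmetic: annual impact assessment reviews and the 90-day AG notification window. A sketch with illustrative dates:

```python
# Date arithmetic for the ongoing compliance deadlines; dates are examples.
from datetime import date, timedelta

EFFECTIVE_DATE = date(2026, 6, 30)  # SB 24-205 effective date

def next_annual_review(last_assessment: date) -> date:
    """Impact assessments must be updated annually after deployment."""
    return last_assessment + timedelta(days=365)

def ag_notification_deadline(discovery: date) -> date:
    """Discovered algorithmic discrimination must be reported to the AG within 90 days."""
    return discovery + timedelta(days=90)

print(next_annual_review(date(2026, 7, 1)))         # 2027-07-01
print(ag_notification_deadline(date(2026, 9, 15)))  # 2026-12-14
```

In practice these deadlines belong in the same monitoring system that tracks performance thresholds, so a missed review surfaces as an alert rather than a discovery during an AG inquiry.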
Webcite’s free tier at $0 per month includes 50 credits for testing your compliance integration. Each verification uses 4 credits. The Builder plan at $20 per month provides 500 credits for production. Enterprise plans offer 10,000+ credits with custom pricing for high-volume compliance workflows. Authentication uses the x-api-key header.
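The credit math above works out as follows; a trivial sketch using the per-call cost stated in the pricing:

```python
# Credit budgeting: each verification call costs 4 credits.
def verifications_per_month(credits: int, cost_per_call: int = 4) -> int:
    """How many verification calls a monthly credit allotment covers."""
    return credits // cost_per_call

print(verifications_per_month(50))    # 12   (free tier)
print(verifications_per_month(500))   # 125  (Builder plan)
print(verifications_per_month(10000)) # 2500 (Enterprise baseline)
```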
Frequently Asked Questions
When does the Colorado AI Act take effect?
The Colorado AI Act (SB 24-205) takes effect on June 30, 2026. Governor Jared Polis signed the bill on May 17, 2024. The law applies to developers and deployers of regulated AI systems that make or substantially contribute to consequential decisions affecting Colorado consumers.
What qualifies as a regulated AI system under the Colorado AI Act?
A regulated AI system is any AI system that makes or is a substantial factor in making a consequential decision. Consequential decisions include those related to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, and legal services. If your AI product influences any of these decision categories, it qualifies as regulated.
Does the Colorado AI Act apply to companies outside Colorado?
Yes. The law applies to any developer or deployer of regulated AI systems that operate in Colorado or affect Colorado consumers. A SaaS company headquartered in California whose product is used by Colorado residents falls under the law’s scope.
What are the penalties for violating the Colorado AI Act?
The Colorado Attorney General enforces the Act through the Colorado Consumer Protection Act. Violations are treated as unfair or deceptive trade practices. Penalties include civil penalties, injunctive relief, and enforcement actions. The AG can also require organizations to undertake corrective actions and submit compliance reports.
What is an AI impact assessment under the Colorado AI Act?
An AI impact assessment is a documented evaluation that identifies the purpose of a regulated AI system, its intended benefits and risks, the categories of data processed, how risks of algorithmic discrimination are mitigated, and the system’s performance metrics. Both developers and deployers must complete and maintain these assessments.
How does the Colorado AI Act differ from the EU AI Act?
Both laws regulate substantial-risk AI systems, but they differ in scope and penalties. The EU AI Act covers all AI systems with a four-tier risk classification and imposes fines up to 35 million EUR or 7% of global revenue. The Colorado AI Act focuses specifically on consequential decisions, uses existing consumer protection enforcement mechanisms, and applies to Colorado consumers rather than an entire economic bloc.