Colorado is set to implement sweeping AI legislation effective 1 February 2026, mirroring extensive EU regulations aimed at managing high-risk AI systems and promoting transparency and equity.
Colorado and EU Lead in Comprehensive AI Legislation
Colorado, USA & Brussels, EU – As summer draws to a close, regulatory advancements in artificial intelligence (AI) are gaining momentum on both sides of the Atlantic. Colorado is set to become the first US state to implement a sweeping AI law effective from 1 February 2026. Similarly, the European Union has laid the groundwork for its extensive AI regulation, slated for phased applicability between February 2025 and August 2026.
Both the Colorado statute and the EU regulations will govern entities that develop and deploy AI systems; the EU rules additionally reach those importing and distributing these technologies within EU Member States. The definition of AI systems under both sets of regulations is extensive, covering all forms of AI, not exclusively generative AI. A significant emphasis is placed on “high-risk” AI systems: those influencing consequential decisions in areas such as education, employment, and housing. However, exemptions exist for simpler applications like calculators and certain research-oriented uses.
This move by Colorado and the EU is part of a broader trend in AI regulation globally. In the United States, New York City’s law targeting AI in employment decisions has been subject to enforcement since 5 July 2023. Utah’s AI law mandates disclosures to individuals interacting with AI and has been in force since 1 May 2024. Tennessee’s legislation, which centres on protecting individuals from deepfakes, became effective on 1 July 2024.
Key Provisions and Requirements
Algorithmic Discrimination: Both the Colorado and EU regulations compel companies to address and prevent algorithmic discrimination. In Colorado, this encompasses bias on grounds such as age, race, gender, or religion. The EU’s definition extends to risks that could affect an individual’s health, safety, or fundamental rights. Developers and deployers must implement quality and risk management systems to mitigate foreseeable biases.
Transparency: Mirroring Utah’s disclosure requirements, the forthcoming laws will mandate that users are informed when engaging with consumer-facing AI systems unless the interaction is “obvious” to an average person. In Colorado, residents must be notified of their right to opt out and the process for doing so. The EU will require disclosure when AI systems generate deepfakes or perform biometric processing or emotion recognition.
Developer Responsibilities: Developers of high-risk AI systems in the EU must provide users (deployers) with comprehensive instructions covering the system’s technical capabilities and foreseeable risks. Colorado imposes analogous documentation duties on developers, ensuring deployers are well-informed about the AI systems they put into use.
Risk Management Systems: Both jurisdictions mandate the implementation of robust risk management programmes. Colorado developers must adopt frameworks such as NIST’s updated AI Risk Management Framework or ISO/IEC 42001. The EU similarly requires businesses employing high-risk AI systems to maintain such risk management systems, aligned with current AI industry standards.
Impact Assessments: Annual impact assessments are required for high-risk AI system deployers in both regions. Similar assessments are already required under the GDPR and various US state laws for high-risk profiling activities.
Compliance with Existing Laws: Entities should also be mindful of ongoing obligations under GDPR and US state regulations, along with laws against unfair and deceptive trade practices. Regulatory bodies in both the EU and the US have indicated a readiness to enforce these new stipulations under existing frameworks.
Enforcement and Next Steps
In Colorado, violations of the AI law will be treated as unfair trade practices, with enforcement exclusively managed by the Attorney General’s office. Conversely, in the EU, enforcement responsibilities will be shared between the AI Office and member state authorities.
Given the impending implementation dates, companies are advised to begin compliance preparations now. Even ahead of the 2025 and 2026 deadlines, early compliance efforts can satisfy current regulatory expectations under the GDPR and various US state laws, mitigating exposure to claims of unfair and deceptive trade practices.
As AI continues to evolve and its integration into daily life deepens, these regulatory frameworks aim to safeguard public interests while fostering transparency, equity, and accountability in AI deployment.