Colorado is about to change how companies hire. Starting June 2026, it becomes the first US state to require employers to disclose when they’re using AI in hiring decisions and to conduct mandatory bias audits on automated recruitment tools.
The law targets the growing but largely invisible use of AI in resume screening, interview scoring, candidate ranking, and other hiring processes. If you’ve applied for a job recently, there’s a good chance AI influenced whether your application was seen by a human. Colorado is saying: applicants deserve to know, and companies need to prove their tools aren’t discriminating.
What the Law Requires
Colorado’s AI hiring regulation has two core mandates:
Transparency: Employers must disclose to job applicants when AI is being used to evaluate, score, or rank them. This applies to resume screening algorithms, automated interview analysis, chatbot pre-screening, and any other AI-driven assessment tool.
Bias Audits: Companies using automated decision-making tools in hiring must conduct regular audits to check for discriminatory outcomes. These audits examine whether the AI systems produce disparate impacts across protected categories — race, gender, age, disability, and other characteristics covered by existing anti-discrimination law.
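A common statistical screen used in audits of this kind is the four-fifths (80%) rule from the EEOC's Uniform Guidelines: if any protected group's selection rate falls below 80% of the highest group's rate, that is treated as evidence of adverse impact. A minimal sketch of the check, assuming per-group applicant and selection counts are available — the Colorado law does not prescribe this specific test:

```python
# Four-fifths (80%) rule check for adverse impact, per the EEOC's
# Uniform Guidelines. Illustrative only; Colorado's law does not
# mandate a particular audit methodology.

def selection_rates(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (applicants, selected)."""
    return {g: selected / applicants for g, (applicants, selected) in counts.items()}

def four_fifths_check(counts: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """True if the group's selection rate is at least 80% of the best group's rate."""
    rates = selection_rates(counts)
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical audit data: (applicants, selected) per group.
counts = {"group_a": (200, 60), "group_b": (180, 36)}
print(four_fifths_check(counts))
# group_a rate 0.30, group_b rate 0.20; 0.20 / 0.30 is below 0.8, so group_b is flagged
```

In practice auditors also apply statistical significance tests alongside the ratio, since small samples can produce large ratio swings by chance.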
The law doesn’t ban AI in hiring. It brings it out of the shadows and subjects it to the same accountability standards that apply to human decision-makers.
Why It Matters Now
AI hiring tools have proliferated far faster than regulation has kept up. An estimated 70-80% of large employers now use some form of automated screening, from keyword-based resume filters to AI-powered video interview analysis that scores candidates on facial expressions, word choice, and speech patterns.
The problem is well-documented:
- Resume screening tools have been found to penalize names associated with minority ethnic groups
- Video interview AI has been shown to rate candidates differently based on lighting, camera angle, and skin tone
- Keyword-based filters systematically disadvantage career changers, non-traditional candidates, and people with gaps in employment
- Automated ranking systems can encode historical hiring biases into future decisions, creating feedback loops
Most applicants never know AI played a role in their rejection. Colorado’s law changes that.
The Regulatory Precedent
Colorado is first, but unlikely to be alone for long. Several states have introduced similar legislation, and the pattern mirrors what happened with data privacy regulation after California passed CCPA in 2018 — one state leads, others follow, and eventually federal action becomes more likely.
Jurisdictions with AI hiring rules already enacted or in progress include:

- Illinois — already requires notice for AI video interviews (2020 law), now considering broader bias audit requirements
- New York City — Local Law 144 requires bias audits for automated employment decision tools (effective 2023, enforcement ongoing)
- California — multiple bills under consideration covering AI in employment
- New Jersey, Massachusetts, Vermont — various stages of legislative drafting
The federal government has also signaled interest. The EEOC released guidance on AI hiring bias in 2022, and bipartisan support exists for some form of algorithmic accountability framework. Colorado’s law gives lawmakers a working model to study.
What Companies Need to Do
For employers using AI hiring tools — which is most large employers — Colorado’s law creates concrete compliance obligations:
- Audit your tools. Before June 2026, companies need bias audits for any AI system involved in hiring decisions. That means engaging outside auditors or building internal audit capability.
- Update disclosures. Job postings and application processes need to state clearly when AI is involved in evaluation. Vague language won't suffice.
- Document decision processes. Companies need to be able to explain how AI tools influenced specific hiring decisions if challenged.
- Prepare for multi-state compliance. If other states adopt similar laws with different requirements, companies operating nationally face a patchwork of rules. Building to the strictest requirement may be the pragmatic approach.
- Review vendor contracts. Many companies use third-party AI hiring tools, but the law puts compliance obligations on the employer, not the vendor. Employers need to confirm their vendors can support bias audit requirements.
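For the documentation obligation, one workable pattern is to log a structured record every time an automated tool influences a decision, so individual outcomes can be reconstructed if challenged. A minimal sketch; the schema and field names here are assumptions for illustration, not anything the law specifies:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One automated-tool contribution to a hiring decision (hypothetical schema)."""
    applicant_id: str
    tool_name: str          # e.g. the vendor's resume-screening product
    tool_version: str       # pin the version: audits apply to specific releases
    inputs_summary: str     # what data the tool considered
    output: str             # the score, rank, or recommendation produced
    human_reviewed: bool    # whether a person saw the result before any action
    timestamp: str          # UTC, ISO 8601

    @classmethod
    def create(cls, applicant_id, tool_name, tool_version, inputs_summary,
               output, human_reviewed):
        return cls(applicant_id, tool_name, tool_version, inputs_summary,
                   output, human_reviewed,
                   datetime.now(timezone.utc).isoformat())

# Hypothetical usage: record what a screening tool contributed.
record = AIDecisionRecord.create(
    "app-1042", "ResumeRanker", "2.3.1",
    "resume text, years of experience", "rank 14 of 212",
    human_reviewed=True,
)
print(asdict(record))
```

Whatever the exact schema, capturing the tool version and whether a human reviewed the output is what makes a later audit or applicant inquiry answerable.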
The Bigger Debate
Colorado’s law sits at the intersection of two competing pressures. On one side: the legitimate need for transparency and accountability when algorithms make life-altering decisions about people’s careers. On the other: concerns that regulation could slow AI adoption, increase hiring costs, and create compliance burdens that disadvantage smaller employers.
There’s also the question of whether bias audits actually work. Auditing AI systems for bias is technically challenging — outcomes depend heavily on what data you test with, what thresholds you set, and how you define fairness. A system that passes one audit framework might fail another. The law requires audits but doesn’t yet specify a single standard.
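That ambiguity is easy to demonstrate. In the toy audit below, two groups are selected at identical overall rates, so the tool passes a demographic-parity check like the four-fifths rule, yet qualified candidates in one group are selected noticeably less often, so it fails an equal-opportunity check on the very same outcomes. All numbers are invented for illustration:

```python
# Two fairness definitions disagreeing on the same hiring outcomes.
# Each record is (group, qualified, selected); all values hypothetical.
records = (
      [("A", True, True)] * 5 + [("A", True, False)] * 1 + [("A", False, False)] * 4
    + [("B", True, True)] * 5 + [("B", True, False)] * 3 + [("B", False, False)] * 2
)

def rate(rows):
    """Fraction of rows that were selected."""
    return sum(sel for _, _, sel in rows) / len(rows)

groups = {g: [r for r in records if r[0] == g] for g in ("A", "B")}

# Demographic parity: overall selection rate per group.
sel = {g: rate(rows) for g, rows in groups.items()}      # A: 0.5, B: 0.5
parity_ratio = min(sel.values()) / max(sel.values())     # 1.0 -> passes the 80% rule

# Equal opportunity: selection rate among qualified candidates only.
tpr = {g: rate([r for r in rows if r[1]]) for g, rows in groups.items()}
tpr_ratio = min(tpr.values()) / max(tpr.values())        # 0.625 / 0.833... = 0.75 -> fails

print(parity_ratio >= 0.8, tpr_ratio >= 0.8)  # True False
```

The same tool, the same decisions: one audit framework signs off, the other flags it. Which definition a regulator or auditor adopts materially changes who passes.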
That said, imperfect transparency beats zero transparency. Right now, most AI hiring tools operate with no oversight at all. Requiring companies to at least look for bias — and tell applicants when AI is being used — is a meaningful improvement over the status quo.
For Job Seekers
If you’re applying for jobs in Colorado after June 2026, you have new rights:
- You must be told when AI is evaluating your application
- You can request information about what data the AI considered
- Companies must have conducted bias audits on their tools
These rights don’t exist in most other states yet. But they will. And knowing they’re coming gives applicants everywhere a reason to start asking questions now — even where the law doesn’t yet require answers.