Author/contributor

Richard Reice

Partner

Artificial intelligence software in human resources (HR) is no longer experimental; it is here, its use is growing, and so is the regulation governing it. In a new Employee Relations Law Journal article, partner Richard Reice maps the fast-moving patchwork of regulation now shaping how employers use AI-powered automated employment decision tools (AEDTs).

He explains that regulators are concerned that the growing use of AI-based AEDTs, and the limited human involvement in HR decisions that comes with it, may result in a form of automated employment discrimination.

Why this Matters

Lawmakers and regulators are focused on the risks in the data sets used to train AI, in model design and prompts, and in how outputs guide human decisions. Those risks sit on top of existing obligations under Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA).

That means regulators and plaintiffs’ lawyers will examine how your tools function and how your teams apply their outputs across the hiring lifecycle. So, if you hire, promote, or terminate with the help of software, the time to refresh your compliance program is now.

What’s Changing and When

  • New York City: lessons from Local Law 144

New York City’s rule requires bias audits and notices for some AEDTs. The law hinges on whether a tool “substantially assists” a decision. Many employers argue a human made the call, which narrows coverage in practice. Public reporting to date has been limited, and enforcement has been light; a 2024 CatLab/Consumer Reports review found only 18 of 391 employers posted audits.

  • Illinois (effective January 1, 2026)

Illinois amended its Human Rights Act to cover any employment use of AI that results in discrimination. Employers must give notice to applicants and employees, and using a ZIP code as a proxy for protected traits is barred. The state will set the mechanics of notice by rulemaking. This is broader than NYC's law and squarely aimed at fair hiring practices.

  • Colorado (effective June 30, 2026)

Colorado’s robust Artificial Intelligence Act governs “high-risk” AI used for “consequential decisions,” including hiring, promotion, and termination. Developers and deployers must use reasonable care, maintain a risk-management program, run impact assessments on a set cadence and after material changes, give clear notices, allow people to opt out of AI data processing, and provide explanations for adverse actions with a path to appeal or correct information.

  • California (effective October 1, 2025)

Although not covered in the original article, it is worth noting that new California regulations took effect on October 1, 2025. These rules clarify how the state’s anti-discrimination laws apply to automated decision systems in employment. They make clear that using an AI tool can violate the law if it harms applicants based on protected characteristics, and they require employers to retain automated-decision data for at least four years.

  • European Union (key duties start August 2, 2027)

The EU Artificial Intelligence Act treats many HR uses as “high-risk,” requiring data governance, documentation, human oversight, and transparency. Bans already apply to workplace emotion recognition, social scoring, and certain biometric categorization. Timelines and guidance are still shaking out, which complicates global rollouts.

What this Means for Your Program

Two fronts of exposure are converging: compliance with new AI-specific rules and traditional discrimination claims involving AEDT-assisted decisions. Litigation teams will probe datasets, prompts, human-in-the-loop controls, and statistical methods. That raises the bar for documentation and defensibility across your HR AI stack.

A focused plan you can implement now:

  1. Inventory AEDTs across the HR lifecycle. Record purpose, inputs, model versions, decision points, and where humans step in. Keep versioning and change logs. This is the backbone for responding to bias, notice, and audit requests.
  2. Stand up repeatable bias testing. Define metrics, test on a cadence, log findings, and track remediation. Aim to detect and reduce AI hiring bias and other forms of algorithmic bias in employment; the NIST AI Risk Management Framework provides a helpful structure (see the sketch after this list for one common screening metric).
  3. Build notices and rights workflows. Draft plain-language templates for Illinois-style notices and Colorado-style disclosures, opt-outs, adverse-action explanations, and appeals. Train recruiters and HR ops to use them consistently.
  4. Run impact assessments. In Colorado, complete one yearly and within 90 days of material changes; capture datasets used, performance metrics, and mitigation steps. Treat these as living records, not one-off exercises.
  5. Scrub proxy variables. Remove inputs like ZIP code that can act as stand-ins for protected traits and undercut fair hiring practices.
  6. Tighten vendor obligations. Require transparency on training data, system limits, known risks, change control, and cooperation on audits and investigations. Bake these into contracts with AEDT providers.
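
For item 2, one concrete way to frame bias testing is the impact-ratio metric used in NYC Local Law 144 bias audits and echoed in the EEOC’s four-fifths rule of thumb: compare each group’s selection rate to the highest group’s rate and flag large gaps. The sketch below is a minimal, hypothetical illustration, assuming you can export the tool’s pass/fail outcomes by demographic category; the group labels, counts, and the GroupOutcome/impact_ratios names are placeholders, not part of any statute, regulation, or vendor API.

```python
# Minimal sketch: compute per-group selection rates and impact ratios for an
# AEDT's outcomes, flagging groups below the four-fifths (0.80) screening
# threshold. All category labels and counts are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class GroupOutcome:
    category: str   # demographic category as defined for your audit
    selected: int   # candidates the tool advanced or scored favorably
    total: int      # all candidates in the category assessed by the tool

def impact_ratios(groups: list[GroupOutcome], threshold: float = 0.80) -> None:
    # Selection rate = selected / total for each category with any candidates.
    rates = {g.category: g.selected / g.total for g in groups if g.total > 0}
    top_rate = max(rates.values())
    for category, rate in sorted(rates.items()):
        ratio = rate / top_rate  # impact ratio vs. the highest-rate group
        flag = "REVIEW" if ratio < threshold else "ok"
        print(f"{category:<10} rate={rate:.2%} impact_ratio={ratio:.2f} [{flag}]")

if __name__ == "__main__":
    # Hypothetical counts for illustration only.
    impact_ratios([
        GroupOutcome("Group A", selected=120, total=300),
        GroupOutcome("Group B", selected=80, total=260),
        GroupOutcome("Group C", selected=45, total=200),
    ])
```

In practice, counsel and your audit vendor should define the categories, thresholds, and statistical tests appropriate to each jurisdiction; the four-fifths figure is a screening heuristic, not a legal safe harbor.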

Bottom Line

The rules don’t match across jurisdictions, yet the expectation is clear: Build transparent, tested, and well-governed AEDT processes that support fair hiring practices. A disciplined approach now will reduce risk as enforcement ramps up in 2026 and beyond.

Download the full article (PDF): New Regulations on the Use of Artificial Intelligence Software in Employment Are Likely to Generate Hardwired Confusion by Richard M. Reice.
