Artificial intelligence is transforming how employers recruit and evaluate talent. According to industry research, the percentage of HR leaders actively deploying generative AI jumped from 19% in mid-2023 to 61% by early 2025. While these tools promise efficiency and cost savings, they also introduce significant legal risks under federal anti-discrimination laws.
Understanding the Legal Framework
The Equal Employment Opportunity Commission (EEOC) has made clear that Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, and national origin, applies fully to AI-driven hiring decisions. Employers remain liable for discriminatory outcomes even when those outcomes result from algorithmic tools rather than human judgment.
The legal standard centers on disparate impact, established in Griggs v. Duke Power Co., 401 U.S. 424 (1971). Employment practices that appear neutral but disproportionately exclude protected groups violate Title VII unless shown to be job-related and consistent with business necessity. AI hiring tools can trigger liability when they systematically disadvantage applicants based on protected characteristics.
While Griggs remains controlling precedent for disparate impact liability under Title VII, the framework has come under increasing judicial scrutiny. For example, in Ricci v. DeStefano, 557 U.S. 557 (2009), the Supreme Court considered whether an employer could discard test results to avoid potential disparate-impact liability. The Court held that such actions are impermissible under Title VII unless the employer can demonstrate a “strong basis in evidence” that, had it not taken the action, it would have been liable under the disparate-impact statute. This reasoning is consistent with themes expressed in Chief Justice Roberts’s opinions addressing race-conscious decision-making. In a plurality opinion in Parents Involved in Community Schools v. Seattle School District No. 1, 551 U.S. 701, 748 (2007), he wrote, “The way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” More recently, in Students for Fair Admissions, Inc. v. President & Fellows of Harvard College, 600 U.S. 181, 206 (2023), Chief Justice Roberts reiterated this view: “Eliminating racial discrimination means eliminating all of it.”
The executive branch has recently undertaken measures to reshape civil rights enforcement. An April 2025 Executive Order entitled “Restoring Equality of Opportunity and Meritocracy” instructs federal agencies to “deprioritize enforcement of all statutes and regulations to the extent they include disparate-impact liability,” signaling a shift in how such claims are prioritized. Subsequently, an internal memorandum obtained by The Associated Press revealed that the EEOC plans to dismiss complaints based on disparate-impact liability. In these cases, the EEOC will reportedly send “Right to Sue” letters, leaving claimants to pursue their allegations without agency participation.
Common AI Bias Scenarios
Resume scanners prioritizing certain keywords may systematically exclude qualified candidates if keywords correlate with protected characteristics. Video interviewing software evaluating facial expressions and speech patterns may disadvantage individuals with disabilities or from different cultural backgrounds. Testing software providing “job fit” scores based on perceived “cultural fit” can perpetuate existing demographics rather than assessing job-related qualifications.
The Evolving Regulatory Landscape
A growing number of states and localities have enacted AI-specific regulations or proposed legislation. New York City Local Law 144 requires annual bias audits by independent auditors, with penalties up to $1,500 per violation.[1] California's Civil Rights Council finalized regulations effective October 2025 that prohibit employers from using automated decision systems that discriminate based on protected categories under the Fair Employment and Housing Act.[2] Illinois has enacted the Artificial Intelligence Video Interview Act, which regulates AI video interviews, requiring notice, explanation, and consent.[3] Colorado’s Senate Bill 205, effective June 30, 2026, requires employers using high-risk AI systems to conduct impact assessments, implement risk management policies, and offer human review of adverse decisions when technically feasible, as part of broader efforts to prevent algorithmic discrimination.[4]
At the federal level, the EEOC has previously taken steps to address AI discrimination. In May 2022, the agency filed its first AI-related lawsuit, EEOC v. iTutorGroup, Inc., et al., Civil Action No. 1:22-cv-02565, alleging age discrimination by an AI-powered hiring tool. The case settled for $365,000 and signaled the agency’s willingness to pursue claims involving algorithmic bias. The EEOC’s May 2023 guidance further clarified that employers are responsible for discriminatory outcomes under Title VII, even when AI tools are developed or deployed by third-party vendors. Although the agency has since shifted its enforcement priorities, these early actions and guidance continue to shape how employers assess risk and compliance in AI-driven hiring.
Compliance Best Practices
Employers can minimize legal exposure while still benefiting from AI hiring tools through strategic compliance measures. The following practices address some of the most critical risk areas:
- Conduct Regular Bias Audits. Test AI systems using diverse candidate pools before deployment and periodically thereafter to identify statistically significant disparate outcomes for protected groups.
- Review Vendor Contracts Carefully. Ensure contracts require vendors to validate tools for bias, provide access to audit data, and address liability for discriminatory outcomes. Many AI vendors disclaim liability for algorithmic bias, leaving employers exposed under Title VII even when using third-party tools.
- Train Your Team. Educate HR personnel and hiring managers about AI limitations, potential biases, and legal obligations under Title VII and state laws. Ensure they understand that AI recommendations require critical evaluation rather than automatic acceptance.
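To illustrate the bias-audit step above, the following sketch computes each group's selection rate and compares it against the highest-scoring group, in the spirit of the EEOC's longstanding "four-fifths" rule of thumb (29 C.F.R. § 1607.4(D)), under which a selection rate below 80% of the highest group's rate may indicate adverse impact. The group labels and counts are hypothetical, and a real audit would also apply formal significance testing; this is only a minimal first-pass screen.

```python
# Hypothetical adverse-impact screen based on the EEOC "four-fifths" rule
# of thumb: a group's selection rate below 80% of the highest group's rate
# may warrant further statistical review. All figures below are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the AI tool advanced to the next stage."""
    return selected / applicants

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit data: group -> (candidates advanced, total applicants)
outcomes = {"group_a": (48, 120), "group_b": (30, 110)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 does not itself establish liability, but it is the kind of statistically observable disparity that a pre-deployment and periodic audit should surface for legal review.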
Moving Forward
AI hiring tools offer genuine advantages in efficiency and candidate reach, but they demand careful implementation and ongoing monitoring. By conducting bias audits, maintaining human oversight, and ensuring transparency, employers can harness AI’s benefits while managing legal risks. As regulations evolve, proactive compliance will distinguish responsible employers from those facing costly litigation.
 Jim O'Brien is an associate at Goodell DeVries. He can be reached at jobrien@gdldlaw.com.
Goodell DeVries assists employers with policy reviews, AI compliance audits, and representation in EEOC discrimination proceedings. Contact us to learn how we can protect your organization from costly enforcement actions.
NOTES
[1] N.Y.C. Admin. Code § 20-870 et seq.
[2] Cal. Code Regs. tit. 2, § 11008.1 et seq.
[3] 820 Ill. Comp. Stat. Ann. 42/1 et seq.
[4] Colo. Rev. Stat. § 6-1-1701 et seq. Colorado Senate Bill 24-205 was passed during the 2024 regular session and signed into law by the Governor on May 17, 2024. During a 2025 special legislative session, a new bill, Senate Bill 25B-004, was introduced and passed, extending the implementation date of the law's key provisions.