
Does your company use AI in hiring? Better check your bias

New guidance clarifies that bias in AI tools may violate discrimination protections, and employers who use them could be held accountable.


The Equal Employment Opportunity Commission (EEOC) issued new guidance Thursday to help companies steer clear of violating Title VII of the 1964 Civil Rights Act when using AI and other software that relies on algorithmic analysis or machine learning.

The guidance clarifies that Title VII rules apply to an employer’s use of AI and similar technologies if the tech discriminates against protected groups in hiring, performance management, and monitoring.

“I’m not shy about using our enforcement authority when it’s necessary,” EEOC Chair Charlotte Burrows told the Associated Press about the new guidance. “We want to work with employers, but there’s certainly no exemption to the civil rights laws because you engage in discrimination some high-tech way.”

The guidance specifically addresses “disparate impact” protections in Title VII, which prohibit employers from using tests or procedures that disproportionately affect employees or candidates based on race, color, religion, sex, or national origin.

For example, if an applicant tracking system (ATS) screens applicants with an algorithm that has bias built in, the company using it could still be on the hook.
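To make “disparate impact” concrete, one conventional check is the EEOC’s long-standing four-fifths rule of thumb, which compares selection rates across groups. Below is a minimal sketch of that calculation in Python; the group labels and counts are hypothetical, and it is an illustration, not a compliance tool.

```python
# Illustrative disparate-impact check using the EEOC's "four-fifths"
# rule of thumb: a group whose selection rate falls below 80% of the
# highest group's rate is conventionally flagged for adverse impact.
# All numbers below are hypothetical.

# Hypothetical ATS screening outcomes per group.
screening_outcomes = {
    "group_a": {"applied": 200, "advanced": 96},  # 48% selection rate
    "group_b": {"applied": 150, "advanced": 45},  # 30% selection rate
}

selection_rates = {
    group: counts["advanced"] / counts["applied"]
    for group, counts in screening_outcomes.items()
}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {impact_ratio:.2f} -> {flag}")
```

The four-fifths rule is a rule of thumb rather than a legal bright line; a genuine audit would also weigh sample sizes and statistical significance.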

Take Amazon, which shelved a secret AI recruiting tool that had been trained to recognize patterns in resumes submitted to the company over a 10-year period. Because most of those resumes came from men, the tool developed a bias toward male candidates. This type of “disparate impact” on women candidates could be exactly what the EEOC is warning against.

“I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” Burrows said in a statement about the new guidance. “This technical assistance resource is another step in helping employers and vendors understand how civil rights laws apply to automated systems used in employment.”

AI and algorithmic bias aren’t new, but the use of software to assist HR with hiring, performance management, and productivity monitoring has increased.

“If the data that the model is being trained on is inherently biased data, it is going to generate biased output,” Christie Lindor, Bentley University professor and CEO of DE&I firm Tessi Consulting, told HR Brew. She helps companies assess AI use, making sure it’s aligned with ongoing DE&I strategies.
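Lindor’s point can be shown in miniature. The following sketch trains a toy keyword-scoring “model” on hypothetical historical hiring decisions in which one group was favored; the model duly learns to penalize a token that merely proxies for group membership. Every resume, token, and weight here is invented for illustration.

```python
from collections import defaultdict

# Toy "model": weight resume tokens by how often they appeared among
# past hires vs. past rejections. If the historical decisions were
# biased, the learned weights inherit that bias. All data is hypothetical.

historical_resumes = [
    ("python sql chess_club", True),
    ("python java golf_team", True),
    ("sql java chess_club", True),
    ("python sql womens_chess_club", False),   # biased past decision
    ("java sql womens_coding_group", False),   # biased past decision
]

hired_counts, rejected_counts = defaultdict(int), defaultdict(int)
for text, hired in historical_resumes:
    for token in text.split():
        (hired_counts if hired else rejected_counts)[token] += 1

def token_weight(token):
    # Positive if the token appeared more among hires than rejections.
    return hired_counts[token] - rejected_counts[token]

def score(resume_text):
    return sum(token_weight(tok) for tok in resume_text.split())

# Two equally qualified candidates; only a proxy token differs.
print(score("python sql chess_club"))         # prints 3
print(score("python sql womens_chess_club"))  # prints 0: penalized by the proxy token
```

Note that the scoring code is working exactly as designed; the bias lives in the training data, which is why audits of these tools have to reach the data they learned from.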

The EEOC noted that employers could still be held responsible for bias in AI and algorithmic tools they use, even if those tools are administered by a third-party vendor.

HR teams that use these types of vendors should find out how the models are trained and whether programmers and beta testers are diverse, Lindor said.

“Companies that [prioritize inclusivity] and use that as a core principle up front are going to be the winners, and the ones that are not, they’re … gonna have to fix…expensive mistakes,” she said.
