
Making sure you’re prepared for AI in HR? We’ve got you covered

HR needs to understand AI-powered tools in order to guide policy about how they're deployed at work.


Anyone who has ever used a social media application knows that people adopt new tech at different speeds: Those enjoying their golden years may have never left Facebook, whereas many Gen Z users have flocked to TikTok.

Similarly, HR teams are sizing up new AI tech at different speeds, and governments are regulating it at different paces, too.

Even though automation and AI tools are not new, they have moved to the forefront of discourse, especially after the launch of ChatGPT, according to John Rood, founder and CEO of Proceptual, a firm working with companies to navigate emerging regulation on automated hiring systems.

“Those [generative AI] products have put that idea of AI and automation into the mainstream in a way that wasn’t very accessible to the average person before,” Rood said. “We’ve had automation in HR…The technology [around] that was advancing over time, but what takes that to the next level is an average person being able to see how the tools work.”

Rood said that there’s “two sorts of people”: Those who know AI is developing with haste and are preparing for incoming regulation, and those who will be caught by surprise “as these tools come online much more rapidly.”

So, whether you are deploying AI tools now, plan to in the future, or aren’t sure, preparation is necessary, especially in hiring. Rood suggested that HR and TA teams think about AI in the context of three pillars when putting AI into practice ethically and legally: transparency, explainability, and bias.

Transparency. HR needs to be transparent with candidates that an algorithm is helping along the hiring process.

“Our candidates—in my opinion—deserve to know how their résumés are being read and ultimately how their candidacies are being ranked,” Rood said. “The laws that are coming online in New York and other places…all, to my knowledge, include some discussion of transparency about the use of a particular algorithm.”

Explainability. HR pros, Rood said, should seek to understand how and why an algorithm makes its decisions, and confirm that those decisions are made for the right reasons.

“We feed [AI] data. It trains on that data, then we feed it your résumé [or] my résumé, and then it makes the decision,” Rood said. “How it makes that decision is really important, particularly in the HR context, because we have so much regulation around protected classes, around reducing discrimination [and] reducing bias.”

Bias. HR teams may be on the hook to make sure that the systems don’t violate the rights of protected classes, so how developers build and train the AI algorithms is incredibly important.


“The challenge with AI is that we’re feeding these systems massive amounts of data,” he said. “If that bias is contained in the training data, we expect that the decisions the algorithm will make will replicate that bias.”
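To make the training-then-decision pipeline Rood describes concrete, here is a minimal, purely hypothetical sketch in Python: a toy screener that learns from historical decisions favoring candidates from one school will reproduce that preference on new candidates. The features, data, and threshold are all invented for illustration; real screening systems are far more complex, but the bias-replication mechanism is the same.

```python
# Purely hypothetical illustration: a toy screener "trained" on biased
# historical decisions reproduces that bias on new candidates.
# All features, data, and the threshold are invented for this sketch.

from collections import defaultdict

# Historical hiring decisions. Suppose past reviewers favored candidates
# from "school_a", regardless of actual qualifications.
history = [
    ({"school": "school_a"}, True),
    ({"school": "school_a"}, True),
    ({"school": "school_a"}, True),
    ({"school": "school_b"}, False),
    ({"school": "school_b"}, False),
    ({"school": "school_b"}, True),
]

# "Training": learn the historical advancement rate for each school.
outcomes = defaultdict(list)
for features, advanced in history:
    outcomes[features["school"]].append(advanced)
learned_rates = {school: sum(v) / len(v) for school, v in outcomes.items()}

# "Inference": advance any candidate whose school historically cleared a
# threshold. The bias in the training data has become a decision rule.
def screen(candidate, threshold=0.5):
    return learned_rates.get(candidate["school"], 0.0) >= threshold

print(learned_rates)                   # {'school_a': 1.0, 'school_b': 0.33...}
print(screen({"school": "school_a"}))  # True
print(screen({"school": "school_b"}))  # False
```

Here the proxy feature is explicit; in real systems the correlation can hide in subtler features, which is one reason audits examine outcomes rather than inputs.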

How to do this right. While bias may be unintentional, companies can still be held accountable, according to recent guidance from the US Equal Employment Opportunity Commission.

This month, New York City begins enforcing Local Law 144. The measure requires employers that use automated employment decision tools to conduct annual bias audits and post the findings publicly.
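For teams wondering what such an audit actually measures, Local Law 144 centers on impact ratios: the selection rate for each demographic category divided by the rate of the most-selected category. Below is a hedged sketch of that arithmetic in Python; the category labels and counts are invented, and a real audit must follow the law's published methodology and be conducted by an independent auditor.

```python
# Hypothetical illustration of the impact-ratio arithmetic at the heart of
# a bias audit. Category labels and counts are invented; a real Local Law
# 144 audit must follow the published methodology and be conducted by an
# independent auditor.

# (applicants, selected) per demographic category. Made-up numbers.
outcomes = {
    "category_a": (200, 60),
    "category_b": (180, 36),
    "category_c": (120, 30),
}

# Selection rate: the share of applicants in each category who advanced.
selection_rates = {cat: sel / apps for cat, (apps, sel) in outcomes.items()}

# Impact ratio: each category's rate relative to the highest rate.
highest = max(selection_rates.values())
impact_ratios = {cat: rate / highest for cat, rate in selection_rates.items()}

for cat in outcomes:
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
# category_a: selection rate 0.30, impact ratio 1.00
# category_b: selection rate 0.20, impact ratio 0.67
# category_c: selection rate 0.25, impact ratio 0.83
```

Under the EEOC's long-standing four-fifths rule of thumb, ratios below 0.80 are often treated as a signal of adverse impact worth closer review.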

Even if your business is not operating in NYC, it may be time to consider the issues at hand, Rood said. He expects a number of states and local governments to have similar AI measures on the books in the next 18 months, covering about half the population.

HR teams that rely on algorithmic hiring can use audits to make sure their practices are ethical and legal.

“The people that make the algorithms, and then, ultimately, the employers that use those algorithms have to be thoughtful about removing that bias,” Rood said, noting that some developers are better than others when it comes to popping the hood of their systems.

“If you’re using a vendor and they have some sort of AI component, you absolutely want to make sure you see whatever material they have about bias mitigation,” he said.

What about ChatGPT? Rood said that whether or not HR is adopting tools that rely on AI, employees are already using applications like ChatGPT in their work.

How the company governs that use will likely involve the HR team, which may work cross-functionally with executive leadership, legal, and even PR.

Guidance may include information on whether generative AI is permitted at work, what content employees can share with the tool, and when disclosure is required.

“The employee handbook has always fallen in the HR purview, and now every employer needs to be thinking about AI [and] what about generative AI in particular goes into that handbook,” Rood said.
