There are fewer than three weeks until January 1, 2023—otherwise known as D-Day for New York City’s first-of-its-kind law requiring audits of the automated technology used to make employment decisions.
But as it happens, the city needs more time to iron out the details.
The Department of Consumer and Worker Protection (DCWP), the agency tasked with creating the final rules, announced on Monday it will not enforce the law—and HR teams with operations in the Big Apple won’t have to audit their tech—until April 15, 2023.
To recap…In December 2021, New York City lawmakers passed the automated employment decision tools law, which will require employers to conduct third-party bias audits of hiring technology before using it, as well as inform candidates or employees when they do use AI in employment decisions.
What does it mean? Good question. When the law passed, it was generally received as well-intentioned. But as HR has prepared to comply, some have been tearing their hair out trying to decipher what compliance might entail.
Questions abound over many aspects of the law, from which tools fall under the umbrella of “automated employment decision tools,” to who qualifies as an independent auditor, to how to conduct the audit in the first place—a question which, as HR Brew has reported, currently lacks consensus.
This fall, the DCWP attempted to clarify by releasing proposed rules—but this resulted in more confusion and a “high volume” of comments, including some from SHRM’s chief of staff Emily Dickens.
“The purpose of this law will be served best where employers have clear guidance as to when the law is triggered—which tools, processes, or systems are covered—and what, specifically, employers need to do to comply,” Dickens wrote in a comment to the department.
Due to the feedback, the agency will hold a second public hearing and postpone final rulemaking.
Why it matters. Nearly 1 in 4 US employers surveyed by SHRM in early 2022—and 42% of those with more than 5,000 employees—said they used some form of AI in their employment processes.
Though the practice is relatively common, regulation is not. That’s a problem, said Ryan Carrier, executive director of ForHumanity, a nonprofit developing an independent auditing system for AI and one of the original advocates of NYC’s law. Evidence has suggested that some AI tools can produce biased outcomes on the basis of race, gender, or disability.
“We know there’s bias in [the tools]—plain and simple,” Carrier said. Demonstrating such bias is usually left up to candidates or employees, but this law would flip the script and require employers to prove their technology is unbiased.
Get it right. Carrier doesn’t see the postponement as a major setback. He applauded the spirit of the law and said that the original timeline of January 1 was “aggressive.”
“I’ll be pleasantly surprised if they make the April 15 deadline,” he said. “It could shift until later in the year, and, honestly, we’re okay with that. We’d rather see a good set of rules implemented, and be actionable and well understood by a large portion of [the] marketplace.”—SV
Do you work in HR or have information about your HR department we should know? Email [email protected]. For completely confidential conversations, ask Susanna for her number on Signal.