
What “No robo bosses” could mean for employee data and privacy security

A California bill is primed to add to the patchwork of state rules governing AI at work.

A robot dressed in a business suit with a grid of binary code behind it

Francis Scialabba


A California Democrat last month introduced the No Robo Bosses Act (SB 7) in the statehouse’s upper chamber; the bill would require human oversight of automated decision-making systems (ADS) in the workplace. The move comes as state and local governments and businesses alike grapple with a federal vacuum on AI legislation amid uncertainty in Washington.

The Golden State bill, sponsored by state Sen. Jerry McNerney, would bar employers from relying solely on automated decision-making systems, without human oversight, when making “hiring, promotion, discipline, or termination decisions.” The bill also prohibits AI that uses a worker’s personal information for predictive analytics, a key measure applauded by data privacy pros.

“Businesses are increasingly using AI to boost efficiency and productivity in the workplace. But there are currently no safeguards to prevent machines from unjustly or illegally impacting workers’ livelihoods and working conditions,” McNerney said in a press release announcing the bill. “SB 7 does not prohibit ADS in the workplace, rather it establishes guardrails to ensure that California businesses are not operated by robo bosses…”

Colorado, Illinois, and New York City have already passed state or local AI measures that include rules on using ADS. If enacted, California’s law would add to the patchwork of regulations governing AI use by small- and medium-sized businesses and corporate America.

AI experts say many protective measures included in the bill align with early industry best practices:

  • Protect against bias
  • Deploy with transparency
  • Protect privacy and data
  • Include human oversight
  • Outline review policies

“To a lot of my peers in privacy and AI governance more generally [the measure is] just really common sense,” said Ron De Jesus, field chief privacy officer at Transcend, a privacy compliance software company. “The laws aren’t trying to impede innovation. They’re not requiring extremely burdensome administrative exercises to be done.”

Privacy and data transparency. The measure would also require companies to be transparent about their AI decision-making systems, and would require employers to let workers access the data an ADS collects or uses about them and to correct errors in that data, according to the bill.

Quick-to-read HR news & insights

From recruiting and retention to company culture and the latest in HR tech, HR Brew delivers up-to-date industry news and tips to help HR pros stay nimble in today’s fast-changing business environment.

“Employees should know how these technologies are going to be affecting their privacy, and how they are assessed in the workplace, and they need to have choices around that,” De Jesus said, pointing to the “common sense” requirement for an appeals process.

Salary history, performance reviews, and private information could all be used to train these AI systems, De Jesus added. Employees should have the right to know what information of theirs is used in AI systems, especially if it is being used for a purpose different from the one they originally consented to.

He also noted that employees provide employers with Social Security numbers, bank information, employment history, skills, and education in order to gain employment or to receive benefits. But if that data is put to a different use, such as training a new in-house AI system, companies should be having those conversations now.

De Jesus recommended that HR teams, along with IT and HRIS leaders, ask for a privacy impact assessment before procuring new AI tools that will affect workers, so they understand the risks involved in implementing such a system.

The No Robo Bosses Act also explicitly prohibits AI systems from learning or obtaining a worker’s “immigration status, veteran status, ancestral history, religious or political beliefs, health or reproductive status, history, or plan, emotional or psychological state, neural data, sexual or gender orientation, disability, criminal record, credit history, or statuses protected under Section 12940 of the Government Code,” categories of information through which bias could enter.

“I think the general attitude in companies is: the more data the better, and that’s something that we need to start having a conversation with our executive teams around,” De Jesus said. “What’s the balance between respecting employee and consumer privacy [and] something that might be making a lot of money and profit?”
