AI is ripe for regulation, especially in the workplace. If only we could agree on what exactly needs to be regulated.
A lack of understanding of AI in the workplace. Two-thirds of US adults say they would not want to apply for a job if artificial intelligence were used in the hiring process, according to recent data from Pew Research. However, the report also found low public awareness of how AI is currently used in hiring, with 61% of respondents having heard nothing about AI being used this way.
Last month, Sundar Pichai appeared on 60 Minutes in an attempt to explain the potential dangers of AI.
“All of us in the field call it…a ‘black box,’” he said. “You can’t quite tell why it said this, or why it got [something] wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.”
Pichai defended ongoing progress in the space.
“The revolution in artificial intelligence is the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the optimistic middle, introducing AI in steps so civilization can get used to it,” he said.
“If [I] take a 10-year outlook, it is so clear to me, we will have some form of very capable intelligence that can do amazing things. And we need to adapt as a society for it,” he later stated.
Some linguistics professors and AI experts have criticized Pichai’s framing of AI as inevitable, arguing that it shifts the burden onto society to figure out what to do with this emerging technology and its potential threats.
Opposition of many kinds. In March, a group that includes Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp, and Elon Musk, along with a slew of professors and researchers, released the “AI pause” letter, warning that AI presents significant risks to humanity and calling for a six-month moratorium on “the training of AI systems more powerful than GPT-4.”
Timnit Gebru, the former Google AI ethics researcher now working in public-interest research, agrees that AI poses risks to society but argued in Current Affairs that the letter leans into “longtermism,” the tendency to focus on hypothetical future threats instead of current harms.
Gebru’s organization, the Distributed AI Research Institute (DAIR), released a response to the pause letter, calling for greater accountability and attention to the negative impact of AI tools that are currently in use.
“The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability, and preventing exploitative labor practices,” said the authors in the DAIR letter.
“It’s important that the onus be on the corporations to show us these things before deployment, rather than understaffed agencies auditing or inquiring about them after the fact,” Gebru told Politico.
Regulation is coming. Aiming to address fears about AI and its potential problems, the EEOC has launched an “algorithmic fairness” initiative and published guidance for employers on staying compliant with the Americans with Disabilities Act (ADA).
Meanwhile, the City of New York has set a date for enforcement of its AI transparency laws. Some states have mentioned AI in consumer privacy laws, and the White House has requested public input around the use of AI in the workplace.
“When I read these laws…I think it’s a call from the government for responsibility. What [they] want is to prevent the situation where these black-box solutions come in and make life-changing decisions,” Shay David, co-founder, chairman, and CEO of software company Retrain.ai, said in a webinar. “When they go unchecked, some of these systems make errors, and some of the errors are not explainable…A big part of what this law and others are trying to achieve is a level of transparency.”—AK