
HR plays a critical role in creating AI policy and deploying the tech at work

People leaders should help shape how businesses automate tasks and coach employees on using the technology.

Amelia Kinsinger


Much like welcoming a firstborn into the world, some things are hard to fully prepare for, and there are myriad competing recommendations and advice about how to do it right.

Unlike childbirth, AI technology hasn’t been around since before the protohuman, so it’s unclear if we even know what is right. That hasn’t stopped people pros from being tapped to articulate policy on using AI at work or to work on a plan for how to incorporate these new tools into workflows.

“Almost every CEO, HR director, CTO is being asked by their boards of directors and their colleagues: What is our GenAI game plan? They’re all scrambling to develop one,” said Erik Brynjolfsson, director of the Stanford Digital Economy Lab at the Stanford Institute for Human Centered AI and the Department of Economics. “I see a chaotic landscape out there that’s filled with confusion and buzzwords, and that’s a tough situation to be making decisions.”

The AI task analysis

Brynjolfsson, who also co-founded Workhelix, a tech company that works with employers to identify opportunities for AI adoption, told HR Brew that these decisions must be cross-functional. HR pros should work with IT leaders and other senior execs. AI technology affects every part of the organization, he said, so decisions should also be data-driven.

“We developed what we call the task-based analysis,” Brynjolfsson said of Workhelix. “Take any organization, you can look at the occupations in that organization and then you look at each occupation, and an occupation is a bundle of tasks.”

Workhelix identifies and bundles more than 200,000 tasks associated with different occupations and found that “upwards of 40% of the tasks in most organizations have a potential to be significantly augmented with generative AI or other tools.” The evaluation considers a task augmentable if relying on AI more than doubles productivity on it.

The result can be a roadmap for employers looking for automation opportunities, helping them identify where to put resources amid the AI transformation. According to Brynjolfsson, Workhelix’s task-based analysis can tap into a company’s HRIS or HCM platforms to run its analysis, but the models also rely on other data sources like LinkedIn and job postings.
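As a rough illustration of the task-based idea, here is a minimal sketch in Python. The occupations, tasks, and productivity multipliers are all invented for illustration; Workhelix’s actual methodology and data are not public in this article.

```python
# Hypothetical sketch of a task-based analysis in the spirit of the one
# Brynjolfsson describes: treat each occupation as a bundle of tasks, and
# flag a task as "augmentable" if AI is estimated to more than double
# productivity on it. All names and numbers below are made up.

OCCUPATIONS = {
    "recruiter": {
        "screen resumes": 3.0,        # estimated productivity multiplier with AI
        "conduct interviews": 1.1,
        "draft outreach emails": 2.5,
    },
    "hr generalist": {
        "answer policy questions": 2.2,
        "mediate disputes": 1.0,
        "summarize survey feedback": 4.0,
    },
}

AUGMENTABLE_THRESHOLD = 2.0  # "more than doubles productivity"

def augmentable_share(occupations):
    """Return the share of tasks whose estimated multiplier exceeds the threshold."""
    multipliers = [m for bundle in occupations.values() for m in bundle.values()]
    hits = sum(1 for m in multipliers if m > AUGMENTABLE_THRESHOLD)
    return hits / len(multipliers)

print(f"{augmentable_share(OCCUPATIONS):.0%} of tasks look significantly augmentable")
```

On this toy data, four of six tasks clear the doubling threshold, which is how an analysis like this could arrive at a figure such as the “upwards of 40%” Brynjolfsson cites.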

Where does AI fit in?

When looking to figure out the role AI is going to play in a company, Brynjolfsson recommends avoiding a one-size-fits-all approach.

“You really want to know where your relative strengths and weaknesses are and where your biggest opportunities are,” he said.

Quick-to-read HR news & insights

From recruiting and retention to company culture and the latest in HR tech, HR Brew delivers up-to-date industry news and tips to help HR pros stay nimble in today’s fast-changing business environment.

The tech stack isn’t the only consideration when thinking through the implications of AI at work. Compliance concerns and industry best practices also need to be factored into decision-making and policy.

“I think especially smaller employers are nervous about using the [generative AI] tools because they’re not sure how they can do it without running afoul of the legal landscape,” Gunderson Dettmer labor attorney Natalie Pierce told HR Brew in December.

Major HR tech companies ADP, Indeed, LinkedIn, and Workday teamed up with the Future of Privacy Forum last fall to outline six “cornerstones” for AI deployment.

Principles that companies and their HR teams should be mindful of, according to the framework, include:

  • Prioritizing non-discrimination
  • Responsible governance
  • Transparency
  • Data security and privacy
  • Human oversight
  • Review procedures

Staying compliant with AI

Back in December, Pierce also pointed to new laws and regulations governing AI’s use that could affect how companies deploy the technology and keep it compliant.

Pierce suggested that when leveraging generative AI in work settings, companies should use neutral prompts, avoid signaling preferences, and guide users to ask open-ended questions. She also recommended prompting generative AI tools to perform their own blind assessments, and telling them to avoid bias in their responses.
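One way that guidance could look in practice is a reusable prompt template. This is illustrative only: the wording and structure below are assumptions, not a legal standard or a vendor API.

```python
# Illustrative template encoding the prompting guidance above: stay
# neutral, ask open-ended questions, request a blind assessment, and
# explicitly instruct the model to avoid bias. The exact phrasing is
# invented for this sketch.

def build_neutral_prompt(open_ended_question, material):
    """Compose a neutral, bias-aware prompt around an open-ended question."""
    return "\n".join([
        "Perform a blind assessment of the material below.",
        "Do not infer or rely on age, gender, race, or other protected traits.",
        "Avoid bias in your response and do not assume any preferred outcome.",
        f"Question (open-ended): {open_ended_question}",
        "--- material ---",
        material,
    ])

prompt = build_neutral_prompt(
    "What strengths and gaps does this work sample show?",
    "Sample project summary goes here.",
)
print(prompt)
```

Note the question is phrased to invite analysis rather than a ranking or preference, which is the thrust of the recommendation.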

“[AI] is changing fundamentally the nature of the skills and the tasks that people are doing. So many of them now are being augmented by these technologies, and it requires a very different kind of skill set,” Brynjolfsson said. “It just ultimately changes [how we] work, so this is a deeper, more lasting transformation.”

Brynjolfsson said this transition in the economy will reveal “winners and losers,” so tech procurement and people policy need to be developed carefully.

Cassie Kozyrkov, CEO of Data Scientific, suggested asking three basic questions of any new tech:

  • What is the AI tool going to be used for?
  • What data will it be relying on to accomplish that?
  • How will you know that it’s working?

“Not all experts will agree, but in my opinion, the safety nets of your system are more important than the algorithm’s quality. So, focus a lot on safety nets, then at least you’ll throttle it before it does something bad if the quality is not so good,” Kozyrkov said.
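Kozyrkov’s “safety nets over algorithm quality” point can be sketched as a thin wrapper that throttles risky output to human review before it reaches anyone. The checks, thresholds, and policy terms below are invented for illustration; a real deployment would define its own.

```python
# Minimal sketch of a "safety net" around a generative AI call: if the
# model is unsure, or the draft trips a policy check, route it to human
# review instead of releasing it. All thresholds and terms are hypothetical.

BLOCKED_TERMS = {"guaranteed hire", "protected class"}  # hypothetical policy list

def guarded_generate(model_fn, prompt, min_confidence=0.7):
    """Call model_fn, but throttle risky output to human review."""
    text, confidence = model_fn(prompt)
    if confidence < min_confidence:
        return ("needs_human_review", text)
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return ("needs_human_review", text)
    return ("ok", text)

# Stand-in for a real model client, returning (text, confidence):
def fake_model(prompt):
    return "Draft answer about interview scheduling.", 0.55

status, draft = guarded_generate(fake_model, "Summarize this policy.")
print(status)  # low confidence, so the draft is throttled to review
```

The point of the design is that even a mediocre model does limited damage, because the wrapper, not the algorithm, decides what ships.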

