Bringing Responsible AI to the Hiring Process
Realizing the benefits of artificial intelligence requires awareness of its risks—and a plan to address them.
Artificial intelligence is supercharging the world of hiring, helping employers more efficiently and effectively write job descriptions, review applications and resumes—and ultimately, identify the right candidates.

However, as AI use becomes widespread, employers must tread carefully: Without proper human oversight and controls in place, the technology can create unintended consequences, like introducing bias into the hiring process and overlooking qualified applicants.

“AI models are typically trained on past employment data,” says Safiya Noble, faculty director of the Center on Race and Digital Justice at the University of California, Los Angeles. “Any discrimination that’s happened in the past that’s fed into the training model may be reproduced.”

For example, she says, men have traditionally been promoted more often than women into management positions. Such past trends can produce AI models that continue to recommend the hiring and promotion of men over their female colleagues. “We also know that people of color are often the last hired and first fired,” Noble says. “So that can end up being included while training your model too.”
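To see how this happens mechanically, consider a minimal sketch with entirely synthetic data, unconnected to any real hiring system or vendor: a screening model is trained on historical decisions that quietly favored one group, and its recommendations inherit the gap even though both groups are equally skilled. (The example assumes Python with NumPy and scikit-learn installed.)

```python
# Minimal illustration: a model trained on biased historical hiring
# decisions reproduces that bias. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)          # true ability: identical distribution for both groups
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B (invented labels)

# Historical decisions rewarded skill but also quietly favored group A.
hired = skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0.8

# Train a screening model on those past decisions; "group" stands in for
# any correlated feature, such as a ZIP code.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

preds = model.predict(X)
print(f"Recommended from group A: {preds[group == 0].mean():.0%}")
print(f"Recommended from group B: {preds[group == 1].mean():.0%}")
# The model replicates the historical gap because the labels encoded it.
```

Simply deleting the group column rarely fixes the problem, since proxies such as ZIP codes can carry the same signal, which is why the oversight practices described below matter.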
Fairness Drives Outcomes

Despite these risks, research suggests that many companies are already embracing AI in their human resources departments. The Indeed Global AI Survey, conducted in late 2023, found that 87% of HR and talent acquisition employees were using AI tools to some degree, and the majority of them said they believed the technology would improve the hiring process and make them more productive. At the same time, more than half (53%) were concerned it could lead to bias.

Who’s more optimistic about AI, HR leaders or job seekers? It’s a close call, but HR leaders are currently ahead in terms of buy-in: HR leaders, 54%; job seekers, 46%. (Source: Global AI Survey, Indeed, October 2023.)
But running AI recruitment tools unchecked can hurt qualified job seekers, says LaFawn Davis, chief people and sustainability officer at Indeed. This can happen, for example, when employers use AI-driven screening tools that emphasize credentials like four-year degrees: a common requirement that can filter out capable candidates, because a degree doesn’t always indicate ability. Davis felt the impact of this practice firsthand early in her career, after getting laid off from a tech startup during the dot-com bust of the early 2000s.
“I spent six months applying to jobs, knowing I had the relevant skills and experience, but I was turned away again and again because I didn’t have a college degree,” she says.
According to Davis, a fairer, more impactful way for employers to use AI would be to employ skills-first hiring that focuses on candidates’ abilities and relevant work performed, rather than the level of higher education they’ve attained or years of professional experience. Doing so helps everybody—both job seekers and employers—by increasing the talent pool and enabling people who’ve traditionally been disadvantaged in the hiring process to find work.
“If we let AI-based hiring systems run unchecked, we create more barriers to entry for disadvantaged groups—resulting in less inclusion and diversity in the workforce, more limited perspectives and weaker business performance,” Davis says.
There are many reasons employers should strive to remove biases from their hiring practices, says Miriam Vogel, chairperson of the National AI Advisory Committee and president and CEO of EqualAI, a Washington, D.C.-based organization that guides employers on using the technology ethically and responsibly. For one, biased AI can lead to hiring people with the same profile, background and experience—resulting in an unintentionally homogenous workforce. Multiple studies have found that gender- and racially diverse companies financially outperform less diverse ones, she says.
Adopting a Humanized Approach
To put responsible AI into practice, businesses should create a framework around its use and appoint someone in senior leadership who’s accountable for it, Vogel says. “If you don’t have a game plan laid out ahead of time, it’ll be too late by the time an incident occurs or harm develops.”
Human intervention and decision-making are also key. Before any system or tool is deployed, leaders must understand the context of the source data so that existing favoritism is not built into the model. Biases like preferring candidates who live in certain ZIP codes can create equity issues in the hiring process. Broadening the training data to promote inclusivity across the talent pool is one way to reduce this risk, Noble says.
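One concrete form this human oversight can take is a routine audit of screening outcomes. The sketch below is a hypothetical helper, not any vendor’s actual tooling, applying the EEOC’s four-fifths rule of thumb: if one group’s selection rate falls below 80% of the highest group’s rate, the tool’s results warrant review.

```python
# Hypothetical audit helper: flag possible adverse impact in screening
# outcomes using the four-fifths (80%) rule of thumb. Illustrative only.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, passed_screen: bool) pairs."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in records:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def four_fifths_check(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    # Impact ratio: each group's rate relative to the most-selected group.
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Example with made-up numbers: group B passes the screen far less often.
records = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 35 + [("B", False)] * 65
)
for group, (ratio, ok) in four_fifths_check(records).items():
    print(f"Group {group}: impact ratio {ratio:.2f} -> {'OK' if ok else 'REVIEW'}")
```

A failed check doesn’t prove discrimination on its own, but it tells human reviewers exactly where to look before a screening tool does further harm.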
Trey Causey, Indeed’s senior director of responsible technology, says that while companies must address concerns about AI’s use as an employment tool, those that take a thoughtful approach stand to see significant benefits and productivity gains. The Indeed survey found HR and talent leaders’ biggest hope was that AI would free them to spend more time on complex and important work. Seventy-two percent said they were optimistic AI tools would allow them to focus on the more human elements of their job, while 75% said the technology would limit the need to carry out redundant or mundane tasks.

To help propel the technology within its own workforce in a thoughtful yet impactful way, Indeed created its own AI principles in 2020 to continuously evaluate and update its hiring processes through a lens of inclusion. “Responsible AI is all about recognizing that we all have a role to play in how AI is developed and used—we don’t have to just let technology happen to us,” Causey says.
Companies must not allow the technology’s limitations to become a barrier to adoption. Rather, they should embrace AI’s value while understanding the important role humans play in overseeing it.
“When humans work together with AI systems,” Causey says, “they are much more successful than either in isolation.”
Indeed makes hiring faster, simpler and more human.
Custom Content from WSJ is a unit of The Wall Street Journal Advertising Department. The Wall Street Journal news organization was not involved in the creation of this content.
