While more than half of U.S. adults agree that artificial intelligence technology could help reduce racial bias in the hiring process, Black Americans are more skeptical, a recent report found.
About 47% of Black people who see racial bias in hiring as a problem agree that AI could improve the process, compared with 64% of Asian Americans and 54% of white and Hispanic people, according to the Pew Research Center study. And while only 13% of U.S. adults overall say AI would make the problem worse, 20% of Black people say so, the highest share of any racial group.
In the workplace, Black Americans are more likely to say racial bias and unfair treatment are a major problem. Two-thirds of Americans say they wouldn’t apply to employers who use AI in hiring, the Pew study found.
Monica Anderson, associate director of research at the Pew Research Center, told Capital B that on both sides of the AI issue, respondents said racial bias shaped their views: whether they would apply to an employer that uses AI, and whether they believe the technology could reduce bias.
“There were some respondents that spoke to the idea that they themselves have been victims of racial discrimination, so they see AI as a way that would actually be more neutral and actually be more fair and balanced,” she said. “We do see people who wouldn’t want to apply for a job using AI talk about how biases are already programmed into AI … and that AI cannot be the thing that solves things like racism or sexism because it just reflects what’s already a part of our communities.”
Studies show that AI does perpetuate biases across gender, age, race, disability, and dialect or regional differences in speech. That record, combined with the negative outcomes algorithmic systems have had on Black lives in areas such as credit scoring and loan applications, also contributes to the hesitancy among Black Americans, said Fay Cobb Payton, emeritus professor of information technology and business analytics at North Carolina State University.
Over the past 30 years, hiring discrimination has remained unchanged, particularly for Black Americans, according to a meta-analysis of field experiments. On average, white applicants received 36% more callbacks than Black applicants and 24% more than Latino applicants, despite identical resumes and similar qualifications.
In another study, the National Bureau of Economic Research submitted 83,000 fake applications to entry-level job openings at 108 Fortune 500 companies. Applications with distinctively Black names, such as Aisha, Ebony, Darnell, and Hakim, were less likely to receive a response from employers than applications with distinctively white names like Heather, Jennifer, Bradley, and Nathan.
In a similar experiment conducted by Payton, students changed their names to more generic-sounding ones on applications to increase their likelihood of getting an interview. One problem, though, is that this forces students to present as someone else just to get a job, she said.
“When you’re doing all that self-manipulation, part of yourself is not being authentic to the process. And that says, ‘Can I bring my whole self to the organization?’” Payton said. “What you get is confirmation biases that can creep up in the process.”
Given decades of workplace discrimination against people of color, businesses have pledged to use AI to help reduce unconscious bias in hiring and decision-making. Nationwide, employers have used AI to automate and simplify tasks such as analyzing resumes, creating assessments, transcribing recorded statements to text, and monitoring job performance to determine salaries and promotions.
In some instances, AI has helped. One study found that candidates for a software engineering job selected by a machine-learning system were 14% more likely to pass an interview and receive a job offer.
Payton raised concerns that as AI-assisted hiring tools become more popular, the software may amplify existing biases, and companies shouldn’t be let off the hook for the resulting discrimination.
“We’ve got to think about fairness, equity, accountability, and transparency. Where does the accountability lie within the organization?” Payton said. “There’s always this rapid fire when it comes to technology development. Regulations and laws don’t necessarily keep up because a lot of times … the bias can be encoded into the very processes and systems that may take place inside of an organization.”
Lauren Rhue, assistant professor of information systems at the University of Maryland, said concern about AI bias should be balanced with optimism about the technology’s potential. Some good can come from using AI because algorithmic bias is easier to diagnose and fix than a biased hiring manager, Rhue said.
“If you can create a machine learning algorithm to make recommendations as opposed to a human making those decisions, you see more diversity just because the criteria is consistently applied. We can benefit from just having consistent criteria,” she said.
But companies must have the “political will” at every stage of the process to minimize bias, she added.
“There’s cause for hope in the use of these technologies. We just need to have the will to try to increase diversity at every single stage,” Rhue said. “If we can get that political will, then a lot of the issues with AI bias will fall away, and a lot of the promise of the technology will come to the forefront.”
This story has been updated.