Not surprisingly, ChatGPT is still not a reliable alternative to human recruiters.
A recently published study from the University of Washington found that the AI chatbot repeatedly ranked job applications that included disability-related honors and credentials lower than otherwise comparable applications that didn't mention a disability. The study tested several disability keywords, including deafness, blindness, cerebral palsy, autism, and the general term "disability."
The researchers used one of the authors' publicly available resumes as a baseline, then created an enhanced version that added awards and honors implying a disability, such as the "Tom Wilson Disability Leadership Award" and a seat on a DEI panel. They then asked ChatGPT to rank the two candidates.
Over 60 trials, the original resume ranked first 75 percent of the time.
"While AI-based resume ranking is becoming more common, there hasn't been much research into whether it's safe and effective," said Kate Glazko, the study's lead author and a graduate student in computer science and engineering. "Job seekers with disabilities always face the question of whether to include disability-related credentials when submitting a resume. We think people with disabilities weigh that even when a human is doing the reviewing."
ChatGPT would also "hallucinate" ableist reasoning about why certain mental and physical conditions would hinder a candidate's ability to do the job, according to the researchers.
"Some of GPT's descriptions would color the entire resume based on the disability, claiming that involvement with DEI or disability could take away from other parts of the resume," Glazko wrote.
But the researchers also found that customizing ChatGPT through the GPTs Editor feature, feeding it disability rights and DEI principles and instructing it not to exhibit ableist bias, curbed some of the problems. With that customization, the enhanced resumes ranked higher than the original more than half the time, though results still varied depending on which disability the resume implied.
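For readers curious what that kind of customization looks like in practice, here is a minimal sketch using OpenAI's chat completions API. The model name, system instructions, and resume text below are illustrative placeholders, not the study's actual prompts or setup; the researchers worked through the GPTs Editor interface rather than the API.

```python
# Illustrative sketch only: NOT the study's actual prompts or configuration.
# It shows the general idea of giving the model explicit anti-ableism
# instructions before asking it to rank two resumes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_INSTRUCTIONS = (
    "You are a resume screener. Evaluate candidates only on skills and "
    "experience relevant to the role. Disability status, disability-related "
    "awards, and DEI involvement must not lower a candidate's ranking, and "
    "you must not speculate about a candidate's health or ability to perform the job."
)

original_resume = "..."  # baseline resume text (placeholder)
enhanced_resume = "..."  # same resume plus disability-related honors (placeholder)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the study used GPT-4
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {
            "role": "user",
            "content": (
                "Rank the following two candidates for a research role and "
                "briefly explain your ranking.\n\n"
                f"Candidate A:\n{original_resume}\n\n"
                f"Candidate B:\n{enhanced_resume}"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```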
OpenAI's chatbots have shown similar bias before: In March, a Bloomberg investigation found that the company's GPT-3.5 model showed clear racial favoritism toward job applicants, replicating known discriminatory hiring practices and repeating stereotypes about both race and gender. In response, OpenAI said such tests did not reflect how its AI models are actually used in the workplace.