SAN FRANCISCO: OpenAI’s artificial intelligence (AI) chatbot ChatGPT consistently ranked resumes that listed disability-related accolades and certifications, such as the ‘Tom Wilson Disability Leadership Award’, lower than otherwise identical resumes that omitted them, a new study has found.
In the study, when researchers from the University of Washington (UW) in the US asked ChatGPT to explain its rankings, they found that its responses reflected biased perceptions of people with disabilities.
For example, the system claimed that a CV listing an Autism Leadership Award had a “lack [of] emphasis on leadership roles”, implying the stereotype that autistic people are not good leaders.
However, when the researchers customized the tool with written instructions telling it not to be ableist, this bias was reduced for all but one of the disabilities tested.
“Five of the six implied disabilities – hearing impairment, visual impairment, cerebral palsy, autism, and the general term ‘disability’ – improved, but only three were ranked higher than CVs that made no mention of a disability,” the researchers noted.
The researchers started with a roughly 10-page resume made public by one of the study authors, then created six revised versions, each implying a different disability, by adding four disability-related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) committee, and membership in a student organization.
The researchers then used ChatGPT’s GPT-4 model to pit each amended resume against the original for a real “student researcher” listing at a major US software company.
Each comparison was run 10 times; across the resulting 60 trials (six modified resumes, 10 runs each), the system ranked the enhanced resume, identical to the original except for the implied disability, first in only a quarter of cases.
“Part of the GPT narrative is that it colors your entire resume based on your disability, and that DEI and your involvement with disabilities can detract from other parts of your resume,” said Kate Glazko, a doctoral student at the University of Washington’s Paul G. Allen School of Computer Science & Engineering.
“When using AI for these real-world tasks, people need to be aware of the system’s biases. Otherwise, a recruiter using ChatGPT won’t be able to make these corrections, or realize that bias may remain even with instructions,” she added.