iNDICA News Bureau
According to the report, researchers found that OpenAI's artificial intelligence (AI) chatbot ChatGPT consistently ranked resumes and CVs bearing disability-related accolades or certifications, such as the "Tom Wilson Disability Leadership Award," lower than otherwise identical ones without them.
When researchers at the University of Washington (UW) asked the system to explain its rankings, they found that its responses showed bias against people with disabilities. For example, of a resume listing an autism leadership award, the system claimed that "leadership roles are less valued," implying the stereotype that autistic people are not good leaders.
However, when the researchers customized the tool with written instructions telling it not to be ableist, the bias was reduced for all but one of the disabilities tested.
“Five of the six implied disabilities – hearing impairment, visual impairment, cerebral palsy, autism, and the general term ‘disability’ – improved, but only three were ranked higher than CVs that made no mention of a disability,” the researchers noted.
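The article does not reproduce the study's prompts, but one way to approximate this de-biasing step through OpenAI's API is to prepend a system message. The sketch below is a hypothetical illustration only; the instruction wording is an assumption, not the researchers' actual text.

```python
# Hypothetical sketch of the de-biasing step: the study gave the chatbot
# written instructions not to be ableist. The wording below is an
# assumption, not the authors' actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEBIAS_INSTRUCTIONS = (
    "You are a recruiter committed to disability justice. Do not treat "
    "disability-related awards, scholarships, DEI committee service, or "
    "student-organization membership as negatives when ranking candidates."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": DEBIAS_INSTRUCTIONS},
        {"role": "user", "content": "Rank the following two CVs for the role: ..."},
    ],
)
print(response.choices[0].message.content)
```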
The researchers used a roughly 10-page resume made public by one of the study's authors, then created six revised versions, each implying a different disability, by adding four disability-related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) committee, and membership in a student organization.
The researchers then used ChatGPT's GPT-4 model to rank each amended resume against the original for a real "student researcher" listing at a major US software company.
Each comparison was run 10 times. Across the 60 trials, the system ranked the enhanced resume, which was identical except for the implied disability, first in only a quarter of cases.
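The setup described above is straightforward to sketch. The following Python snippet is a minimal, hypothetical reconstruction of one such trial using OpenAI's API; the prompt wording, placeholder texts, and the rank_pair helper are assumptions, not the researchers' actual harness.

```python
# Hypothetical reconstruction of one pairwise ranking trial as described
# above; prompt wording, placeholders, and helper names are assumptions,
# not the study's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JOB = "Student researcher position at a major US software company ..."
ORIGINAL_CV = "<the ~10-page public CV>"
ENHANCED_CV = "<same CV plus the four disability-related credentials>"

def rank_pair(job: str, cv_a: str, cv_b: str) -> str:
    """Ask GPT-4 to rank two CVs for a job and explain its ranking."""
    prompt = (
        f"Job description:\n{job}\n\n"
        f"Candidate A CV:\n{cv_a}\n\n"
        f"Candidate B CV:\n{cv_b}\n\n"
        "Which candidate is the stronger fit? Rank them and explain why."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The study ran each comparison 10 times per implied disability
# (6 disabilities x 10 repetitions = 60 trials) and counted how often
# the enhanced CV was ranked first.
answers = [rank_pair(JOB, ORIGINAL_CV, ENHANCED_CV) for _ in range(10)]
```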
"Some of GPT's descriptions would color your entire resume based on your disability, and claimed that DEI and disability involvement can detract from other parts of the resume," said Kate Glazko, a doctoral student at the University of Washington's Paul G. Allen School of Computer Science & Engineering.
"People need to be aware of the system's biases when using AI for these real-world tasks. Otherwise, a recruiter using ChatGPT can't make these corrections, or realize that bias may remain even with instructions," she added.