The researchers found that ChatGPT was biased against resumes with qualifications that implied a disability.

In a new study, researchers found that OpenAI’s artificial intelligence (AI) chatbot ChatGPT consistently rated resumes and CVs that listed disability-related honors or certifications, such as the “Tom Wilson Disability Leadership Award,” lower than the same resumes that didn’t list those honors or certifications.

In the study, researchers from the University of Washington (UW) in the US asked ChatGPT to explain its rankings and found that its explanations reflected biased perceptions of people with disabilities.

For example, ChatGPT claimed that CVs listing an autism leadership award “lack emphasis on leadership roles”, implying the stereotype that autistic people are not good leaders.

However, when the researchers customized the tool with written instructions telling it not to be biased, it showed reduced bias for all but one of the disabilities tested.

“Five of the six implied disabilities – hearing impairment, visual impairment, cerebral palsy, autism, and the general term ‘disability’ – improved, but only three were ranked higher than CVs that made no mention of a disability,” the researchers noted.
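
The article does not reproduce the study’s mitigation prompt. In the study, the researchers customized ChatGPT with written instructions; the equivalent pattern through the OpenAI API is to place such instructions in a system message, as in the minimal Python sketch below. The prompt wording and function name are illustrative assumptions, not the study’s materials.

```python
# Minimal sketch of debiasing via written instructions, using the OpenAI
# chat API. The prompt text below is illustrative, not the study's wording.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FAIRNESS_INSTRUCTIONS = (
    "You are screening resumes. Do not treat disability-related awards, "
    "scholarships, DEI panel service, or student-group memberships as "
    "evidence against a candidate. Judge only job-relevant qualifications."
)

def compare_resumes(resume_a: str, resume_b: str, job_description: str) -> str:
    """Ask the model which resume better fits the job, with the
    fairness instructions supplied as a system message."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": FAIRNESS_INSTRUCTIONS},
            {
                "role": "user",
                "content": (
                    f"Job description:\n{job_description}\n\n"
                    f"Resume A:\n{resume_a}\n\n"
                    f"Resume B:\n{resume_b}\n\n"
                    "Which resume is the stronger fit? Answer 'A' or 'B', "
                    "then briefly explain your ranking."
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

Because a system message applies to every request, the same instructions stay in force across all comparisons; as the result above shows, though, such instructions reduced rather than eliminated the bias.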

The researchers used a publicly available resume of about 10 pages belonging to one of the study authors.

They then created six revised resumes, each suggesting a different disability by adding four disability-related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) panel, and membership in a student organization.

The researchers then used ChatGPT’s GPT-4 model to compare each amended resume with the original version for a real “student researcher” listing at a major US-based software company.

Each comparison was run 10 times; across the 60 trials, the enhanced resumes, which were identical to the original except for the implied disability, were ranked first by the system in only a quarter of cases.
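
The study’s prompts and scoring harness are not included in the article; the sketch below, with hypothetical prompt wording and win-counting rule, shows the general shape of such a repeated pairwise-comparison protocol: ask GPT-4 to rank the original resume against an enhanced one several times and count how often the enhanced version comes first.

```python
# Minimal sketch of a repeated pairwise-comparison protocol like the one
# described above. Prompt wording and the win-counting rule are assumptions.
import re

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def enhanced_win_rate(original: str, enhanced: str,
                      job_description: str, trials: int = 10) -> float:
    """Compare the two resumes `trials` times and return the fraction of
    runs in which the model ranks the enhanced resume first."""
    wins = 0
    for _ in range(trials):
        answer = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": (
                    f"Job description:\n{job_description}\n\n"
                    f"Resume A:\n{original}\n\n"
                    f"Resume B:\n{enhanced}\n\n"
                    "Which resume is the stronger fit? "
                    "Start your answer with 'A' or 'B'."
                ),
            }],
        ).choices[0].message.content
        # Count the trial as a win when the reply leads with 'B',
        # i.e. the enhanced resume was ranked first.
        if answer and re.match(r"\s*B\b", answer):
            wins += 1
    return wins / trials
```

With the study’s numbers, each of the six enhanced resumes would be compared against the original 10 times, for 60 trials in total; a win rate near 0.5 would indicate no preference either way.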

“Part of GPT’s narrative is that it colors your entire resume based on your disability, and that DEI work and engagement with disability can detract from other parts of your resume,” said Kate Glazko, a doctoral student in the University of Washington’s Paul G. Allen School of Computer Science & Engineering.

“When using AI for these real-world tasks, people need to be aware of biases in the system. Otherwise, recruiters using ChatGPT may not be able to make these corrections, or may not realize that bias can persist even with instructions,” she added.


