The future of AI is dire for the disability community


In December, the US Census Bureau proposed changes to how it classifies disabilities that, if implemented, would significantly reduce the number of Americans counted as disabled, at a time when experts say people with disabilities are already undercounted.

The Census Bureau opened the proposal to public comment (anyone can submit comments on federal agency rulemaking), but in this particular case, the people most affected by the proposal faced extra obstacles to speaking out.

“It was really important to me to think about how we can empower these people to write and submit comments,” says Matthew Cortland, a senior researcher at Data for Progress. With that in mind, they created a GPT-4 bot to assist people who want to submit comments themselves. While Cortland has run comment campaigns targeting disability-related regulations before, this is the first time they’ve used AI to help.
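The article doesn’t detail how Cortland’s bot works, but a minimal sketch of a comment-drafting assistant along those lines might look like this, assuming the openai Python package (v1 or later) and an API key in the environment; the prompt and helper names here are invented for illustration.

```python
# A minimal sketch of a comment-drafting assistant, NOT Cortland's actual bot:
# it turns a person's rough notes into a draft public comment they can edit
# and submit themselves. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You help people with disabilities draft public comments on proposed "
    "federal rules. Preserve the commenter's own experiences and views; "
    "organize them clearly, but do not invent facts or positions."
)

def draft_comment(rough_notes: str, docket: str) -> str:
    """Turn rough notes into a draft comment for the given docket."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Docket: {docket}\nMy notes:\n{rough_notes}"},
        ],
    )
    return response.choices[0].message.content

# Example: notes on how the proposed classification change would affect someone.
print(draft_comment(
    "Brain fog makes long writing hard; the new questions would not count my disability.",
    "Census Bureau disability-classification proposal",
))
```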

“Right now my mind is too confused to write a comment,” one person said. “Thank you, this allowed me to write a comment I’ve always wanted to write.”

Depending on who is counting, either 12.6% or 25% of the US population has a disability. Disability itself is defined in different ways, but broadly includes physical, intellectual, and cognitive impairments as well as chronic illnesses. A physically disabled person might use a wheelchair, while someone with a severe, energy-limiting illness like long COVID might struggle to manage the tasks of daily life.

AI, whether in the form of natural language processing, computer vision, or generative models like GPT-4, could have a positive impact on the disability community. But as things stand, the future of AI and disability looks pretty bleak.

“The way we treat and use AI is essentially like phrenology with mathematics,” says Joshua Earle, an assistant professor at the University of Virginia who studies the connections between the history of eugenics and technology. People unfamiliar with disability hold negative perceptions of it, shaped by the media, pop culture, regulatory frameworks, and the people around them, and see disability as a flaw rather than a cultural identity. A system that devalues the lives of disabled people by convention or by design will keep repeating those mistakes in the technology it produces.

This attitude was on full display in the debates over health-care rationing at the peak of the COVID-19 pandemic. It also shows up in quality-adjusted life years (QALYs), an AI-assisted “cost-benefit” tool used in health care to judge “quality of life” by external indicators rather than by the intrinsic value of a person’s life. Being unable to leave the house might count against someone, for example, as might a degenerative disease that limits physical activity or employment. A low score can tip a cost-benefit analysis toward denying medical interventions: why provide expensive treatment to someone whose disability is likely to shorten their lifespan anyway?
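To see how that arithmetic can turn against disabled patients, consider a toy calculation. The utility weights, costs, and threshold below are invented for illustration, not any real payer’s figures.

```python
# Illustrative arithmetic only: how a QALY-based cost-effectiveness test can
# disadvantage disabled patients. All numbers here are made up for the example.

def cost_per_qaly(cost: float, years_gained: float, utility_weight: float) -> float:
    """Cost divided by quality-adjusted life years gained.
    utility_weight is the 0-1 'quality of life' score a scorer assigns."""
    return cost / (years_gained * utility_weight)

TREATMENT_COST = 200_000.0   # same treatment, same benefit in life-years
YEARS_GAINED = 5.0
THRESHOLD = 100_000.0        # a common willingness-to-pay cutoff per QALY

# The only difference is the externally assigned "quality of life" weight.
for label, weight in [("nondisabled patient", 0.9), ("disabled patient", 0.3)]:
    ratio = cost_per_qaly(TREATMENT_COST, YEARS_GAINED, weight)
    verdict = "funded" if ratio <= THRESHOLD else "denied"
    print(f"{label}: ${ratio:,.0f} per QALY -> {verdict}")

# nondisabled patient: $44,444 per QALY -> funded
# disabled patient: $133,333 per QALY -> denied
```

Identical treatment, identical years of life gained; the lower “quality of life” weight alone flips the decision.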

The promise of AI is that automation will make jobs easier, but easier for whom? In 2023, a ProPublica investigation revealed that the insurance giant Cigna was using an in-house algorithm to automatically flag claims, letting its doctors sign off on large batches of denials at once, a practice that disproportionately hit disabled people with complex medical needs. The health system is not the only place where algorithmic tools can work against people with disabilities. They are increasingly common in hiring, where applicant-screening tools can introduce bias, as can the logic puzzles and games recruiters use and the eye-gaze and facial-expression tracking deployed in interviews. More generally, says Ashley Shew, an associate professor at Virginia Tech who specializes in disability and technology, “it’s leading to additional surveillance of people with disabilities” through technology that targets them.
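ProPublica described a system that checked diagnosis and procedure codes against an approved list. A toy sketch of that batch-flagging pattern, with an invented rules table (not Cigna’s actual system or code), might look like this:

```python
# A toy illustration of batch claim flagging, loosely modeled on ProPublica's
# description of Cigna's review system; the approved-pairs table is invented.
# The point is structural: one lookup flags claims, and a reviewer can sign
# off on the whole batch of denials without reading any patient chart.
from dataclasses import dataclass

# Hypothetical payer rules: which procedure codes are auto-payable per diagnosis.
APPROVED_PAIRS = {
    ("J45.40", "94010"),  # asthma -> spirometry
    ("E11.9", "83036"),   # type 2 diabetes -> HbA1c test
}

@dataclass
class Claim:
    claim_id: str
    diagnosis: str
    procedure: str

def flag_for_denial(claims: list[Claim]) -> list[Claim]:
    """Flag every claim whose (diagnosis, procedure) pair is off-list.
    Patients with complex, multi-condition needs fall off the list most often."""
    return [c for c in claims if (c.diagnosis, c.procedure) not in APPROVED_PAIRS]

claims = [
    Claim("A1", "J45.40", "94010"),  # routine pair: passes
    Claim("A2", "G35", "97110"),     # MS + physical therapy: flagged
    Claim("A3", "E11.9", "97110"),   # diabetes + PT: flagged
]
for c in flag_for_denial(claims):
    print(f"claim {c.claim_id} flagged for batch denial")
```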

Such technologies often rest on two assumptions: first, that fraud prevention matters because many people fake or exaggerate their disabilities, and second, that a life with a disability is not a life worth living. On that logic, decisions about resource allocation and social inclusion, such as home-care services, access to the workplace, or the ability to reach people on social media, need not treat disabled people as equals to nondisabled people. That attitude gets built into the AI tools society creates.

It doesn’t have to be this way.

Cortland’s creative use of GPT-4 to help disabled people participate in the political process is a great example of how AI can be a valuable accessibility tool in the right hands, and there are countless others if you look in the right places. In early 2023, for example, Midjourney released a feature that generates alternative text for images, improving accessibility for blind and low-vision users.
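Midjourney’s implementation isn’t public, but automatic alt text can be sketched with an open image-captioning model. This assumes the transformers and Pillow packages, and “photo.jpg” stands in for a real image path.

```python
# Not Midjourney's implementation: a sketch of automatic alt text using an
# open image-captioning model from the Hugging Face hub.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def alt_text(image_path: str) -> str:
    """Generate a one-line description suitable for an alt attribute."""
    result = captioner(image_path)
    return result[0]["generated_text"]

print(alt_text("photo.jpg"))  # e.g. "a dog sitting on a beach at sunset"
```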

Amy Gaeta, a poet and scholar of human-technology interaction, also sees potential for AI to “do some of the really boring stuff for us [disabled people]” who are “already overworked and exhausted,” automating tasks like filling out forms or practicing conversations ahead of job interviews or social occasions. The same technology could also be used to fight insurance companies over wrongful denials.

“The people who use AI are going to be the people best placed to understand when it’s doing something wrong,” Earle says of technology developed around or for, but not with, people with disabilities. For AI to have a genuinely bright future, the tech community must include disabled people from the start, as innovators, programmers, designers, creators, and, of course, users with the power to shape the technology that mediates the world around them.




