The disability community has long grappled with “helpful” technology, and it offers lessons everyone can learn when dealing with AI


You may have heard that artificial intelligence will revolutionize everything, save the world and give everyone superhuman powers, or you may have heard that it will take away your job, make you lazy and stupid and turn the world into a cyberpunk dystopia.

Let’s look at AI from a different perspective: as an assistive technology, something that helps us function.

In this light, consider the community with the deepest expertise in providing and receiving assistance: the disability community. Many people with disabilities make extensive use of both specialized assistive technologies, such as wheelchairs, and general-purpose technologies, such as smart home devices.

Similarly, many disabled people receive professional or everyday help from others, and, contrary to stereotype, many disabled people regularly give help to the disabled and able-bodied people around them.

People with disabilities have extensive experience receiving and providing social and technical assistance, making them a valuable source of insight into how all people will interact with AI systems in the future. This potential is a key driver of my work as a person with a disability and an AI and robotics researcher.

Learning to Live with Support

Nearly everyone values independence, but no one is completely independent. We each depend on others to grow food, to care for us when we are sick, to give us advice and emotional support, and to help us in thousands of interconnected ways. Being disabled means having support needs that fall outside the typical range, which makes those needs far more visible. For this reason, the disability community is more conscious than most able-bodied people of what it means to need support in daily life.

This perspective from the disability community is invaluable as we approach new technologies that can help both disabled and non-disabled people. Simulating disability can never replace the lived experience of actually being disabled, but accessibility benefits everyone.

Technology made for people with disabilities often ends up benefiting everyone, a principle of good design.

This is sometimes known as the curb-cut effect, because installing a ramp at a curb to make sidewalks easier for wheelchair users also benefits people with strollers, wheeled suitcases and bicycles.

Partnership in Support

You’ve probably had the experience of someone trying to help you without listening to your real needs. For example, a parent or friend might “help” you clean up, only to put away all the things you actually need.

Disability advocates have long pushed back against this kind of well-intentioned but intrusive assistance, from attaching spikes to wheelchair handles to deter people from pushing the chair without being asked, to campaigning for services that allow disabled people to direct their own care.

Instead, the disability community offers a model of assistance as a collaborative effort, one that, applied to AI, can help ensure that new AI tools support human autonomy rather than take it away.

A primary goal of my lab is to develop AI-powered assistive robots that treat users as equal partners. We have shown that this model is not only valuable but essential. For example, most people find it difficult to control a robotic arm with a joystick. A joystick moves in only two dimensions, forward, backward, left and right, while an arm can move in many more ways, much like a human arm.

The author discusses research into robots designed to help people.

The AI can predict what someone intends to do with the robot and move the robot accordingly. Previous studies assumed that people would ignore this assistance, but we found that people quickly noticed the system was doing something, actively tried to understand what it was doing, and cooperated with it to get it to do what they wanted.

For most AI systems, this isn’t easy, but a new approach to AI in my lab is now allowing humans to influence the robot’s behavior. We’ve seen this lead to better interaction in creative tasks like painting. We’ve also begun to investigate how to use this control to solve problems outside of what the robot was designed to do. For example, a robot trained to carry a cup of water could be used to pour it and water plants instead.

Training AI on Human Diversity

The disability-centric perspective also raises concerns about the massive datasets that power AI. The very nature of data-driven AI is to look for common patterns. Generally, the better something is represented in the data, the better the model will perform.

If disability means being physically or mentally outside the typical range, then disability is by definition under-represented in the data. Whether it’s an exam-proctoring AI that flags a student’s disability as cheating, or a robot that fails to account for wheelchair users, the interactions between disabled people and AI expose how brittle these systems are.

One of my goals as an AI researcher is to make AI more robust and adaptive to the variability of real humans, especially in AI systems that learn directly from human interactions. We developed a framework to test how robust those AI systems are to real human instruction, and we have explored ways to help robots better learn from human teachers, even as those teachers change over time.

Thinking of AI as assistive technology and learning from the disability community will ensure that future AI systems are human-driven and responsive to people’s needs.
