[Cover illustration: a sphere of white lines and red dots representing the interlinked data set, with five spotlights showing different aspects of disability data: a person using a computer in a power wheelchair, an ear representing a person with hearing impairment, six dots representing Braille, a person with a limb disability using crutches, and a brain. The report's title reads, "Reducing disability bias in tech starts with disability data."]
The report was co-authored by Bonnielynn Swenor, director of the Johns Hopkins University Center for Disability and Health Research.
Introduction
When people with disabilities use technology, they are at risk of discriminatory impacts in a number of consequential domains, including employment, public benefits, and health care.
For example, many employers use automated employment decision tools as part of the hiring process, including resume screeners and video interview tools that use algorithms to analyze features such as speech rhythm and eye movements. These tools may unfairly screen out applicants with disabilities, for instance by flagging the atypical eye movements of blind or low-vision individuals and consequently excluding them from the applicant pool.
When algorithms are embedded in benefit decision systems, such as systems that determine how many hours of home care a person with a disability can receive through Medicaid, they can prevent people with disabilities from receiving benefits, undermining their ability to live independently. Algorithms are also embedded in health care decision-making systems, such as those that play a role in determining who is admitted to or discharged from the hospital and who receives opioids as part of post-operative care. When these algorithmic systems produce biased outcomes, the result can be poorer health outcomes for people with disabilities. And the effects of discrimination facilitated by this technology can be amplified for people with disabilities who are multiply marginalized, including people of color with disabilities and LGBTQ+ people with disabilities.
Disability rights and disability justice activists have a long history of fighting discrimination that affects people with disabilities. Technology-facilitated ableism may be a new form of an old injustice, but it is not going away. In fact, as algorithms and other technologies continue to be integrated into everyday life and people with disabilities interact with them more often, these harms will only increase in both frequency and severity.
While it is tempting to attribute this bias to the so-called "black box" of algorithms, algorithmic inconsistencies and discriminatory outcomes are often due to issues with the data used to train the models; better data tends to lead to better outcomes. Moreover, incomplete or erroneous data sets don't just affect technology. The data collected and used to quantify and generate insights about disability also informs disability advocacy efforts, such as demonstrating the need for and helping to create disability-inclusive policies, allocating public benefits, and upholding disability-related civil rights laws. To address technology-enabled disability discrimination and improve the lives of people with disabilities more broadly, we must first understand and then mitigate the problems inherent in disability-related data.
In this paper, we identify a variety of ways that data sets may exclude, inaccurately count, or be unrepresentative of people with disabilities. We elucidate factors that lead to the collection and use of underrepresentative data sets and provide recommendations to mitigate these concerns, collectively referring to this as a “disability data justice” approach.
We highlight several recommendations, including:
- Disability data should be collected in all contexts where other demographic data is collected.
- Data should be collected and stored in a manner that respects personal information and data privacy.
- New and more inclusive ways of both defining disability and collecting disability data should be developed.
- Practitioners should adopt a growth mindset with respect to disability data.
- People with disabilities should be included in the creation, deployment, procurement, and audit of all technologies.
- People with disabilities, particularly disability leaders and those with expertise on technology, disability rights, and disability justice, should be central to the creation and implementation of technology and AI policy.
- Data should be collected and stored in a manner that is accessible to individuals with disabilities.
Although implementing these recommendations comprehensively would require significant changes in how data is collected, these changes are both possible and necessary.
Read the full report.