Editor’s note: Today we are further strengthening our long-standing partnership with the University of Cambridge, signing a multi-year research collaboration agreement and providing a Google grant to the university’s new Centre for Human-Inspired AI to support bold, responsible and collaborative advances in AI that benefit everyone. Our grant funds students from underrepresented groups to pursue their PhD at CHIA, and Aleesha is one of those students.
Five years ago, my cousin, a beautiful young woman at the peak of her life, faced a horrific ordeal. She was brutally attacked and suffered a traumatic brain injury and severe physical disabilities. Miraculously, she survived, but her life was changed forever. She was suddenly paralyzed and unable to speak. As she slowly regained her cognitive functions, we needed to establish some means of communication with her in order to understand her needs, thoughts and feelings.
The first ray of hope came from her eyes: she was able to look up and signal “yes.” Although her neck muscles were weak, she gradually started to direct her gaze purposefully to communicate to us what she wanted. At this stage, she was introduced to a computer equipped with gaze interaction technology: eye tracking allowed her to type words by looking at specific letters on the on-screen keyboard. But this was time-consuming and tiring. Advances in AI have great potential to change this situation by making gaze detection faster and more accurate.
The road to effective communication was not an easy one; it was often a frustrating and heartbreaking process. For the technology to work, she had to hold her gaze on each letter for a set period of time, and she would often lose focus or the stability of her neck. The process was slow and fraught with errors, and many attempts ended in failure.
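To make concrete why this was so tiring, here is a minimal sketch in Python of how dwell-based eye typing works. Everything in it is an illustrative assumption of mine rather than the actual software she used, from the square `Key` regions to the 800-millisecond dwell threshold: a letter is typed only once the gaze has stayed inside one key for the full dwell time, so every lapse in focus resets the clock.

```python
from dataclasses import dataclass

@dataclass
class Key:
    char: str
    x: float     # key centre, in screen coordinates
    y: float
    half: float  # half the key's width/height (square keys assumed)

    def contains(self, gx: float, gy: float) -> bool:
        return abs(gx - self.x) <= self.half and abs(gy - self.y) <= self.half

def dwell_type(gaze_samples, keys, dwell_ms=800, sample_ms=20):
    """gaze_samples: (x, y) gaze points arriving at a fixed sampling rate.
    A character is emitted once the gaze has stayed on one key for dwell_ms."""
    needed = dwell_ms // sample_ms         # consecutive on-key samples required
    text, current, count = [], None, 0
    for gx, gy in gaze_samples:
        key = next((k for k in keys if k.contains(gx, gy)), None)
        if key is not None and key is current:
            count += 1
            if count == needed:            # dwell threshold reached: type it
                text.append(key.char)
                count = 0                  # a repeated letter needs a new dwell
        else:                              # gaze moved away: the dwell restarts
            current, count = key, (1 if key is not None else 0)
    return "".join(text)

# 45 consecutive samples (900 ms at 50 Hz) on one key types its letter once:
print(dwell_type([(100.0, 200.0)] * 45, [Key("a", 100.0, 200.0, 25.0)]))  # "a"
```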
My cousin’s struggles are not unique. For many people like her who have lost motor skills due to injury, or who have neurological conditions such as cerebral palsy or multiple sclerosis, eye gaze is the only possible means of effective communication. Assistive technologies such as eye typing have the potential to be life-changing, but currently even the best eye typing systems report relatively slow typing rates of 7-20 words per minute, compared with typical speaking rates of 125-185 words per minute. This is an alarming gap, and it highlights the need for continued improvements in assistive technology to enhance quality of life and empower everyone who relies on it to communicate.
This is the purpose of my research. The goal is to make communication efficient and accessible to the countless people with motor disabilities for whom these technologies can be a life-changing reality. By understanding how to use AI most effectively, I hope to rethink how users can type efficiently with their eyes.
I have been incredibly fortunate to pursue this research with the support of Google and the Centre for Human-Inspired Artificial Intelligence (CHIA) at the University of Cambridge. I began my PhD studies earlier this year under the supervision of Professor Per Ola Kristensson, whose seminal work on an AI-driven technique called “dwell-free” eye typing has opened up the possibility of a paradigm shift in how these systems are designed.
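To give a feel for the shift, here is a toy Python sketch of the dwell-free idea. It is emphatically not Professor Kristensson’s actual decoder, and the keyboard grid, resampling length and distance measure are all simplifying assumptions of mine: instead of dwelling on each letter, the user sweeps their gaze across the letters of a whole word, and the system picks the vocabulary word whose ideal trace best matches the sweep.

```python
import math

# Illustrative key centres on a simplified QWERTY grid (unit = one key width).
KEY_POS = {
    c: (col + 0.5 * row, float(row))       # each row offset like a keyboard
    for row, keys in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
    for col, c in enumerate(keys)
}

def resample(points, n=32):
    """Resample a polyline to n points evenly spaced by arc length, so traces
    of different lengths can be compared point by point."""
    if len(points) == 1:
        return points * n
    dists = [0.0]                          # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total, out, j = dists[-1] or 1.0, [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        t = (target - dists[j]) / ((dists[j + 1] - dists[j]) or 1.0)
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def word_trace(word):
    """The ideal gaze path for a word: straight lines through its letters' keys."""
    return resample([KEY_POS[c] for c in word])

def decode(gaze_trace, vocabulary):
    """Return the word whose ideal trace is closest to the user's sweep."""
    g = resample(gaze_trace)
    def cost(word):
        return sum(math.hypot(gx - wx, gy - wy)
                   for (gx, gy), (wx, wy) in zip(g, word_trace(word)))
    return min(vocabulary, key=cost)

# A sweep through the keys h-e-l-o is closest to "hello" in this vocabulary:
print(decode([KEY_POS[c] for c in "helo"], ["hello", "help", "world"]))
```

Real decoders are far more sophisticated, folding in language models and uncertainty, but even this toy version shows where the speed comes from: no time is spent waiting on any single letter.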
A notable gap in the progress of eye typing systems has been the lack of direct engagement with the end users themselves. To understand their needs, desires, and barriers, I began interviewing non-speaking people with motor impairments who rely on eye typing for everyday communication, enabling us to design technology that makes it easier for them to achieve their goals. This reflects the approach CHIA takes to AI innovation, putting the people most affected by AI at the center of our development process.
By enhancing gaze technology with AI, we hope to empower people like my cousin to express themselves, connect with the world, and reclaim their independence.