Artificial Intelligence reduces communication gap for nonverbal people

Researchers at the University of Cambridge and the University of Dundee have developed a new context-aware artificial intelligence method that reduces the communication gap by eliminating between 50% and 96% of the keystrokes a person has to type to communicate.

Nonverbal people with motor disabilities often use a computer with speech output to communicate with others. However, even without a physical disability that affects the typing process, these communication aids are too slow and error-prone for meaningful conversation: typical typing rates are between five and 20 words per minute, while a usual speaking rate is in the range of 100 to 140 words per minute.

“This difference in communication rates is referred to as the communication gap,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, the study’s lead author. “The gap is typically between 80 and 135 words per minute and affects the quality of everyday interactions for people who rely on computers to communicate.”
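
That range follows directly from the figures above: 100 - 20 = 80 words per minute at the narrow end, and 140 - 5 = 135 at the wide end.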

The method developed by Kristensson and his colleagues uses artificial intelligence to allow a user to quickly retrieve sentences they have typed in the past. Prior research has shown that people who rely on speech synthesis, just like everyone else, tend to reuse many of the same phrases and sentences in everyday conversation. However, retrieving these phrases and sentences is a time-consuming process for users of existing speech synthesis technologies, further slowing down the flow of conversation.

In the new system, as the person is typing, the system uses information retrieval algorithms to automatically retrieve the most relevant previous sentences based on the text typed so far and the context of the conversation the person is involved in. Context includes information about the conversation such as the location, time of day, and automatic identification of the speaking partner’s face.
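
The study does not publish its retrieval code, so the following Python sketch is illustrative only; the class, function, and boost weights are hypothetical. It ranks previously typed sentences by a prefix match on the text typed so far, softly boosted by how closely the stored context matches the current location, time of day, and identified partner:

```python
from dataclasses import dataclass

@dataclass
class StoredSentence:
    text: str      # a sentence the user has typed before
    location: str  # where it was last used
    hour: int      # hour of day it was last used (0-23)
    partner: str   # the conversation partner it was used with

def retrieve(prefix: str, history: list[StoredSentence],
             location: str, hour: int, partner: str, k: int = 3) -> list[str]:
    """Rank stored sentences by prefix match plus soft context boosts."""
    def score(s: StoredSentence) -> float:
        text_score = 1.0 if s.text.lower().startswith(prefix.lower()) else 0.0
        # Context boosts rather than filters, so a sentence typed elsewhere
        # can still surface when the text match is strong.
        context_score = (0.5 * (s.location == location)
                         + 0.2 * (abs(s.hour - hour) <= 1)
                         + 0.7 * (s.partner == partner))
        return text_score + context_score

    ranked = sorted(history, key=score, reverse=True)
    return [s.text for s in ranked if score(s) > 0][:k]

history = [
    StoredSentence("Could you open the window, please?", "home", 14, "alice"),
    StoredSentence("Could we order coffee?", "cafe", 9, "bob"),
]
print(retrieve("Could", history, location="home", hour=15, partner="alice"))
```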

The other speaker is identified using a computer vision algorithm trained to recognise human faces from a front-mounted camera.
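
The article does not name the specific vision algorithm. Purely as an illustration, a common off-the-shelf approach uses the open-source face_recognition library to compare face embeddings from a camera frame against embeddings of enrolled partners; the file paths and partner names below are hypothetical:

```python
import face_recognition  # open-source, dlib-based face library

# Enrol known conversation partners once from reference photos.
known_names = ["alice", "bob"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(f"partners/{name}.jpg"))[0]
    for name in known_names
]

def identify_partner(frame_path: str) -> str | None:
    """Return the first enrolled partner recognised in a camera frame."""
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if any(matches):
            return known_names[matches.index(True)]
    return None
```
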
The system was developed using design engineering methods typically used for jet engines or medical devices. The researchers first identified the critical functions of the system, such as the word auto-complete function and the sentence retrieval function. Once these functions had been identified, they simulated a nonverbal person typing a large set of sentences drawn from a corpus representative of the text a nonverbal person would want to communicate, in order to estimate the keystroke savings.
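
The reported 50% to 96% savings can be estimated in the same spirit with a simple simulation: replay each target sentence keystroke by keystroke and stop as soon as the sentence surfaces among the top suggestions. Below is a minimal sketch, with a naive prefix matcher standing in for the full context-aware retriever:

```python
def keystroke_savings(target: str, history: list[str], k: int = 3) -> float:
    """Fraction of keystrokes saved if the user selects the target
    sentence as soon as it appears among the top-k suggestions."""
    for typed in range(len(target) + 1):
        prefix = target[:typed]
        suggestions = [s for s in history if s.startswith(prefix)][:k]
        if target in suggestions:
            # Keystrokes spent: characters typed so far plus one selection.
            return 1 - (typed + 1) / len(target)
    return 0.0  # the sentence never surfaced; nothing saved

history = ["Could you open the window, please?",
           "Could we go outside for a while?"]
print(keystroke_savings("Could you open the window, please?", history))
# ~0.97: one selection keystroke instead of typing the whole sentence
```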
