Dundee and Cambridge universities are teaming up on a £1 million project to help people with no speech or complex disabilities converse with others.
Computer-based systems – called Voice Output Communication Aids (VOCAs) – use word prediction to speed up typing, a feature similar to that used on mobile phones or tablets for texting and emailing.
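At its simplest, this kind of word prediction ranks candidate completions of what the user has typed so far. The sketch below is purely illustrative and is not code from the Dundee/Cambridge project; the word list, ranking by frequency, and class names are assumptions made for the example.

```python
# Minimal sketch of prefix-based word prediction, the kind of feature
# described above. Illustrative only; the corpus and ranking are assumptions.
from collections import Counter

class WordPredictor:
    def __init__(self, corpus_words):
        # Count how often each word appears so frequent words rank first.
        self.frequencies = Counter(w.lower() for w in corpus_words)

    def suggest(self, prefix, limit=3):
        """Return up to `limit` words starting with `prefix`, most frequent first."""
        prefix = prefix.lower()
        matches = [w for w in self.frequencies if w.startswith(prefix)]
        return sorted(matches, key=lambda w: -self.frequencies[w])[:limit]

# Example usage with a tiny made-up corpus:
predictor = WordPredictor("the weather was wet but we waited for the weather to change".split())
print(predictor.suggest("we"))  # three words beginning with "we", most frequent first
```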
However, for those with complex disabilities, such as Professor Stephen Hawking, communicating by typing can still be extremely slow.
With as few as two words per minute generated, face-to-face conversation can be very difficult. Even at an average computer-aided communication rate of about 15 words per minute, conversation cannot keep pace with natural speech, which is roughly ten times faster.
It is estimated that more than a quarter of a million people in the UK alone are at risk of isolation because they are unable to speak and need some form of augmentative and alternative communication (AAC) to support them with a severe communication difficulty.
Now researchers at Dundee hope to aid communication by developing a system that will create a bank of stories and experiences and then predict when they will be used in a conversation.
“Despite four decades of VOCA development, users seldom go beyond basic needs-based utterances as communication rates remain, at best, ten times slower than natural speech,” said Rolf Black, one of the project investigators at the University of Dundee.
“This makes conversation almost impossible and is immensely frustrating for both the user and the listener. We want to improve that situation considerably by developing new systems which go far beyond word prediction.”
The team want to produce the first VOCA system that will not only predict words and phrases but will also provide access to extended conversation by predicting narrative text elements tailored to an ongoing conversation.
Dundee’s Professor Annalu Waller, who is lead investigator for this research project, said: “In current systems users sometimes pre-store monologue ‘talks’, but sharing personal experiences and stories interactively using VOCAs is rare.
“Being able to relate experience enables us to engage with others and allows us to participate in society. In fact, the bulk of our interaction with others is through the medium of conversational narrative such as sharing personal stories.”
By harnessing recent progress in machine learning and computer vision, they plan to build a VOCA that gives its non-speaking user quick access to speech tailored to the current conversation.
In order to predict what a person might want to say, this VOCA will learn from information it gathers automatically about conversational partners, previous conversations and events, and the locations in which these take place.
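One way to picture this, as a rough sketch rather than the project's actual design, is a system that scores stored personal stories against the current context: who the user is talking to, where they are, and what has come up in the conversation. All field names, scoring weights, and example data below are assumptions made for illustration.

```python
# Illustrative sketch (not the project's actual system) of ranking stored
# personal stories against the current conversational context.
from dataclasses import dataclass, field

@dataclass
class Story:
    text: str
    partners: set = field(default_factory=set)  # people linked to the story
    places: set = field(default_factory=set)    # locations linked to the story
    topics: set = field(default_factory=set)    # keywords describing the story

def rank_stories(stories, partner, place, topic_words):
    """Score each story by simple overlap with the current context."""
    def score(story):
        return (2 * (partner in story.partners)   # assumed weighting: partners count double
                + (place in story.places)
                + sum(t in story.topics for t in topic_words))
    return sorted(stories, key=score, reverse=True)

# Example: suggest the holiday story when chatting with Sam about travel.
stories = [
    Story("We got soaked hiking in the Lake District.",
          partners={"sam"}, places={"lake district"}, topics={"holiday", "rain", "hiking"}),
    Story("My physiotherapy appointment went well on Tuesday.",
          partners={"nurse"}, places={"clinic"}, topics={"health"}),
]
best = rank_stories(stories, partner="sam", place="home", topic_words={"holiday", "hiking"})
print(best[0].text)
```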
Mr Black added: “This does not mean that the computer will speak for a person. It will be more like a companion who, being familiar with aspects of your life and experiences, has some idea of what you might choose to say in a certain situation.”
The research project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC).