Bedri, A., Sahni, H., Thukral, P., Starner, T., Byrd, D., Presti, P., Reyes, G., Ghovanloo, M. and Guo, Z., 2015. Toward silent-speech control of consumer wearables. Computer, 48(10), pp. 54-62.

Conditions such as dysarthria and aphonia can be substantial communication barriers. People with these conditions can use Augmentative and Alternative Communication (AAC) systems, but such systems are much slower than speech. For a person who retains good mouth articulation, a silent-speech system can be a much faster way to communicate. Silent speech also extends to high-noise settings such as firefighting, public transport, and combat, and it can work the other way around: a jaw gesture, for example, can silently acknowledge a message in a stealth environment. The authors identify a vital requirement for such systems: they must be unobtrusive. The aim is a system indistinguishable from everyday devices like earphones.

The authors first explored the feasibility of the existing Tongue Drive System (TDS). They achieved 96 per cent accuracy in their test, demonstrating that such interfaces are workable. Inspired by this result, they developed the Tongue Magnet Interface (TMI) and the Outer Ear Interface (OEI). The TMI tracks a magnet placed on the tongue using Google Glass, while the OEI detects jaw movement with sensors that can be built into an ordinary earpiece. The authors then conducted a series of experiments with the two interfaces. In the first, using the TMI and OEI together, they achieved an average user-dependent recognition accuracy of 90 per cent. They next experimented with the OEI alone, since that would eliminate the need for a tongue piercing, and found that the OEI and TMI did not recognise the same phrases well, showing that the phrase set needs to be designed more carefully. With an improved OEI and simple jaw gestures, they then achieved 84 per cent overall classification accuracy, and user-dependent results were even better at 97 per cent. Each experiment informed both the construction of the phrase set and the design of the wearable. In a final investigation, they explored how heart rate can be used to calibrate the sensors for better performance.
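The paper does not spell out its recognition pipeline in detail, but the user-dependent classification setup can be illustrated with a minimal sketch: windowed statistics over a single sensor stream fed to an off-the-shelf classifier, with cross-validation on one subject's own recordings. The window length, feature set, choice of SVM, and the synthetic stand-in data below are all illustrative assumptions, not the authors' design.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(signal, win=128, hop=64):
    """Slice a 1-D sensor stream into windows and compute simple
    per-window statistics (mean, std, min, max)."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

# Hypothetical stand-in data: 40 recordings of two jaw gestures from
# one subject; the gesture class shifts the mean of the synthetic stream.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=40)
streams = [rng.standard_normal(512) + lbl for lbl in labels]

# One feature vector per recording (windows averaged for simplicity).
X = np.array([window_features(s).mean(axis=0) for s in streams])
y = labels

# User-dependent evaluation: train and test on the same subject's data.
clf = make_pipeline(StandardScaler(), SVC())
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```

Training and testing within a single subject's data is what makes the evaluation user-dependent; a user-independent figure would instead hold out entire subjects, which is typically much harder.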

With the evolution of deep learning over the past five years, one could now apply deep learning models to the same tasks. This might improve accuracy and could even reveal interesting features in the sensor data. The authors conducted the OEI study with only a single subject, and they concluded that the phrases used for silent-speech recognition need to be designed carefully, since the OEI and TMI recognise them differently. A follow-up study with more subjects and the improved OEI system could therefore determine how these phrases should be designed, helping practitioners who want to build this research into a working solution.
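As a sketch of that deep-learning direction, a small 1-D convolutional network could classify raw multi-channel sensor windows end to end, replacing hand-built features. The architecture, channel count, window length, and number of gesture classes below are assumptions for illustration, not anything from the paper.

```python
import torch
import torch.nn as nn

class JawGestureCNN(nn.Module):
    """Toy 1-D CNN over raw sensor windows, e.g. a hypothetical
    6-axis IMU stream from an earpiece."""
    def __init__(self, n_channels=6, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# Forward pass on a dummy batch: 8 windows of 256 samples each.
model = JawGestureCNN()
logits = model(torch.randn(8, 6, 256))
print(logits.shape)                    # torch.Size([8, 10])
```

Global average pooling over time keeps the model agnostic to the exact window length, which is convenient when gesture durations vary between subjects and phrases.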