One system, being developed for DARPA by Rick Brown of Worcester Polytechnic Institute in Massachusetts, relies on a sensor worn around the neck called a tuned electromagnetic resonator collar (TERC). Using sensing techniques developed for magnetic resonance imaging, the collar detects changes in capacitance caused by movement of the vocal cords, and is designed to allow speech to be heard above loud background noise.
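The sensing principle can be illustrated with the resonance formula for a tuned LC circuit, f = 1/(2π√(LC)): if vocal-cord movement changes the capacitance C, the collar's resonant frequency shifts by a measurable amount. The component values in this Python sketch are illustrative assumptions, not measurements from the TERC.

import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC circuit, in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 1 microhenry coil and roughly 100 pF of
# capacitance, nudged by a couple of percent as the vocal cords move.
L = 1e-6
for C in (100e-12, 101e-12, 102e-12):
    print(f"C = {C * 1e12:.0f} pF  ->  f = {resonant_frequency(L, C) / 1e6:.3f} MHz")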
DARPA is also pursuing an approach first developed at NASA's Ames lab, which involves placing electrodes called electromyographic sensors on the neck to detect changes in impedance during speech. A neural network processes the data and identifies the pattern of words. The sensor can even detect subvocal or silent speech. The speech data is sent to a computerised voice generator that recreates the speaker's voice.
DARPA envisages issuing the technology to soldiers on covert missions, crews in noisy vehicles or divers working underwater. But one day civilians might use a refined version to make themselves heard over the din of a factory or engine room, or a loud bar or party. More importantly, perhaps, the technology would allow people to use phones in places such as train carriages, cinemas or libraries without disturbing others. Brown has produced a TERC prototype, and an electromyographic prototype is in development.
However, both systems come at a cost. Because the words are produced by a computer, the receiver of the call would hear the speaker talking with an artificial voice. But for some it may be a price worth paying for a little peace and quiet.
NASA scientists have begun to computerize human, silent reading using nerve signals in the throat that control speech. In preliminary experiments, NASA scientists found that small, button-sized sensors, stuck under the chin and on either side of the ‘Adam’s apple,’ could gather nerve signals and send them to a processor and then to a computer that translates them into words.
"What is analyzed is silent, or sub-auditory, speech, such as when a person silently reads or talks to himself," said Chuck Jorgensen, a scientist whose team is developing subvocal speech recognition at NASA Ames Research Center in California.
To learn more about what is in the patterns of the nerve signals that control the vocal cords, muscles and tongue position, NASA Ames scientists are studying these complex signal patterns.
"We use an amplifier to strengthen the electrical nerve signals. These are processed to remove noise, and then we process them to see useful parts of the signals to show one word from another," Jorgensen said.
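What that chain might look like in code: amplify, band-pass filter away the noise, then reduce each recording to a feature vector that makes one word distinguishable from another. The sampling rate, filter band and windowing below are illustrative assumptions, not NASA's published parameters.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # assumed sampling rate for the nerve-signal recordings, in Hz

def preprocess(raw, gain=1000.0, low=20.0, high=450.0):
    """Amplify the raw nerve signal, then band-pass filter out drift and hum."""
    amplified = gain * np.asarray(raw, dtype=float)
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, amplified)

def feature_vector(clean, n_windows=16):
    """Mean absolute amplitude per time window: one crude way to keep
    only the useful parts of the signal for telling words apart."""
    return np.array([np.mean(np.abs(w)) for w in np.array_split(clean, n_windows)])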
After the signals are amplified, computer software ‘reads’ the signals to recognize each word and sound. "We use neural network software to learn and classify the words," Jorgensen said. "It’s recognizing the pattern of a word in the signal."
In their first experiment, scientists ‘trained’ special software to recognize six words and 10 digits that the researchers repeated subvocally. Initial word recognition results were an average of 92 percent accurate. The first subvocal words the system ‘learned’ were ‘stop,’ ‘go,’ ‘left,’ ‘right,’ ‘alpha’ and ‘omega’ and the digits ‘zero’ through ‘nine.’ Silently speaking these words, scientists conducted simple searches on the Internet by using a number chart that represents the alphabet to control a Web browser program.
"We took the alphabet and put it into a matrix -- like a calendar. We numbered the columns and rows, and we could identify each letter with a pair of single-digit numbers," Jorgensen said. "So we silently spelled out ‘NASA’ and then submitted it to a Web search engine. We electronically numbered the Web pages that came up as search results. We used the numbers again to choose Web pages to examine. This proved we could browse the Web without touching a keyboard," Jorgensen said.
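The spelling scheme is easy to reproduce. Here is a sketch of the row-and-column encoding Jorgensen describes; NASA did not publish its exact grid, so the six-column layout is an assumption.

import string

# Lay the 26 letters out in a grid, calendar-style. Any shape that keeps
# row and column numbers to a single digit would do.
COLS = 6
GRID = [string.ascii_uppercase[i:i + COLS] for i in range(0, 26, COLS)]

def encode(word):
    """Turn a word into the (row, column) digit pairs a user would
    subvocalise, one pair per letter."""
    pairs = []
    for letter in word.upper():
        index = string.ascii_uppercase.index(letter)
        pairs.append((index // COLS + 1, index % COLS + 1))
    return pairs

def decode(pairs):
    """Recover the word from the spoken digit pairs."""
    return "".join(GRID[r - 1][c - 1] for r, c in pairs)

digits = encode("NASA")
print(digits)           # [(3, 2), (1, 1), (4, 1), (1, 1)]
print(decode(digits))   # NASA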
Jorgensen and his team developed a system that captures and converts nerve signals in the vocal cords into speech. It is hoped that the technology will help those who have lost the ability to speak, as well as improve interface communications for people working in spacesuits and noisy environments.
The work is similar to the way cochlear implants work. These implants capture acoustic information for the hearing impaired. In Jorgensen’s experiment the neural signals that tell the vocal cords how to move are intercepted and rerouted. Cochlear implants do it the other way round, by converting acoustic information into neural signals that the brain can process. Both methods capitalize on the fact that neural signals provide a link to the analog environment in which we live.