Lip Reading Translation
  Consuelo Gonzalez, professional lip reader
  Medical • Legal • Archival
  Live Translation, Video Transcription

Frequently Asked Questions


1. What’s the difference between a lip reader and a speechreader?

Just terminology. The term “speechreading” was coined to give a clearer picture of how lipreading actually works, because we’re never reading just the lips. People who rely on speechreading to communicate are “reading” the whole face: the lips, the eyes and eyebrows, the cheeks, the jaw, and, of course, facial expressions. This is why it’s often harder to lipread people who are wearing dark glasses, or those who don’t (or can’t) show much facial expression when they talk.


2. If you’re at a restaurant, can you lip read the people at the next table and understand their conversation?

Probably not. Lipreading from the side (i.e., lipreading half of the face) is usually effective only at close range, and only with full knowledge of the subject matter. In live situations, even a little distance or a partly obscured face, coupled with unfamiliarity with the subject matter, will most likely render lipreading ineffective.


3. Can you lip read a video that doesn’t have sound?

Yes. Unlike a live conversation, a video recording can be replayed numerous times to refine comprehension. Lipreading translation of a video (“video transcription”) has applications in historical and archival film, police and security video, and some detective and investigative work. In these situations, comprehension depends heavily on the quality of the video and on what is known of the subject matter.

For more information see the Video Transcription page.


4. What sorts of things make it harder or easier to lip read someone?

There are many factors that can affect any lipreading situation, potentially making comprehension more or less successful. These factors fall into four general categories:

The Speaker: Does the speaker have an accent? Braces? Dark glasses? A speech impediment? Does the speaker move a lot? Hold his hands in front of his face? Speak very rapidly?

The Message: Does the message contain highly technical language or specific jargon? Are there foreign language words or phrases? Is it rambling or truncated? Is it grammatically correct? Is there slang or a specific dialect?

The Environment: How far away is the speaker from the lip reader? Is there enough light? Is there anything obstructing the line of sight? Is there any distracting background movement?

The Lip Reader: Is the lip reader skilled at lip reading? Is she fatigued? Is she familiar with the subject and vocabulary of the message? Does she have good eyesight?


5. I’ve read somewhere that even the best lip readers can only understand 30 percent of the spoken word. Is this true?

No, it is not true. The fact is that any speechreader, in any instance, will understand anywhere from 0 to 100 percent of what is said, depending on the circumstances. There is no statistic or percentage regarding comprehension that can be applied to all speechreaders, in all situations.

So, where does this “30 percent fact” come from?

This so-called “fact” is a fascinating urban myth that has spread with the growth of the Internet and with increasingly lax documentation of sources.

During the 1950s, a study was conducted at the John Tracy Clinic in Los Angeles, California. In the study, speech pathology students – all with normal hearing – were asked to lip-read isolated phonemes (isolated vowels or consonants of the English language such as “sh,” or “oo,” or “r”). Of those phonemes, these students were able to correctly identify (through visual cues alone) approximately one-third. The finding of the study was that, in general, no more than about one-third (or "30 percent") of the individual phonemes of the English language are visible on the lips.

It’s important to note that this study neither measured nor made any claims about the comprehension abilities of speechreaders, and that speechreaders do not read individual phonemes, but whole words in the context of sentences.

By the 1970s, the “30 percent fact” began appearing in print, but it had now been changed (either mistakenly or deliberately) into the erroneous claim that “even the best speechreaders can understand no more than 30 percent of what people say using speechreading alone.” Other percentages have been claimed as well; regardless of the figure, any such claim is a distortion of the original study’s finding, which had nothing to do with speechreading comprehension.

Over the past 30-plus years, hundreds of books and articles, many of them now available on the Internet, have repeated such erroneous claims as fact.

In 2002, Daniel Greene, a student at National University, conducted an extensive Internet research project on this very topic,* in which he contacted the writers and editors of scores of articles claiming that speechreaders can understand no more than “X percent” of the spoken word. Not one of them could identify the specific source for the claim. This is not surprising, as there is no source to support it: the original study made no such claim.

* Daniel Greene, “Collective Truths: Unquestioned Statistics Regarding Speechreading,” unpublished research paper, December 2002.


Have more questions? Feel free to contact me.