Just terminology. The term "speechreading" was coined to give a clearer picture of how lip reading actually works, because we're never reading just the lips. People who rely on speechreading to communicate are "reading" the whole face: the lips, the eyes and eyebrows, the cheeks, the jaws, the neck, breathing patterns, and, of course, facial expressions. This is why it's often harder to lipread people who are wearing dark glasses, or those who don't (or can't) show much facial expression when they talk.
An Oral Interpreter, or Oral Transliterator, is a skilled professional whose job is to listen to auditory information (speech, environmental sounds) and relay it to people who rely on lip reading for communication. Oral Interpreters are trained to "speak" silently and to articulate precisely so that lip readers can easily understand them. Most Oral Interpreters have excellent hearing, and may or may not be skilled at reading lips themselves.
A Lip Reading Translator is a skilled professional whose job is to translate non-auditory spoken language into auditory spoken language, or to transcribe non-auditory spoken language into written language. The level of skill in lip reading that this requires usually means that the translator has a long history of relying on lip reading for communication.
Maybe, maybe not. Lip reading in profile is usually only effective at close range, and with full knowledge of the subject matter. In live situations, even a little distance or a partially obscured face, combined with unfamiliarity with the subject matter, will most likely render lip reading ineffective.
Yes. Unlike a live conversation, a video recording offers the opportunity to replay a conversation numerous times in order to refine comprehension. Lip reading translation of a video ("video transcription") has applications in historical and archival film documentation, police and security video review, and some detective and investigative work. In these situations, how much can be recovered depends heavily on the quality of the video and what is known of the context and subject matter.
For more information see the Video Transcription page.
There are many factors that can affect any lip reading situation, potentially making comprehension more or less successful. These factors fall into four general categories:
• Factors having to do with The Speaker: Does the speaker have an accent? Braces? Dark glasses? A speech impediment? Does the speaker move a lot? Hold his hands in front of his face? Speak very rapidly?
• Factors having to do with The Message: Does the message contain highly technical language or industry specific jargon? Are there foreign language words or phrases? Is it rambling or truncated? Is it grammatically correct? Is there slang or a specific dialect?
• Factors having to do with The Environment: How far away is the speaker from the lip reader? Is there enough light? Is there anything obstructing the line of sight? Is there any distracting background movement? Does the camera move?
• Factors having to do with The Lip Reader: Is the lip reader skilled at lip reading? Is she familiar with the subject and vocabulary of the message? Does she have good eyesight?
No, it is not true. The fact is that any speechreader, in any instance, will understand anywhere from 0 to 100 percent of what is said, depending on the circumstances. There is no statistic or percentage regarding comprehension that can be applied to all speechreaders, in all situations.
So, where does this "30 percent fact" come from?
This so-called "fact" is a fascinating urban myth that has spread with the increased use of the Internet and World Wide Web, and with the relaxed documentation of sources.
During the 1950s, a study was conducted at the John Tracy Clinic in Los Angeles, California. In the study, speech pathology students — all with normal hearing — were asked to lip-read isolated phonemes (isolated vowels or consonants of the English language such as "sh," or "oo," or "r"). Of those phonemes, these students were able to correctly identify (through visual cues alone) approximately one-third. The finding of the study was that, in general, no more than about one-third (or "30 percent") of the individual phonemes of the English language are visible on the lips.
It's important to note that this study made no measurement of and no claims regarding the comprehension abilities of speechreaders, and that speechreaders do not read individual phonemes, but whole words in the context of sentences.
By the 1970s, the "30 percent fact" began appearing in print, but it had now been changed (either mistakenly or deliberately) to the erroneous claim that "even the best speechreaders can understand no more than 30 percent of what people say using speechreading alone." Other percentages have variously been claimed; however, regardless of the percentage, any such claim is a distortion of the original study's finding, which had nothing to do with speechreading.
Over the past 30-plus years, hundreds of books and articles have included such erroneous claims as fact, many of which can now be found on the Internet.
In 2002, Daniel Greene, a student at National University, conducted an extensive Internet research project on this very topic*, in which he contacted the writers and editors of scores of articles that included a claim that speechreaders can understand no more than "X percent" of the spoken word. Not one of them could identify what their specific source for the claim had been. This is not surprising, as there is no source to support it — the original study made no such claim.
* From the unpublished research paper "Collective Truths: Unquestioned Statistics Regarding Speechreading," Daniel Greene, December 2002.