Augmenting Conversation between the Deaf and Hearing Community
Telesign ASL Translator
CopyCat ASL Game
Telesign attempts to create a mobile American Sign Language (ASL) to English translator that can aid face-to-face communication between a
hearing and a deaf person when an interpreter is not feasible (for example, after a car accident, transferring between gates at an
airport, or searching for an apartment). The project is inspired by phrase books designed for travelers visiting countries where they
do not know the local language. The traveler searches the phrase book for a given scenario (for example, asking for the nearest restroom)
and speaks the appropriate phrase using a phonetic transcription. The phrase is designed to elicit non-verbal gestures from the traveler's
conversation partner (e.g. "Can you point in the direction of the nearest restroom?") so that the traveler can understand the response.
Telesign provides support for similar scenarios but automates the process of finding the correct English phrase. The user signs the
phrase she wants translated, and the system tracks the signer's hands and recognizes the ASL. The system determines which of its
pre-programmed English phrases the sign most closely matches and presents a list to the user in a head-up display. The signer selects the
English phrase she wants, and the system speaks it aloud.
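This interaction amounts to a short pipeline: recognize the sign, rank the pre-programmed phrases, present the best matches, and speak the selection. The Python sketch below illustrates one way to structure it; the phrasebook contents, the function names, and the string-similarity ranking are illustrative stand-ins rather than Telesign's actual matching code.

    import difflib

    # All names below are illustrative stand-ins, not Telesign's API. The
    # phrasebook holds the pre-programmed English phrases; string
    # similarity stands in for the recognizer's own match scores.
    PHRASEBOOK = [
        "Can you point in the direction of the nearest restroom?",
        "Can you point toward my departure gate?",
        "Please write down the address for me.",
    ]

    def rank_phrases(gloss, phrasebook, n=3):
        """Order the pre-programmed phrases by similarity to the gloss."""
        return sorted(phrasebook,
                      key=lambda p: difflib.SequenceMatcher(
                          None, gloss.lower(), p.lower()).ratio(),
                      reverse=True)[:n]

    def translate(gloss, speak):
        candidates = rank_phrases(gloss, PHRASEBOOK)
        for i, phrase in enumerate(candidates):  # shown in the head-up display
            print(f"[{i}] {phrase}")
        speak(candidates[0])  # stand-in for the signer's selection, then TTS

    translate("restroom where", print)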
Telesign uses a combination of accelerometers embedded in wrist bracelets and a camera in a hat to track the signer's hands. With the
current system, the signer uses the jog wheel from a small computer mouse attached to one of the accelerometer bracelets to control the
interface. When the user wants to sign a phrase, she presses the jog wheel button, and the system begins capturing data from the camera and
the accelerometers. When the phrase is finished, the signer presses the jog wheel again, and the recognizer processes the captured data.
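This press-to-start, press-to-stop control is essentially a two-state machine. The sketch below shows one plausible structure for it; the sensor callback and recognizer hook are assumptions, not the system's actual driver code.

    # Two-state capture controller: the first jog-wheel press starts
    # buffering camera and accelerometer frames; the second press stops
    # capture and hands the buffer to the recognizer. The sensor callback
    # and recognizer hook are hypothetical stand-ins.
    class CaptureController:
        def __init__(self, recognizer):
            self.recognizer = recognizer
            self.capturing = False
            self.frames = []

        def on_jog_wheel_press(self):
            if not self.capturing:
                self.frames = []        # start a fresh phrase
                self.capturing = True
                return None
            self.capturing = False      # phrase finished
            return self.recognizer(self.frames)

        def on_sensor_frame(self, frame):
            # Called for each synchronized camera + accelerometer sample.
            if self.capturing:
                self.frames.append(frame)

    ctrl = CaptureController(recognizer=lambda frames: f"{len(frames)} frames captured")
    ctrl.on_jog_wheel_press()           # first press: begin capture
    for sample in range(30):
        ctrl.on_sensor_frame(sample)
    print(ctrl.on_jog_wheel_press())    # second press: run recognition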
We use our hidden Markov model-based Georgia Tech Gesture Toolkit (GT2K) to recognize the signed phrase. We have recently demonstrated a
phrase-level continuous recognizer for the system that draws the phrases it recognizes from a 78-word vocabulary.
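We do not reproduce GT2K's models here, but the core of HMM recognition is to score the observed feature sequence against each candidate model, for example with the Viterbi algorithm, and keep the best matches. The toy discrete models below are purely illustrative; the real recognizer operates on continuous camera and accelerometer features.

    import math

    def viterbi_log_likelihood(obs, start_p, trans_p, emit_p):
        """Log-probability of the best state path through one discrete HMM."""
        log = lambda x: math.log(x) if x > 0 else float("-inf")
        v = [log(start_p[s]) + log(emit_p[s][obs[0]]) for s in range(len(start_p))]
        for o in obs[1:]:
            v = [max(v[r] + log(trans_p[r][s]) for r in range(len(v)))
                 + log(emit_p[s][o]) for s in range(len(v))]
        return max(v)

    # Toy two-state models for two "signs" over a quantized feature
    # alphabet {0, 1}; each entry is (start, transition, emission).
    models = {
        "restroom": ([0.9, 0.1], [[0.7, 0.3], [0.2, 0.8]], [[0.8, 0.2], [0.3, 0.7]]),
        "play":     ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.2, 0.8], [0.6, 0.4]]),
    }

    obs = [0, 0, 1, 1]  # quantized features from one signed word
    best = max(models, key=lambda w: viterbi_log_likelihood(obs, *models[w]))
    print(best)         # prints the word whose model best explains the data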
Once the sign is recognized, the system presents the user with candidate translations in English. In the future we may use sign
icons depending on the signer's level of familiarity with written English.
American Sign Language is a language distinct from English, with a significantly different grammar and lexicon. ASL is also
the preferred language of the deaf community, as it is as fast as spoken language and conveys many nuances that are critical in
communication. Yet few hearing people sign, and often the deaf must communicate with hearing people through slow, handwritten notes in
English, which is a second language for them. We are beginning studies to compare the effectiveness of communicating between a
hearing and a deaf person using a phrase level translator, handwritten notes, and typing on a small PDA.
Ninety percent of deaf children are born to hearing parents who do not know sign language or have only limited proficiency in it. Unlike hearing children
of English-speaking parents or deaf children of signing parents, these children often lack the serendipitous access to language at home which
is necessary in developing linguistic skills during the "critical period" of language development. Often these children's only
exposure to language is from signing at school. CopyCat is a game that uses our sign language recognition system to augment early
classroom teaching for developing ASL skills in young deaf children.
CopyCat is designed both as a platform to collect gesture data for our ASL recognition system and as a practical application which helps deaf
children acquire language skills while they play the game. The system uses a video camera and wrist-mounted accelerometers as the primary
sensors. In CopyCat, the user and the character of the game, Iris the cat, communicate with ASL. With the help of ASL linguists and
educators, the game is designed with a limited, age-appropriate phrase set. For example, the child will sign to Iris, "you go play
balloon" (glossed from ASL). If the child signs poorly, Iris looks puzzled, and the child is encouraged to attempt the phrase again. If
the child signs clearly, Iris frolics and plays with a red balloon. If the child cannot remember the correct phrase to direct Iris, she
can click on a button bearing the picture of the object with which she would like Iris to play. The system shows a short video with a teacher
demonstrating the correct ASL phrase. The child can then mimic the teacher to communicate with Iris.
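One plausible shape for this turn-taking logic is sketched below; the confidence threshold, animation names, and video path are assumptions made for illustration, not the game's actual assets.

    # One turn of CopyCat, with hypothetical names: the confidence
    # threshold, animation labels, and video path are assumptions.
    ACCEPT_THRESHOLD = 0.8  # assumed cutoff for a clearly signed phrase

    def copycat_turn(recognized, confidence, target, play_animation):
        if recognized == target and confidence >= ACCEPT_THRESHOLD:
            play_animation("iris_frolics_with_balloon")  # success: Iris plays
            return True
        play_animation("iris_puzzled")                   # encourage another try
        return False

    def help_button(target_object, play_video):
        # Clicking the object's picture plays a short clip of a teacher
        # demonstrating the correct ASL phrase for the child to mimic.
        play_video(f"teacher_demos/{target_object}.avi")

    copycat_turn("you go play balloon", 0.91, "you go play balloon", print)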
Gesture-based interaction expands the possibilities for deaf educational technology by allowing children to interact with the
computer in their native language. An initial goal of the system, suggested by our partners at the Atlanta Area School for the Deaf, is
to elicit phrases that involve three or four signs from children who normally sign in phrases of one or two signs. This task encourages
more complex sign construction and helps develop short-term memory. In the current game there are eight phrases per level, and the child
must correctly sign each phrase before moving on to the next level.
To date, CopyCat has used a "Wizard of Oz" approach where an interpreter simulates the computer recognizer. This method allows
research into the development of an appropriate game interface as well as data collection to train our hidden Markov model (HMM)-based ASL
recognition system. Preliminary off-line tests of the recognition system have shown promising results for user-independent recognition
of data from our ten-year-old subjects, and we hope to perform experiments with a live recognition system soon. In addition, our
pilot studies have allowed us to create a compelling game for the students, who often ask to continue playing even after they have
completed all levels of the game.
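To make the Wizard-of-Oz arrangement concrete, the following sketch shows how an interpreter's verdict could stand in for the recognizer while every attempt is logged as labeled training data; the prompt and file layout are assumptions rather than our actual tooling.

    import json, os, time

    # Wizard-of-Oz stand-in for the recognizer: a hidden interpreter keys
    # in a verdict, the game reacts as though recognition had run, and the
    # raw sensor frames are saved as labeled HMM training data. The prompt
    # and file layout are illustrative assumptions.
    def wizard_recognize(frames, target_phrase, sample_dir="samples"):
        verdict = input(f"Did the child sign '{target_phrase}' correctly? [y/n] ")
        correct = verdict.strip().lower().startswith("y")
        os.makedirs(sample_dir, exist_ok=True)
        path = os.path.join(sample_dir, f"{time.time():.0f}.json")
        with open(path, "w") as f:  # log every attempt for later training
            json.dump({"phrase": target_phrase,
                       "correct": correct,
                       "frames": frames}, f)
        return correct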