3D Modeling of the Mexican Sign Language for a Speech-to-Sign Language System
Abstract
Many people live with communication impairments, deafness being one of the most common. Deaf people use Sign Language (SL) to communicate, and translation systems (Speech/Text-to-SL) have been developed to assist this task. However, because SLs depend on countries and cultures, grammars, vocabularies, and signs differ even between places with similar spoken languages. In Mexico, work in this field is very limited, and any development must consider the characteristics of the Mexican Sign Language (LSM). In this paper, we present our approach to a Mexican Speech-to-SL system, integrating 3D modeling of the LSM with a multi-user Automatic Speech Recogniser (ASR) with dynamic adaptation. The 3D models (avatar) were developed by means of motion capture of an LSM performer: a Kinect was used as the 3D sensor for the motion-capture process, and DAZ Studio 4 was used for the animation. The multi-user ASR was developed with HTK, using Matlab as the programming platform for the Graphical User Interface (GUI). Experiments with a vocabulary of 199 words were performed to validate the system. The ASR achieved an accuracy of 96.2% on the recognition and interpretation into LSM of 70 words and 20 spoken sentences, and the 3D avatar presented clearer sign realizations than standard video recordings of a human LSM performer.
Keywords
Mexican Sign Language; Automatic Speech Recognition; Human-Computer Interaction