Simulation of speech by identifying and classifying dynamic gestures

Abstract
A reliable system that identifies sign-language gestures does not currently exist. The most common alternative available to speech- and hearing-impaired people is an interpreter who voices their opinions as they gesture. Devices such as text-to-speech converters and speech-generating devices do exist, but their lack of speed means that prerecorded speech is usually played back rather than speech being generated in real time. This motivates a system that processes gestures in real time for deployment in environments such as auditoriums and classrooms, which the proposed system aims to provide. To achieve an acceptable level of accuracy, the proposal advocates the use of Microsoft Kinect, a depth sensor originally introduced for entertainment purposes such as gaming but now deployed in a wide range of industrial and medical applications since its broader potential was recognized. To identify dynamic gestures in real time, the proposed system uses context-specific and trigger-based mapping.
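The abstract does not detail how the trigger-based, context-specific mapping works, so the following is only a minimal sketch of one plausible reading: a trigger pose (here, assumed to be raising the right hand above the head) starts a capture window over skeleton frames, and the buffered trajectory is classified and mapped to a phrase through a context-dependent lookup table. The frame layout, trigger threshold, toy classifier, and gesture table are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of trigger-based, context-specific gesture mapping.
# Skeleton frames are assumed to be dicts of joint -> (x, y, z) tuples,
# e.g. decoded from a Kinect skeleton stream with the y-axis pointing up.

from collections import deque

TRIGGER_HEIGHT = 0.15   # right hand this far above the head starts capture
WINDOW = 30             # frames buffered per gesture (~1 s at 30 fps)

# Context-specific mapping: the same motion can voice different phrases
# depending on the deployment environment (illustrative entries only).
GESTURE_MAP = {
    "classroom":  {"wave": "May I ask a question?", "point": "Please repeat that."},
    "auditorium": {"wave": "Hello, everyone.",      "point": "Next slide, please."},
}

def classify(trajectory):
    """Toy classifier: label the buffered hand trajectory by net x-motion."""
    dx = trajectory[-1][0] - trajectory[0][0]
    return "wave" if abs(dx) > 0.2 else "point"

def run(frames, context="classroom"):
    """Consume skeleton frames; yield a phrase after each triggered gesture."""
    buffer, capturing = deque(maxlen=WINDOW), False
    for frame in frames:
        hand, head = frame["hand_right"], frame["head"]
        if not capturing and hand[1] > head[1] + TRIGGER_HEIGHT:
            capturing = True           # trigger pose detected: start buffering
            buffer.clear()
        if capturing:
            buffer.append(hand)
            if len(buffer) == WINDOW:  # window full: classify, map, reset
                yield GESTURE_MAP[context][classify(list(buffer))]
                capturing = False
```

In a real system the toy net-motion classifier would be replaced by a trained model over the joint trajectories, and the emitted phrase would be passed to a speech synthesizer; the trigger mechanism is what keeps the pipeline cheap enough to run in real time.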
