Visual speech recognition by recurrent neural networks

Abstract
One of the major drawbacks of current acoustically based speech recognizers is that their performance deteriorates drastically in the presence of noise. Our focus is to develop a computer system that performs speech recognition based on visual information about the speaker. The system automatically extracts visual speech features through image-processing techniques that operate on facial images taken in a normally illuminated environment. To cope with the temporal dynamics of speech patterns as well as the spatial variations among individual patterns, the proposed recognition scheme uses a recurrent neural network architecture. By specifying the desired behavior of the network when it is presented with exemplar sequences, the recurrent network is trained with no more than feedforward complexity. The desired behavior is based on characterizing a given word by well-defined segments. Adaptive segmentation is employed to segment the training sequences of a given class. This technique iterates two steps: first, the sequences are segmented individually; then, a generalized version of dynamic time warping is used to align the segments of all sequences. At each iteration, the weights of the distance functions used in the two steps are updated so as to minimize a segmentation error. The system has been implemented and tested on a small set of words, and the results are satisfactory. In particular, the system is able to distinguish between words that share common segments, and it tolerates, to a great extent, duration variations among words of the same class. © 1998 SPIE and IS&T.
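To make the iterative two-step scheme concrete, the following is a minimal sketch, not the authors' implementation, under several simplifying assumptions: each training sequence is a NumPy array of shape (frames, features); individual segmentation cuts at the largest weighted frame-to-frame changes; plain dynamic time warping (rather than the paper's generalized version) aligns every sequence to the first one; and the shared distance-function weights are updated by a crude finite-difference descent on a segmentation error that measures boundary disagreement. All function names (segment, dtw_map, seg_error, adapt_weights) are hypothetical stand-ins.

```python
import numpy as np

def segment(seq, w, k):
    """Cut a (frames x features) sequence into k >= 2 segments at the
    k-1 largest weighted frame-to-frame changes."""
    change = np.abs(np.diff(seq, axis=0)) @ w          # weighted change per frame
    cuts = np.sort(np.argsort(change)[-(k - 1):]) + 1
    return np.concatenate(([0], cuts, [len(seq)]))     # boundary frame indices

def dtw_map(ref, seq, w):
    """Plain DTW under a weighted L1 frame distance; returns, for each
    frame of ref, the seq frame it is aligned to."""
    n, m = len(ref), len(seq)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.abs(ref[i - 1] - seq[j - 1]) @ w
            D[i, j] = d + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    mapping, i, j = np.zeros(n, dtype=int), n, m
    while i > 0 and j > 0:                             # backtrack the optimal path
        mapping[i - 1] = j - 1
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return mapping

def seg_error(seqs, w, k):
    """Mean disagreement between each sequence's own boundaries and the
    boundaries of seqs[0] warped onto it through DTW."""
    bounds = [segment(s, w, k) for s in seqs]
    err = 0.0
    for s, b in zip(seqs[1:], bounds[1:]):
        warped = dtw_map(seqs[0], s, w)[bounds[0][1:-1]]
        err += np.abs(warped - b[1:-1]).mean() / len(s)
    return err / max(len(seqs) - 1, 1)

def adapt_weights(seqs, k, iters=20, step=0.1, eps=1e-2):
    """Alternate segmentation and alignment, nudging the distance weights
    downhill on the segmentation error via a finite-difference gradient."""
    w = np.ones(seqs[0].shape[1]) / seqs[0].shape[1]
    for _ in range(iters):
        grad = np.array([(seg_error(seqs, w + eps * e, k)
                          - seg_error(seqs, w - eps * e, k)) / (2 * eps)
                         for e in np.eye(len(w))])
        w = np.clip(w - step * grad, 1e-6, None)
        w /= w.sum()                                   # keep weights normalized
    return w, seg_error(seqs, w, k)
```

For instance, on synthetic data such as seqs = [np.random.rand(30 + 5 * i, 4) for i in range(3)], calling adapt_weights(seqs, k=4) drives the feature weighting toward one under which the individually found boundaries and the DTW-aligned boundaries agree most closely, mirroring the role of the weight update in the paper; the generalized warping and the exact error and update rule used by the authors are not specified in the abstract.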