Advances in robust multimodal interface design
- 15 September 2003
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Computer Graphics and Applications
- Vol. 23 (5), 62-68
- https://doi.org/10.1109/mcg.2003.1231179
Abstract
The author discusses enhanced robustness for three multimodal interface types: speech and pen, speech and lip movements, and multibiometric (physiological and behavioral) input.