A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition
- 13 June 2019
- Research article
- Published by Elsevier BV in Information Fusion
- Vol. 53, 209-221
- https://doi.org/10.1016/j.inffus.2019.06.019
Abstract
No abstract available
Funding Information
- Deanship of Scientific Research at King Saud University (318)
This publication has 79 references indexed in Scilit:
- A framework for collaborative computing and multi-sensor data fusion in body sensor networks. Information Fusion, 2015
- A Review and Meta-Analysis of Multimodal Affect Detection Systems. ACM Computing Surveys, 2015
- Feature Extraction and Selection for Emotion Recognition from EEG. IEEE Transactions on Affective Computing, 2014
- Power-Aware Activity Monitoring Using Distributed Wearable Sensors. IEEE Transactions on Human-Machine Systems, 2014
- BodyCloud: A SaaS approach for community Body Sensor Networks. Future Generation Computer Systems, 2014
- Static and dynamic 3D facial expression recognition: A comprehensive survey. Image and Vision Computing, 2012
- IEMOCAP: interactive emotional dyadic motion capture database. Language Resources and Evaluation, 2008
- The emotional brain. Nature Reviews Neuroscience, 2004
- Automatic facial expression analysis: a survey. Pattern Recognition, 2003
- Fusion of audio and video information for multi modal person authentication. Pattern Recognition Letters, 1997