Fusing audio, visual and textual clues for sentiment analysis from multimodal content
- 1 January 2016
- journal article
- Published by Elsevier BV in Neurocomputing
- Vol. 174, 50-59
- https://doi.org/10.1016/j.neucom.2015.01.095
Abstract
No abstract available.
This publication has 43 references indexed in Scilit:
- Acoustic template-matching for automatic emergency state detection: An ELM based algorithm. Neurocomputing, 2015
- EmoSenticSpace: A novel framework for affective common-sense reasoning. Knowledge-Based Systems, 2014
- Circular-ELM for the reduced-reference assessment of perceived image quality. Neurocomputing, 2013
- Sentic Computing: Exploitation of Common Sense for the Development of Emotion-Sensitive Systems. Lecture Notes in Computer Science, 2010
- Multimodal information fusion application to human emotion recognition from face and speech. Multimedia Tools and Applications, 2009
- Bi-modal emotion recognition from expressive face and body gestures. Journal of Network and Computer Applications, 2007
- Multimodal emotion recognition from expressive faces, body gestures and speech. Published by Springer Science and Business Media LLC, 2006
- Brain-computer interaction research at the computer vision and multimedia laboratory, University of Geneva. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2006
- Toward the simulation of emotion in synthetic speech: A review of the literature on human vocal emotion. The Journal of the Acoustical Society of America, 1993
- More evidence for the universality of a contempt expression. Motivation and Emotion, 1992