A multimodal convolutional neuro-fuzzy network for emotion understanding of movie clips
- 1 October 2019
- research article
- Published by Elsevier BV in Neural Networks
- Vol. 118, 208-219
- https://doi.org/10.1016/j.neunet.2019.06.010
Abstract
No abstract available.
Funding Information
- Institute for Information & communications Technology Promotion (IITP), South Korea (R7124-16-0004)
- National Research Foundation of Korea (NRF), South Korea (NRF-2016R1A2A2A05921679)
This publication has 22 references indexed in Scilit:
- Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 2016
- Joint Visual-Textual Sentiment Analysis with Deep Neural Networks. Published by Association for Computing Machinery (ACM), 2015
- DevNet: A Deep Event Network for multimedia event detection and evidence recounting. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Deep Learning: Methods and Applications. Foundations and Trends® in Signal Processing, 2014
- On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues. Journal on Multimodal User Interfaces, 2009
- A new SVM based emotional classification of image. Journal of Electronics (China), 2005
- Affective computing: challenges. International Journal of Human-Computer Studies, 2003
- Defect detection in textured materials using Gabor filters. IEEE Transactions on Industry Applications, 2002
- The min-max composition rule and its superiority over the usual max-min composition rule. Fuzzy Sets and Systems, 1998
- Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 1987