Real Time Facial Expression Recognition Using Webcam and SDK Affectiva
Open Access
- 1 January 2018
- research article
- Published by Universidad Internacional de La Rioja in International Journal of Interactive Multimedia and Artificial Intelligence
- Vol. 5 (1), 7-15
- https://doi.org/10.9781/ijimai.2017.11.002
Abstract
Facial expression is an essential part of communication. For this reason, the issue of human emotions evaluation using a computer is a very interesting topic, which has gained more and more attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and analysing customer satisfaction etc. Emotions determination (recognition process) is often performed in 3 basic phases: face detection, facial features extraction, and last stage - expression classification. Most often you can meet the so-called Ekman's classification of 6 emotional expressions (or 7 - neutral expression) as well as other types of classification - the Russell circular model, which contains up to 24 or the Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years, but new methods and algorithms have also emerged that can determine the ViolaJones detector with greater accuracy and lower computational demands. Therefore, there are currently various solutions in the form of the Software Development Kit (SDK). In this publication, we point to the proposition and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process, work fast and stable in real time. That's why we've decided to take advantage of existing Affectiva SDKs. By using the classic webcamera we can detect facial landmarks on the image automatically using the Software Development Kit (SDK) from Affectiva. Geometric feature based approach is used for feature extraction. The distance between landmarks is used as a feature, and for selecting an optimal set of features, the brute force method is used. The proposed system uses neural network algorithm for classification. 
The proposed system recognizes six (or seven) facial expressions, namely anger, disgust, fear, happiness, sadness, surprise, and neutral. We do not want to report only the success rate of our solution; we also want to explain how we obtained these measurements, the results we achieved, and how those results have significantly influenced the direction of our future research.
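The feature-extraction and feature-selection steps described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the landmark coordinates would come from the Affectiva SDK in the real system (here any list of (x, y) tuples works), `pairwise_distances`, `brute_force_select`, and `centroid_accuracy` are hypothetical helper names of ours, and a toy nearest-centroid scorer stands in for the paper's neural network classifier.

```python
import itertools
import math

def pairwise_distances(landmarks):
    """Euclidean distances between every pair of (x, y) landmark points.

    In the paper the landmark positions are reported by the Affectiva SDK;
    here any list of (x, y) tuples will do.
    """
    return [
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in itertools.combinations(landmarks, 2)
    ]

def centroid_accuracy(features, labels):
    """Toy stand-in for the paper's neural network: score a feature set by
    nearest-class-centroid accuracy on the training samples themselves."""
    classes = sorted(set(labels))
    centroids = {}
    for c in classes:
        rows = [f for f, l in zip(features, labels) if l == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    correct = 0
    for f, l in zip(features, labels):
        pred = min(
            classes,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])),
        )
        correct += pred == l
    return correct / len(labels)

def brute_force_select(samples, labels, k, score=centroid_accuracy):
    """Brute-force feature selection: exhaustively score every k-sized
    subset of feature indices and keep the best-scoring one."""
    n_features = len(samples[0])
    best_subset, best_score = None, -1.0
    for subset in itertools.combinations(range(n_features), k):
        s = score([[f[i] for i in subset] for f in samples], labels)
        if s > best_score:
            best_subset, best_score = subset, s
    return best_subset, best_score
```

Note that the brute-force search is exhaustive, so its cost grows combinatorially with the number of pairwise distances; it is only practical for selecting small feature subsets, which is consistent with the search for a compact optimal feature set described above.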