Results: 186

(searched for: Review on Emotion Recognition Using Facial Expressions)
Published: 1 June 2021
Animals, Volume 11; doi:10.3390/ani11061643

Abstract:
Automated recognition of human facial expressions of pain and emotions is to a certain degree a solved problem, using approaches based on computer vision and machine learning. However, the application of such methods to horses has proven difficult. Major barriers are the lack of sufficiently large, annotated databases for horses and difficulties in obtaining correct classifications of pain because horses are non-verbal. This review describes our work to overcome these barriers, using two different approaches. One involves the use of a manual, but relatively objective, classification system for facial activity (Facial Action Coding System), where data are analyzed for pain expressions after coding using machine learning principles. We have devised tools that can aid manual labeling by identifying the faces and facial keypoints of horses. This approach provides promising results in the automated recognition of facial action units from images. The second approach, recurrent neural network end-to-end learning, requires less extraction of features and representations from the video but instead depends on large volumes of video data with ground truth. Our preliminary results clearly suggest that dynamics are important for pain recognition and show that combinations of recurrent neural networks can classify experimental pain in a small number of horses better than human raters.
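
As a rough illustration of the second approach (recurrent end-to-end learning over video), the sketch below runs an LSTM over per-frame feature vectors to produce a clip-level pain/no-pain decision. It is a minimal sketch only: the feature dimension, clip length, and two-class head are hypothetical choices, not values from the review.

```python
import torch
import torch.nn as nn

class PainLSTM(nn.Module):
    """Minimal recurrent classifier over per-frame feature vectors (sketch)."""
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(x)      # h_n: (1, batch, hidden)
        return self.head(h_n[-1])       # logits: (batch, n_classes)

# Toy usage with random "video" features: 4 clips of 30 frames each
model = PainLSTM()
logits = model(torch.randn(4, 30, 512))
print(logits.shape)  # torch.Size([4, 2])
```

Keeping the last hidden state retains the temporal dynamics that the authors report as important for pain recognition.
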
Neta Yitzhak, Yoni Pertzov
Social and Personality Psychology Compass; doi:10.1111/spc3.12621

The publisher has not yet granted permission to display this abstract.
Chengchen Lyu, Hui Chen, Xiaolan Peng, Tong Xu, Hongan Wang
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems; doi:10.1145/3411763.3451578

The publisher has not yet granted permission to display this abstract.
Adrien Pizano, Manuel Bouvard, Anouck Amestoy
Frontiers in Psychiatry, Volume 12; doi:10.3389/fpsyt.2021.634756

Abstract:
The ability to recognize and express emotions from facial expressions is essential for successful social interactions. Facial Emotion Recognition (FER) and Facial Emotion Expressions (FEEs), both of which seem to be impaired in Autism Spectrum Disorders (ASD) and contribute to socio-communicative difficulties, form part of the diagnostic criteria for ASD. Only a few studies have focused on the processing of FEEs, and the rare behavioral studies of FEEs in ASD have yielded mixed results. Here, we review studies comparing the production of FEEs between participants with ASD and non-ASD control subjects, with a particular focus on the use of automatic facial expression analysis software. A systematic literature search in accordance with the PRISMA statement identified 20 reports published up to August 2020 concerning the use of new technologies to evaluate both spontaneous and voluntary FEEs in participants with ASD. Overall, the results highlight the importance of considering socio-demographic factors and psychiatric co-morbidities, which may explain the previous inconsistent findings, particularly regarding quantitative data on spontaneous facial expressions. There is also reported evidence for an inadequacy of FEEs in individuals with ASD in relation to the expected emotion, with a lower quality and coordination of facial muscular movements. Spatial and kinematic approaches to characterizing the synchrony, symmetry and complexity of facial muscle movements thus offer clues for identifying and exploring promising new diagnostic targets. These findings support the hypothesis that there may be mismatches between mental representations and the production of FEEs themselves in ASD. Such considerations are in line with the Facial Feedback Hypothesis deficit in ASD as part of the Broken Mirror Theory, with the results suggesting impairments of the neural sensory-motor systems involved in processing emotional information and ensuring embodied representations of emotions, which are the basis of human empathy. In conclusion, new technologies are promising tools for evaluating the production of FEEs in individuals with ASD, and controlled studies involving larger samples of patients, in which possible confounding factors are considered, should be conducted in order to better understand and counter the difficulties in global emotional processing in ASD.
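
As one concrete example of the spatial measures mentioned above, the sketch below scores left-right facial symmetry from 2D landmarks, assuming a landmark detector has already run; the midline estimate and landmark pairing are simplifying assumptions of ours, not a metric from any reviewed study.

```python
import numpy as np

def symmetry_score(landmarks, pairs):
    """Mean distance between left landmarks mirrored across the vertical
    midline and their right counterparts; lower means more symmetric.
    `landmarks` is an (N, 2) array, `pairs` lists (left_idx, right_idx)."""
    midline_x = landmarks[:, 0].mean()
    left = landmarks[[l for l, _ in pairs]]
    right = landmarks[[r for _, r in pairs]]
    mirrored = left.copy()
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]  # reflect x about midline
    return float(np.linalg.norm(mirrored - right, axis=1).mean())

# Toy example: 4 landmarks forming two perfectly symmetric pairs
pts = np.array([[10, 20], [30, 20], [12, 40], [28, 40]], float)
print(symmetry_score(pts, [(0, 1), (2, 3)]))  # 0.0 for perfect symmetry
```
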
Sunita Sahu, Ekta Kithani, Manav Motwani, Sahil Motwani, Aadarsh Ahuja
Advances in Intelligent Systems and Computing pp 143-153; doi:10.1007/978-981-33-4367-2_15

The publisher has not yet granted permission to display this abstract.
Machine Learning and Knowledge Extraction, Volume 3, pp 414-434; doi:10.3390/make3020021

Abstract:
Facial expressions provide important information concerning one's emotional state. Unlike regular facial expressions, microexpressions are a particular kind of small, quick facial movement, generally lasting only 0.05 to 0.2 s. They reflect individuals' subjective emotions and real psychological states more accurately than regular expressions, which can be acted. However, the small range and short duration of facial movements when microexpressions happen make them challenging for humans and machines alike to recognize. In the past decade, automatic microexpression recognition has attracted the attention of researchers in psychology, computer science, and security, amongst others. In addition, a number of specialized microexpression databases have been collected and made publicly available. The purpose of this article is to provide a comprehensive overview of the current state of the art in automatic facial microexpression recognition. To be specific, the features and learning methods used in automatic microexpression recognition, the existing microexpression data sets, the major outstanding challenges, and possible future development directions are all discussed.
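
To make "features and learning methods" concrete, the following sketch computes a magnitude-weighted histogram of optical-flow orientations over a short clip, a crude stand-in for handcrafted motion descriptors of the HOOF family; the bin count and Farneback parameters are our own illustrative choices.

```python
import cv2
import numpy as np

def hoof_features(frames, bins=8):
    """Magnitude-weighted histogram of optical-flow orientations for a
    short grayscale clip (a simplified motion descriptor, sketch only)."""
    hist = np.zeros(bins)
    for prev, cur in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2).ravel()            # motion strength
        ang = np.arctan2(flow[..., 1], flow[..., 0]).ravel()  # [-pi, pi]
        idx = ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins
        np.add.at(hist, idx, mag)        # accumulate magnitude per bin
    return hist / (hist.sum() + 1e-8)    # normalize to a distribution

# Toy clip: 4 random 64x64 grayscale frames
clip = [np.random.randint(0, 255, (64, 64), np.uint8) for _ in range(4)]
print(hoof_features(clip))  # 8-dim motion descriptor for the clip
```
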
Abozar Atya Mohamed Atya, Khalid Hamid Bilal
European Journal of Electrical Engineering and Computer Science, Volume 5, pp 1-4; doi:10.24018/ejece.2021.5.3.322

Abstract:
The advent of artificial intelligence technology has reduced the gap between humans and machines, equipping people to create ever more nearly perfect humanoids. Facial expression is an important tool for communicating one's emotions non-verbally, and this paper gives an overview of emotion recognition using facial expressions. A remarkable advantage of such techniques is that they have recently improved public security through tracking and recognition, which has drawn considerable attention to continued scientific research in the field. The approaches used for facial expression recognition include classifiers such as Support Vector Machines (SVM), Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Active Appearance Models, and other machine learning methods, all of which classify emotions based on regions of interest on the face such as the lips, lower jaw, eyebrows, and cheeks. By comparison, the reviews show that average accuracy for the basic emotions ranges from 51% up to 100%, whereas compound emotions reach only 7% to 13%, indicating that the basic emotions are much easier to recognize.
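
As a toy version of the classifier-on-facial-regions setup this abstract describes, the sketch below extracts HOG descriptors from two hand-picked regions of interest and trains an SVM; the ROI boxes, image size, and random labels are hypothetical stand-ins for real annotated face crops.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def roi_features(face, rois):
    """Concatenate HOG descriptors from regions of interest (e.g. eyebrows,
    lips) of a grayscale face image; the ROI boxes here are hypothetical."""
    feats = [hog(face[y0:y1, x0:x1],
                 pixels_per_cell=(8, 8), cells_per_block=(1, 1))
             for (y0, y1, x0, x1) in rois]
    return np.concatenate(feats)

# Toy data: 20 random 64x64 "faces", two ROIs, binary emotion labels
rois = [(8, 24, 8, 56), (40, 60, 16, 48)]   # eyebrow band, mouth box
X = np.array([roi_features(np.random.rand(64, 64), rois) for _ in range(20)])
y = np.random.randint(0, 2, 20)
clf = SVC(kernel="rbf").fit(X, y)           # SVM classifier, as in the review
print(clf.predict(X[:3]))
```
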
Sharmeen M. Saleem Abdullah, Siddeeq Y. Ameen, Mohammed A. M. Sadeeq, Subhi Zeebaree
Journal of Applied Science and Technology Trends, Volume 2, pp 52-58; doi:10.38094/jastt20291

Abstract:
New research into human-computer interaction seeks to consider the consumer's emotional state in order to provide a seamless human-computer interface. This would make it possible for such systems to be adopted and used in widespread fields, including education and medicine. Human feelings can be identified through multiple techniques, including facial expressions, facial images, physiological signs, and neuroimaging strategies. This paper presents a review of emotional recognition from multimodal signals using deep learning and compares their applications based on current studies. Multimodal affective computing systems are studied alongside unimodal solutions, as they offer higher classification accuracy. Accuracy varies according to the number of emotions observed, the features extracted, the classification system, and database consistency. Numerous theories on the methodology of emotional detection and recent emotional science address these topics. This should encourage further studies to better understand physiological signals, the current state of the science, and its open problems in emotional awareness.
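
A minimal sketch of the feature-level (early) fusion idea behind such multimodal systems, assuming per-modality feature vectors are already extracted; all dimensions and the two-encoder layout are illustrative, not taken from any particular study.

```python
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    """Feature-level fusion sketch: per-modality encoders whose outputs are
    concatenated before a shared emotion classifier."""
    def __init__(self, face_dim=256, physio_dim=32, n_emotions=6):
        super().__init__()
        self.face_enc = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.physio_enc = nn.Sequential(nn.Linear(physio_dim, 16), nn.ReLU())
        self.head = nn.Linear(64 + 16, n_emotions)

    def forward(self, face, physio):
        z = torch.cat([self.face_enc(face), self.physio_enc(physio)], dim=1)
        return self.head(z)   # emotion logits

model = EarlyFusionNet()
logits = model(torch.randn(8, 256), torch.randn(8, 32))
print(logits.shape)  # torch.Size([8, 6])
```
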
Sharmeen M. Saleem Abdullah, Adnan Mohsin Abdulazeez
Journal of Soft Computing and Data Mining, Volume 2; doi:10.30880/jscdm.2021.02.01.006

The publisher has not yet granted permission to display this abstract.
Freddy Alejandro Castro Salinas, Geovanny Genaro Reivan Ortiz, Pedro Carlos Martínez Suarez
South Florida Journal of Development, Volume 2, pp 2102-2118; doi:10.46932/sfjdv2n2-076

The publisher has not yet granted permission to display this abstract.
Arafat Rahman, Atiqur Rahman Ahad
Springer Texts in Business and Economics pp 237-269; doi:10.1007/978-3-030-68590-4_9

The publisher has not yet granted permission to display this abstract.
Xiaohui Yuan, Sos S. Agaian, Wencheng Wang, Mohamed Elhoseny
Journal of Electronic Imaging, Volume 30; doi:10.1117/1.jei.30.3.031201

Abstract:
The guest editors introduce the Special Section on Advances in Urban Imaging and Applications. Nowadays, a new generation of imaging, depth, and ultrasonic sensors and platforms is embedded in modern-day cities' infrastructures. Massive volumes of data from surveillance cameras, social media, and remote/proximate sensing platforms are currently harvested and stored that scaffold the transformation of urban life. Using such rich data to support intelligent and efficient urban affairs management is a pressing demand to improve livability and accessibility for our citizens. The rapid growth of cities and the associated human activities lead to various urban problems, such as congestion, imbalanced resource allocation, and pollution. Interest in urban imaging increases day by day, especially since the COVID-19 pandemic and the current world economic situation. On the other hand, the rapid advance of computing techniques for learning from massive data has profoundly changed our world in many aspects and significantly improved the performance of big-data analysis in our daily lives. Yet, the development and application of advanced computing methods for big urban data (text, images, and videos) analysis and management are limited, and intelligent technologies are still finding their feet in the fields of smart communities. New ideas that leverage the vast amount of diverse data from numerous sources and cutting-edge computing power to address emerging needs are urgently sought. In addition, urban intelligence leverages advanced sensing and network infrastructure and focuses on advancing algorithms and applications in artificial intelligence and data sciences to revolutionize our cities and communities. It serves as the underpinning driving force for the development and implementation of smart cities. It also integrates the science and technology transformation in computer sciences, engineering, health, transportation, energy, public safety, etc. The world population is growing, and it is estimated to double by 2050. These expectations therefore produce new challenges and opportunities for cities and communities and for urban imaging and its applications.

This Special Section on Advances in Urban Imaging and Applications presents the latest research in urban intelligence fields, emphasizing imaging and applications. The International Conference on Urban Intelligence and Applications 2020 hosted presentations of many innovative ideas and new applications. Out of many papers recommended by the conference committee, six were accepted and included in this special section after thorough reviews and revisions. The included papers cover exciting topics of urban imaging, including novel algorithms and applications.

Maharjan et al. present a probabilistic non-rigid point set registration method to deal with large and uneven deformations, an ill-posed and highly challenging problem. By enforcing landmark correspondences, the proposed method preserves the point set's global shape under significant deformations. In addition, stochastic neighbor embedding is used to protect the local neighborhood structure, which penalizes incoherent transformation within a neighborhood.

Zhong et al. propose a fast and rotation-robust local difference binary descriptor using polar location. The method is based on a binary test of average intensity, radial gradient, and tangent gradient of grid cells on multiple log-polar grids. The computational cost is significantly reduced using a lookup table mapping discrete polar coordinates with image pixel locations and rebuilding the integral image in polar coordinates.

Fan et al. devise a track recognition algorithm based on a semi-supervised generative adversarial network that learns a robust model from a few examples distorted by outliers. The method identifies and eliminates the outliers and extracts strong flight features for deep network-based model creation.

Zhong et al. propose a facial expression recognition method based on facial part attention that extracts emotionally rich local features. The proposed method leverages a cluster-based facial landmark selection method and a convolutional neural network. The convolutional neural network includes an object network for extracting facial features and an attention network for obtaining local features. The global and local features are integrated into the facial expression recognition model.

Xu et al. investigate active balancing control with an aim towards real-time management of imaging platforms in response to external events. The method circumvents the inevitable delay in high-speed systems. It introduces an integral transformation term to convert a time-delay system into a dynamic model without delay for active balancing control.

Wang et al. study the synchronization of virtual camera motion attitude for improved accuracy in multiplayer motion capture and virtual augmentation. The proposed system leverages the Unity3D platform and optical positioning technology to locate the position of the camera and virtual roles. Inertial attitude sensing technology is used to obtain multiple targets' attitude, which achieves an integration of optical positioning and the inertial attitude sensor.

Biography: Xiaohui Yuan is an associate professor in the Department of Computer Science and Engineering, University of North Texas. His research interests include computer vision, data mining, machine learning, and artificial intelligence. His research is supported by the Air Force Lab, National Science Foundation, Texas Advanced Research Program, Oak Ridge Associated Universities, and UNT. His research findings are reported in more than 180 peer-reviewed papers. He is a recipient of the Ralph E. Powe Junior Faculty Enhancement award in 2008 and the Air Force Faculty Fellowship in 2011, 2012, and 2013. He also received two research awards and a teaching award from UNT in 2007, 2008, and 2012, respectively. He served on...
IEEE Transactions on Pattern Analysis and Machine Intelligence, pp 1-1; doi:10.1109/tpami.2021.3067464

The publisher has not yet granted permission to display this abstract.
Keerthana Dungala, Keerthi Mandapati, Mahitha Pillodi, Sumasree Reddy Vanga
Algorithms for Intelligent Systems pp 319-326; doi:10.1007/978-981-33-4046-6_31

The publisher has not yet granted permission to display this abstract.
Wen-Jing Yan, Shan Li, Chengtao Que, Jiquan Pei
Transactions on Petri Nets and Other Models of Concurrency XV pp 68-82; doi:10.1007/978-3-030-69544-6_5

The publisher has not yet granted permission to display this abstract.
Mohammad Ariff Rashidan, Shahrul Na'im Sidek, Hazlina Md. Yusof, Madihah Khalid, Aimi Shazwani Ghazali, Sarah Afiqah Mohd Zabidi, Faizanah Abdul Alim Sidique
IEEE Access, Volume 9, pp 33638-33653; doi:10.1109/access.2021.3060753

Abstract:
The information about affective states in individuals with autism spectrum disorder (ASD) is difficult to obtain, as they usually suffer from deficits in facial expression. Affective state conditions of individuals with ASD are associated with impaired regulation of speech, communication, and social skills, leading to poor socio-emotional interaction. It is conceivable that advances in technology could offer a psychophysiological alternative modality, particularly useful for persons who cannot verbally communicate their emotions as affective states, such as individuals with ASD. The study focuses on the investigation of technology-assisted approaches and their relationship to affective state recognition. A systematic review was executed to summarize relevant research that involved technology-assisted implementations to identify the affective states of individuals with ASD, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach. The output from the online search process, obtained from six publication databases on relevant studies published up to 31 July 2020, was analyzed. Out of 391 publications retrieved, 20 papers met the inclusion and exclusion criteria set a priori. Data were synthesized narratively despite methodological variations and heterogeneity. In this review, research methods, systems, equipment, and models that address the issues related to technology assistance and affective states are presented. Consequently, it can be assumed that technology-assisted emotion recognition, for evaluating and classifying affective states, could help to improve efficacy in therapy sessions between therapists and individuals with ASD. This review will serve as a concise reference providing a general overview of the current state-of-the-art studies in this area for practitioners, as well as for experienced researchers searching for a new direction for future work.
Brindahini Vimaleswaran, Gayashini Shyanka Ratnayake
2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV) pp 950-955; doi:10.1109/icicv50876.2021.9388447

The publisher has not yet granted permission to display this abstract.
Hang Pan, Zhiliang Wang, Bin Liu, Minghao Yang, Jianhua Tao
Virtual Reality & Intelligent Hardware, Volume 3, pp 1-17; doi:10.1016/j.vrih.2020.10.003

Abstract:
Facial micro-expressions are short and imperceptible expressions that involuntarily reveal the true emotions that a person may be attempting to suppress, hide, disguise, or conceal. Such expressions can reflect a person's real emotions and have a wide range of applications in public safety and clinical diagnosis. The analysis of facial micro-expressions in video sequences through computer vision is still relatively recent. In this research, a comprehensive review of the databases and methods used in micro-expression spotting and recognition is conducted, and advanced technologies in this area are summarized. In addition, we discuss challenges that remain unresolved, alongside future work to be completed in the field of micro-expression analysis.
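
For the spotting half of the problem, a naive baseline is to look for brief peaks of frame-to-frame change; the sketch below does exactly that. The frame rate, threshold rule, and 0.2 s duration cap are our own illustrative assumptions, not a method from the review.

```python
import numpy as np
from scipy.signal import find_peaks

def spot_candidates(frames, fps=200, max_len_s=0.2):
    """Flag frames whose mean absolute difference from the previous frame
    peaks briefly: a crude spotting heuristic for micro-movements."""
    diffs = np.array([np.abs(b.astype(float) - a.astype(float)).mean()
                      for a, b in zip(frames, frames[1:])])
    thresh = diffs.mean() + 2 * diffs.std()       # simple adaptive threshold
    peaks, _ = find_peaks(diffs, height=thresh,
                          width=(1, int(fps * max_len_s)))
    return peaks  # indices of candidate onset frames

# Toy clip: 50 random 64x64 grayscale frames (likely no peaks found)
frames = [np.random.randint(0, 255, (64, 64), np.uint8) for _ in range(50)]
print(spot_candidates(frames))
```
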
Published: 14 January 2021
Sensors, Volume 21; doi:10.3390/s21020553

Abstract:
Understanding animal emotions is a key to unlocking methods for improving animal welfare. Currently, there are no 'benchmarks' or scientific assessments available for measuring and quantifying the emotional responses of farm animals. Using sensors to collect biometric data as a means of measuring animal emotions is a topic of growing interest in agricultural technology. Here we reviewed several aspects of the use of sensor-based approaches in monitoring animal emotions, beginning with an introduction to animal emotions. Then we reviewed some of the available technological systems for analyzing animal emotions. These systems include a variety of sensors, the algorithms used to process biometric data taken from these sensors, facial expression analysis, and sound analysis. We conclude that a single emotional expression measurement, based on either the facial features of animals or their physiological functions, cannot accurately capture a farm animal's emotional changes, and hence compound expression recognition measurement is required. We propose some novel ways to combine sensor technologies through sensor fusion into efficient systems for monitoring and measuring the animals' compound expression of emotions. Finally, we explore future perspectives in the field, including challenges and opportunities.
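
A minimal sketch of the decision-level sensor fusion such a system might use, assuming each sensor pipeline already outputs class probabilities; the emotion classes, sensor list, and weights are hypothetical.

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    """Decision-level sensor fusion: weighted average of per-sensor class
    probabilities (facial, vocal, physiological, ...). A naive sketch only."""
    probs = np.stack(prob_list)                  # (n_sensors, n_classes)
    w = (np.ones(len(prob_list)) if weights is None
         else np.asarray(weights, float))
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return fused / fused.sum()                   # renormalize

face  = np.array([0.7, 0.2, 0.1])   # e.g. P(calm, stressed, aroused) from face
sound = np.array([0.4, 0.5, 0.1])   # ... from vocalization analysis
hr    = np.array([0.3, 0.6, 0.1])   # ... from heart-rate features
print(late_fusion([face, sound, hr], weights=[2, 1, 1]))
```
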
Shruti Japee
Neuroscience & Biobehavioral Reviews, Volume 120, pp 75-77; doi:10.1016/j.neubiorev.2020.10.016

The publisher has not yet granted permission to display this abstract.
Daiene De Morais Fabrício, Marcos Hortes N. Chagas
Archives of Gerontology and Geriatrics, Volume 92; doi:10.1016/j.archger.2020.104277

The publisher has not yet granted permission to display this abstract.
Santosh Kumar Uppada, Dani Prakash Esukapalli, B Sivaselvan
2020 IEEE 4th Conference on Information & Communication Technology (CICT) pp 1-5; doi:10.1109/cict51604.2020.9312107

The publisher has not yet granted permission to display this abstract.
Om M. Rajpurkar, Siddesh S. Kamble, Jayram P. Nandagiri, Pramod J. Bide
2020 4th International Conference on Electronics, Communication and Aerospace Technology (ICECA) pp 905-911; doi:10.1109/iceca49313.2020.9297656

The publisher has not yet granted permission to display this abstract.
Mohammad Soleymani
Companion Publication of the 2020 International Conference on Multimodal Interaction; doi:10.1145/3395035.3425321

The publisher has not yet granted permission to display this abstract.
Swathi Lenka
Inventive Computation and Information Technologies pp 231-244; doi:10.1007/978-981-15-5397-4_24

The publisher has not yet granted permission to display this abstract.
Published: 30 September 2020
Neurocomputing, Volume 408, pp 231-245; doi:10.1016/j.neucom.2019.08.110

The publisher has not yet granted permission to display this abstract.
Mohammed Alharbi, Shihong Huang
Proceedings of the 2020 The 2nd World Symposium on Software Engineering; doi:10.1145/3425329.3425343

The publisher has not yet granted permission to display this abstract.
Joshua A Caine, Britt Klein, Stephen L Edwards
Published: 24 September 2020
Abstract:
BACKGROUND: Impaired facial emotion expression recognition (FEER) has typically been considered a correlate of Autism Spectrum Disorder (ASD). The alexithymia hypothesis now suggests that this emotion processing problem is instead related to alexithymia, which frequently co-occurs with ASD. By combining predictive coding theories of ASD and simulation theories of emotion recognition, it is suggested that facial mimicry may improve the training of FEER in ASD and alexithymia.
OBJECTIVE: The current study aims to evaluate a novel mimicry task to improve FEER in adults with and without ASD and alexithymia. Additionally, this study aims to determine the contributions of alexithymia and ASD to FEER ability and to assess which of these two populations benefit from this training task.
METHODS: Recruitment will primarily take place through an ASD community group, with emphasis put on snowball recruiting. Included will be N=64 consenting adults, equally divided between participants without an ASD and participants with an ASD. Participants will be screened online using the K-10 (cut-off score of 22), ASQ-10, and TAS-20, followed by a clinical interview with a provisional psychologist at the Federation University psychology clinic. The clinical interview will include assessment of ability, anxiety, and depression, as well as discussion of past ASD diagnosis and confirmatory administration of the Autism Mental Status Exam (AMSE). Following the clinical interview, the participant will complete the Bermond-Vorst Alexithymia Questionnaire (BVAQ) and then undertake a baseline assessment of FEER. Consenting participants will then be assigned, using a permuted blocked randomisation method, to either the control task condition or the mimicry task condition. A brief measure of satisfaction with the task and a debriefing session will conclude the study.
RESULTS: The study has Federation University Human Research Ethics Committee approval and is registered with the Australian New Zealand Clinical Trials Registry. Participant recruitment is predicted to begin in quarter one of 2021.
CONCLUSIONS: This study will be the first to evaluate the use of a novel facial mimicry task condition to increase FEER in adults with ASD and alexithymia. If efficacious, this task could prove useful as a cost-effective adjunct intervention which could be used at home and thus remove barriers to entry. This study will also explore the unique effectiveness of this task in people without an ASD, with an ASD, and with alexithymia.
TRIAL REGISTRATION: Australian New Zealand Clinical Trial Registry (ACTRN12619000705189, https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377455).
Emmanuel Dufourq
Conference of the South African Institute of Computer Scientists and Information Technologists 2020; doi:10.1145/3410886.3410891

The publisher has not yet granted permission to display this abstract.
Devashi Choudhary, Jainendra Shukla
2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM) pp 125-133; doi:10.1109/bigmm50055.2020.00027

The publisher has not yet granted permission to display this abstract.
Wenjing Yan, Shan Li, Chengtao Que, Jiquan Pei, Weihong Deng
Published: 12 August 2020 by arXiv
Abstract:
Much of the work on automatic facial expression recognition relies on databases containing a certain number of emotion classes and their exaggerated facial configurations (generally six prototypical facial expressions), based on Ekman's Basic Emotion Theory. However, recent studies have revealed that facial expressions in everyday human life can blend multiple basic emotions, and the emotion labels for these in-the-wild facial expressions cannot easily be annotated based solely on pre-defined AU patterns. How to analyze the action units for such complex expressions is still an open question. To address this issue, we develop the RAF-AU database, which employs a sign-based (i.e., AUs) and judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild. We first reviewed the annotation methods in existing databases and identified crowdsourcing as a promising strategy for labeling in-the-wild facial expressions. Then, RAF-AU was finely annotated by experienced coders, on which we also conducted a preliminary investigation of which key AUs contribute most to a perceived emotion, and of the relationship between AUs and facial expressions. Finally, we provided a baseline for AU recognition in RAF-AU using popular features and multi-label learning methods.
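
Because several AUs can be active in one face, AU recognition is naturally multi-label. The sketch below shows the generic one-binary-classifier-per-AU setup; the random features, AU count, and logistic-regression base learner are illustrative stand-ins, not the baseline actually reported for RAF-AU.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Toy stand-in: 100 faces x 50 appearance features, 5 action units that can
# co-occur (multi-label), mimicking the AU-recognition baseline setting.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
Y = rng.integers(0, 2, size=(100, 5))   # AU present/absent per face

# One binary classifier per AU, trained jointly via the multi-output wrapper
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:2]))               # (2, 5) matrix of AU predictions
```
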
Khadija Slimani, Mohamed Kas, Youssef El Merabet, Yassine Ruichek, Rochdi Messoussi
International Journal of Electrical and Computer Engineering (IJECE), Volume 10, pp 4080-4092; doi:10.11591/ijece.v10i4.pp4080-4092

Abstract:
Notwithstanding recent technological advancements, the identification of facial and emotional expressions is still one of the greatest challenges scientists have ever faced. Generally, the human face is identified as a composition of textures arranged in micro-patterns. Recently, there has been a tremendous increase in the use of local binary pattern (LBP) based texture algorithms, which have been identified as essential for the completion of a variety of tasks and for the extraction of essential attributes from an image. Over the years, many LBP variants have been reported in the literature. However, a thorough and comprehensive analysis of their individual performance is still missing. This research work aims at filling this gap by performing a large-scale performance evaluation of 46 recent state-of-the-art LBP variants for facial expression recognition. Extensive experimental results on the well-known, challenging benchmark KDEF, JAFFE, CK and MUG databases, taken under different facial expression conditions, indicate that a number of the evaluated state-of-the-art LBP-like methods achieve promising results, which are better than or competitive with several recent state-of-the-art facial recognition systems. Recognition rates of 100%, 98.57%, 95.92% and 100% have been reached for the CK, JAFFE, KDEF and MUG databases, respectively.
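
For readers unfamiliar with the base descriptor behind these 46 variants, this sketch computes a uniform-LBP histogram for a grayscale face crop with scikit-image; the neighborhood parameters and the random image are placeholders for real aligned face crops.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform-LBP histogram of a grayscale face image: the basic texture
    descriptor that LBP variants build on."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                          # uniform patterns + "other" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

face = np.random.randint(0, 256, (64, 64), np.uint8)  # stand-in face crop
print(lbp_histogram(face))                             # 10-dim texture feature
```
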
Xiaoguang Lin, Xueling Zhang, Qinqin Liu, Panwen Zhao, Hui Zhang, Hongsheng Wang
Medicine, Volume 99; doi:10.1097/md.0000000000021154

Abstract:
Background: Traumatic brain injury (TBI) refers to head injuries that disrupt the normal function of the brain. TBI commonly leads to a wide range of potential psychosocial functional deficits. Although psychosocial function after TBI is influenced by many factors, more and more evidence shows that social cognitive skills are critical contributors. Facial emotion recognition, one of the higher-level skills of social cognition, is the ability to perceive and recognize the emotional states of others based on their facial expressions. Numerous studies have assessed facial emotion recognition performance in adult patients with TBI, but findings have been inconsistent. The aim of this study is to conduct a meta-analysis to characterize facial emotion recognition in adult patients with TBI. Methods: A systematic literature search will be performed for eligible studies published up to March 19, 2020 in three international databases (PubMed, Web of Science and Embase). Work such as article retrieval, screening, quality evaluation, and data collection will be conducted by two independent researchers. The meta-analysis will be conducted using Stata 15.0 software. Results: This meta-analysis will provide a high-quality synthesis of existing evidence for facial emotion recognition in adult patients with TBI, and will analyze facial emotion recognition performance in different respects (i.e., recognition of negative emotions, positive emotions, or any specific basic emotion). Conclusions: This meta-analysis will provide evidence on facial emotion recognition performance in adult patients with TBI. INPLASY registration number: INPLASY202050109.
E. Lyros, K. Fassbender, M. M. Unger
Published: 17 July 2020
Behavioural Neurology, Volume 2020, pp 1-18; doi:10.1155/2020/4329297

Abstract:
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is an effective therapy for Parkinson's disease (PD). Nevertheless, DBS has been associated with certain nonmotor, neuropsychiatric effects such as worsening of emotion recognition from facial expressions. In order to investigate facial emotion recognition (FER) after STN DBS, we conducted a literature search of the electronic databases MEDLINE and Web of Science. In this review, we analyze studies assessing FER after STN DBS in PD patients and summarize the current knowledge of the effects of STN DBS on FER. The majority of studies, which had clinical and methodological heterogeneity, showed that FER worsens after STN DBS in PD patients, particularly for negative emotions (sadness, fear, anger, and a tendency for disgust). FER worsening after STN DBS can be attributed to the functional role of the STN in limbic circuits and the interference of STN stimulation with the neural networks involved in FER, including the connections of the STN with the limbic part of the basal ganglia and pre- and frontal areas. These outcomes improve our understanding of the role of the STN in the integration of motor, cognitive, and emotional aspects of behaviour in the growing field of affective neuroscience. Further studies using standardized neuropsychological measures of FER assessment and including larger cohorts are needed in order to draw definite conclusions about the effect of STN DBS on emotional recognition and its impact on patients' quality of life.
Aliki Economides, Yiannis Laouris
Blockchain Technology and Innovations in Business Processes pp 435-475; doi:10.1007/978-981-15-5093-5_39

The publisher has not yet granted permission to display this abstract.
International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume 9, pp 405-408; doi:10.35940/ijitee.i7076.079920

The publisher has not yet granted permission to display this abstract.
Zhong Yin, Peng Chen, Stefano Nichele
Published: 1 July 2020
Information Fusion, Volume 59, pp 103-126; doi:10.1016/j.inffus.2020.01.011

The publisher has not yet granted permission to display this abstract.
Published: 1 July 2020
Journal of Adolescence, Volume 82, pp 1-10; doi:10.1016/j.adolescence.2020.04.010

The publisher has not yet granted permission to display this abstract.
Christian Padilla-Navarro, Carlos Zarate-Trejo, Georges Khalaf, Pascal Fallavollita
Journal of Scientific and Technical Applications pp 14-17; doi:10.35429/jsta.2020.17.6.14.17

The publisher has not yet granted permission to display this abstract.
Héctor J. Pijeira-Díaz, Marta Sobocinski, Muhterem Dindar, Sanna Järvelä, Paul A. Kirschner
Education and Information Technologies, Volume 25, pp 5499-5547; doi:10.1007/s10639-020-10229-w

Abstract:
This systematic review on data modalities synthesises the research findings in terms of how to optimally use and combine such modalities when investigating cognitive, motivational, and emotional learning processes. The ERIC, WoS, and ScienceDirect databases were searched with specific keywords and inclusion criteria for research on data modalities, resulting in 207 relevant publications. We report findings in terms of target journal, country, subject, participant characteristics, educational level, foci, type of data modality, research method, type of learning, learning setting, and the modalities used to study the different foci. In total, 18 data modalities were classified. For the 207 multimodal publications, 721 occurrences of modalities were observed. The most popular modality was the interview, followed by the survey and observation. The least common modalities were heart rate variability, facial expression recognition, and screen recording. Of the 207 publications, 98 focused exclusively on the cognitive aspects of learning, followed by 27 publications that focused only on motivation, while only five publications focused exclusively on emotional aspects. Only 10 publications focused on a combination of cognitive, motivational, and emotional aspects of learning. Our results plead for the increased use of objective measures, highlight the need for triangulation of objective and subjective data, and call for more research on combining various aspects of learning. Further, rather than researching cognitive, motivational, and emotional aspects of learning separately, we encourage scholars to tap into multiple learning processes with multimodal data to derive a more comprehensive view of the phenomenon of learning.
International Journal for Research in Engineering Application & Management pp 09-13; doi:10.35291/2454-9150.2020.0353

The publisher has not yet granted permission to display this abstract.
BSSS Journal of Computer; doi:10.51767/jc1108

Abstract:
Facial expression is a primitive element of human interaction. To understand human behavior or mood, it is essential to analyze human facial expressions from multidimensional, emotionally sensitive image data. Various artificial intelligence based techniques are used for facial expression evaluation. In this paper, an attempt has been made at facial expression recognition and emotion evaluation. Previous and recent research has been investigated to identify related, effective methods.
Shivani Patil, Amit Joshi, Gaurav Deore, Anuj Taley, Suraj Sawant
Published: 15 May 2020
SSRN Electronic Journal; doi:10.2139/ssrn.3645477

The publisher has not yet granted permission to display this abstract.
Sukhpreet Kaur, Nilima Kulkarni
Published: 14 May 2020
SSRN Electronic Journal; doi:10.2139/ssrn.3647958

The publisher has not yet granted permission to display this abstract.
Published: 23 April 2020
Applied Sciences, Volume 10; doi:10.3390/app10082924

Abstract:
Over recent years, robots are increasingly being employed in several aspects of modern society. Among others, social robots have the potential to benefit education, healthcare, and tourism. To achieve this purpose, robots should be able to engage humans, recognize users' emotions, and to some extent properly react and "behave" in a natural interaction. Most robotics applications primarily use visual information for emotion recognition, which is often based on facial expressions. However, the display of emotional states through facial expression is inherently a voluntarily controlled process that is typical of human–human interaction. In fact, humans have not yet learned to use this channel when communicating with a robotic technology. Hence, there is an urgent need to exploit emotion information channels not directly controlled by humans, such as those that can be ascribed to physiological modulations. Thermal infrared imaging-based affective computing has the potential to be the solution to this issue. It is a validated technology that allows the non-obtrusive monitoring of physiological parameters, from which it might be possible to infer affective states. This review aims to outline the advantages and the current research challenges of thermal imaging-based affective computing for human–robot interaction.
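
A minimal sketch of the kind of physiological readout such systems build on: the mean temperature of a nose-tip region in a thermal frame, a signal often linked to autonomic arousal. The ROI box and synthetic frame are hypothetical; in practice the ROI would come from a face/landmark tracker.

```python
import numpy as np

def nose_tip_temperature(thermal_frame, roi):
    """Mean temperature over a nose-tip region of interest. `thermal_frame`
    holds per-pixel temperatures (degrees C); `roi` is (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return float(thermal_frame[y0:y1, x0:x1].mean())

# Synthetic 120x160 thermal frame around 34 degrees C
frame = 34.0 + 0.5 * np.random.randn(120, 160)
print(nose_tip_temperature(frame, (60, 80, 70, 90)))
```
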
Payal Pandey, Divyansh Thakur, Bishan Thakur
Published: 11 April 2020
SSRN Electronic Journal; doi:10.2139/ssrn.3573551

The publisher has not yet granted permission to display this abstract.
Debashis Das Chakladar, Tanmoy Dasgupta
Advances in Intelligent Systems and Computing pp 399-410; doi:10.1007/978-981-15-2188-1_32

The publisher has not yet granted permission to display this abstract.
Elena V. Ryumina, A. A. Karpov
Scientific and Technical Journal of Information Technologies, Mechanics and Optics, Volume 20, pp 163-176; doi:10.17586/2226-1494-2020-20-2-163-176

Abstract:
Recognition of human emotions from facial expressions is an important research problem that covers many areas and disciplines, such as computer vision, artificial intelligence, medicine, psychology, and security. This paper provides an analytical overview of video facial expression databases and approaches to recognizing emotions from facial expressions, which include three main stages of image analysis: pre-processing, feature extraction, and classification. The paper presents both traditional approaches to recognizing human emotions from visual facial features and approaches based on deep learning using deep neural networks, and gives the current results of some existing algorithms. In the review of the scientific and technical literature, we mainly emphasized sources containing theoretical and research information on the methods under consideration, as well as comparisons of traditional methods and methods based on deep neural networks supported by experimental studies. Analysis of the scientific and technical literature describing methods and algorithms for the study and recognition of facial expressions, as well as the results of world scientific research, has shown that traditional methods for the classification of facial expressions are second in speed and accuracy to artificial neural networks. The main contribution of this review is to provide a common understanding of modern approaches to facial expression recognition, which will enable new researchers to understand the main components and trends in the field. Moreover, comparison of world scientific findings has shown that a combination of traditional approaches and approaches based on deep neural networks achieves better classification accuracy, but artificial neural networks remain the best classification methods. The paper may be useful to specialists and researchers in the field of computer vision.
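
A minimal sketch of the first of the three stages named above (pre-processing), using OpenCV's stock Haar cascade to detect, crop, and normalize a face; the crop size and cascade choice are illustrative, not from the paper.

```python
import cv2
import numpy as np

def preprocess(image_bgr, size=48):
    """Stage 1: detect, crop, and normalize a face (OpenCV Haar cascade)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y+h, x:x+w], (size, size))
    return face.astype(np.float32) / 255.0   # normalized crop

# Stage 2 (feature extraction) and stage 3 (classification) would follow,
# e.g. the handcrafted descriptors or deep networks compared in the survey.
img = np.random.randint(0, 255, (240, 320, 3), np.uint8)
print(preprocess(img))  # None for this random image (no face found)
```
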
Alice Pitt, Daniela Strelchuk, Sarah Sullivan
Published: 1 April 2020
Schizophrenia Research, Volume 218, pp 7-13; doi:10.1016/j.schres.2019.12.031

The publisher has not yet granted permission to display this abstract.