Multimodal Emotion Recognition Using Visual, Vocal and Physiological Signals: A Review

Authors:

Udahemuka Gustave 1, Djouani Karim 1,2, Kurien Anish M. 1

Affiliation:

1. Department of Electrical Engineering, French South African Institute of Technology, Tshwane University of Technology, Private Bag X680, Pretoria 0001, Gauteng, South Africa

2. Laboratoire Images, Signaux et Systèmes Intelligents (LiSSi), Université de Paris-Est Créteil (UPEC), 94000 Créteil, France

Abstract

The dynamic expressions of emotion convey both the emotional and functional states of an individual's interactions. Recognizing these emotional states helps us understand human feelings and thoughts. Systems and frameworks designed to automatically recognize human emotional states can take various affective signals as inputs, such as visual, vocal, and physiological signals. However, emotion recognition via a single modality can be affected by sources of noise specific to that modality, and different emotional states may be indistinguishable within it. This review examines the current state of multimodal emotion recognition methods that integrate visual, vocal, and physiological modalities for practical affective computing. Recent empirical evidence on deep learning methods used for fine-grained recognition is reviewed, along with a discussion of the robustness issues of such methods. The review elaborates on the deep learning challenges, and the solutions required, for a high-quality emotion recognition system, emphasizing the benefits of dynamic expression analysis, which aids in detecting subtle micro-expressions, and the importance of multimodal fusion for improving recognition accuracy. The literature was comprehensively searched via databases whose records cover affective computing, followed by rigorous screening and selection of relevant studies. The results show that the effectiveness of current multimodal emotion recognition methods is limited by the scarce availability of training data, insufficient context awareness, and the challenges posed by real-world cases of noisy or missing modalities. The findings suggest that improving emotion recognition requires better representation of the input data, refined feature extraction, and optimized aggregation of modalities within the multimodal framework, together with incorporating state-of-the-art methods for recognizing dynamic expressions.
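
To make the idea of aggregating modalities within a multimodal framework concrete, the following is a minimal PyTorch sketch (not taken from the reviewed methods): three stand-in encoders produce per-modality embeddings for visual, vocal, and physiological inputs, which are concatenated and classified. All module names, feature dimensions, and the seven-class emotion output are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class LateFusionEmotionNet(nn.Module):
        """Illustrative feature-level fusion of three affective modalities.

        Dimensions and layers are hypothetical; a real system would use
        pretrained visual/audio/physiological backbones as encoders.
        """
        def __init__(self, vis_dim=512, voc_dim=128, phys_dim=32,
                     embed_dim=64, num_emotions=7):
            super().__init__()
            # One small encoder per modality (stand-ins for CNN/RNN backbones).
            self.vis_enc = nn.Sequential(nn.Linear(vis_dim, embed_dim), nn.ReLU())
            self.voc_enc = nn.Sequential(nn.Linear(voc_dim, embed_dim), nn.ReLU())
            self.phys_enc = nn.Sequential(nn.Linear(phys_dim, embed_dim), nn.ReLU())
            # Classifier over the concatenated modality embeddings.
            self.classifier = nn.Linear(3 * embed_dim, num_emotions)

        def forward(self, vis, voc, phys):
            # Concatenate per-modality embeddings along the feature axis.
            fused = torch.cat(
                [self.vis_enc(vis), self.voc_enc(voc), self.phys_enc(phys)],
                dim=-1)
            return self.classifier(fused)  # logits over emotion classes

    # Usage with random stand-in features for a batch of 4 samples.
    model = LateFusionEmotionNet()
    logits = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 32))
    print(logits.shape)  # torch.Size([4, 7])

Concatenation-based fusion is only one option; the review also covers schemes that weight or gate modalities, which matters when a modality is noisy or missing at inference time.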

Funder

National Research Foundation (NRF) of South Africa

Publisher

MDPI AG

References: 260 articles

