A Multi-Modal Emotion Recognition System Based on CNN-Transformer Deep Learning Technique


Bibliographic Details
Published in: 2022 7th International Conference on Data Science and Machine Learning Applications (CDMA), pp. 145-150
Main Authors: Karatay, Busra; Bestepe, Deniz; Sailunaz, Kashfia; Ozyer, Tansel; Alhajj, Reda
Format: Conference Proceeding
Language: English
Published: IEEE, 01.03.2022
DOI: 10.1109/CDMA54072.2022.00029

Summary: Emotion analysis is a subject that researchers from various fields have worked on for a long time, and different emotion detection methods have been developed for the text, audio, photography, and video domains. Automated emotion detection from videos and pictures using machine learning and deep learning models has been a topic of particular interest. In this paper, a deep learning framework combining CNN and Transformer models is proposed that classifies emotions using facial and body features extracted from videos. Facial and body features were extracted using OpenPose, and in the data preprocessing stage two operations, new video creation and frame selection, were evaluated. The experiments were conducted on two datasets, FABO and CK+. The framework outperformed similar deep learning models, reaching 99% classification accuracy on the FABO dataset, and most versions of the framework exceeded 90% accuracy on both the FABO and CK+ datasets.
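The frame-selection preprocessing step mentioned in the summary can be sketched as follows. This is a minimal illustration only: the evenly spaced sampling rule, the target clip length of 32 frames, and the function name `select_frames` are assumptions, since the record does not specify how the authors selected frames.

```python
def select_frames(num_frames, target=32):
    """Pick `target` roughly evenly spaced frame indices from a video of
    `num_frames` frames, so every clip yields a fixed-length input for a
    downstream CNN-Transformer classifier. Evenly spaced sampling is an
    assumed strategy, not necessarily the one used in the paper."""
    if num_frames <= target:
        # Short clips keep all of their frames.
        return list(range(num_frames))
    step = num_frames / target
    return [int(i * step) for i in range(target)]

# Example: a hypothetical 100-frame FABO-style clip reduced to 32 frames.
indices = select_frames(100, target=32)
```

A fixed-length index list like this is what lets variable-length videos be batched into a single tensor for training.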