A Multi-Modal Emotion Recognition System Based on CNN-Transformer Deep Learning Technique
Published in | 2022 7th International Conference on Data Science and Machine Learning Applications (CDMA), pp. 145 - 150
---|---
Main Authors | , , , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 01.03.2022
Subjects |
DOI | 10.1109/CDMA54072.2022.00029
Summary | Emotion analysis is a subject that researchers from various fields have been working on for a long time. Different emotion detection methods have been developed for the text, audio, image, and video domains. Automated emotion detection from videos and images using machine learning and deep learning models has been an interesting topic for researchers. In this paper, a deep learning framework that combines CNN and Transformer models to classify emotions from facial and body features extracted from videos is proposed. Facial and body features were extracted using OpenPose, and two data preprocessing operations, new-video creation and frame selection, were tried. The experiments were conducted on two datasets, FABO and CK+. Our framework outperformed similar deep learning models with 99% classification accuracy on the FABO dataset, and most versions of the framework achieved over 90% accuracy on both the FABO and CK+ datasets.
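The abstract only names the architecture, so the following is a minimal, hypothetical sketch of the general idea it describes: a per-frame CNN feature extractor followed by a Transformer encoder over the frame sequence, ending in an emotion classifier. The layer sizes, the number of emotion classes, and the per-frame CNN are illustrative assumptions, not the authors' implementation, which additionally uses OpenPose-based facial and body features.

```python
# Hypothetical CNN + Transformer video emotion classifier (illustrative only;
# NOT the paper's implementation). Layer sizes and the 6-class output are assumptions.
import torch
import torch.nn as nn


class CnnTransformerEmotionClassifier(nn.Module):
    def __init__(self, num_classes: int = 6, d_model: int = 256, num_frames: int = 16):
        super().__init__()
        # Per-frame CNN: maps each RGB frame to a d_model-dimensional embedding.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # -> (B*T, 64, 1, 1)
            nn.Flatten(),              # -> (B*T, 64)
            nn.Linear(64, d_model),
        )
        # Learnable positional embedding over the frame axis.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_frames, d_model))
        # Transformer encoder models temporal relations between frame embeddings.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=512, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        x = self.cnn(frames.reshape(b * t, c, h, w))    # per-frame features
        x = x.reshape(b, t, -1) + self.pos_embed[:, :t]  # add frame positions
        x = self.transformer(x)                          # temporal modelling
        return self.classifier(x.mean(dim=1))            # average over frames


if __name__ == "__main__":
    model = CnnTransformerEmotionClassifier()
    clip = torch.randn(2, 16, 3, 64, 64)  # 2 clips of 16 frames each
    print(model(clip).shape)              # torch.Size([2, 6])
```

In this sketch the frame-selection step mentioned in the abstract would correspond to choosing which `num_frames` frames of a video are fed into the model; replacing the raw-frame CNN input with OpenPose facial and body keypoints would be another way to instantiate the same CNN-to-Transformer pipeline.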