Emotion Recognition Based on Facial Gestures and Convolutional Neural Networks

Bibliographic Details
Published in: 2024 IEEE 3rd Conference on Information Technology and Data Science (CITDS), pp. 1-6
Main Authors: Sanchez-Callejas, Francisco Emiliano; Cruz-Albarran, Irving A.; Morales-Hernandez, Luis A.
Format: Conference Proceeding
Language: English
Published: IEEE, 26.08.2024
DOI: 10.1109/CITDS62610.2024.10791356

Summary: Humans express emotions verbally and non-verbally through their voice, facial expressions, and body language. Facial expression recognition systems can identify a person's emotional state using different intelligent algorithms, such as Support Vector Machines, Hidden Markov Models, and Convolutional Neural Networks, among others. This study focuses on facial expression recognition using the eye and mouth regions of images from the FER-2013 dataset to train convolutional neural network (CNN) models. Seven emotional states - happy, sad, fear, anger, disgust, surprise, and neutral - were identified. The methodology included segmenting and concatenating the images to form three CNN models. The best-performing model, a four-layer CNN with 8, 16, 32, and 64 filters, achieved 99.05% accuracy, 100.00% precision, 93.75% recall, 96.77% F1-score, 95.95% validation accuracy, and a 0.15 validation loss, with a processing time of 3.03 minutes. A CNN model was thus developed that can identify seven emotional states from only the eye and mouth regions, using concatenated images.
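
The abstract specifies the winning architecture only by its filter counts: four convolutional layers with 8, 16, 32, and 64 filters, trained on concatenated eye and mouth crops and classifying seven emotions. The Python sketch below (using tf.keras) is a minimal illustration of that shape; the kernel sizes, pooling, dense head, input resolution (48x48 grayscale, the native FER-2013 frame size), and the vertical stacking used in concatenate_regions are all assumptions for illustration, not details taken from the paper.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 7            # happy, sad, fear, anger, disgust, surprise, neutral
    INPUT_SHAPE = (48, 48, 1)  # assumption: 48x48 grayscale, the FER-2013 frame size;
                               # the size of the concatenated eye+mouth crop is not stated

    def concatenate_regions(eye_crop: np.ndarray, mouth_crop: np.ndarray) -> np.ndarray:
        """Hypothetical helper: stack the segmented eye and mouth crops
        vertically into one input image. The paper only says the regions
        were segmented and concatenated; the axis here is a guess."""
        return np.concatenate([eye_crop, mouth_crop], axis=0)

    def build_model(input_shape=INPUT_SHAPE, num_classes=NUM_CLASSES):
        """Four convolutional layers with 8, 16, 32, and 64 filters, matching
        the counts reported in the abstract. Kernel sizes, pooling, and the
        dense head are illustrative assumptions."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(8, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(2),
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(2),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(2),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(2),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_model()
    model.summary()

With these assumed settings the model stays small (the 48x48 input shrinks to 3x3x64 before the dense head), which is consistent with the short training time the abstract reports, though the paper's actual hyperparameters may differ.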