Optical-Flow Based Symmetric Feature Extraction for Facial Expression Recognition


Bibliographic Details
Published in: Information technology and control, Vol. 54, No. 1, pp. 44-63
Main Authors: Zeraatkar, Mohammad; Joloudari, Javad; Rajesh, Kandala N. V. P. S.; Gaftandzhieva, Silvia; Hussain, Sadiq
Format: Journal Article
Language: English
Published: Kaunas University of Technology, 01.04.2025
ISSN: 1392-124X, 2335-884X
DOI: 10.5755/j01.itc.54.1.36444

Summary: Facial expression analysis is one of the most essential tools for behavior interpretation and emotion modeling in Intelligent Human-Computer Interaction (HCI). Although humans interpret facial emotions easily, computers have great difficulty doing so. Analyzing changes and deformations in the face is one way machines can interpret facial expressions, but achieving high precision while remaining stable and fast is still challenging. To address this issue, this research presents a novel method that fully automatically extracts critical features from a face during a facial expression; various machine learning models are then applied to these features to classify emotions. We use the optical flow algorithm to extract motion vectors over sections of the subject's face, and each section is paired with its symmetric section to compute a new feature vector. The final features produce state-of-the-art accuracy of over 98% in emotion classification on the Extended Cohn-Kanade (CK+) facial expression dataset. Furthermore, we propose an algorithm that filters the most important features with an SVM classifier and achieves an accuracy of over 97% while using only 15% of the face area.
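
To make the pipeline concrete, the sketch below illustrates the three steps the summary describes: dense optical flow between a neutral and a peak-expression frame, pooling of the motion vectors over face sections, and pairing each section with its mirror-symmetric section. It is a minimal illustration only: it assumes OpenCV's Farneback optical flow and a scikit-learn SVM, and the grid size and the symmetric-combination rule (averaging a cell with its x-mirrored counterpart) are hypothetical stand-ins for the paper's exact formulation.

import cv2
import numpy as np
from sklearn.svm import SVC

GRID = 8  # hypothetical number of sections per axis, not the paper's value

def symmetric_flow_features(neutral_gray, peak_gray):
    # Dense optical flow (Farneback) between a neutral frame and the
    # peak-expression frame; both inputs are aligned grayscale face crops
    # whose vertical midline is assumed to be the axis of symmetry.
    flow = cv2.calcOpticalFlowFarneback(neutral_gray, peak_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    ch, cw = h // GRID, w // GRID
    # Mean motion vector (dx, dy) per grid cell.
    cells = np.array([
        flow[r*ch:(r+1)*ch, c*cw:(c+1)*cw].reshape(-1, 2).mean(axis=0)
        for r in range(GRID) for c in range(GRID)
    ]).reshape(GRID, GRID, 2)
    feats = []
    for r in range(GRID):
        for c in range(GRID // 2):          # pair each left-half cell ...
            left = cells[r, c]
            right = cells[r, GRID - 1 - c]  # ... with its mirror cell
            # Illustrative symmetric combination: negate the x-component
            # of the mirror cell, then average the pair into one vector.
            feats.append((left + np.array([-right[0], right[1]])) / 2.0)
    return np.concatenate(feats)

# Usage sketch: build a feature matrix from (neutral, peak) frame pairs
# and train the SVM used for classification and feature filtering.
# X = np.stack([symmetric_flow_features(n, p) for n, p in pairs])
# clf = SVC(kernel="linear").fit(X, labels)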