Novel SR-RNN Classifier for Accurate Emotion Detection in Facial Analysis

Bibliographic Details
Published in: Statistics, Optimization & Information Computing, Vol. 13, No. 4, pp. 1557–1577
Main Authors: Bedre, Jyoti S.; Prasanna, P. Lakshmi
Format: Journal Article
Language: English
Published: 2025
ISSN: 2311-004X, 2310-5070
DOI: 10.19139/soic-2310-5070-2142

Summary: Facial Expression Recognition (FER) is crucial for understanding human emotions in fields such as human–computer interaction and psychology. Despite advances in deep learning (DL), existing FER methods often struggle with noise, lighting variations, and inter-subject variability, leading to inaccurate emotion classification. This paper addresses these challenges by proposing a novel SwikyRelu Recurrent Neural Network (SR-RNN) classifier, with the aim of improving FER accuracy while reducing computational complexity. The methodology is a multi-step process that begins with image pre-processing using an Adaptive Mode Guided Filter (AMGF) and Contrast Limited Adaptive Histogram Equalization (CLAHE). Key facial features are extracted with the Generative Additive Active Shape Model (GAASM) and clustered into subgraphs using Radial Basis K-Medoids Clustering (RBKMC). Feature selection is optimized by the Chaotic Ternary Remora Optimization (CTRO) algorithm, and the selected features are fed into the SR-RNN classifier for emotion categorization. Results from extensive testing on the CK+, FER-2013, and RAF-DB datasets show that the proposed SR-RNN classifier significantly outperforms conventional models, achieving 98.85%, 91.79%, and 89.28% accuracy, respectively. The conclusion highlights the model's ability to enhance FER performance by effectively handling noise, illumination differences, and inter-subject variability.
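The pre-processing stage pairs AMGF with CLAHE, which equalizes contrast within local tiles while clipping the histogram to limit noise amplification. As a rough illustration of the underlying operation only (not the authors' implementation), the following is a minimal NumPy sketch of plain global histogram equalization, the remapping that CLAHE applies per tile:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization of an 8-bit grayscale image.

    CLAHE, as used in the paper's pre-processing step, applies this
    remapping per local tile with a clip limit on the histogram; this
    sketch shows only the core global operation, not the paper's pipeline.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # Remap intensities so the output CDF is approximately uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast image: intensities squeezed into [100, 120].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize_hist(img)  # intensities stretched across the full 8-bit range
```

In this sketch the 21-level input range is stretched to span [0, 255]; CLAHE differs in that it computes one such lookup table per tile, clips each tile's histogram before building the CDF, and bilinearly interpolates between tile mappings.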
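The abstract describes RBKMC only by name, so its exact formulation is unknown. Assuming "radial basis" refers to a distance derived from an RBF kernel (a guess on our part), a plain k-medoids pass over such distances can be sketched as follows; this is a hypothetical illustration, not the paper's algorithm:

```python
import numpy as np

def k_medoids(X, k, gamma=0.5, n_iter=20):
    """Plain k-medoids clustering over an RBF-derived distance matrix.

    NOTE: interpreting "radial basis" as an RBF-kernel distance is an
    assumption about RBKMC; the paper's definition may differ.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared Euclidean
    D = 1.0 - np.exp(-gamma * sq)                        # RBF similarity -> distance
    # Deterministic farthest-point initialization.
    medoids = [0]
    while len(medoids) < k:
        medoids.append(int(D[medoids].min(axis=0).argmax()))
    medoids = np.asarray(medoids)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)            # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            # New medoid: the member minimizing total distance within its cluster.
            new_medoids[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Two well-separated point groups cluster cleanly.
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
medoids, labels = k_medoids(X, k=2)
```

Unlike k-means, k-medoids restricts cluster centers to actual data points, which is what makes it usable here: it needs only the precomputed distance matrix, not a mean in the kernel-induced space.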