Development of a real-time emotion recognition system using facial expressions and EEG based on machine learning and deep neural network methods
Published in | Informatics in Medicine Unlocked Vol. 20; p. 100372
---|---
Main Authors | , ,
Format | Journal Article
Language | English
Published | Elsevier Ltd, 2020
ISSN | 2352-9148
DOI | 10.1016/j.imu.2020.100372
Summary: Real-time emotion recognition has been an active field of research over the past several decades. This work aims to classify the emotional expressions of physically disabled people (deaf, mute, and bedridden) and children with autism based on facial landmarks and electroencephalograph (EEG) signals, using convolutional neural network (CNN) and long short-term memory (LSTM) classifiers. The authors develop an algorithm for real-time emotion recognition that tracks virtual markers with an optical flow algorithm and works effectively under uneven lighting, subject head rotation of up to 25°, different backgrounds, and various skin tones. Six facial emotions (happiness, sadness, anger, fear, disgust, and surprise) are captured using ten virtual markers. Fifty-five undergraduate students (35 male and 20 female) with a mean age of 22.9 years voluntarily participated in the experiment for facial emotion recognition, and nineteen undergraduate students volunteered for EEG signal collection. Initially, Haar-like features are used for face and eye detection. Virtual markers are then placed at defined locations on the subject's face, based on the facial action coding system, using a mathematical model approach, and the markers are tracked with the Lucas-Kanade optical flow algorithm. The distance between the center of the subject's face and each marker position is used as a feature for facial expression classification; this distance feature is statistically validated using a one-way analysis of variance at a significance level of p < 0.01. Additionally, the signals collected from the fourteen channels of the EEG headset (EPOC+) are used as features for emotion classification from EEG signals. Finally, the features are validated using fivefold cross-validation and given to the LSTM and CNN classifiers. The CNN achieved a maximum recognition rate of 99.81% for emotion detection from facial landmarks, while the LSTM classifier reached a maximum of 87.25% for emotion detection from EEG signals.
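To make the facial pipeline in the summary concrete, the sketch below shows the detection-and-tracking stages in Python with OpenCV: Haar-cascade face detection, ten virtual markers placed at fixed positions inside the face box, pyramidal Lucas-Kanade tracking, and the center-to-marker distance features. This is a minimal illustration, not the authors' released code; the marker positions (`MARKER_FRACTIONS`) and detector parameters are assumptions standing in for the paper's FACS-based locations.

```python
import cv2
import numpy as np

# Haar cascade shipped with OpenCV, standing in for the Haar-like
# face detection stage described in the summary.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Ten marker positions as fractions of the face box. These fractions are
# illustrative placeholders for the paper's FACS-based locations.
MARKER_FRACTIONS = [
    (0.30, 0.30), (0.70, 0.30),   # eyebrows
    (0.30, 0.45), (0.70, 0.45),   # eyes
    (0.50, 0.55),                 # nose tip
    (0.35, 0.75), (0.65, 0.75),   # mouth corners
    (0.50, 0.70), (0.50, 0.82),   # upper / lower lip
    (0.50, 0.95),                 # chin
]

def place_markers(face):
    x, y, w, h = face
    pts = [[x + fx * w, y + fy * h] for fx, fy in MARKER_FRACTIONS]
    return np.float32(pts).reshape(-1, 1, 2)

def distance_features(markers, face):
    """Distance from the face center to each marker: the 10-dimensional
    feature vector the summary describes for expression classification."""
    x, y, w, h = face
    center = np.array([x + w / 2.0, y + h / 2.0])
    return np.linalg.norm(markers.reshape(-1, 2) - center, axis=1)

cap = cv2.VideoCapture(0)
prev_gray, markers, face = None, None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if markers is None:
        # (Re-)detect the face and drop fresh markers onto it.
        found = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(found):
            face = found[0]
            markers = place_markers(face)
    else:
        # Track the ten markers frame-to-frame with pyramidal Lucas-Kanade.
        markers, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, markers, None)
        if status.all():
            feats = distance_features(markers, face)  # feed to the classifier
        else:
            markers = None  # lost track: re-detect on the next frame
    prev_gray = gray
    if markers is not None:
        for px, py in markers.reshape(-1, 2):
            cv2.circle(frame, (int(px), int(py)), 3, (0, 255, 0), -1)
    cv2.imshow("virtual markers", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```

The resulting 10-dimensional `feats` vector is the kind of distance feature the summary says is handed to the CNN for the facial branch.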
• Classify emotional expressions based on facial landmarks and EEG signals.
• The system allows real-time monitoring of physically disabled patients.
• The system works effectively in uneven lighting and various skin tones.
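For the EEG branch, the paper's record here does not state an implementation, so the following Keras sketch is only an assumed shape of the classification stage: an LSTM over fixed-length windows of the fourteen EPOC+ channels with six emotion outputs. The window length, layer sizes, and training setup are illustrative, not taken from the paper.

```python
import numpy as np
from tensorflow.keras import layers, models

N_CHANNELS = 14   # EPOC+ electrode channels (from the summary)
WINDOW = 128      # samples per window -- illustrative choice, not from the paper
N_EMOTIONS = 6    # happiness, sadness, anger, fear, disgust, surprise

# Minimal LSTM classifier over EEG windows, an assumed stand-in for the
# paper's LSTM stage.
model = models.Sequential([
    layers.Input(shape=(WINDOW, N_CHANNELS)),
    layers.LSTM(64),                      # summarize the EEG window
    layers.Dense(32, activation="relu"),
    layers.Dense(N_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data with the expected shapes, standing in for real EEG windows.
X = np.random.randn(32, WINDOW, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_EMOTIONS, size=32)
model.fit(X, y, epochs=1, verbose=0)
```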