Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods

Bibliographic Details
Published in Informatics in Medicine Unlocked, Vol. 20, p. 100372
Main Authors Hassouneh, Aya, Mutawa, A.M., Murugappan, M.
Format Journal Article
Language English
Published Elsevier Ltd 2020
Elsevier
Subjects
Online Access Get full text
ISSN 2352-9148
DOI 10.1016/j.imu.2020.100372

Abstract Real-time emotion recognition has been an active field of research over the past several decades. This work aims to classify the emotional expressions of physically disabled people (deaf, mute, and bedridden) and children with autism, based on facial landmarks and electroencephalography (EEG) signals, using convolutional neural network (CNN) and long short-term memory (LSTM) classifiers. To this end, an algorithm was developed for real-time emotion recognition using virtual markers tracked by an optical flow algorithm; it works effectively under uneven lighting, subject head rotation (up to 25°), different backgrounds, and various skin tones. Six facial emotions (happiness, sadness, anger, fear, disgust, and surprise) are collected using ten virtual markers. Fifty-five undergraduate students (35 male and 25 female) with a mean age of 22.9 years voluntarily participated in the experiment for facial emotion recognition, and nineteen undergraduate students volunteered to collect EEG signals. Initially, Haar-like features are used for face and eye detection. Virtual markers are then placed at defined locations on the subject's face, based on the Facial Action Coding System and a mathematical model, and tracked using the Lucas-Kanade optical flow algorithm. The distance between the center of the subject's face and each marker position is used as a feature for facial expression classification. This distance feature is statistically validated using a one-way analysis of variance at a significance level of p < 0.01. Additionally, the fourteen signals collected from the channels of the EEG headset (EPOC+) are used as features for emotion classification from EEG signals. Finally, the features are cross-validated using fivefold cross-validation and given to the LSTM and CNN classifiers. We achieved a maximum recognition rate of 99.81% using the CNN for emotion detection from facial landmarks, whereas the maximum recognition rate achieved using the LSTM classifier is 87.25% for emotion detection from EEG signals. •Classify emotional expressions based on facial landmarks and EEG signals.•The system allows real-time monitoring of physically disabled patients.•The system works effectively in uneven lighting and various skin tones.
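The distance feature the abstract describes (distance from the face center to each of the ten tracked virtual markers) can be sketched in a few lines. This is a minimal illustration under the assumption that marker coordinates have already been obtained from a tracker; the coordinates below are hypothetical, and this is not the authors' implementation:

```python
import numpy as np

def marker_distance_features(face_center, markers):
    """Euclidean distance from the face center to each virtual marker.

    The paper uses these per-marker distances (ten markers in total)
    as the feature vector for facial-expression classification.
    `face_center` is an (x, y) pair; `markers` is an (N, 2) array of
    tracked marker positions (e.g. from a Lucas-Kanade tracker).
    """
    center = np.asarray(face_center, dtype=float)
    pts = np.asarray(markers, dtype=float)
    return np.linalg.norm(pts - center, axis=1)

# Hypothetical positions for one video frame (two markers shown).
center = (100.0, 100.0)
markers = [(103.0, 104.0), (100.0, 110.0)]
print(marker_distance_features(center, markers))  # distances 5.0 and 10.0
```

One such vector per frame, computed for each tracked frame, yields the time series that is fed to the classifiers.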
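The statistical validation step mentioned in the abstract (a one-way ANOVA on each distance feature across the emotion classes, at p < 0.01) boils down to computing an F statistic per feature. A textbook-standard sketch, not the authors' code, with made-up group data:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across emotion classes.

    Each element of `groups` is a 1-D array of values of one feature
    for one class. A large F (equivalently, p below the chosen
    threshold) indicates the feature's mean differs across classes,
    i.e. the feature discriminates between emotions.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()
    # Between-group sum of squares: spread of class means around the grand mean.
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of samples around their class mean.
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical feature values for three classes.
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
print(f)  # 13.0
```

In practice one would compare the resulting F against the F distribution's critical value for the chosen significance level (e.g. via `scipy.stats.f_oneway`, which returns the p-value directly).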
ArticleNumber 100372
Author Mutawa, A.M.
Murugappan, M.
Hassouneh, Aya
Author_xml – sequence: 1
  givenname: Aya
  surname: Hassouneh
  fullname: Hassouneh, Aya
  email: aia.hassouneh@gmail.com
  organization: Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, Kuwait
– sequence: 2
  givenname: A.M.
  surname: Mutawa
  fullname: Mutawa, A.M.
  organization: Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, Kuwait
– sequence: 3
  givenname: M.
  surname: Murugappan
  fullname: Murugappan, M.
  organization: Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Doha, Kuwait
CitedBy_id crossref_primary_10_28978_nesciences_1159248
crossref_primary_10_3390_a17070285
crossref_primary_10_1016_j_fraope_2024_100200
crossref_primary_10_3389_fpsyg_2023_1126994
crossref_primary_10_1007_s44196_024_00729_9
crossref_primary_10_1002_cav_2230
crossref_primary_10_1016_j_bspc_2024_106096
crossref_primary_10_1155_2022_8379202
crossref_primary_10_3390_s21051870
crossref_primary_10_36548_jiip_2024_1_003
crossref_primary_10_1007_s12145_024_01331_5
crossref_primary_10_1109_ACCESS_2023_3276244
crossref_primary_10_1016_j_ypmed_2023_107590
crossref_primary_10_1016_j_suscom_2022_100677
crossref_primary_10_1016_j_procs_2024_02_169
crossref_primary_10_1007_s42979_025_03783_y
crossref_primary_10_1016_j_dscb_2024_100121
crossref_primary_10_2478_amns_2024_3430
crossref_primary_10_1007_s00500_023_09029_4
crossref_primary_10_1007_s11042_022_13149_8
crossref_primary_10_1007_s11334_022_00437_7
crossref_primary_10_1016_j_chaos_2021_110671
crossref_primary_10_1051_e3sconf_202235101033
crossref_primary_10_1109_JSEN_2022_3168572
crossref_primary_10_1016_j_cmpb_2022_106646
crossref_primary_10_1080_03772063_2024_2414834
crossref_primary_10_1007_s13278_023_01035_6
crossref_primary_10_1155_2020_8875426
crossref_primary_10_21105_joss_04045
crossref_primary_10_1007_s00034_024_02961_2
crossref_primary_10_1016_j_comcom_2022_07_031
crossref_primary_10_1002_cpe_7373
crossref_primary_10_3390_app14020534
crossref_primary_10_1109_JSEN_2023_3265688
crossref_primary_10_1007_s11042_024_20119_9
crossref_primary_10_1007_s42044_022_00103_y
crossref_primary_10_3390_robotics12040099
crossref_primary_10_1063_5_0123238
crossref_primary_10_1007_s10462_023_10690_2
crossref_primary_10_2139_ssrn_4180761
crossref_primary_10_1109_TNSRE_2022_3225948
crossref_primary_10_1117_1_JEI_32_4_040901
crossref_primary_10_1016_j_imavis_2022_104572
crossref_primary_10_3390_app15052328
crossref_primary_10_3390_diagnostics13050977
crossref_primary_10_38124_ijisrt_IJISRT24MAR1662
crossref_primary_10_3389_fnsys_2021_729707
crossref_primary_10_1016_j_eswa_2023_123022
crossref_primary_10_53623_gisa_v3i1_229
crossref_primary_10_1142_S0219265921440059
crossref_primary_10_3390_app14178071
crossref_primary_10_3390_s22134939
crossref_primary_10_1038_s41598_023_43763_x
crossref_primary_10_3389_fpsyg_2022_864047
crossref_primary_10_1007_s11277_024_11188_y
crossref_primary_10_1166_jmihi_2022_3938
crossref_primary_10_1007_s11042_024_20428_z
crossref_primary_10_1007_s11042_024_20572_6
crossref_primary_10_1371_journal_pone_0247131
crossref_primary_10_1007_s40593_023_00357_y
crossref_primary_10_1080_03772063_2023_2220691
crossref_primary_10_1109_ACCESS_2024_3407885
crossref_primary_10_1007_s12597_024_00892_9
crossref_primary_10_35940_ijrte_E6762_0110522
crossref_primary_10_1007_s11135_025_02098_7
crossref_primary_10_1007_s40747_021_00295_z
crossref_primary_10_17350_HJSE19030000277
crossref_primary_10_1016_j_bspc_2022_103877
crossref_primary_10_1007_s12369_021_00830_5
crossref_primary_10_3390_educsci13090914
crossref_primary_10_1080_10447318_2024_2432455
crossref_primary_10_1016_j_aej_2024_07_081
crossref_primary_10_1145_3476049
crossref_primary_10_1007_s00521_022_07292_4
crossref_primary_10_59400_cai_v2i1_1388
crossref_primary_10_1016_j_eswa_2023_121097
crossref_primary_10_1109_ACCESS_2023_3284457
crossref_primary_10_1145_3495002
crossref_primary_10_1109_ACCESS_2021_3060753
crossref_primary_10_1007_s00521_022_08161_w
crossref_primary_10_1016_j_irbm_2024_100836
crossref_primary_10_1109_ACCESS_2021_3051083
crossref_primary_10_1016_j_engappai_2022_105486
crossref_primary_10_1007_s10462_023_10606_0
crossref_primary_10_21869_2223_1536_2024_14_2_72_80
crossref_primary_10_32604_cmc_2023_031924
crossref_primary_10_1109_ACCESS_2024_3427822
crossref_primary_10_46460_ijiea_904838
crossref_primary_10_1007_s42979_023_02519_0
crossref_primary_10_1080_21681163_2023_2299096
crossref_primary_10_3390_s23249619
crossref_primary_10_1007_s13278_023_01181_x
crossref_primary_10_1016_j_heliyon_2024_e31485
Cites_doi 10.3390/s18124270
10.1037/h0077722
10.1371/journal.pone.0148959
10.1007/978-3-319-09333-8_35
10.1016/j.heliyon.2019.e01802
10.1109/TPAMI.2015.2439281
10.1016/j.cviu.2015.09.015
10.1037/h0030377
10.1186/s13073-016-0388-7
10.1088/1742-6596/1187/3/032084
ContentType Journal Article
Copyright 2020 The Authors
Copyright_xml – notice: 2020 The Authors
DBID 6I.
AAFTH
AAYXX
CITATION
DOA
DOI 10.1016/j.imu.2020.100372
DatabaseName ScienceDirect Open Access Titles
Elsevier:ScienceDirect:Open Access
CrossRef
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
DatabaseTitleList

Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
DeliveryMethod fulltext_linktorsrc
EISSN 2352-9148
ExternalDocumentID oai_doaj_org_article_c77ec5bfd0f34fe480cce36ef316cc9a
10_1016_j_imu_2020_100372
S235291482030201X
GroupedDBID 0R~
0SF
6I.
AACTN
AAEDW
AAFTH
AALRI
AAXUO
ABMAC
ACGFS
ADBBV
AEXQZ
AFTJW
AITUG
AKRWK
ALMA_UNASSIGNED_HOLDINGS
AMRAJ
BCNDV
EBS
EJD
FDB
GROUPED_DOAJ
KQ8
M41
M~E
NCXOZ
O9-
OK1
RIG
ROL
SSZ
AAYWO
AAYXX
ACVFH
ADCNI
ADVLN
AEUPX
AFJKZ
AFPUW
AIGII
AKBMS
AKYEP
APXCP
CITATION
ID FETCH-LOGICAL-c3212-e5b4bbddf85553e573f70d9d6d5381b0735bd63f05841840e68d8f278da993bd3
IEDL.DBID DOA
ISSN 2352-9148
IngestDate Wed Aug 27 01:25:06 EDT 2025
Thu Apr 24 23:12:50 EDT 2025
Wed Oct 01 02:12:18 EDT 2025
Sun Apr 21 12:55:14 EDT 2024
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords LSTM
Face emotion recognition
Virtual markers
EEG emotion Detection
Language English
License This is an open access article under the CC BY-NC-ND license.
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c3212-e5b4bbddf85553e573f70d9d6d5381b0735bd63f05841840e68d8f278da993bd3
OpenAccessLink https://doaj.org/article/c77ec5bfd0f34fe480cce36ef316cc9a
ParticipantIDs doaj_primary_oai_doaj_org_article_c77ec5bfd0f34fe480cce36ef316cc9a
crossref_primary_10_1016_j_imu_2020_100372
crossref_citationtrail_10_1016_j_imu_2020_100372
elsevier_sciencedirect_doi_10_1016_j_imu_2020_100372
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2020
2020-00-00
2020-01-01
PublicationDateYYYYMMDD 2020-01-01
PublicationDate_xml – year: 2020
  text: 2020
PublicationDecade 2020
PublicationTitle Informatics in medicine unlocked
PublicationYear 2020
Publisher Elsevier Ltd
Elsevier
Publisher_xml – name: Elsevier Ltd
– name: Elsevier
References Ekman (bib11) 2006
Viola, Jones (bib23) 2001
Weber, Mandl, Kohane (bib5) 2014; 311
Bahreini, van der Vegt, Westera (bib37) 2019
Zhang, Yin, Cheng, Nichele (bib19) 2020
Ekman, Friesen, Ancoli (bib15) 1980; 39
Sally, Paul (bib24) 2007
bib31
Xie (bib2) 2019; 1187
Dada, Bassi, Chiroma, Abdulhamid, Adetunmbi, Ajibuwa (bib1) 2019; 5
Loconsole, Chiaradia, Bevilacqua, Frisoli (bib6) 2014
Krizhevsky, Sutskever, Hinton (bib28) 2012
Lang, Bradley, Cuthbert (bib29) 2008
Palestra, Pettinicchio, Del Coco, Carcagn, Leo, Distante (bib18) 2015
Ouyang, Wang, Zeng, Qiu, Luo, Tian, Li, Yang, Wang, Loy, Tang, Dong, Loy, He (bib26) 2015
Vassilis, Herrmann (bib9) 1997
Tang (bib27) 2016; 38
Ekman, Friesen (bib12) 1971; 17
Nguyen, Trinh, Phan, Nguyen (bib16) 2017
Das, Behera, Pradhan, Tripathy, Jena (bib22) 2015; vol. 2
Bhattacharya, Lindsen (bib30) 2016; 11
Ekman, Friesen, Ancoli (bib14) 1980; 39
Keltiner, Ekrman (bib10) 2000
Magdin, Prikler (bib36) 2017; 1
Sun, Chen, Wang, Tang (bib25) 2014
Huang, Kortelainen, Zhao, Li, Moilanen, Seppänen, Pietikäinen (bib7) 2016; 147
Raheel, Majid, Anwar (bib8) 2019
Lucey (bib38) 2010
Suk, Prabhakaran (bib34) 24–27 June 2014
Loconsole, Miranda, Augusto, Frisoli, Orvalho (bib17) 2014
Wilson, Fernandez (bib20) 2006; 4
Sangaiah, Arumugam, Bian (bib32) 2019
Beckmann, Lew (bib4) 2016; 8
Ekman (bib13) 2006
Jeong, Ko (bib35) 2018; 18
Hegazy, Soliman, Salam (bib3) 2014; 4
Michel, El Kaliouby (bib33) 2003
Zhao, Pietikäinen (bib21) 2007; vol. 4358
Dada (10.1016/j.imu.2020.100372_bib1) 2019; 5
Raheel (10.1016/j.imu.2020.100372_bib8) 2019
Loconsole (10.1016/j.imu.2020.100372_bib17) 2014
Xie (10.1016/j.imu.2020.100372_bib2) 2019; 1187
Nguyen (10.1016/j.imu.2020.100372_bib16) 2017
Das (10.1016/j.imu.2020.100372_bib22) 2015; vol. 2
Tang (10.1016/j.imu.2020.100372_bib27) 2016; 38
Weber (10.1016/j.imu.2020.100372_bib5) 2014; 311
Lang (10.1016/j.imu.2020.100372_bib29) 2008
Bahreini (10.1016/j.imu.2020.100372_bib37) 2019
Michel (10.1016/j.imu.2020.100372_bib33) 2003
Ekman (10.1016/j.imu.2020.100372_bib13) 2006
Suk (10.1016/j.imu.2020.100372_bib34) 2014
Zhang (10.1016/j.imu.2020.100372_bib19) 2020
Ouyang (10.1016/j.imu.2020.100372_bib26) 2015
Huang (10.1016/j.imu.2020.100372_bib7) 2016; 147
Ekman (10.1016/j.imu.2020.100372_bib14) 1980; 39
Beckmann (10.1016/j.imu.2020.100372_bib4) 2016; 8
Viola (10.1016/j.imu.2020.100372_bib23) 2001
Bhattacharya (10.1016/j.imu.2020.100372_bib30) 2016; 11
Krizhevsky (10.1016/j.imu.2020.100372_bib28) 2012
Keltiner (10.1016/j.imu.2020.100372_bib10) 2000
Wilson (10.1016/j.imu.2020.100372_bib20) 2006; 4
Sangaiah (10.1016/j.imu.2020.100372_bib32) 2019
Zhao (10.1016/j.imu.2020.100372_bib21) 2007; vol. 4358
Loconsole (10.1016/j.imu.2020.100372_bib6) 2014
Ekman (10.1016/j.imu.2020.100372_bib12) 1971; 17
Hegazy (10.1016/j.imu.2020.100372_bib3) 2014; 4
Jeong (10.1016/j.imu.2020.100372_bib35) 2018; 18
Palestra (10.1016/j.imu.2020.100372_bib18) 2015
Lucey (10.1016/j.imu.2020.100372_bib38) 2010
Sun (10.1016/j.imu.2020.100372_bib25) 2014
Magdin (10.1016/j.imu.2020.100372_bib36) 2017; 1
Sally (10.1016/j.imu.2020.100372_bib24) 2007
Vassilis (10.1016/j.imu.2020.100372_bib9) 1997
Ekman (10.1016/j.imu.2020.100372_bib11) 2006
Ekman (10.1016/j.imu.2020.100372_bib15) 1980; 39
References_xml – volume: 8
  start-page: 134
  year: 2016
  end-page: 139
  ident: bib4
  article-title: Reconciling evidence-based medicine and precision medicine in the era of big data: challenges and opportunities
  publication-title: Genome Med
– volume: 147
  start-page: 114
  year: 2016
  end-page: 124
  ident: bib7
  article-title: Multi-modal emotion analysis from facial expressions and electroencephalogram
  publication-title: Comput Vis Image Understand
– volume: 17
  start-page: 124
  year: 1971
  ident: bib12
  article-title: Constants across cultures in the face and emotion
  publication-title: J Pers Soc Psychol
– year: 2019
  ident: bib32
  article-title: An intelligent learning approach for improving ECG signal classification and arrhythmia analysis
  publication-title: Artif Intell Med
– volume: 311
  start-page: 2479
  year: 2014
  end-page: 2480
  ident: bib5
  article-title: Finding the missing link for big biomedical data
  publication-title: Jama
– volume: 39
  start-page: 1123
  year: 1980
  end-page: 1134
  ident: bib15
  article-title: Facial signs of emotional experience
  publication-title: J Pers Soc Psychol
– volume: 18
  start-page: 4270
  year: 2018
  ident: bib35
  article-title: Driver's facial expression recognition in real-time for safe driving
  publication-title: Sensors
– start-page: 2403
  year: 2015
  end-page: 2412
  ident: bib26
  article-title: Deepid-net: deformable deep convolutional neural networks for object detection
  publication-title: In proc. IEEE conf. Comput. Vis. Pattern recogn.
– start-page: 258
  year: 2003
  end-page: 264
  ident: bib33
  article-title: Real time facial expression recognition in video using support vector machines
  publication-title: Proc. 5th Int. Conf. on Multimodal Interfaces
– year: 2001
  ident: bib23
  article-title: Rapid object detection using a boosted cascade of simple features
– volume: 1187
  year: 2019
  ident: bib2
  article-title: Development of artificial intelligence and effects on financial system
  publication-title: J Phys Conf
– year: 2007
  ident: bib24
  article-title: "Chapter 3: Pythagorean triples". Roots to research: a vertical development of mathematical problems
– year: 1997
  ident: bib9
  article-title: Where do machine learning and human-computer interaction meet?
– volume: vol. 4358
  year: 2007
  ident: bib21
  article-title: Dynamic Texture Recognition Using Volume Local Binary Patterns
  publication-title: Dynamical Vision. WDV 2006, WDV 2005. Lecture Notes in Computer Science
– year: 2008
  ident: bib29
  article-title: International affective picture system (IAPS): affective ratings of pictures and instruction manual
– start-page: 1
  year: 2019
  end-page: 5
  ident: bib8
  article-title: Facial expression recognition based on electroencephalography
  publication-title: 2019 2nd international conference on computing, mathematics and engineering technologies (iCoMET), Sukkur, Pakistan
– volume: vol. 2
  start-page: 221
  year: 2015
  end-page: 234
  ident: bib22
  article-title: A modified real time A* algorithm and its performance analysis for improved path planning of mobile robot
  publication-title: Computational intelligence in data mining, springer India
– start-page: 1097
  year: 2012
  end-page: 1105
  ident: bib28
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: in Proc. Adv. Neural Inf. Process. Syst.
– start-page: 1973
  year: 2006
  ident: bib13
  article-title: Darwin and facial expression: a century of research in review
– start-page: 236
  year: 2000
  end-page: 249
  ident: bib10
  publication-title: Facial expression of emotion, hand book of emotions
– start-page: 94
  year: 2010
  end-page: 101
  ident: bib38
  article-title: The Extended Cohn-Kanade Dataset (CK+): a complete dataset for action unit and emotion-specified expression
  publication-title: 2010 IEEE computer society conference on computer vision and pattern recognition – workshops
– volume: 4
  year: 2006
  ident: bib20
  article-title: Facial feature detection using Haar classifiers
  publication-title: J. Comput. Small Coll., vol. 21, no.
– volume: 1
  year: 2017
  ident: bib36
  article-title: Real time facial expression recognition using webcam and SDK affectiva
  publication-title: International Journal of Interactive Multimedia and Artificial Intelligence
– volume: 11
  year: 2016
  ident: bib30
  article-title: Music for a brighter world: brightness judgment bias by musical emotion
  publication-title: PloS One
– start-page: 518
  year: 2015
  end-page: 528
  ident: bib18
  article-title: Improved performance in facial expression recognition using 32 geometric features
  publication-title: Proceedings of the 18th international conference on image analysis and processing
– start-page: 320
  year: 2014
  end-page: 331
  ident: bib6
  article-title: Real-time emotion recognition: an improved hybrid approach for classification performance
  publication-title: Intelligent Computing Theory
– start-page: 1973
  year: 2006
  ident: bib11
  article-title: Darwin and facial expression: a century of research in review
– volume: 38
  start-page: 295
  year: 2016
  end-page: 307
  ident: bib27
  article-title: Image super-resolution using deep convolutional networks
  publication-title: IEEE Trans Pattern Anal Mach Intell
– year: 2017
  ident: bib16
  article-title: An efficient real-time emotion detection using camera and facial landmarks
  publication-title: 2017 seventh international conference on information science and technology (ICIST)
– ident: bib31
– start-page: 1
  year: 2019
  end-page: 24
  ident: bib37
  article-title: A fuzzy logic approach to reliable real-time recognition of facial emotions
  publication-title: Multimed Tool Appl
– volume: 4
  start-page: 16
  year: 2014
  end-page: 23
  ident: bib3
  article-title: A machine learning model for stock market prediction
  publication-title: Int J Comput Sci Telecommun
– start-page: 1988
  year: 2014
  end-page: 1996
  ident: bib25
  article-title: Deep learning face representation by joint identification-verification
  publication-title: Proc. Adv. Neural inf. Process. Syst.
– volume: 39
  start-page: 1123
  year: 1980
  end-page: 1134
  ident: bib14
  article-title: Facial signs of emotional experience
  publication-title: J Pers Soc Psychol
– year: 2020
  ident: bib19
  article-title: Emotion recognition using multi-modal data and machine learning techniques: a tutorial and review. Information fusion
– volume: 5
  year: 2019
  ident: bib1
  article-title: Machine learning for email spam filtering: review, approaches and open research problems
  publication-title: Heliyon
– start-page: 132
  year: 24–27 June 2014
  end-page: 137
  ident: bib34
  article-title: Real-time mobile facial expression recognition system—a case study
  publication-title: Proceedings of the IEEE conference on computer vision and pattern recognition workshops; columbus, OH, USA
– start-page: 378
  year: 2014
  end-page: 385
  ident: bib17
  article-title: Real-time emotion recognition novel method for geometrical facial features extraction
  publication-title: Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP)
– year: 2007
  ident: 10.1016/j.imu.2020.100372_bib24
– volume: 18
  start-page: 4270
  year: 2018
  ident: 10.1016/j.imu.2020.100372_bib35
  article-title: Driver's facial expression recognition in real-time for safe driving
  publication-title: Sensors
  doi: 10.3390/s18124270
– volume: 39
  start-page: 1123
  year: 1980
  ident: 10.1016/j.imu.2020.100372_bib15
  article-title: Facial signs of emotional experience
  publication-title: J Pers Soc Psychol
  doi: 10.1037/h0077722
– volume: vol. 2
  start-page: 221
  year: 2015
  ident: 10.1016/j.imu.2020.100372_bib22
  article-title: A modified real time A* algorithm and its performance analysis for improved path planning of mobile robot
– start-page: 236
  year: 2000
  ident: 10.1016/j.imu.2020.100372_bib10
– volume: 4
  year: 2006
  ident: 10.1016/j.imu.2020.100372_bib20
  article-title: Facial feature detection using Haar classifiers
  publication-title: J. Comput. Small Coll., vol. 21, no.
– start-page: 1097
  year: 2012
  ident: 10.1016/j.imu.2020.100372_bib28
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: in Proc. Adv. Neural Inf. Process. Syst.
– year: 2001
  ident: 10.1016/j.imu.2020.100372_bib23
– start-page: 1973
  year: 2006
  ident: 10.1016/j.imu.2020.100372_bib11
– volume: 11
  issue: 2
  year: 2016
  ident: 10.1016/j.imu.2020.100372_bib30
  article-title: Music for a brighter world: brightness judgment bias by musical emotion
  publication-title: PloS One
  doi: 10.1371/journal.pone.0148959
– start-page: 320
  year: 2014
  ident: 10.1016/j.imu.2020.100372_bib6
  article-title: Real-time emotion recognition: an improved hybrid approach for classification performance
  publication-title: Intelligent Computing Theory
  doi: 10.1007/978-3-319-09333-8_35
– volume: 5
  issue: 6
  year: 2019
  ident: 10.1016/j.imu.2020.100372_bib1
  article-title: Machine learning for email spam filtering: review, approaches and open research problems
  publication-title: Heliyon
  doi: 10.1016/j.heliyon.2019.e01802
– start-page: 132
  year: 2014
  ident: 10.1016/j.imu.2020.100372_bib34
  article-title: Real-time mobile facial expression recognition system—a case study
– start-page: 2403
  year: 2015
  ident: 10.1016/j.imu.2020.100372_bib26
  article-title: Deepid-net: deformable deep convolutional neural networks for object detection
– start-page: 1973
  year: 2006
  ident: 10.1016/j.imu.2020.100372_bib13
– volume: vol. 4358
  year: 2007
  ident: 10.1016/j.imu.2020.100372_bib21
  article-title: Dynamic Texture Recognition Using Volume Local Binary Patterns
– volume: 38
  start-page: 295
  issue: 2
  year: 2016
  ident: 10.1016/j.imu.2020.100372_bib27
  article-title: Image super-resolution using deep convolutional networks
  publication-title: IEEE Trans Pattern Anal Mach Intell
  doi: 10.1109/TPAMI.2015.2439281
– start-page: 1988
  year: 2014
  ident: 10.1016/j.imu.2020.100372_bib25
  article-title: Deep learning face representation by joint identification-verification
– volume: 147
  start-page: 114
  year: 2016
  ident: 10.1016/j.imu.2020.100372_bib7
  article-title: Multi-modal emotion analysis from facial expressions and electroencephalogram
  publication-title: Comput Vis Image Understand
  doi: 10.1016/j.cviu.2015.09.015
– start-page: 518
  year: 2015
  ident: 10.1016/j.imu.2020.100372_bib18
  article-title: Improved performance in facial expression recognition using 32 geometric features
– volume: 1
  year: 2017
  ident: 10.1016/j.imu.2020.100372_bib36
  article-title: Real time facial expression recognition using webcam and SDK affectiva
  publication-title: International Journal of Interactive Multimedia and Artificial Intelligence
– year: 2019
  ident: 10.1016/j.imu.2020.100372_bib32
  article-title: An intelligent learning approach for improving ECG signal classification and arrhythmia analysis
  publication-title: Artif Intell Med
– start-page: 1
  year: 2019
  ident: 10.1016/j.imu.2020.100372_bib8
  article-title: Facial expression recognition based on electroencephalography
– volume: 17
  start-page: 124
  issue: 2
  year: 1971
  ident: 10.1016/j.imu.2020.100372_bib12
  article-title: Constants across cultures in the face and emotion
  publication-title: J Pers Soc Psychol
  doi: 10.1037/h0030377
– volume: 39
  start-page: 1123
  year: 1980
  ident: 10.1016/j.imu.2020.100372_bib14
  article-title: Facial signs of emotional experience
  publication-title: J Pers Soc Psychol
  doi: 10.1037/h0077722
– start-page: 1
  year: 2019
  ident: 10.1016/j.imu.2020.100372_bib37
  article-title: A fuzzy logic approach to reliable real-time recognition of facial emotions
  publication-title: Multimed Tool Appl
– volume: 8
  start-page: 134
  issue: 1
  year: 2016
  ident: 10.1016/j.imu.2020.100372_bib4
  article-title: Reconciling evidence-based medicine and precision medicine in the era of big data: challenges and opportunities
  publication-title: Genome Med
  doi: 10.1186/s13073-016-0388-7
– volume: 311
  start-page: 2479
  issue: 24
  year: 2014
  ident: 10.1016/j.imu.2020.100372_bib5
  article-title: Finding the missing link for big biomedical data
  publication-title: Jama
– start-page: 378
  year: 2014
  ident: 10.1016/j.imu.2020.100372_bib17
  article-title: Real-time emotion recognition novel method for geometrical facial features extraction
  publication-title: Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP)
– start-page: 258
  year: 2003
  ident: 10.1016/j.imu.2020.100372_bib33
  article-title: Real time facial expression recognition in video using support vector machines
– volume: 4
  start-page: 16
  issue: 12
  year: 2014
  ident: 10.1016/j.imu.2020.100372_bib3
  article-title: A machine learning model for stock market prediction
  publication-title: Int J Comput Sci Telecommun
– year: 1997
  ident: 10.1016/j.imu.2020.100372_bib9
– year: 2008
  ident: 10.1016/j.imu.2020.100372_bib29
– start-page: 94
  year: 2010
  ident: 10.1016/j.imu.2020.100372_bib38
  article-title: The Extended Cohn-Kanade Dataset (CK+): a complete dataset for action unit and emotion-specified expression
– year: 2017
  ident: 10.1016/j.imu.2020.100372_bib16
  article-title: An efficient real-time emotion detection using camera and facial landmarks
– volume: 1187
  year: 2019
  ident: 10.1016/j.imu.2020.100372_bib2
  article-title: Development of artificial intelligence and effects on financial system
  publication-title: J Phys Conf
  doi: 10.1088/1742-6596/1187/3/032084
– year: 2020
  ident: 10.1016/j.imu.2020.100372_bib19
SSID ssj0001763575
Score 2.5559268
SourceID doaj
crossref
elsevier
SourceType Open Website
Enrichment Source
Index Database
Publisher
StartPage 100372
SubjectTerms EEG emotion Detection
Face emotion recognition
LSTM
Virtual markers
Title Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods
URI https://dx.doi.org/10.1016/j.imu.2020.100372
https://doaj.org/article/c77ec5bfd0f34fe480cce36ef316cc9a
Volume 20
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAFT
  databaseName: Colorado Digital library
  customDbUrl:
  eissn: 2352-9148
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0001763575
  issn: 2352-9148
  databaseCode: KQ8
  dateStart: 20150101
  isFulltext: true
  titleUrlDefault: http://grweb.coalliance.org/oadl/oadl.html
  providerName: Colorado Alliance of Research Libraries
– providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 2352-9148
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0001763575
  issn: 2352-9148
  databaseCode: DOA
  dateStart: 20150101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 2352-9148
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0001763575
  issn: 2352-9148
  databaseCode: M~E
  dateStart: 20150101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
– providerCode: PRVLSH
  databaseName: Elsevier Journals
  customDbUrl:
  mediaType: online
  eissn: 2352-9148
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0001763575
  issn: 2352-9148
  databaseCode: AKRWK
  dateStart: 20150101
  isFulltext: true
  providerName: Library Specific Holdings
linkProvider Directory of Open Access Journals
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Development+of+a+Real-Time+Emotion+Recognition+System+Using+Facial+Expressions+and+EEG+based+on+machine+learning+and+deep+neural+network+methods&rft.jtitle=Informatics+in+medicine+unlocked&rft.au=Hassouneh%2C+Aya&rft.au=Mutawa%2C+A.M.&rft.au=Murugappan%2C+M.&rft.date=2020&rft.issn=2352-9148&rft.eissn=2352-9148&rft.volume=20&rft.spage=100372&rft_id=info:doi/10.1016%2Fj.imu.2020.100372&rft.externalDBID=n%2Fa&rft.externalDocID=10_1016_j_imu_2020_100372