Action Units and Their Cross-Correlations for Prediction of Cognitive Load during Driving

Bibliographic Details
Published in: IEEE Transactions on Affective Computing, Vol. 8, No. 2, pp. 161-175
Main Authors: Yuce, Anil; Gao, Hua; Cuendet, Gabriel L.; Thiran, Jean-Philippe
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2017
ISSN: 1949-3045, 2371-9850
DOI: 10.1109/TAFFC.2016.2584042

More Information
Summary: Driving requires the constant coordination of many body systems and full attention of the person. Cognitive distraction (subsidiary mental load) of the driver is an important factor that decreases attention and responsiveness, which may result in human error and accidents. In this paper, we present a study of facial expressions of such mental diversion of attention. First, we introduce a multi-camera database of 46 people recorded while driving a simulator in two conditions, baseline and induced cognitive load using a secondary task. Then, we present an automatic system to differentiate between the two conditions, where we use features extracted from Facial Action Unit (AU) values and their cross-correlations in order to exploit recurring synchronization and causality patterns. Both the recording and detection system are suitable for integration in a vehicle and a real-world application, e.g., an early warning system. We show that when the system is trained individually on each subject we achieve a mean accuracy and F-score of ~95 percent, and for the subject independent tests ~68 percent accuracy and ~66 percent F-score, with person-specific normalization to handle subject dependency. Based on the results, we discuss the universality of the facial expressions of such states and possible real-world uses of the system.
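The summary describes features built from per-frame Action Unit (AU) values and their cross-correlations, to capture synchronization between AU tracks. As a minimal illustrative sketch only, and not the authors' exact pipeline, the following Python computes, for each pair of z-scored AU intensity time series, the peak normalized cross-correlation within a lag window; the function name, the lag window, and the choice of the peak as the feature statistic are all assumptions made here for illustration.

```python
import numpy as np

def au_crosscorr_features(au_series, max_lag=30):
    """Pairwise peak cross-correlation between AU time series.

    au_series: array of shape (n_aus, n_frames), per-frame AU intensities.
    Returns one feature per AU pair: the maximum normalized
    cross-correlation within +/- max_lag frames.
    """
    n_aus, n_frames = au_series.shape
    # z-score each AU track so the correlation is scale-invariant
    std = au_series.std(axis=1, keepdims=True)
    std[std == 0] = 1.0  # guard against constant (inactive) AUs
    z = (au_series - au_series.mean(axis=1, keepdims=True)) / std
    feats = []
    for i in range(n_aus):
        for j in range(i + 1, n_aus):
            # full cross-correlation, normalized by track length
            xc = np.correlate(z[i], z[j], mode="full") / n_frames
            center = n_frames - 1  # index of zero lag in 'full' output
            window = xc[center - max_lag: center + max_lag + 1]
            # strongest synchrony found within the lag window
            feats.append(window.max())
    return np.array(feats)
```

A pair of AUs that tend to activate together (possibly with a short delay) yields a feature near 1, while unrelated AUs stay near 0; a classifier trained on these pairwise features could then separate the baseline and cognitive-load conditions.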