Get the FACS fast: Automated FACS face analysis benefits from the addition of velocity

Bibliographic Details
Published in: 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 10-12 Sept. 2009, pp. 1-7
Main Authors: Brick, T.R., Hunter, M.D., Cohn, J.F.
Format: Conference Proceeding; Journal Article
Language: English
Published: United States: IEEE, 01.09.2009
ISBN: 9781424448005, 142444800X
ISSN: 2156-8103, 2156-8111
DOI: 10.1109/ACII.2009.5349600

More Information
Summary: Much progress has been made in automated facial image analysis, yet current approaches still lag behind what is possible using manual labeling of facial actions. While many factors may contribute, a key one may be the limited attention paid to the dynamics of facial action. Most approaches classify frames in terms of either displacement from a neutral, mean face or, less frequently, displacement between successive frames (i.e., velocity). In the current paper, we evaluated the hypothesis that attention to dynamics can boost recognition rates. Using the well-known Cohn-Kanade database and support vector machines, adding velocity and acceleration features decreased the number of incorrect classifications by 14.2% and 11.2%, respectively. Average classification accuracy for the displacement-plus-velocity classifier system across all classifiers was 90.2%. The findings were replicated using linear discriminant analysis, which showed a mean decrease of 16.4% in incorrect classifications across classifiers. These findings suggest that information about the dynamics of a movement, that is, its velocity and, to a lesser extent, its acceleration, can usefully inform the classification of facial expressions.
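
The summary describes the feature construction only at a high level: per-frame displacement from a neutral/mean face, augmented with its first and second temporal differences (velocity and acceleration), fed to a support vector machine. The Python sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. The landmark count, the synthetic data, the per-frame labels, and the linear kernel are all illustrative choices.

    # Minimal sketch of displacement + velocity + acceleration features for a
    # per-frame SVM classifier. All data here is synthetic and illustrative.
    import numpy as np
    from sklearn.svm import SVC

    def dynamic_features(displacement):
        """Stack displacement with its temporal derivatives.

        displacement: (n_frames, n_features) array, each row the landmark
        displacement from the neutral/mean face at that frame.
        """
        # np.gradient uses central differences; a simple successive-frame
        # difference would also fit the description in the summary.
        velocity = np.gradient(displacement, axis=0)
        acceleration = np.gradient(velocity, axis=0)
        return np.hstack([displacement, velocity, acceleration])

    # Hypothetical usage on synthetic data: 200 frames of 68 (x, y) landmarks
    # and a toy binary label per frame (e.g., action unit present/absent).
    rng = np.random.default_rng(0)
    X_disp = rng.normal(size=(200, 68 * 2))
    y = rng.integers(0, 2, size=200)

    X = dynamic_features(X_disp)
    clf = SVC(kernel="linear").fit(X, y)
    print(f"training accuracy: {clf.score(X, y):.3f}")

Stacking the derivatives triples the feature dimension, so the practical question is whether the added dynamics improve a given classifier; that is precisely the comparison the summary reports (e.g., the 14.2% reduction in errors from adding velocity).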