Eye movement prediction and variability on natural video data sets

Bibliographic Details
Published in: Visual Cognition, Vol. 20, No. 4-5, pp. 495-514
Main Authors: Dorr, Michael; Vig, Eleonora; Barth, Erhardt
Format: Journal Article
Language: English
Published: England: Taylor & Francis Group, 01.04.2012
ISSN: 1350-6285
EISSN: 1464-0716
DOI: 10.1080/13506285.2012.667456

Summary: Here we study the predictability of eye movements when viewing high-resolution natural videos. We use three recently published gaze data sets that contain a wide range of footage, from scenes of almost still-life character to professionally made, fast-paced advertisements and movie trailers. Intersubject gaze variability differs significantly between data sets, with variability being lowest for the professional movies. We then evaluate three state-of-the-art saliency models on these data sets. A model that is based on the invariants of the structure tensor, and that combines very generic, sparse video representations with machine learning techniques, outperforms the two reference models; performance is further improved for two data sets when the model is extended to a perceptually inspired colour space. Finally, a combined analysis of gaze variability and predictability shows that eye movements on the professionally made movies are the most coherent (due to implicit gaze-guidance strategies of the movie directors), yet the least predictable (presumably due to the frequent cuts). Our results highlight the need for standardized benchmarks to comparatively evaluate eye movement prediction algorithms.
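The best-performing model in the abstract builds on the invariants of the space-time structure tensor. As a rough illustration of what such feature extraction could look like (a minimal sketch, not the authors' implementation; the function name and the smoothing scales sigma_d and sigma_i are illustrative assumptions), the Python/NumPy fragment below computes the three invariants H, S, and K, i.e., the elementary symmetric polynomials of the tensor's eigenvalues, over a grey-scale video volume:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_invariants(video, sigma_d=1.0, sigma_i=2.0):
    """Compute the invariants H, S, K of the 3D structure tensor
    for a grey-scale video volume of shape (t, y, x).

    Illustrative sketch only: parameter values and names are
    assumptions, not taken from the paper.
    """
    # Gradients along t, y, x after pre-smoothing at scale sigma_d.
    grads = np.gradient(gaussian_filter(video.astype(np.float64), sigma_d))

    # Structure tensor entries J_ij = G_sigma_i * (d_i V * d_j V),
    # smoothed over a local integration window of scale sigma_i.
    J = np.empty((3, 3) + video.shape)
    for i in range(3):
        for j in range(i, 3):
            J[i, j] = J[j, i] = gaussian_filter(grads[i] * grads[j], sigma_i)

    # Invariants = elementary symmetric polynomials of the eigenvalues:
    # H = trace, S = sum of the principal 2x2 minors, K = determinant.
    H = J[0, 0] + J[1, 1] + J[2, 2]
    S = (J[0, 0] * J[1, 1] - J[0, 1] ** 2
         + J[0, 0] * J[2, 2] - J[0, 2] ** 2
         + J[1, 1] * J[2, 2] - J[1, 2] ** 2)
    K = (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] ** 2)
         - J[0, 1] * (J[0, 1] * J[2, 2] - J[1, 2] * J[0, 2])
         + J[0, 2] * (J[0, 1] * J[1, 2] - J[1, 1] * J[0, 2]))
    return H, S, K
```

Loosely speaking, locations where all three invariants are large correspond to intrinsically three-dimensional (i.e., transient, moving) structure; in this line of work, such sparse responses serve as the generic video representation that is then combined with machine learning techniques to predict gaze.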