Task-Driven Evaluation of Aggregation in Time Series Visualization

Bibliographic Details
Published in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014; p. 551
Main Authors: Albers, Danielle; Correll, Michael; Gleicher, Michael
Format: Conference Proceeding
Language: English
Published: United States, 01.01.2014
DOI: 10.1145/2556288.2557200

More Information
Summary: Many visualization tasks require the viewer to make judgments about aggregate properties of data. Recent work has shown that viewers can perform such tasks effectively, for example to efficiently compare the maximums or means over ranges of data. However, this work also shows that such effectiveness depends on the designs of the displays. In this paper, we explore this relationship between aggregation task and visualization design to provide guidance on matching tasks with designs. We combine prior results from perceptual science and graphical perception to suggest a set of design variables that influence performance on various aggregate comparison tasks. We describe how choices in these variables can lead to designs that are matched to particular tasks. We use these variables to assess a set of eight different designs, predicting how they will support a set of six aggregate time series comparison tasks. A crowd-sourced evaluation confirms these predictions. These results not only provide evidence for how the specific visualizations support various tasks, but also suggest using the identified design variables as a tool for designing visualizations well suited for various types of tasks.
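
The aggregate comparison judgments named in the abstract (comparing maximums or means over ranges of a time series) have a simple computational form. The sketch below is a rough, hypothetical illustration only, not taken from the paper or its study materials; the function compare_ranges and the synthetic data are assumptions introduced here to make the task concrete.

    # Hypothetical sketch (not from the paper): the kind of range-aggregate
    # judgment a viewer makes visually, e.g. "which of two ranges has the
    # larger maximum / larger mean?"
    import numpy as np

    def compare_ranges(series, range_a, range_b, stat="mean"):
        """Return 'A', 'B', or 'tie' for the range with the larger aggregate.

        series  : 1-D array of time series values
        range_a : (start, end) index pair for the first range
        range_b : (start, end) index pair for the second range
        stat    : 'mean' or 'max', the two aggregates named in the abstract
        """
        agg = {"mean": np.mean, "max": np.max}[stat]
        a = agg(series[range_a[0]:range_a[1]])
        b = agg(series[range_b[0]:range_b[1]])
        if np.isclose(a, b):
            return "tie"
        return "A" if a > b else "B"

    # Example: a synthetic year of daily values and two candidate ranges
    rng = np.random.default_rng(0)
    values = rng.normal(size=365)
    print(compare_ranges(values, (0, 30), (180, 210), stat="max"))
    print(compare_ranges(values, (0, 30), (180, 210), stat="mean"))

In the study itself, viewers answer such questions from a visual display rather than from computed statistics; the paper's contribution is predicting and measuring which of eight display designs best supports each of six such tasks.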