Saliency prediction in 360° architectural scenes: Performance and impact of daylight variations

Bibliographic Details
Published in: Journal of Environmental Psychology, Vol. 92, p. 102110
Main Authors: Karmann, Caroline; Aydemir, Bahar; Chamilothori, Kynthia; Kim, Seungryong; Süsstrunk, Sabine; Andersen, Marilyne
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2023
ISSN: 0272-4944
DOI: 10.1016/j.jenvp.2023.102110


Summary: Saliency models are image-based prediction models that estimate human visual attention. Such models, when applied to architectural spaces, could pave the way for design decisions in which visual attention is taken into account. In this study, we tested the performance of eleven commonly used saliency models that combine traditional and deep learning methods on 126 rendered interior scenes with associated head tracking data. The data was extracted from three experiments conducted in virtual reality between 2016 and 2018. Two of these datasets pertain to the perceptual effects of daylight and include variations of daylighting conditions for a limited set of interior spaces, thereby allowing us to test the influence of light conditions on human head movement. Ground truth maps were extracted from the collected head tracking logs, and the prediction accuracy of the models was tested via the correlation coefficient between ground truth and prediction maps. To address the possible inflation of results due to the equator bias, we conducted complementary analyses by restricting the area of investigation to the equatorial image regions. Although limited to immersive virtual environments, the promising performance of some traditional models such as GBVS360eq and BMS360eq for colored and textured architectural rendered spaces offers the prospect of their possible integration into design tools. We also observed a strong correlation in head movements for the same space lit by different types of sky, a finding whose generalization requires further investigation based on datasets developed specifically to address this question.
Highlights:
• We tested the performance of 11 saliency models on 126 interior scenes with associated head tracking data.
• The data was extracted from three experiments conducted in virtual reality.
• GBVS360eq and BMS360eq showed a promising performance for colored and textured architectural spaces.
• We observed a strong correlation in head movements for the same space rendered for different daylight conditions.
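The evaluation described in the summary, scoring a prediction map against a ground-truth map via the correlation coefficient, optionally restricted to the equatorial band of an equirectangular 360° image, can be sketched as follows. This is a minimal illustration, not the authors' code: the band width (±45° of latitude) and the map dimensions are assumptions chosen for the example.

```python
import numpy as np

def correlation_coefficient(gt, pred, mask=None):
    """Pearson correlation coefficient between two saliency maps.

    If a boolean mask is given, only pixels inside the mask are scored
    (e.g. to restrict the comparison to the equatorial region).
    """
    if mask is not None:
        gt, pred = gt[mask], pred[mask]
    # Standardize each map, then average the elementwise product.
    gt = (gt - gt.mean()) / (gt.std() + 1e-12)
    pred = (pred - pred.mean()) / (pred.std() + 1e-12)
    return float((gt * pred).mean())

def equatorial_mask(height, width, band_deg=45.0):
    """Boolean mask keeping rows within +/- band_deg of the equator
    of an equirectangular image (top row = +90 deg latitude)."""
    lat = np.linspace(90.0, -90.0, height)
    keep = np.abs(lat) <= band_deg
    return np.repeat(keep[:, None], width, axis=1)

# Example usage with a hypothetical 64x128 equirectangular saliency map:
gt = np.random.rand(64, 128)
pred = np.random.rand(64, 128)
cc_full = correlation_coefficient(gt, pred)
cc_eq = correlation_coefficient(gt, pred, mask=equatorial_mask(64, 128))
```

Comparing `cc_full` with `cc_eq` mirrors the complementary analysis in the study: a model that scores well overall but poorly in the equatorial band may be benefiting from the equator bias in viewing behavior rather than genuinely predicting attention.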