Visual and semantic similarity norms for a photographic image stimulus set containing recognizable objects, animals and scenes

Bibliographic Details
Published in: Behavior Research Methods, Vol. 54, no. 5, pp. 2364-2380
Main Authors: Jiang, Zhuohan; Sanders, D. Merika W.; Cowell, Rosemary A.
Format: Journal Article
Language: English
Published: New York: Springer US, 01.10.2022 (Springer Nature B.V.)
ISSN: 1554-351X, 1554-3528
DOI: 10.3758/s13428-021-01732-0


More Information
Summary: We collected visual and semantic similarity norms for a set of photographic images comprising 120 recognizable objects/animals and 120 indoor/outdoor scenes. Human observers rated the similarity of pairs of images within four categories of stimuli (inanimate objects, animals, indoor scenes and outdoor scenes) via Amazon's Mechanical Turk. We performed multidimensional scaling (MDS) on the collected similarity ratings to visualize the perceived similarity for each image category, for both visual and semantic ratings. The MDS solutions revealed the expected similarity relationships between images within each category, along with intuitively sensible differences between visual and semantic similarity relationships for each category. Stress tests performed on the MDS solutions indicated that the MDS analyses captured meaningful levels of variance in the similarity data. These stimuli, associated norms and naming data are made available to all researchers, and should provide a useful resource for researchers of vision, memory and conceptual knowledge wishing to run experiments using well-parameterized stimulus sets.
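The general MDS workflow the summary describes (pairwise similarity ratings in, low-dimensional coordinates and a stress value out) can be sketched as follows. This is an illustrative example, not the authors' actual pipeline: the four-image similarity matrix and the 1-7 rating scale are hypothetical, and it uses scikit-learn's metric MDS rather than whatever implementation the paper used.

```python
# Illustrative sketch only: metric MDS on a precomputed dissimilarity
# matrix derived from hypothetical pairwise similarity ratings.
import numpy as np
from sklearn.manifold import MDS

# Hypothetical mean similarity ratings (1 = very dissimilar, 7 = identical)
# for all pairs among four images; symmetric, with 7s on the diagonal.
sim = np.array([
    [7.0, 5.2, 2.1, 1.8],
    [5.2, 7.0, 2.4, 2.0],
    [2.1, 2.4, 7.0, 4.9],
    [1.8, 2.0, 4.9, 7.0],
])
dissim = sim.max() - sim  # higher rating -> smaller distance; zero diagonal

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # one 2-D coordinate per image
print(coords.shape)  # (4, 2)
print(mds.stress_)   # raw stress: lower values indicate a better fit
```

Plotting `coords` would give the kind of 2-D similarity map the MDS solutions in the paper visualize, and `stress_` is the quantity examined in a stress test.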
Note: These authors contributed equally to this work.