2020
DOI: 10.1038/s41562-020-00951-3

Revealing the multidimensional mental representations of natural objects underlying human similarity judgements

Abstract: Objects can be characterized according to a vast number of possible criteria (e.g. animacy, shape, color, function), but some dimensions are more useful than others for making sense of the objects around us. To identify these "core dimensions" of object representations, we developed a data-driven computational model of similarity judgments for real-world images of 1,854 objects. The model captured most explainable variance in similarity judgments and produced 49 highly reproducible and meaningful object dimens…
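As a rough, hypothetical sketch of the kind of model the abstract describes (assuming, as in related work on this dataset, that similarity judgements take a triplet odd-one-out form and that pairwise similarity is the dot product of low-dimensional, non-negative object embeddings; the embedding values and dimensions below are random placeholders, not the published model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding: 1,854 objects x 49 non-negative dimensions.
n_objects, n_dims = 1854, 49
embedding = rng.gamma(shape=1.0, scale=0.5, size=(n_objects, n_dims))

def odd_one_out_probs(i, j, k, X):
    """Predict which object in the triplet (i, j, k) is the odd one out.

    Pairwise similarity is taken as the dot product of embedding vectors;
    the probability that a pair stays together (making the third object
    the odd one out) is a softmax over the three pair similarities.
    """
    sims = np.array([X[i] @ X[j],    # k would be the odd one out
                     X[i] @ X[k],    # j would be the odd one out
                     X[j] @ X[k]])   # i would be the odd one out
    p = np.exp(sims - sims.max())
    return p / p.sum()               # probabilities in order [k, j, i]

print(odd_one_out_probs(0, 1, 2, embedding))
```

Fitting such an embedding to millions of observed triplet choices is what would yield the reproducible dimensions the abstract refers to; the sketch only shows the forward prediction step.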

Cited by 148 publications (247 citation statements)
References 53 publications (35 reference statements)
“…Here, we presented a first foray into this domain by showing that ecoset training leads to an increase in face-selective units in final network layers. Moving further into the domain of behavior, it will be of interest to perform in-depth tests of ecoset-trained networks (supervised or unsupervised) to compare their task performance and error distributions against human behavioral data (39–42).…”
Section: Results
confidence: 99%
“…Furthermore, unlike human adults, these models have no knowledge of drawing conventions (i.e. how one might typically draw a bird or a fish) and do not incorporate abstract semantic features into their similarity judgements (Hebart, Zheng, Pereira, & Baker, 2020). This new tool makes it possible to assess developmental changes in the visual features of children's drawings.…”
Section: Introduction
confidence: 99%
“…Note that Bankson et al (2018) exploited two different datasets which we label with “(1)” and “(2)” in Figure 3. The number of images per dataset are as follows: (Cichy, Pantazis, & Oliva, 2014; Kriegeskorte, Mur, Ruff, et al, 2008; Mur et al, 2013): 92; (Bankson et al, 2018) 84 each; (Cichy, Khosla, Pantazis, Torralba, & Oliva, 2016; Cichy et al, 2019): 118; (Mohsenzadeh et al, 2019): 156; (Hebart et al, 2019, 2020): 1854. For each of these datasets except for Mohsenzadeh et al (2019), we additionally computed RDMs for group averages obtained from behavioral experiments.…”
Section: Applications and Results
confidence: 99%
“…For Mohsenzadeh et al (2019), no behavioral experiments had been conducted. For both datasets in Bankson et al (2018), and for Hebart et al (2020), no fMRI recordings were available. For display purposes, Hebart et al (2020) was downsampled to 200 conditions.…”
Section: Applications and Results
confidence: 99%
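The two excerpts above describe computing group-average RDMs (representational dissimilarity matrices) from behavioural data and downsampling the 1,854-condition Hebart et al. (2020) RDM to 200 conditions for display. A minimal sketch of that handling, with random placeholder dissimilarities standing in for real behavioural data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder behavioural dissimilarities: subjects x conditions x conditions.
n_subjects, n_conditions = 20, 1854
dissim = rng.random((n_subjects, n_conditions, n_conditions))
dissim = (dissim + dissim.transpose(0, 2, 1)) / 2        # enforce symmetry
dissim[:, np.arange(n_conditions), np.arange(n_conditions)] = 0

# Group-average RDM across subjects.
group_rdm = dissim.mean(axis=0)

# For display, subsample to 200 conditions (analogous to the
# 1,854 -> 200 downsampling mentioned in the excerpt).
keep = np.sort(rng.choice(n_conditions, size=200, replace=False))
display_rdm = group_rdm[np.ix_(keep, keep)]
print(display_rdm.shape)   # (200, 200)
```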