2021
DOI: 10.1167/jov.21.2.8

Category systems for real-world scenes

Abstract: Categorization performance is a popular metric of scene recognition and understanding in behavioral and computational research. However, categorical constructs and their labels can be somewhat arbitrary. Derived from exhaustive vocabularies of place names (e.g., Deng et al., 2009), or the judgements of small groups of researchers (e.g., Fei-Fei, Iyer, Koch, & Perona, 2007), these categories may not correspond with human-preferred taxonomies. Here, we propose clus…

Cited by 7 publications (22 citation statements) | References 86 publications
“…It has been argued that global scene information extracted using the GIST descriptor predicts human ratings of spatial envelope properties, and that these are in turn informative of scene category (Oliva & Torralba, 2001). However, our findings imply that, when stimuli are reduced to the very rudimentary information that is extracted by the GIST descriptor, human categorization performance is in fact far from perfect, making the GIST descriptor alone an unlikely candidate for the effortless categorization ability of humans (see also Anderson et al, 2021). Furthermore, we use stimuli extracting information from 6 × 6 windows as compared to the 4 × 4 windows typically used with the GIST descriptor.…”
Section: Discussion (mentioning)
confidence: 81%
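The GIST descriptor discussed in this excerpt pools oriented filter responses over a coarse spatial grid (4 × 4 in the standard formulation, 6 × 6 in the cited study). A minimal sketch of that idea, substituting plain gradient-orientation energy for the full multi-scale Gabor filter bank used by the real descriptor (the function name and simplifications here are hypothetical, not from the cited papers):

```python
import numpy as np

def gist_like_descriptor(image, n_blocks=4, orientations=4):
    """GIST-style sketch: pool oriented gradient energy over an
    n_blocks x n_blocks spatial grid. The real GIST descriptor uses
    a bank of Gabor filters at multiple scales and orientations;
    this simplification keeps only the coarse spatial-pooling idea."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)  # orientation in [0, pi)
    h, w = image.shape
    bh, bw = h // n_blocks, w // n_blocks
    features = []
    for by in range(n_blocks):
        for bx in range(n_blocks):
            m = magnitude[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            a = angle[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            # bin gradient energy by orientation within this block
            hist, _ = np.histogram(a, bins=orientations,
                                   range=(0.0, np.pi), weights=m)
            features.extend(hist)
    return np.asarray(features)  # length n_blocks^2 * orientations
```

The descriptor length grows with the grid: a 4 × 4 grid with 4 orientation bins yields 64 values, while the 6 × 6 grid mentioned in the excerpt yields 144, so the coarser representation discards correspondingly more spatial detail.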
“…The behavioural gesture presented in each frame of the video sequence is regarded as an independent behavioural state, and each independent state is combined using probability, so that in the state-space method the different motion sequences can be regarded as a continuous behaviour consisting of a sequence of independent states [17]. The probabilistic likelihood of each state is compared with the standard sequence of behaviours; after the complete behaviours have been compared, the overall likelihood value of the whole behaviour is calculated and the standard behaviour with the highest overall likelihood value is selected, as shown in Figure 2.…”
Section: A Gaussian High-dimensional Random Matrix Design For Student... (mentioning)
confidence: 99%
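The state-space comparison described in this excerpt treats each frame's gesture as an independent state, combines per-frame likelihoods into an overall likelihood per candidate behaviour, and selects the standard behaviour with the highest value. A sketch of that selection step under the independence assumption (the function name and input format are hypothetical):

```python
import numpy as np

def best_matching_behaviour(likelihoods_per_standard):
    """Given per-frame likelihoods of the observed sequence under each
    standard behaviour, combine them under the independence assumption
    (product of per-frame likelihoods, computed as a log-sum for
    numerical stability) and return the best-matching behaviour."""
    scores = {name: float(np.sum(np.log(per_frame)))
              for name, per_frame in likelihoods_per_standard.items()}
    return max(scores, key=scores.get)
```

Working in log space matters in practice: a product of many per-frame likelihoods below 1 underflows to zero long before the log-sum loses precision.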
“…This Chapter is published as a research article: Anderson, M. D., Graf, E. W., Elder, J. H., Ehinger, K. A., & Adams, W. J. (2021).…”
Section: B Publication Note (mentioning)
confidence: 99%
“…The assumption that semantic categorization is primarily driven by global image features also has limited support. GIST features are unreliable predictors of spatial structure properties across different image databases (Anderson et al, 2021). Moreover, objects are processed just as quickly as entire scenes (Fabre-Thorpe, 2011; Joubert et al, 2007; Rousselet et al, 2005; VanRullen & Thorpe, 2001), and scene categorization is impaired when embedded objects are incongruent with the scene (e.g., a man-made object in a natural scene; Davenport, 2007; Davenport & Potter, 2004; Joubert et al, 2007; Mack & Palmeri, 2010).…”
Section: Semantic Category (mentioning)
confidence: 99%