2023
DOI: 10.1101/2023.05.30.542965
Preprint

The timecourse of inter-object contextual facilitation

Abstract: High-level vision is frequently studied at the level of either individual objects or full scenes. An intermediate level of visual organisation that has received less attention is the "object constellation", defined here as a familiar configuration of contextually-associated objects (e.g., plate + spoon). Recent work has shown that information from multiple objects can be integrated to support observers' high-level understanding of a "scene". Here we used EEG to test when the visual system integrates informatio…

Cited by 2 publications (2 citation statements, published 2023 and 2024) | References 66 publications
“…Its architecture approximates the hierarchical structure of the ventral visual system (layer blocks: V1, V2, V4, IT). We incorporated CORnet-S for two reasons: first, we wanted to account for similarities in low-level visual features between objects of the same phrase or scene 11,20,26. Second, we wanted to know whether a state-of-the-art deep neural network (DNN) trained on object classification would represent scene-grammar-like structure in complex, high-dimensional visual feature spaces.…”
Section: CORnet-S
confidence: 99%
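To illustrate how per-block features like these are typically read out, here is a minimal sketch (not the citing authors' code) of capturing activations from the four ventral-stream-named blocks of CORnet-S with PyTorch forward hooks. The `cornet` import and the `cornet_s` constructor are assumptions based on the public dicarlolab/CORnet repository; adjust them to your install.

```python
import torch
import cornet  # assumed import from the dicarlolab/CORnet repository

model = cornet.cornet_s(pretrained=True)  # assumed constructor; may differ by version
model.eval()

# Capture the output of the blocks named after ventral-stream areas.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Flatten each image's feature map to one vector per stimulus.
        activations[name] = output.detach().flatten(start_dim=1)
    return hook

for name, module in model.named_modules():
    leaf = name.split(".")[-1]
    if leaf in ("V1", "V2", "V4", "IT"):
        module.register_forward_hook(make_hook(leaf))

images = torch.randn(8, 3, 224, 224)  # stand-in for preprocessed object images
with torch.no_grad():
    model(images)
# activations["V1"] ... activations["IT"] now hold (batch, features) tensors,
# which can be compared across conditions as low-level feature models.
```

Matching block names via `named_modules()` keeps the sketch robust to wrapper layers (e.g., `DataParallel`) around the model.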
“…Here, we modelled cross-classification confusion matrices from representational dissimilarity matrices (RDMs) obtained from a range of encoding models, each model representing a hypothesis about the shared feature space. We included models that quantified similarities in low-level visual features 11,20,26 and more abstract, high-level models quantifying semantic and action-related similarity between our conditions, as well as real-world co-occurrence statistics. We expected high-level features to explain a significant amount of variance in classifier confusions while accounting for similarities in low-level features.…”
Section: Introduction
confidence: 99%
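To make the modelling step concrete, here is a minimal sketch of the RSA-style comparison described above: build a model RDM from condition-wise feature vectors, then correlate its upper triangle with a (symmetrized) cross-classification confusion matrix. Variable names and data are illustrative stand-ins, not the paper's materials.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """1 - Pearson correlation between all pairs of condition patterns.
    features: (n_conditions, n_features) array."""
    return 1.0 - np.corrcoef(features)

def upper(m):
    """Vectorize the upper triangle, excluding the diagonal."""
    return m[np.triu_indices_from(m, k=1)]

n_conditions, n_features = 16, 512
features = np.random.randn(n_conditions, n_features)    # e.g., one DNN block's activations
confusion = np.random.rand(n_conditions, n_conditions)  # stand-in confusion matrix
confusion = (confusion + confusion.T) / 2               # symmetrize before comparison

model_rdm = rdm(features)
# Confusions measure similarity, so compare against 1 - confusion.
rho, p = spearmanr(upper(model_rdm), upper(1.0 - confusion))
print(f"model-confusion Spearman rho = {rho:.3f} (p = {p:.3g})")
```

Rank correlation (Spearman) is the conventional choice here because RDM comparisons should not assume a linear relationship between the two dissimilarity measures.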