2019
DOI: 10.1101/509927
Preprint

Conscious perception of natural images is constrained by category-related visual features

Abstract: Conscious perception is crucial for adaptive behaviour yet access to consciousness varies for different types of objects. The visual system comprises regions with widely distributed category information and exemplar-level representations that cluster according to category. Does this categorical organisation in the brain provide insight into object-specific access to consciousness? We address this question using the Attentional Blink (AB) approach with visual objects as targets. We find large differences across…


Cited by 7 publications (12 citation statements) · References 57 publications (95 reference statements)

“…If this proves to be true, it would suggest that no matter the attentional state or task demands placed on an observer, perceptual awareness is limited by higher-level perceptual features. This prediction is somewhat supported by a recent study that used the attentional blink and also found that higher-level features best predict the magnitude of the attentional blink (Lindh et al, 2019). If instead, however, there were certain cases in which the exact same set of stimuli were used, but a different primary task made it such that earlier layers/features predicted behavior, it would suggest that the limits of perceptual awareness are flexibly dictated by the task an observer is performing at any given moment.…”
Section: Remaining Questions and Limitations (supporting)
confidence: 65%
“…This prediction comes from the fact that many prior studies have found that models of early vision can predict performance on a variety of visual tasks such as crowding (Balas et al, 2009; Freeman and Simoncelli, 2011), visual search (Itti & Koch, 2000; Zhang et al, 2015), scene perception (Oliva & Torralba, 2001), and even change blindness (Rosenholtz, 2020). Alternatively, higher-level features may best predict behavior, with several prior studies showing a direct relationship between later layers of neural networks and a variety of visual behaviors such as similarity judgements (Kubilius et al, 2016; Jozwik et al, 2013; Cichy et al, 2019), object recognition (Rajalingham et al, 2018), the attentional blink (Lindh et al, 2019), and face perception (Farzmahdi et al, 2016; Jacob et al, 2021). Finally, it is also possible that a combination of both higher- and lower-level features will best predict behavior, with observers using a combination of both higher- or lower-level cues to detect alterations to the periphery depending on the stimuli.…”
Section: Introduction (mentioning)
confidence: 99%
“…Besides, animate objects are more often consciously perceived than inanimate objects, thus show lower level of AB effect in previous studies [8,9]. For narrower classification, more specifically, difference of some sub-classes like fruits and vegetables, processed foods, objects, scenes, animal bodies, animal faces, human bodies and human faces have been explored in [10].…”
Section: Introduction (mentioning)
confidence: 94%
“…First of all, inspired by [10] and [11], we propose a two-stage model for predicting ABM values with fMRI data for every particular image. In the first stage, fMRI data collected while subjects were viewing categorical images are correlated with visual image features, which are extracted by a typical convolutional neural network, Alexnet.…”
Section: Model Structure and Training (mentioning)
confidence: 99%
“…One of the major categorical distinctions between objects is animacy. In vision, animate objects offer substantial processing and perceptual advantages over inanimate objects, including being categorized faster, more consciously perceived, and found faster in search tasks (New et al, 2007;Jackson and Calvillo, 2013;Carlson et al, 2014;Ritchie et al, 2015;Lindh et al, 2019). Auditory studies have similarly found faster categorization times for animate objects (Yuval-Greenberg and Deouell, 2009;Vogler and Titchener, 2011).…”
Section: Introduction (mentioning)
confidence: 99%