2019 Conference on Cognitive Computational Neuroscience
DOI: 10.32470/ccn.2019.1299-0
Shared visual illusions between humans and artificial neural networks

Cited by 21 publications (19 citation statements); references 0 publications.
Citation types: 3 supporting, 16 mentioning, 0 contrasting.
“…1b). This finding recapitulates our preliminary findings and concurrent work of colleagues, and points to an origin in image statistics (Benjamin et al., 2019; Henderson and Serences, 2021). However, we also found that networks trained on rotated images do partially retain sensitivity to cardinal orientations; they do not simply rotate their sensitivity by 45° (SI Fig.…”
Section: Results (supporting)
confidence: 92%
“…We examine two model systems. First, we show that deep artificial networks trained on natural image classification show similar patterns of sensitivity as humans, and that this is partly a consequence of image statistics (also see Benjamin et al. (2019); Henderson and Serences (2021)) but is also partially due to factors inherent in network architecture. We then leverage results from the study of linear networks to mathematically describe how gradient descent naturally causes learned representations to reflect the input statistics.…”
Section: Introduction (mentioning)
confidence: 85%
“…1). First, reproducing and extending previous results (see 29, 30), we show that deep artificial networks trained on natural image classification show similar patterns of sensitivity as humans. Then, to understand this effect, we mathematically describe how gradient descent causes learned representations to reflect the input statistics in linear systems.…”
supporting
confidence: 88%
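The two excerpts above both describe the same mechanism: when a linear network is trained by gradient descent, its learned representation comes to mirror the second-order statistics of its inputs. The sketch below is a minimal numpy illustration of that idea only; it is not code from the cited paper or its citing works, and the toy anisotropic input distribution, bottleneck size, and all variable names are assumptions made for this example.

```python
# Illustrative sketch (assumed setup, not from the cited paper): a rank-limited
# linear "network" trained by plain gradient descent to reconstruct its inputs.
# The inputs have anisotropic statistics (more variance along the first axes,
# standing in for cardinal orientations); after training, the encoder's learned
# subspace aligns with the high-variance input directions, i.e. the learned
# representation reflects the input statistics.
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic input distribution: std 3, 2, 1, 0.5 along the four axes.
stds = np.array([3.0, 2.0, 1.0, 0.5])
X = rng.normal(size=(10_000, 4)) * stds           # (samples, features)

d, k = X.shape[1], 2                              # bottleneck of size 2
W_enc = rng.normal(scale=0.1, size=(k, d))        # encoder weights
W_dec = rng.normal(scale=0.1, size=(d, k))        # decoder weights
lr = 0.01

for _ in range(3_000):
    Z = X @ W_enc.T                               # latent codes
    X_hat = Z @ W_dec.T                           # linear reconstruction
    err = X_hat - X                               # d(0.5*||err||^2)/dX_hat
    grad_dec = err.T @ Z / len(X)
    grad_enc = (W_dec.T @ err.T @ X) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Compare the subspace found by gradient descent with the top eigenvectors
# of the input covariance (its principal components).
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
top_pcs = eigvecs[:, ::-1][:, :k]                 # top-k principal axes

# Orthonormal basis of the encoder's learned row space.
learned_basis, _ = np.linalg.qr(W_enc.T)

# Singular values are cosines of the principal angles between the two
# subspaces; values near 1 mean the learned representation matches the
# high-variance directions of the input distribution.
overlap = np.linalg.svd(top_pcs.T @ learned_basis, compute_uv=False)
print("cosines of principal angles:", np.round(overlap, 3))
```

If the run converges, the printed cosines should be close to 1, meaning the subspace learned by gradient descent essentially coincides with the high-variance ("cardinal-like") directions of the inputs, which is the sense in which learned representations reflect input statistics in linear systems.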
“…Based on the implications of this theory, i.e., that "consciousness arises from specific types of information-processing computations, which are physically realized by the hardware of the brain" (Dehaene et al., 2017), Dehaene argues that a machine endowed with these processing abilities "would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans" (Dehaene et al., 2017). Indeed, it has been demonstrated recently that artificial neural networks trained on image processing can be subject to the same visual illusions as humans (Gomez-Villa et al., 2018; Watanabe et al., 2018; Benjamin et al., 2019). 3.4. Damasio's Model of Consciousness: Damasio's model of consciousness was initially published in his popular science book "The feeling of what happens" (Damasio, 1999).…”
Section: The Global Workspace Theory (mentioning)
confidence: 99%