2014
DOI: 10.1038/508461a

Do you hear what I see?

Abstract: Researchers have found evidence that the representation of auditory and tactile information in the brains of blind people shows strong similarities to the way in which visual information is represented in sighted people.

Cited by 6 publications (4 citation statements)
References 9 publications
“…Moreover, the metamodal hypothesis has led scientists to look for similarities rather than differences between early-blind and sighted individuals. Examining functional responses using a data-driven approach or selecting stimuli and tasks from a nonsighted perspective may yet reveal overlooked differences in the neural representations of blind and sighted individuals (Fine 2014).…”
Section: Metamodal Plasticity
Citation type: mentioning (confidence: 99%)
“…Nearly one-quarter of the brain is normally devoted to processing visual information, for example, reading text [21]. Yet in congenitally blind people, most of the ‘visual’ cortex responds strongly to tactile and auditory input instead of visual stimuli, a phenomenon called cross-modal plasticity.…”
Section: Faces and the Blind
Citation type: mentioning (confidence: 99%)
“…understand the physical world [13]. In image captioning, the text contained in images is also of critical importance and often provides valuable information [5,19,20,34,41] for caption generation.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…In this sense, Sidorov et al. [40] propose a fine-grained image captioning task, i.e., text-based image captioning (TextCap), aiming to generate image captions that not only ‘describe’ visual contents but also ‘read’ the texts in images, such as billboards, road signs, and commodity prices. This task is very practical since fine-grained image captions with rich text information can help visually impaired people comprehensively understand their surroundings [13]. Some preliminary attempts at the TextCap task seek to directly extend existing image captioning methods [2,19,21] to this new setting. However, such methods usually tend to describe prominent visual objects or overall scenes without considering the texts in images.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)