2020
DOI: 10.1016/j.cognition.2019.104092
How optimal is word recognition under multimodal uncertainty?

Cited by 6 publications (6 citation statements)
References 50 publications
“…The integration model provides a formal description of the process of information integration, at least at the computational level of analysis (Marr, 1982). As such, our work complements theorizing about information integration in other domains of language comprehension (e.g., Fourtassi & Frank, 2020; McClelland et al., 2006; Smith et al., 2017).…”
Section: Discussion
confidence: 72%
“…In fact, the role of relative reliability in integrating multimodal inputs has been extensively documented. For example, Fourtassi and Frank (2020) reported that the brain optimally weights the auditory and visual cues, and when one modality has noise added to it, the human brain systematically prefers the unperturbed modality.…”
Section: Discussion
confidence: 99%
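The reliability weighting described in this excerpt is the standard precision-weighted (Bayesian) cue-combination scheme. A minimal sketch, assuming Gaussian cue noise; the function name and the numeric values are illustrative, not taken from the paper:

```python
import math

def integrate_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Fuse an auditory and a visual estimate by precision weighting.

    Each cue's weight is its reliability (inverse variance) divided by
    the total reliability, so the less noisy modality dominates.
    """
    r_a = 1.0 / sigma_a ** 2  # auditory reliability
    r_v = 1.0 / sigma_v ** 2  # visual reliability
    mu = (r_a * mu_a + r_v * mu_v) / (r_a + r_v)
    # The fused estimate is less variable than either single cue.
    sigma = math.sqrt(1.0 / (r_a + r_v))
    return mu, sigma

# Adding noise to the visual cue (doubling sigma_v) shifts the fused
# estimate toward the unperturbed auditory cue at 0.0.
print(integrate_cues(0.0, 1.0, 1.0, 1.0))  # equal reliability -> mu = 0.5
print(integrate_cues(0.0, 1.0, 1.0, 2.0))  # noisier vision -> mu = 0.2
```

This reproduces the qualitative pattern in the excerpt: perturbing one modality down-weights it, pulling the integrated estimate toward the intact modality.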
“…This aids the rapid generation of largely accurate perceptual experiences (Press et al, 2020). To test this hypothesis, we refer to previous studies to reduce this relative reliability by blurring stimuli (Binur et al, 2022; Fourtassi & Frank, 2020; Rohlf et al, 2020). If the above framework holds, we expected facial expression clarity to modulate the effect of the scene on facial expression representation.…”
Section: Introduction
confidence: 99%
“…In future work, we seek to build more comprehensive models that integrate multimodal cues, besides verbal language, that likely play a role in signaling communicative intents, including vocal and visual cues. Indeed, such cues are picked up on by adults and children and are integrated to optimize language understanding and learning (e.g., Fourtassi & Frank, 2020; Fourtassi et al., 2021). This effort will involve collecting multimodal data of spontaneous child-caregiver conversations as well as developing machine learning methods for the automatic annotation of speech acts using linguistic, acoustic, and visual features.…”
Section: Discussion
confidence: 99%