2023
DOI: 10.1371/journal.pone.0280987
Computational modeling of human multisensory spatial representation by a neural architecture

Abstract: Our brain constantly combines sensory information into a unitary percept to build coherent representations of the environment. Even though this process can appear seamless, integrating inputs from multiple sensory modalities must overcome several computational issues, such as recoding and statistical-inference problems. Following these assumptions, we developed a neural architecture replicating humans’ ability to use audiovisual spatial representations. We considered the well-known ventriloquist illusion a…
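The statistical-inference problem the abstract refers to is commonly formalized as maximum-likelihood (inverse-variance-weighted) cue integration, the textbook account of the ventriloquist illusion. The sketch below is an illustration of that standard model, not of the paper's neural architecture; the function name and the example variances are assumptions for demonstration only.

```python
# Hedged sketch: standard maximum-likelihood cue integration.
# Each cue is weighted by its reliability (inverse variance), so the
# fused spatial estimate is pulled toward the more reliable modality --
# typically vision, which is why a voice is "captured" by a puppet.

def integrate_cues(x_audio, var_audio, x_visual, var_visual):
    """Optimally fuse two noisy spatial position estimates."""
    w_a = 1.0 / var_audio    # reliability of the auditory cue
    w_v = 1.0 / var_visual   # reliability of the visual cue
    x_fused = (w_a * x_audio + w_v * x_visual) / (w_a + w_v)
    var_fused = 1.0 / (w_a + w_v)  # fused variance is always smaller
    return x_fused, var_fused

# Ventriloquist demo: the visual cue is four times more reliable, so
# the fused location lands much closer to the visual stimulus (0.0)
# than to the true sound source (10.0).
x, v = integrate_cues(x_audio=10.0, var_audio=4.0,
                      x_visual=0.0, var_visual=1.0)
# x == 2.0 (strongly biased toward vision), v == 0.8
```

Note that the fused variance (0.8) is lower than either individual cue's variance, which is the signature benefit of optimal integration.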

Cited by 1 publication (1 citation statement)
References 47 publications
“…Research in modeling human behavior is being conducted, covering various aspects of natural communication skills and their imitation. This involves not only auditory information for automatic speech recognition [28,29] and speaker analysis (also called Computational Paralinguistics) [30], but also the use of visual information such as multimodal face analysis [31,32], hand gesturing [33], and body movements [34,35], as well as other sensory information such as tactile, olfactory, and gustatory input [36], artificial neurons [37], and whole-cognition models [38].…”
Section: Personalized Communication Using Smart Conversational Agents
confidence: 99%