2023
DOI: 10.1101/2023.10.02.559721
Preprint

From Sensory to Perceptual Manifolds: The Twist of Neural Geometry

Heng Ma,
Longsheng Jiang,
Tao Liu
et al.

Abstract: To humans, everything is classifiable: big versus small, edible or poisonous, righteous or unjust. Similarly, at the heart of most machine learning tasks lies the fundamental goal of classification, yet the enduring challenge of linear inseparability has plagued artificial neural networks since their inception. Here we asked how biological neural networks tackle this issue by investigating the geometric embedding of neural manifolds in macaques’ V2 during orientation discrimination of motion-induced illusory co…

Cited by 4 publications (4 citation statements)
References 81 publications
“…In this context, SP and MP neurons do not differ qualitatively; rather, they operate similarly to neurons with mixed selectivity. Recent studies (Kira et al., 2023; Ledergerber et al., 2021; Rigotti et al., 2013) have shown that mixed selectivity plays a key role in constructing higher-dimensional representational spaces (Kriegeskorte and Wei, 2021; Ma et al., 2023; Rigotti et al., 2013; Sussillo and Abbott, 2009), potentially facilitating the integration of interdependent features into a cohesive whole. The coding scheme for interdependent features therefore appears to be a hybrid of dense and sparse coding schemes, aiming to balance specificity and interdependency; this marks a significant departure from the encoding of independent features and calls for future research into this emerging field.…”
Section: Discussion
confidence: 99%
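The claim above that mixed selectivity expands representational dimensionality (Rigotti et al., 2013) can be illustrated with a minimal NumPy sketch. The toy two-feature task, the variable names, and the product-style mixed neuron are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Four task conditions defined by two binary features (a, b).
conds = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Pure selectivity: each neuron responds to exactly one feature.
pure = conds  # neuron 1 tracks a, neuron 2 tracks b

# Mixed selectivity: add a neuron responding nonlinearly to both
# features (here, their product — a hypothetical tuning choice).
mixed = np.column_stack([conds, conds[:, 0] * conds[:, 1]])

def dim(responses):
    """Dimensionality of the mean-centered condition-by-neuron matrix."""
    return np.linalg.matrix_rank(responses - responses.mean(axis=0))

print(dim(pure))   # 2: pure-selective responses span only a plane
print(dim(mixed))  # 3: the nonlinear mixed neuron adds a dimension,
                   # making condition splits like XOR linearly separable
```

The extra dimension is the geometric point of the citing statement: without it, a linear readout cannot separate conjunctions of features (e.g., the XOR labeling of the four conditions), whereas the mixed-selective population makes such readouts possible.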
“…Previous studies have primarily focused on the processing and integration of independent features (Campo et al., 2021; Spence, 2020), such as smell, touch, taste, and sight, to fabricate the unified experience of enjoying a cup of coffee. This integration typically involves neurons that exhibit either pure selectivity, allowing precise processing and interpretation of specific features (Vaccari et al., 2022; Weinberger, 1995), or mixed selectivity, encoding combinations of multiple features to enhance the brain’s computational flexibility and efficiency (Fusi et al., 2016; Kira et al., 2023; Ledergerber et al., 2021; Ma et al., 2023; Rigotti et al., 2013). Nonetheless, simultaneously encoding interdependent features, exemplified by HD and its temporal derivative, AHV, poses a great challenge: it requires a delicate balance between preserving the specificity of each individual feature to prevent mutual interference and maintaining their interdependence, given the crucial role of AHV in updating HD.…”
Section: Introduction
confidence: 99%
“…The high dimensionality, furthermore, offers additional flexibility to the neural representations. Without hard dimension reduction, no feature information was completely lost (Flesch et al., 2022; Grand et al., 2022; Ma et al., 2023). Had the task changed, the retained feature information could readily have served different demands.…”
Section: Discussion
confidence: 99%
“…First, our study highlights the critical role of connectivity in forming cognitive modules; however, the underlying mechanism remains unclear. One possibility is that connectivity modulates cognitive modularity by manipulating the dimensionality of the neural geometry (Ma et al., 2023). Additionally, future studies should explore the influence of other structural factors, such as neuronal response profiles (e.g., mixed selectivity: Bergoin et al., 2024; Cai et al., 2024; Ostojic & Fusi, 2024), on the modularity of neural networks.…”
Section: Discussion
confidence: 99%