2020
DOI: 10.1186/s13408-020-00088-7

Neurally plausible mechanisms for learning selective and invariant representations

Abstract: Coding for visual stimuli in the ventral stream is known to be invariant to object-identity-preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance-invariant representations. Recently, artificial convolutional networks have succeeded both in learning such invariant properties and, surprisingly, in predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However…

Cited by 5 publications (14 citation statements) · References 30 publications
“…To prove (ii), one needs to show that the elements of W_ψ(H_R) are continuous functions, that W_ψ W_γ^* is self-adjoint and idempotent with range W_ψ(H_R), and that (12) holds. The continuity can be obtained as a consequence of the continuity of the unitary representation (5) and, by the same arguments used to prove (i), we can easily see that (12) can be obtained directly from (6) and the definition of W_ψ.…”
Section: Overview of the SE(2) Transform (mentioning)
confidence: 81%
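As a reading aid (ours, not part of the quoted paper), the operator conditions above can be written out compactly. This is a minimal sketch assuming W_ψ is the SE(2) wavelet transform on H_R, γ its dual wavelet, and P the induced reproducing projection; the identification of the range of P with W_ψ(H_R) is our assumption:

    P = W_\psi W_\gamma^*, \qquad
    P^* = P \quad \text{(self-adjoint)}, \qquad
    P^2 = P \quad \text{(idempotent)}, \qquad
    \operatorname{ran} P = W_\psi(H_R)

Under these conditions every element of W_ψ(H_R) is a fixed point of P, which is the sense in which P reproduces the image of the transform.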
“…A possible interpretation of this iteration with a kernel defined by the SE(2) group as a neural computation in V1 comes from the modeling of the neural connectivity as a kernel operation [53, 29, 19, 43], especially if considered in the framework of a neural system that aims to learn group-invariant representations of visual stimuli [7, 6]. A direct comparison of the proposed technique with kernel techniques recently introduced with radically different purposes in [44, 43] shows, however, two main differences at the level of the kernel that is used: here we need the dual wavelet to build the projection kernel, and the iteration kernel effectively contains the feature maps.…”
Section: Discussion (mentioning)
confidence: 99%
“…A possible interpretation of the proposed iteration with a kernel defined by the SE(2) group as a neural computation in V1 comes from the modeling of the neural connectivity as a kernel operation (Wilson and Cowan, 1972; Ermentrout and Cowan, 1980; Citti and Sarti, 2015; Montobbio et al., 2018), especially if considered in the framework of a neural system that aims to learn group-invariant representations of visual stimuli (Anselmi and Poggio, 2014; Anselmi et al., 2020). A direct comparison of the proposed technique with kernel techniques recently introduced with radically different purposes in Montobbio et al. (2018) and Montobbio et al. (2019) shows, however, two main differences at the level of the kernel that is used: here, we need the dual wavelet to build the projection kernel, and the iteration kernel effectively contains the feature maps.…”
Section: Discussion (mentioning)
confidence: 99%
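To make the quoted "iteration with a kernel" concrete, here is a minimal numerical sketch (ours, not from the cited papers) of a self-adjoint, idempotent projection kernel built from an analysis operator and its dual, applied iteratively to a feature map. The names W, W_dual, P, and feature_map are illustrative assumptions, with a generic random operator standing in for the SE(2) wavelet transform:

    import numpy as np

    # Hypothetical discrete analogue of the projection kernel P = W_psi W_gamma^*:
    # W plays the role of the wavelet analysis operator, and its Moore-Penrose
    # pseudoinverse supplies the dual (synthesis) operator, so P projects onto
    # the range of W.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((40, 10))   # analysis operator: 10-dim signals -> 40 coefficients
    W_dual = np.linalg.pinv(W)          # dual operator: W_dual @ W = identity
    P = W @ W_dual                      # projection kernel onto range(W)

    # The two properties required in the quoted proof sketch.
    assert np.allclose(P, P.T)          # self-adjoint
    assert np.allclose(P @ P, P)        # idempotent

    # "Iterating" the kernel on a feature map: the projected map is a fixed
    # point, so further applications of P change nothing.
    feature_map = rng.standard_normal(40)
    projected = P @ feature_map
    assert np.allclose(P @ projected, projected)

Because P is exactly idempotent here, the iteration converges in one step; in the quoted discussion the interest lies in how such a kernel operation could be realized by neural connectivity in V1.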