2017
DOI: 10.1038/s41467-017-00181-8

Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback

Abstract: Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common…
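The paper's nine tasks are not reproduced here, but the abstract's core claim can be illustrated with a minimal sketch (the cue-combination setup, network size, and all variable names below are assumptions for illustration, not the authors' code): a generic one-hidden-layer network receives two noisy cues plus their trial-varying noise levels and is trained only with squared error on the target, never with any explicit uncertainty signal, yet can approach the precision-weighted Bayesian estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n):
    """Cue-combination trials: latent s ~ N(0,1); two noisy cues whose
    noise levels vary per trial and are given to the network as inputs."""
    s = rng.normal(0.0, 1.0, n)
    sig1 = rng.uniform(0.3, 1.5, n)
    sig2 = rng.uniform(0.3, 1.5, n)
    x1 = s + sig1 * rng.normal(0.0, 1.0, n)
    x2 = s + sig2 * rng.normal(0.0, 1.0, n)
    return np.stack([x1, x2, sig1, sig2], axis=1), s

def bayes_estimate(X):
    """Posterior mean: precision-weighted average of cues and prior."""
    x1, x2, sig1, sig2 = X.T
    w1, w2 = 1.0 / sig1**2, 1.0 / sig2**2
    return (w1 * x1 + w2 * x2) / (w1 + w2 + 1.0)  # prior N(0,1): precision 1

# Generic one-hidden-layer network trained with plain squared error on the
# target s -- "non-probabilistic feedback": uncertainty is never a teaching
# signal, only the scalar target is.
H = 32
W1 = rng.normal(0.0, 0.5, (4, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, H);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

lr, losses = 0.05, []
for step in range(4000):
    X, s = make_batch(128)
    yhat, h = forward(X)
    losses.append(np.mean((yhat - s) ** 2))
    g = 2.0 * (yhat - s) / len(s)           # dLoss/dyhat for mean squared error
    dh = np.outer(g, W2) * (1.0 - h**2)     # backprop through the tanh layer
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

# Compare the trained network to the Bayes-optimal estimator and to a naive
# equal-weight average that ignores the trial-to-trial reliabilities.
X, s = make_batch(20000)
net_mse = np.mean((forward(X)[0] - s) ** 2)
opt_mse = np.mean((bayes_estimate(X) - s) ** 2)
naive_mse = np.mean((X[:, :2].mean(axis=1) - s) ** 2)
```

Comparing `net_mse` against `naive_mse` and `opt_mse` shows how much of the reliability-dependent weighting the network has picked up from error feedback alone: beating the equal-weight average requires using the per-trial noise inputs, which is exactly the uncertainty-sensitive behaviour at issue.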

Cited by 71 publications (73 citation statements)
References 56 publications (163 reference statements)
“…Figure 4 indeed shows that many of the units are positively or negatively correlated with state uncertainty, indicating that an explicit representation of uncertainty is maintained upon which evidence integration may operate. This is in line with recent results which show that rate-based neural networks can maintain explicit representations of probabilistic knowledge [Orhan and Ma, 2016].…”
Section: Analysis Of Neural Network Behaviour (supporting)
confidence: 92%
“…The notion of using ANNs as vehicles for solving cognitive tasks that are of interest to cognitive neuroscientists has recently been proposed by various researchers (e.g. [Mante et al, 2013, Song et al, 2016a, Rajan et al, 2016, Sussillo et al, 2015, Orhan and Ma, 2016]). Particularly by combining neural networks with reinforcement learning one may potentially solve complex cognitive tasks based on reward signals alone, as shown in [Song et al, 2016b].…”
Section: Introduction (mentioning)
confidence: 99%
“…Moreover, there are population coding mechanisms that could support explicit probabilistic computations (Ma et al, 2006; Zemel and Dayan, 1997; Gershman and Beck, 2016; Eliasmith and Martens, 2011; Rao, 2004; Sahani and Dayan, 2003). Yet it is unclear to what extent and at what levels the brain uses an explicitly probabilistic framework, or to what extent probabilistic computations are emergent from other learning processes (Orhan and Ma, 2016).…”
Section: Active Learning (mentioning)
confidence: 99%
“…It might be possible to further map the concepts of Lake et al (2016) onto neuroscience via an infrastructure of interacting cost functions and specialized brain systems under rich genetic control, coupled to a powerful and generic neurally implemented capacity for optimization. For example, it was recently shown that complex probabilistic population coding and inference can arise automatically from backpropagation-based training of simple neural networks (Orhan and Ma, 2016), without needing to be built in by hand. The nature of the underlying primitives in the brain, on top of which learning can operate, is a key question for neuroscience.…”
Section: Relationships With Other Cognitive Framework Involving Spec (mentioning)
confidence: 99%
“…Efforts are underway to effectively train spiking neural networks (Gerstner et al, 2014; Gerstner and Kistler, 2002; O'Connor and Welling, 2016; Huh and Sejnowski, 2017) and endow them with the same cognitive capabilities as their rate-based cousins (Zambrano and Bohte, 2016; Kheradpisheh et al, 2016; Lee et al, 2016; Thalmeier et al, 2015). In the same vein, researchers are exploring how probabilistic computations can be performed in neural networks (Pouget et al, 2013; Nessler et al, 2013; Orhan and Ma, 2016; Heeger, 2017) and deriving new biologically plausible synaptic plasticity rules (Schiess et al, 2016; Brea and Gerstner, 2016). Biologically-inspired principles may also be incorporated at a more conceptual level.…”
Section: Next-generation Artificial Neural Network (mentioning)
confidence: 99%