2007
DOI: 10.1016/s0079-6123(06)65033-4

On the challenge of learning complex functions

Abstract: A common goal of computational neuroscience and of artificial intelligence research based on statistical learning algorithms is the discovery and understanding of computational principles that could explain what we consider adaptive intelligence, in animals as well as in machines. This chapter focuses on what is required for the learning of complex behaviors. We believe it involves the learning of highly varying functions, in a mathematical sense. We bring forward two types of arguments which convey the messag…
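As a concrete illustration of what a "highly varying" function can look like, here is a minimal sketch (my own, not taken from the chapter) using d-bit parity: the target flips whenever any single input bit changes, so a purely local learner such as 1-nearest-neighbour is misled by exactly the training points it trusts most. The choice of learner, the dimensionality, and the train/test split are illustrative assumptions only.

```python
# Sketch: d-bit parity as a highly varying target, attacked with a purely
# local learner (1-nearest-neighbour under Hamming distance).
from itertools import product
import random

def parity(bits):
    """Target function: 1 if an odd number of bits are set, else 0."""
    return sum(bits) % 2

def one_nn_predict(x, train):
    """Predict with the label of the closest training point (Hamming distance)."""
    nearest = min(train, key=lambda xy: sum(a != b for a, b in zip(x, xy[0])))
    return nearest[1]

d = 10
random.seed(0)
inputs = [tuple(bits) for bits in product((0, 1), repeat=d)]
random.shuffle(inputs)

train = [(x, parity(x)) for x in inputs[:256]]   # 25% of the 1024 possible inputs
test = inputs[256:]

errors = sum(one_nn_predict(x, train) != parity(x) for x in test)
print(f"1-NN error rate on {d}-bit parity: {errors / len(test):.2%}")
# Typically far worse than the 50% chance level: the nearest training point
# usually differs in exactly one bit and therefore carries the opposite label.
```

A learner that can only interpolate locally needs on the order of one example per variation of such a target, which is the flavour of scaling argument the abstract alludes to.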

Cited by 16 publications (10 citation statements)
References 22 publications (24 reference statements)

Citation statements:

“…Experienced meditators have exhibited changes in functional ACC activation during attention ( Oquendo et al, 2012 ) and focused meditation paradigms ( Baca-Garcia et al, 2006 ; Zhang et al, 2011 ; Tovar et al, 2012 ), and increased cortical thickness ( Pasternak et al, 2009 ). A recent study investigating the effects of mindfulness training on attention found that those who underwent a 6-week mindfulness training program had increased DLPFC functional activation during an affective Stroop task and reduced affective Stroop conflict performance ( Bengio, 2007 ). Furthermore, the authors reported that increased DLPFC, ACC and insula activation during negatively valenced stimuli was related to increased practice time.…”
Section: Discussion (mentioning)
Confidence: 99%
“… How Sparsey can learn arbitrarily nonlinear and intertwined, i.e., “tangled,” classes via supervised learning of associations between codes in different macs (Section Learning arbitrarily complex nonlinear similarity metrics). That categories in the physical world are smooth in the neighborhood around any single exemplar but possibly very nonlinear and intertwined, i.e., “tangled,” with other classes at the global scale has been pointed out by many, (e.g., Saul and Roweis, 2002 ; Bengio, 2007 ; Bengio et al, 2012 ). In particular, DiCarlo et al ( 2012 ) state as a next step the need to formally specify what is meant by “untangling local” subspace.…”
Section: Discussion (mentioning)
Confidence: 99%
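The "locally smooth but globally tangled" picture in the quoted passage can be made concrete with a toy dataset. The following sketch (my own illustration, not from Sparsey or the cited papers) builds two interleaved spirals and checks that no single linear projection separates them well, even though each class is a smooth curve in the neighbourhood of any of its points.

```python
# Sketch: two interleaved spirals are locally smooth but globally "tangled".
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.linspace(0.5, 3 * np.pi, n)

# Class 1 is class 0 rotated by pi; both get a little noise.
x0 = np.c_[t * np.cos(t), t * np.sin(t)] + rng.normal(0, 0.1, (n, 2))
x1 = np.c_[t * np.cos(t + np.pi), t * np.sin(t + np.pi)] + rng.normal(0, 0.1, (n, 2))
X = np.vstack([x0, x1])
y = np.r_[np.zeros(n), np.ones(n)]

# Best single linear projection plus a threshold, a crude stand-in for a
# linear (globally simple) classifier.
best = 0.0
for theta in np.linspace(0, np.pi, 180, endpoint=False):
    proj = X @ np.array([np.cos(theta), np.sin(theta)])
    thr = np.median(proj)
    acc = max(np.mean((proj > thr) == y), np.mean((proj <= thr) == y))
    best = max(best, acc)
print(f"best linear accuracy: {best:.2f}")
# Far from perfect separation, even though each spiral is trivially smooth
# around any single exemplar: the classes are intertwined at the global scale.
```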
“…That is, if the same label is co-presented with multiple (arbitrarily different) inputs in another (raw sensory) modality, then a single internal representation of that label can be associated with the multiple (arbitrarily different) internal representations of the sensory inputs. That internal representation of the label then de facto constitutes a representation of the class that includes all those sensory inputs regardless of how different they are, providing the model a means to learn essentially arbitrarily nonlinear categories (invariances), i.e., instances of what Bengio terms “AI Set” problems (Bengio, 2007 ). Although we describe this principle in this paper, its full elaboration and demonstration in the context of supervised learning will be treated in a future paper.…”
Section: Introduction (mentioning)
Confidence: 99%
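The mechanism described in the quoted passage, one internal label code associated with many arbitrarily different sensory codes, can be caricatured in a few lines. The sketch below is a deliberate simplification (plain Python sets rather than Sparsey's sparse distributed codes), and the class names and example inputs are invented for illustration.

```python
# Sketch: a label code becomes a de facto class representation by being
# associated with arbitrarily different sensory codes.
from collections import defaultdict

class CrossModalAssociator:
    def __init__(self):
        # label code -> set of sensory codes it has been co-presented with
        self.assoc = defaultdict(set)

    def co_present(self, label_code, sensory_code):
        """Associate the label's code with one more sensory input's code."""
        self.assoc[label_code].add(sensory_code)

    def category_of(self, label_code):
        """Everything the label has been paired with, i.e., the learned class."""
        return self.assoc[label_code]

m = CrossModalAssociator()
# The label "dog" is co-presented with three visually unrelated inputs:
m.co_present("dog", "small-white-poodle")
m.co_present("dog", "large-black-great-dane")
m.co_present("dog", "cartoon-line-drawing-of-a-dog")
print(m.category_of("dog"))
# The category is defined by the shared label, not by similarity of its members,
# which is what allows arbitrarily nonlinear ("AI Set") categories to be learned.
```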
“…The advantages of organizing knowledge hierarchically, both categorically and componentially (part-whole) have long been known. More recently, the advantages of many-leveled vs. flat representations have been described in terms of the efficiency (essentially, the number of parameters needed) of representing highly nonlinear relations (Bengio 2007, Bengio, Courville et al 2012) and the constant stream of impressive "Deep Learning" results strongly bears this out (Krizhevsky and Hinton 2011, LeCun, Bengio et al 2015, Silver, Huang et al 2016). However, Deep Learning models, including LSTM (Hochreiter and Schmidhuber 1997), have thus far not been combined with SDR, and indeed, the principles of the two paradigms are very different and may be essentially incompatible.…”
Section: Discussion (mentioning)
Confidence: 99%
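The efficiency claim in the quoted passage, that many-leveled representations can need far fewer parameters than flat ones for highly nonlinear relations, has a textbook instance in Boolean parity: a two-level sum-of-products formula needs 2^(d-1) product terms, while a tree of two-input XOR gates needs only d-1 gates. The short sketch below merely tabulates those counts; it is an illustration of the general depth-efficiency argument, not a computation taken from the cited works.

```python
# Sketch: flat (two-level) vs. deep (XOR-tree) representation of d-bit parity.
for d in (4, 8, 16, 32):
    flat_terms = 2 ** (d - 1)   # minterms with an odd number of 1s
    deep_gates = d - 1          # balanced tree of 2-input XORs, depth ~ log2(d)
    print(f"d={d:2d}: flat two-level terms = {flat_terms:>12,d}, "
          f"deep XOR gates = {deep_gates}")
```

The flat representation grows exponentially with d while the deep one grows linearly, which is the sense of "efficiency" the quoted passage attributes to many-leveled representations.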