2005
DOI: 10.1162/0899766053011555
Temporal Sequence Learning, Prediction, and Control: A Review of Different Models and Their Relation to Biological Mechanisms

Abstract: In this article we compare methods for temporal sequence learning (TSL) across several disciplines: machine control, classical conditioning, neuronal models for TSL, and spike-timing-dependent plasticity. This review briefly introduces the most influential models and focuses on two questions: (1) To what degree are reward-based (e.g., TD-learning) and correlation-based (Hebbian) learning related? (2) How do the different models correspond to possibly underlying biological mechanisms of synaptic plasticity?
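As a minimal illustration of the two learning families the abstract contrasts (a sketch of our own, not code from the article): a tabular TD(0) value update, which is driven by a reward-prediction error, next to a plain Hebbian product rule, which is driven only by pre/post correlation.

```python
import numpy as np

# TD(0): reward-based update of a state-value estimate,
#   V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)]
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Hebbian rule: correlation-based update of a synaptic weight,
#   w <- w + eta * x_pre * y_post   (no reward term, pure correlation)
def hebb_update(w, x_pre, y_post, eta=0.01):
    return w + eta * x_pre * y_post

V = np.zeros(3)                              # values for three states
V = td0_update(V, s=0, r=1.0, s_next=1)      # V[0] moves toward r + gamma*V[1]
w = hebb_update(0.0, x_pre=1.0, y_post=0.5)  # weight grows with correlation
```

Both rules are local additive updates; the structural difference the review examines is that the TD error term carries temporal/reward information the Hebbian product lacks.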

Cited by 176 publications (158 citation statements)
References 241 publications (345 reference statements)
“…This is closely related to perspectives on motor control and sequence learning that "minimize deviations from the desired state, that is, to minimize disturbances of the homeostasis of the feedback loop." See Wörgötter and Porr (2005) for a fuller discussion. In summary, avoiding surprise is fundamental for survival and speaks to the basic need of organisms to maintain equilibrium within their environment.…”
Section: Active Agents
confidence: 99%
“…Promising about such approaches is that they naturally extend to distributed state representations and efficiently cope with uncertainty"; see Toussaint (2009) for a fuller discussion of probabilistic inference as a model of planned behavior. There are also alternative self-referenced schemes (e.g., Verschure and Voegtlin 1998; Wörgötter and Porr 2005; Tschacher and Haken 2007) that may have greater ethological and neuronal plausibility. This theme will be developed further in a forthcoming article on value-learning and free energy.…”
Section: Active Inference and Optimal Control
confidence: 99%
“…(4) A model of the environment which includes a representation of the environment dynamics required for maximizing the sum of future rewards. These general RL concepts are further explained and elaborated by various authors (Dayan and Daw 2008; Kaelbling et al. 1996; Montague et al. 2004a; Sutton and Barto 1998; Wörgötter and Porr 2005; Dayan and Abbott 2001).…”
Section: Introduction
confidence: 98%
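The "sum of future rewards" this citation statement refers to is conventionally a discounted return; a small illustrative computation (names and values are ours, not from the cited works):

```python
# Discounted return G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
# computed backwards so each step reuses the tail sum.
def discounted_return(rewards, gamma=0.9):
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G

G = discounted_return([1.0, 1.0, 1.0])  # 1 + 0.9 + 0.81
```

The discount factor gamma weights near-term rewards over distant ones; it is this quantity that the model of the environment is used to maximize.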
“…Given its generality (Wörgötter & Porr, 2005), we take reinforcement learning as our basic framework and express it in terms of optimal control systems. At the end of the day, animals are behavior systems: sets of behaviors that are organized around biological functions and goals, e.g., feeding (Timberlake & Silva, 1995), defense (Fanselow, 1994), or sex (Domjan, 1994).…”
Section: Basic Framework
confidence: 99%