2011
DOI: 10.4013/jacr.2011.11.01

IGMN: An incremental connectionist approach for concept formation, reinforcement learning and robotics

Abstract: This paper demonstrates the use of a new connectionist approach, called IGMN (Incremental Gaussian Mixture Network), on several state-of-the-art research problems: incremental concept formation, reinforcement learning and robotic mapping. IGMN is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with special features that are not present in most neural network models such as MLP, RBF and G…

Cited by 3 publications (2 citation statements). References 45 publications.
“…In other words, any element can be used to predict any other element, like auto-associative neural networks [ 7 ] or missing data imputation [ 8 ]. This feature is useful for simultaneous learning of forward and inverse kinematics [ 9 ], as well as for simultaneous learning of a value function and a policy in reinforcement learning [ 10 ].…”
Section: Introduction
Confidence: 99%
“…The IGMN is capable of supervised learning, simply by assigning any of its input vector elements as outputs (any element can be used to predict any other element, like autoassociative neural networks [4]). This feature is useful for simultaneous learning of forward and inverse kinematics, as well as for simultaneous learning of a value function and a policy in reinforcement learning [5].…”
Section: Introduction
Confidence: 99%
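The citation statements above describe the key property of Gaussian-mixture models that IGMN exploits: once a mixture is fitted over the joint input/output vector, any subset of elements can be predicted from any other subset by conditioning each Gaussian component on the observed dimensions. The sketch below illustrates that generic conditional-inference step with NumPy; it is an assumption-laden illustration of the principle, not the authors' IGMN implementation (which also learns the mixture incrementally).

```python
import numpy as np

def gmm_conditional_predict(x_obs, obs_idx, miss_idx, weights, means, covs):
    """Predict the missing dimensions of a joint vector from the observed
    ones under a Gaussian mixture: any element can predict any other.
    Hypothetical sketch of the inference step, not the IGMN codebase."""
    preds, resps = [], []
    for w, mu, cov in zip(weights, means, covs):
        mu_o, mu_m = mu[obs_idx], mu[miss_idx]
        cov_oo = cov[np.ix_(obs_idx, obs_idx)]
        cov_mo = cov[np.ix_(miss_idx, obs_idx)]
        cov_oo_inv = np.linalg.inv(cov_oo)
        diff = x_obs - mu_o
        # conditional mean of the missing block given the observed block
        preds.append(mu_m + cov_mo @ cov_oo_inv @ diff)
        # unnormalised responsibility: w * N(x_obs | mu_o, cov_oo)
        _, logdet = np.linalg.slogdet(cov_oo)
        log_pdf = -0.5 * (diff @ cov_oo_inv @ diff
                          + logdet + len(obs_idx) * np.log(2 * np.pi))
        resps.append(w * np.exp(log_pdf))
    resps = np.asarray(resps)
    resps /= resps.sum()          # posterior over components
    return np.sum(resps[:, None] * np.asarray(preds), axis=0)
```

Because the same fitted model serves both directions, this is why one network can learn forward and inverse kinematics simultaneously: conditioning on joint angles predicts end-effector position, and conditioning on position predicts joint angles, with no separate inverse model.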