2007
DOI: 10.1080/09540090600971302
Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

Abstract: Q. Meng and M. H. Lee, Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks, Connection Science, 19(1), pp. 25-52, 2007. Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal c…

Cited by 20 publications (16 citation statements). References 46 publications.
“…The implicit assumption is that the new coordinate system is more appropriate for learning and controlling arm movements. However, an alternative strategy, embodied in many existing models, is to directly learn the mapping between sensory inputs and motor outputs without any recoding via intermediate reference frames (e.g., Albus, 1981; Balkenius, 1995; Baraduc et al., 2001; Churchland, 1986, 1990; Cohen et al., 1997; Coiton et al., 1991; Gaudiano and Grossberg, 1991; Mel, 1990; Meng and Lee, 2007; Metta et al., 1999; Ritter et al., 1992; Salinas and Abbott, 1995; Schulten and Zeller, 1996). Unlike many previous algorithms for sensory-sensory mappings, these sensory-motor mappings are usually learned rather than being hard-wired.…”
Section: Discussion
confidence: 99%
“…[14, 22-24, 18, 34] applied neural networks to mimic the brain's distinct cortices. Furthermore, a new type of constructive neural network (a growing radial basis function network) was created to simulate the growth of brain development, in which the network's topological structure grew while, simultaneously, the network was being trained [8,9]. Inspired by the above studies, we created a computational model wherein we reduced the complexity of robotic learning systems by simulating a part of the human brain.…”
Section: Background and Related Work
confidence: 99%
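The excerpt above describes a constructive network whose topology grows while it is trained. A minimal sketch of that idea, assuming a simple resource-allocating insertion rule (new unit added when both the prediction error and the distance to the nearest existing unit exceed thresholds) rather than the specific growth criteria of the cited work; all thresholds and the toy task are illustrative:

```python
import numpy as np

# Hedged sketch of a "growing" RBF network: the network starts empty and
# inserts a new basis unit whenever the current prediction error AND the
# distance to the nearest existing unit both exceed thresholds. This is a
# generic resource-allocating scheme, not the exact algorithm of [8,9].

centers, widths, weights = [], [], []

ERR_THRESH = 0.1    # add a unit if the error is at least this large...
DIST_THRESH = 0.15  # ...and no existing unit is closer than this

def predict(x):
    """Current network output for scalar input x (0.0 while empty)."""
    if not centers:
        return 0.0
    acts = [np.exp(-(x - c) ** 2 / (2 * s ** 2)) for c, s in zip(centers, widths)]
    return float(np.dot(acts, weights))

def observe(x, y):
    """One training observation: grow the network if it fits poorly here."""
    err = y - predict(x)
    dist = min((abs(x - c) for c in centers), default=np.inf)
    if abs(err) > ERR_THRESH and dist > DIST_THRESH:
        centers.append(x)       # new unit centred on the novel input
        widths.append(0.1)      # fixed illustrative RF size
        weights.append(err)     # weight chosen to cancel the current error

# Toy sensory-motor target: the topology grows as training data arrive.
for x in np.linspace(0.0, 1.0, 50):
    observe(x, np.sin(2 * np.pi * x))

print(len(centers))  # number of units allocated during training
```

The point of the sketch is only the structural one made in the excerpt: the network's size is an outcome of training, not a design-time choice.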
“…These approaches dealt with the robot's kinematic redundancy. In the work of [8,9], a new type of constructive neural network was created to build a mapping system, in which visual perception was transformed into hand motor values. In studies [10,11], a developmental learning algorithm was applied to obtain this type of transformation.…”
Section: Introduction
confidence: 99%
“…Basis function networks are very popular in robotics for complex non-linear sensory-motor transformations [13][14][15][16][17][18][19][20][21][22][23]. For non-linear and complex sensory-motor transformations, setting the number of basis function neurons, their receptive field (RF) sizes, and their peak locations is a non-trivial task that cannot be pre-defined or hand-crafted.…”
Section: Introduction
confidence: 99%
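The kind of basis function network the excerpt refers to can be sketched in a few lines. The centre locations, RF width, and weight values below are illustrative assumptions, not values from the paper; the point is only the structure of the mapping (Gaussian activations of sensory input, linearly combined into motor output):

```python
import numpy as np

# Minimal sketch of an RBF network mapping a 2-D sensory input (e.g. image
# coordinates of a target) to 2-D motor output (e.g. joint angles).
# Centres, width, and weights are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

centers = rng.uniform(0.0, 1.0, size=(25, 2))  # basis-function peak locations
width = 0.2                                    # shared receptive-field size

def rbf_features(x):
    """Gaussian activation of each basis unit for input x of shape (2,)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Output weights (one column per motor dimension); normally these are
# learned from data, random here just to make the sketch runnable.
W = rng.normal(size=(25, 2))

def motor_output(x):
    """Direct sensory-to-motor mapping: weighted sum of basis activations."""
    return rbf_features(x) @ W

y = motor_output(np.array([0.5, 0.5]))
print(y.shape)  # (2,)
```

The non-trivial part the excerpt highlights is exactly what this sketch hard-codes: choosing `centers`, `width`, and the number of units.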
“…All of the above-mentioned basis function models were trained in two complex phases: first, learning the number of basis function neurons and their RF sizes and peak locations; second, learning the connection weights between the basis function neurons and the output neurons. To optimize the number of basis function neurons and their RF sizes and locations, [23] used the orthogonal least squares algorithm, [18][19][20] used a simplified node-decoupled extended Kalman filter algorithm, and in [13,14,17,24] basis function units with fixed RF sizes and pre-defined locations were used. To learn the network connection weights, [17] used the least-mean-square (LMS) gradient descent learning technique, [23] used the linear least squares (LLS) algorithm, [18][19][20] employed the simplified node-decoupled extended Kalman filter (SDEKF) algorithm, [14] used the delta rule gradient descent technique, [13] used the recursive least squares (RLS) algorithm, and [24] used the extended Kalman filter.…”
Section: Introduction
confidence: 99%
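The two-phase scheme described above can be sketched concretely. Phase 1 below fixes the units' RF sizes and peak locations (on a simple grid, standing in for the various placement algorithms cited); phase 2 learns only the output weights with the delta rule, i.e. LMS gradient descent. The grid, width, learning rate, and toy target function are illustrative assumptions:

```python
import numpy as np

# Two-phase RBF training sketch:
#   phase 1: fix basis units' locations and RF sizes (here: a grid);
#   phase 2: learn output weights with the delta rule (LMS).

rng = np.random.default_rng(1)

# Phase 1: place 10 Gaussian units on a fixed grid over [0, 1].
centers = np.linspace(0.0, 1.0, 10)
width = 0.12

def phi(x):
    """Activations of all 10 basis units for scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Toy sensory-motor training data: y = sin(2*pi*x).
X = rng.uniform(0.0, 1.0, size=200)
Y = np.sin(2.0 * np.pi * X)

# Phase 2: delta-rule (LMS) updates of the output weights only.
w = np.zeros(10)
lr = 0.1
for _ in range(200):            # epochs
    for x, y in zip(X, Y):
        err = y - phi(x) @ w    # prediction error on this sample
        w += lr * err * phi(x)  # gradient-descent step on the weights

pred = np.array([phi(x) @ w for x in X])
rmse = np.sqrt(np.mean((pred - Y) ** 2))
print(f"train RMSE: {rmse:.3f}")
```

Swapping the inner update for RLS or an extended Kalman filter, as several of the cited models do, changes only phase 2; the fixed-placement assumption of phase 1 is precisely what the growing-network approach discussed earlier relaxes.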