2007
DOI: 10.1016/j.robot.2007.05.002
Developmental learning for autonomous robots

Abstract: Developmental robotics is concerned with the design of algorithms that promote robot adaptation and learning through qualitative growth of behaviour and increasing levels of competence. This paper uses ideas and inspiration from early infant psychology (up to 3 months of age) to examine how robot systems could discover the structure of their local sensory-motor spaces and learn how to coordinate these for the control of action. An experimental learning model is described and results from robotic experiments usin…

Cited by 30 publications (26 citation statements) · References 21 publications (13 reference statements)
“…There is no external absolute coordinate system that the robot system refers to in order to derive spatial locations from the visual data. In former experiments we have shown that these mappings can be learned in a very fast way and are able to adapt continuously to changes, for example, if the spatial location between arm and vision system changes [5], [50]. Therefore, we argue that the introduced architectures establish an embodied representation of space which is generated by the learned sensorimotor mappings and which is entirely the result of robot-environment interaction.…”
Section: Representing Space Through Mappings (mentioning)
confidence: 94%
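
The excerpt above describes sensorimotor mappings that are learned quickly from arm-camera experience and that re-adapt when the spatial relation between arm and vision system changes. As a rough illustration only (not the cited implementation; class and parameter names are hypothetical), a coarse grid over image coordinates can hold an incrementally updated estimate of the arm posture that places the hand at each image location:

# Minimal sketch (not the authors' implementation): an incrementally learned
# visuo-motor mapping stored as a coarse grid over image coordinates. Each
# visited cell keeps a running estimate of the arm joint angles that put the
# hand at that image location, so the map adapts if the camera moves.
import numpy as np

class VisuoMotorMap:
    def __init__(self, grid=(8, 8), joints=2, rate=0.5):
        self.values = np.zeros(grid + (joints,))   # joint estimate per cell
        self.seen = np.zeros(grid, dtype=bool)     # which cells were visited
        self.grid = grid
        self.rate = rate                           # adaptation speed

    def _cell(self, xy):
        # xy is a normalised image coordinate in [0, 1] x [0, 1]
        i = min(int(xy[0] * self.grid[0]), self.grid[0] - 1)
        j = min(int(xy[1] * self.grid[1]), self.grid[1] - 1)
        return i, j

    def update(self, xy, joint_angles):
        # One sensorimotor experience: the hand was seen at xy while the arm
        # held joint_angles. Move the cell estimate toward the new sample.
        i, j = self._cell(xy)
        if not self.seen[i, j]:
            self.values[i, j] = joint_angles
            self.seen[i, j] = True
        else:
            self.values[i, j] += self.rate * (np.asarray(joint_angles) - self.values[i, j])

    def lookup(self, xy):
        # Joint angles expected to bring the hand to image location xy.
        i, j = self._cell(xy)
        return self.values[i, j] if self.seen[i, j] else None


if __name__ == "__main__":
    vm = VisuoMotorMap()
    vm.update((0.52, 0.48), [0.30, -0.10])   # hypothetical experience
    print(vm.lookup((0.55, 0.45)))

Because every new observation pulls the stored estimate toward the latest sample, the table drifts toward the new geometry after the camera or arm is moved, which is the continuous-adaptation property the excerpt refers to.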
“…The execution of these motor position changes drives the camera in such a way that the corresponding stimulus at the coordinates ends up in the fovea, i.e., the image center. The actual saccade mappings can either be learned [49], [50] or manually designed. The latter was employed for this study.…”
Section: Two Computational Architectures For Gaze Modulation (mentioning)
confidence: 99%
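
The saccade mapping in this excerpt turns the retinal offset of a stimulus into camera motor commands that bring it to the image centre; the cited study used a manually designed mapping. A minimal sketch of such a hand-designed rule, with purely illustrative gains and sign conventions, could look like this:

# Minimal sketch of a manually designed saccade mapping (illustrative only):
# a proportional rule that converts the offset of a stimulus from the image
# centre (fovea) into pan/tilt motor position changes. Gains and signs depend
# on the camera geometry; the values here are placeholders.
def saccade_command(stimulus_xy, image_size=(640, 480),
                    gain_pan=0.1, gain_tilt=0.1):
    """Return (d_pan, d_tilt) motor changes intended to fovate the stimulus."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    dx = stimulus_xy[0] - cx          # horizontal offset in pixels
    dy = stimulus_xy[1] - cy          # vertical offset in pixels
    return (-gain_pan * dx, -gain_tilt * dy)  # drive the offset toward zero


if __name__ == "__main__":
    print(saccade_command((400, 240)))   # stimulus right of centre

A learned variant would replace the fixed gains with values estimated from observed pairs of motor commands and resulting image displacements, which is the alternative the excerpt mentions [49], [50].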
“…The hand-eye mapping problem is thus formulated as finding the relationship between the eye motor space and the arm joint space, i.e., finding the mapping from eye motor coordinates to arm joint coordinates. Hand-eye mapping is highly non-linear, mainly because the geometries of body parts exhibit quite complex kinematics and visual distortion [16]. In this case, the non-linear approximation ability of artificial neural networks supports the implementation of hand-eye mapping [35]; e.g., in [7,19,21] self-organizing map networks are used for the mapping. Also, several recent works consider a simulated human brain structure to solve the problem of hand-eye mapping.…”
Section: Background and Related Work (mentioning)
confidence: 99%
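
Several of the works cited in this excerpt use self-organizing maps to approximate the non-linear eye-to-arm mapping. The following sketch is an assumption about how such a scheme can be set up in general, not code from [7,19,21]: a small Kohonen-style map self-organizes over eye motor coordinates, and each node additionally stores an arm joint vector trained with the same neighbourhood rule.

# Minimal, assumed sketch of a self-organising map for hand-eye mapping:
# nodes self-organise over the eye motor space (e.g., pan/tilt), and each
# node also stores an associated arm posture updated with the same
# neighbourhood-weighted learning rule.
import numpy as np

class HandEyeSOM:
    def __init__(self, rows=6, cols=6, eye_dim=2, arm_dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.eye_w = rng.uniform(-1, 1, (rows, cols, eye_dim))  # eye-space prototypes
        self.arm_w = np.zeros((rows, cols, arm_dim))            # associated arm postures
        self.grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing="ij"), axis=-1)

    def winner(self, eye):
        d = np.linalg.norm(self.eye_w - eye, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train_step(self, eye, arm, lr=0.2, sigma=1.5):
        # One observed pairing of eye motor values and arm joint values.
        w = self.winner(eye)
        dist2 = np.sum((self.grid - np.array(w)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighbourhood weights
        self.eye_w += lr * h * (np.asarray(eye) - self.eye_w)
        self.arm_w += lr * h * (np.asarray(arm) - self.arm_w)

    def eye_to_arm(self, eye):
        # Arm posture associated with the best-matching eye-space node.
        return self.arm_w[self.winner(eye)]


if __name__ == "__main__":
    som = HandEyeSOM()
    som.train_step([0.1, -0.2], [0.4, 0.1, -0.3])  # hypothetical sample
    print(som.eye_to_arm([0.1, -0.2]))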
“…These maps and the link mechanism are described in the following sub-sections. The visuo-motor system and the hand sensory-motor system are implemented by respective sensory-motor coordination models, whose prototypes have been used in our previous work [31,35]. The sensory-motor coordination models are based on a basic sensory-motor map structure.…”
Section: Computational Implementation For Robotic Hand-eye Coordination (mentioning)
confidence: 99%
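
The excerpt refers to a visuo-motor map and a hand sensory-motor map connected by a link mechanism. As a schematic sketch only, with an assumed Hebbian-style link matrix rather than the cited authors' actual model, co-activation of a field in each map can strengthen a connection that later lets activity in one map recall the linked field in the other:

# Minimal sketch under assumed structure: two small sensory-motor maps
# (one visuo-motor, one for the hand) connected by a Hebbian link matrix.
# Co-activation of a visual field and a hand field strengthens their link.
import numpy as np

class LinkedFieldMaps:
    def __init__(self, visual_fields=16, hand_fields=16, lr=0.1):
        self.links = np.zeros((visual_fields, hand_fields))
        self.lr = lr

    def co_activate(self, v_field, h_field):
        # Strengthen the connection between the two co-occurring fields.
        self.links[v_field, h_field] += self.lr

    def recall_hand(self, v_field):
        # Hand-map field most strongly linked to the given visual field.
        return int(np.argmax(self.links[v_field]))


if __name__ == "__main__":
    maps = LinkedFieldMaps()
    maps.co_activate(v_field=5, h_field=12)   # hypothetical experience
    print(maps.recall_hand(5))                # -> 12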
“…Now considering network size, as shown in Figure 1.4, the first method, with full updating of all network parameters, required by far the largest network, 48 nodes; the second method removed three hidden units, reducing the network to 16 nodes; the third method kept the original network size, 19 nodes. We have also studied staged development of the sensory-motor mapping learning process [Lee et al., 2007]. The system constructs sensory-motor schemas in terms of interlinked topological mappings of sensory-motor events, and demonstrates that the constructive learning moves to the next stage once stable behaviour patterns emerge.…”
Section: Constructive Learning and Adaptation In Tool-use (mentioning)
confidence: 99%
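
The staged, constructive learning described in this excerpt adds sensory-motor schemas as new events are encountered and advances to the next stage once behaviour has stabilised. The sketch below is a guess at one concrete form of that criterion (a novelty rate over a recent window, with hypothetical thresholds), not the system of [Lee et al., 2007]:

# Minimal sketch (an assumption, not the cited system) of a staged
# constructive learner: new sensory-motor events are added to the current
# schema set, and the learner moves to the next stage once the rate of
# novel events stays below a threshold over a recent window.
from collections import deque

class ConstructiveLearner:
    def __init__(self, window=20, novelty_threshold=0.1):
        self.schemas = set()                 # known sensory-motor events
        self.recent = deque(maxlen=window)   # 1 = novel event, 0 = known
        self.threshold = novelty_threshold
        self.stage = 1

    def experience(self, event):
        novel = event not in self.schemas
        if novel:
            self.schemas.add(event)          # constructive step: grow the map
        self.recent.append(1 if novel else 0)
        if self._behaviour_stable():
            self.stage += 1                  # qualitative shift to next stage
            self.recent.clear()

    def _behaviour_stable(self):
        full = len(self.recent) == self.recent.maxlen
        return full and sum(self.recent) / len(self.recent) < self.threshold


if __name__ == "__main__":
    learner = ConstructiveLearner()
    for e in [("see", 1), ("reach", 1), ("see", 1)] * 10:  # hypothetical events
        learner.experience(e)
    print(learner.stage, len(learner.schemas))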