2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2016
DOI: 10.1109/devlrn.2016.7846806
Autonomous grounding of visual field experience through sensorimotor prediction

Abstract: In a developmental framework, autonomous robots need to explore the world and learn how to interact with it. Without an a priori model of the system, this opens the challenging problem of having robots master their interface with the world: how to perceive their environment using their sensors, and how to act in it using their motors. The sensorimotor approach of perception claims that a naive agent can learn to master this interface by capturing regularities in the way its actions transform its sensory inputs…
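The sensorimotor idea summarized in the abstract — capturing regularities in how actions transform sensory inputs — can be illustrated with a minimal sketch. This is not the paper's method; all names and the count-based prediction scheme are illustrative assumptions.

```python
from collections import defaultdict

class SensorimotorPredictor:
    """Minimal sketch: learn how motor commands transform sensory inputs.

    Counts observed (sensation, motor) -> next-sensation transitions and
    predicts the most frequently observed outcome for a sensorimotor pair.
    """

    def __init__(self):
        # counts[(s, m)][s_next] = times s_next followed sensation s under motor m
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, s, m, s_next):
        self.counts[(s, m)][s_next] += 1

    def predict(self, s, m):
        outcomes = self.counts.get((s, m))
        if not outcomes:
            return None  # no experience with this sensorimotor pair yet
        return max(outcomes, key=outcomes.get)

# Toy experience: moving "right" while seeing "edge" usually yields "corner".
model = SensorimotorPredictor()
for _ in range(5):
    model.observe("edge", "right", "corner")
model.observe("edge", "right", "edge")
print(model.predict("edge", "right"))  # prints "corner"
```

A real agent would replace the discrete counts with a learned probabilistic or neural predictive model, but the structure — regularities indexed by sensorimotor pairs — is the same.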

Cited by 3 publications (5 citation statements)
References 15 publications
“…SMC theory (O'Regan and Noë 2001), in particular, has inspired a range of studies in which the relationships between a robotic agent's actions and sensory observations are modeled in order to learn skilled behaviors or to improve the quality of its state predictions. These studies tend to focus on narrow problem domains, including classifying objects according to their physical responses to manipulation (Hogman, Bjorkman, and Kragic 2013), segmenting objects via push-induced object movements (Bergström et al 2011; Van Hoof, Kroemer, and Peters 2013), learning to navigate (Maye and Engel 2011; 2013), learning to manipulate objects in a generalizable manner (Sánchez-Fibla, Duff, and Verschure 2011), learning the structure of complex sensorimotor spaces such as a saccading, foveated vision system (Laflaquière 2016), and categorizing objects and their relations via programmed behaviors (Sinapov et al 2014). Bohg et al (2016) provide an in-depth review of robot-based sensorimotor interactions.…”
Section: Related Work
confidence: 99%
“…This should also open possibilities to tackle the problem of distinguishing multiple instances of the same proto-objects, and to reduce the ambiguity of a visual scene. Some preliminary results in this direction have already been published (Laflaquière, 2016). Instead of considering a small sensor moving in the environment, one could also imagine having a larger sensor with an attention mechanism focusing on a small part of it.…”
Section: Discussion
confidence: 99%
“…This should also open possibilities to tackle the problem of distinguishing multiple instances of the same proto-objects, and to reduce the ambiguity of a visual scene. Some preliminary results in this direction have already been published [42].…”
Section: Limitations and Future Work
confidence: 99%
“…Given its internal model, it can counterfactually [27] determine whether a sensory state from another receptive field corresponds to the same visual feature, and which saccade to execute to reach the rewarding sensory state. This is the kind of visual task that was proposed to subjects with altered sensory-state associations in [8], and for which a basic model has recently been proposed in [15]. This use of the predictive model has not been illustrated in this paper.…”
Section: Discussion
confidence: 99%
“…An example of such a visual task would be to search the estimated predictive model for the saccade that would transform, with the highest probability, a current sensory input into another desired one. Such an algorithm has been recently proposed in [15].…”
Section: Random Images
confidence: 99%
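The saccade-selection idea described in this last citation statement — querying a learned predictive model for the action most likely to transform the current sensory input into a desired one — can be sketched as follows. The model contents and names below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: given transition counts gathered during exploration,
# pick the saccade maximizing P(s_desired | s_current, saccade).

def best_saccade(model, s_current, s_desired):
    """Return (saccade, probability) maximizing P(s_desired | s_current, saccade)."""
    best, best_p = None, 0.0
    for (s, saccade), outcomes in model.items():
        if s != s_current:
            continue
        total = sum(outcomes.values())
        p = outcomes.get(s_desired, 0) / total
        if p > best_p:
            best, best_p = saccade, p
    return best, best_p

# Illustrative transition counts: model[(sensation, saccade)] -> outcome counts.
model = {
    ("blob", "left"):  {"edge": 8, "blob": 2},
    ("blob", "right"): {"edge": 1, "corner": 9},
}
print(best_saccade(model, "blob", "edge"))  # prints ('left', 0.8)
```

This is the forward predictive model used "in reverse": instead of predicting the sensory outcome of a given saccade, the agent searches over saccades for the one whose predicted outcome matches a goal sensation.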