2021
DOI: 10.1109/tcds.2020.2965985

Robot Multimodal Object Perception and Recognition: Synthetic Maturation of Sensorimotor Learning in Embodied Systems

Abstract: It is known that during early infancy, humans experience many physical and cognitive changes that shape their learning and refine their understanding of objects in the world. With the extended arm being one of the very first objects they familiarise, infants undergo a series of developmental stages that progressively facilitate physical interactions, enrich sensory information and develop the skills to learn and recognise. Drawing inspiration from infancy, this study deals with the modelling of an open-ended l…

Cited by 10 publications (8 citation statements)
References 39 publications (34 reference statements)
“…It also provides several primitive actions that can be used to perform object manipulation, ranging from reaching and grasping to releasing and pushing objects. Note that apart from the push action, which consists of a fixed rotation to the vertical axis of torso allowing the extended arm to push objects, all other primitive actions are learnt as discussed in [33].…”
Section: Experimental Methodology
confidence: 99%
“…Subsequently, this leads to non-refined hand trajectories while reaching. Indepth discussion about map calculations in the context of reaching can be found in [32,33].…”
Section: B. Primitive Actions
confidence: 99%
“…To address this problem, several lines of research have shown that incorporating a variety of sensory modalities is the key to further enhance the robotic capabilities in recognizing multisensory object properties (see [4] and [21] for a review). For example, visual and physical interaction data yields more accurate haptic classification for objects [11], and non-visual sensory modalities (e.g., audio, haptics) coupled with exploratory actions (e.g., touch or grasp) have been shown useful for recognizing objects and their properties [5,10,15,24,30], as well as grounding natural language descriptors that people use to refer to objects [3,39]. More recently, researchers have developed end-to-end systems to enable robots to learn to perceive the environment and perform actions at the same time [20,42].…”
Section: Data Augmentation
confidence: 99%
“…material, internal state, compliance). To address this problem, several lines of research have focused on how robots can use non-visual sensory modalities for tasks that include grasping [12], [13], object recognition [14], [15], [16], object categorization [7], [17], [18] and language grounding [19], [20], [21], [22]. Inspired by these works, we propose an architecture that also uses multiple sensory modalities for the sensorimotor learning task of visual next-frame prediction.…”
Section: Related Work
confidence: 99%