2020
DOI: 10.3389/frobt.2020.522141
A Framework for Sensorimotor Cross-Perception and Cross-Behavior Knowledge Transfer for Object Categorization

Abstract: From an early age, humans learn to develop an intuition for the physical nature of the objects around them by using exploratory behaviors. Such exploration provides observations of how objects feel, sound, look, and move as a result of actions applied on them. Previous works in robotics have shown that robots can also use such behaviors (e.g., lifting, pressing, shaking) to infer object properties that camera input alone cannot detect. Such learned representations are specific to each individual robot and cann…

Cited by 10 publications
(5 citation statements)
References 54 publications
“…Dataset. The dataset described in [37] is used to evaluate and compare the proposed network with the single-modal network. For collecting the dataset, an upper-torso humanoid robot with a 7-DOF arm manipulates 100 objects by executing 9 different exploratory behaviors (push, poke, press, shake, lift, drop, grasp, tap and hold) multiple times and records visual, haptic, auditory and vibrotactile sensory data.…”
Section: Results
confidence: 99%
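The dataset described in the citation above has a regular structure: 100 objects, each manipulated with 9 exploratory behaviors over multiple repetitions, with four sensory modalities recorded per trial. A minimal sketch of how such trials could be organized and indexed is below; all names and types here are illustrative assumptions, not the dataset's actual API.

```python
from dataclasses import dataclass, field

# Illustrative constants taken from the dataset description above:
# 9 exploratory behaviors and 4 sensory modalities.
BEHAVIORS = ["push", "poke", "press", "shake", "lift",
             "drop", "grasp", "tap", "hold"]
MODALITIES = ["visual", "haptic", "auditory", "vibrotactile"]


@dataclass
class Trial:
    """One execution of one behavior on one object (hypothetical schema)."""
    object_id: int                 # 0..99 for the 100 objects
    behavior: str                  # one of BEHAVIORS
    repetition: int                # index of the repeated execution
    data: dict = field(default_factory=dict)  # modality -> raw recording


def make_index(trials):
    """Group trials by (object_id, behavior), the unit most category-
    recognition models in this setting would train on."""
    index = {}
    for t in trials:
        index.setdefault((t.object_id, t.behavior), []).append(t)
    return index


# Example: three repetitions of "shake" on object 0, with empty
# placeholder recordings for each of the four modalities.
trials = [Trial(0, "shake", r, {m: None for m in MODALITIES})
          for r in range(3)]
index = make_index(trials)
```

This grouping reflects how the citing paper uses the data: models are evaluated per (object, behavior) context, with each modality's recording available as a separate channel.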
“…Since different robots have different morphologies and different sensor suites, the learned knowledge cannot be directly used by another robot. An interesting avenue for future work is to extend transfer learning methodologies (e.g., [38], [37]) as to enable a robot to bootstrap its sensorimotor learning process with knowledge learned by another robot. Another viable direction for future work is to integrate the multisensory next-frame prediction methodology described here with reinforcement learning methods for object manipulation tasks.…”
Section: Discussion
confidence: 99%
“…One limitation of our existing framework is that the attribute recognition models are learned by a single robot and cannot directly be used by another robot that has different behaviors, morphology, and sensory modalities. We plan to use sensorimotor transfer learning (e.g., [37,38]) to scale up our framework to allow multiple different robots to learn such models and share their knowledge as to further speed up learning. In addition, considering correlations between attributes and handling fuzzy attributes can potentially improve the performance of On-RAL.…”
Section: Discussion
confidence: 99%