2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341420
Learning Bayes Filter Models for Tactile Localization

Abstract: Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, manipulators with imprecise kinematic models (e.g. low-cost arms) or with unknown world coordinates (e.g. poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile …
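The abstract's central tool, a Bayes filter, maintains a probability distribution (belief) over the gripper's pose and alternates a motion prediction step with a tactile measurement update. As a rough illustration only — not the paper's learned models — here is a minimal histogram Bayes filter over a 1-D grid; the motion model, `obs_noise`, and `contact_map` are invented toy assumptions:

```python
import numpy as np

def predict(belief, motion_noise=0.1):
    # Motion step: the gripper commands one cell to the right;
    # with probability motion_noise the move fails and it stays put.
    moved = np.roll(belief, 1)
    moved[0] = 0.0                      # no wrap-around at the boundary
    new_belief = (1 - motion_noise) * moved + motion_noise * belief
    return new_belief / new_belief.sum()

def update(belief, observation, contact_map, obs_noise=0.2):
    # Measurement step: a binary tactile reading (1 = contact felt).
    # contact_map[i] = 1 where cell i touches an obstacle.
    likelihood = np.where(contact_map == observation, 1 - obs_noise, obs_noise)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Toy world: an obstacle spans cells 3-4; prior over position is uniform.
contact_map = np.array([0, 0, 0, 1, 1, 0, 0, 0])
belief = np.full(8, 1 / 8)

for obs in [1, 1]:                      # two moves, contact felt both times
    belief = predict(belief)
    belief = update(belief, obs, contact_map)
```

After two moves with repeated contact readings, the belief concentrates on the obstacle region; the paper's contribution is, in effect, learning the motion and measurement models above from data rather than hand-specifying them.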

Cited by 4 publications (2 citation statements)
References 26 publications
“…Their policy's objective is to solve a specific manipulation task rather than decrease state uncertainty. Others combine vision and tactile feedback to localize the robot's gripper with respect to the environment, the reverse problem to object pose localization [15].…”
Section: A. Related Work
“…Chaplot et al [8] and Gottipati et al [18] applied this approach to the active localization problem with visual feedback. Finally, in [32], the authors proposed to use a learnable Bayes filter to localize a robotic gripper's position with respect to the environment image using tactile feedback. In the following work [33], they train agents using deep reinforcement learning with belief inputs to solve contact-rich manipulation tasks using only a single image of the environment and the tactile observations.…”
Section: Partial Observability in Deep