2022
DOI: 10.1109/lra.2022.3150045
Active Visuo-Tactile Interactive Robotic Perception for Accurate Object Pose Estimation in Dense Clutter

Cited by 18 publications (10 citation statements)
References 38 publications (43 reference statements)
“…We perform Markov Monte-Carlo sampling of N viewpoints on the hemisphere located above the centroid o_centroid of the bounding box of the object of interest, which is known a priori. The 3D position p_view is randomly sampled as a point on the hemisphere, and the orientation of the view, as rotation axis e⃗ and angle θ, is computed with [34]:…”
Section: B Deep Active Visual Object Learning
confidence: 99%
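The quoted viewpoint-sampling step can be sketched in code. This is a minimal illustration, not the cited method [34]: the function name `sample_hemisphere_viewpoints`, the uniform-on-cap sampling scheme, and the cross-product axis-angle construction (rotating a default camera axis of −z onto the look-at direction toward the centroid) are all assumptions for illustration.

```python
import numpy as np

def sample_hemisphere_viewpoints(o_centroid, radius, n_views, rng=None):
    """Monte-Carlo sample N viewpoints on a hemisphere above o_centroid.

    Returns a list of (p_view, e, theta): the 3D position and an axis-angle
    rotation taking the assumed default camera axis (0, 0, -1) onto the
    direction from the viewpoint toward the centroid.
    """
    rng = np.random.default_rng(rng)
    views = []
    for _ in range(n_views):
        # Uniform point on the upper hemisphere: uniform azimuth, and
        # uniform cos(elevation) so area is covered evenly.
        phi = rng.uniform(0.0, 2.0 * np.pi)
        cos_t = rng.uniform(0.0, 1.0)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        d = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        p_view = o_centroid + radius * d

        # Axis-angle rotation from the default view axis z0 = (0, 0, -1)
        # onto the look-at direction (-d, pointing back at the centroid).
        z0 = np.array([0.0, 0.0, -1.0])
        target = -d
        axis = np.cross(z0, target)
        s = np.linalg.norm(axis)
        theta = float(np.arctan2(s, np.dot(z0, target)))
        # Degenerate case: vectors already parallel, any axis works.
        e = axis / s if s > 1e-9 else np.array([1.0, 0.0, 0.0])
        views.append((p_view, e, theta))
    return views
```

All sampled positions lie on the sphere of the given radius with non-negative height above the centroid, matching the hemisphere constraint in the quote.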
“…Deep Active Visual Object Learning: If the scene is cluttered, we use our prior work in Murali et al. [34] to autonomously declutter the workspace. After the scene is decluttered, the robots initiate visual data collection.…”
Section: B Robot Experiments
confidence: 99%
“…Similarly, Shannon entropy has been used to select actions that provide the maximum discriminatory information for object classification [10], [19]. In our previous works [7], [8], KL-divergence has been used for tactile action selection for object pose estimation.…”
Section: Introduction
confidence: 99%
“…However, LiDAR-based point clouds become very sparse with increasing distance, ranging between 10–100 points [1], [2]. Similarly sparse point clouds are produced by commercial tactile sensors [3]–[5] (Figure 1). Most state-of-the-art methodologies rely on relatively dense point clouds, and their performance drops drastically with sparse point clouds [6].…”
Section: Introduction
confidence: 99%