2022
DOI: 10.1177/17298806221076978
Robust monocular 3D object pose tracking for large visual range variation in robotic manipulation via scale-adaptive region-based method

Abstract: Many robot manipulation processes involve large visual range variation between the hand-eye camera and the object, which in turn causes a large-span scale change of the object in the image sequence captured by the camera. To accurately guide the manipulator, the relative 6-degrees-of-freedom (6D) pose between the object and the manipulator is continuously required throughout the process. The large-span scale change of the object in the image sequence often leads to 6D pose tracking failure for existi…
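The scale change the abstract describes follows directly from perspective projection: an object's apparent size in the image is inversely proportional to its distance from the camera, so a hand-eye camera closing in on a grasp target sees a large-span scale change. A minimal sketch under an assumed pinhole-camera model with illustrative focal length and object size (not values from the paper):

```python
def apparent_size_px(object_width_m: float, distance_m: float,
                     focal_length_px: float) -> float:
    """Projected width in pixels of a fronto-parallel object
    under the pinhole model: s = f * W / Z."""
    return focal_length_px * object_width_m / distance_m

f_px = 600.0    # assumed focal length in pixels
width = 0.10    # assumed 10 cm wide object

far = apparent_size_px(width, 1.0, f_px)   # camera 1 m from the object
near = apparent_size_px(width, 0.1, f_px)  # camera 10 cm away, near grasp
# the object's image width grows roughly 10x as the range shrinks 10x,
# which is the large-span scale change a tracker must cope with
print(far, near, near / far)
```

This is only meant to quantify the motivation; the paper's contribution is a scale-adaptive region-based tracker that keeps 6D pose estimation stable across such changes.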

Cited by 3 publications (1 citation statement)
References 30 publications
“…In this paper, we propose a pose refinement method based on real image patch matching. Compared with some 3D model needed methods, [26][27][28] our method does not need the 3D object mesh models and solves the difficulty of generating real-world 3D models with high-quality textures. Meanwhile, based on our efficient template search method and fully convolutional patch M_R_Net, our method achieves real-time and high-precision.…”
Section: Discussion
confidence: 99%