2020
DOI: 10.1007/978-3-030-58517-4_7
Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild

Cited by 6 publications (3 citation statements)
References 42 publications
“…Though efficient, their formulation is built upon overlapped object regions across the reference image and target image, which is thus less robust against erroneous pose initialization. Grabner et al [48] proposed to refine the pose based on the correspondences, but their method is still limited to ideal scenarios.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
“…Generalizable pose estimators mostly require an object model either for shape embedding [69,42,12,43] or template matching [21,2,66,54,20,36,72] or renderingand-comparison [28,71,37,4,54,17]. To avoid using 3D models, recent works [70,40] utilize the advanced neural rendering techniques [35] to directly render from posed images for pose estimation.…”
Section: Generalizable Object Pose Estimator (citation type: mentioning; confidence: 99%)
“…This spawned several applications, such as self-supervision for monocular 3D object detection [5]. In [16], a differentiable renderer is employed, and even learned, to predict geometric correspondence fields that refine pose estimates of 3D objects. These approaches use rasterized rendering schemes tightly integrated into a neural network.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
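The render-and-compare idea referenced in the statements above can be illustrated with a minimal sketch. This is a toy example under assumed names (`render`, `refine`, a single rotation angle as the "pose"), not the paper's actual method: a differentiable "renderer" projects model points under a pose hypothesis, and the pose is refined by gradient descent on the 2D correspondence error.

```python
import math

# Toy render-and-compare pose refinement (illustrative sketch only):
# refine a single rotation angle so that rendered 2D model points
# match observed 2D correspondences.

MODEL = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]  # toy 2D model points

def render(theta):
    """Differentiable 'renderer': rotate the model points by theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in MODEL]

def loss(theta, observed):
    """Sum of squared 2D correspondence errors."""
    return sum((px - ox) ** 2 + (py - oy) ** 2
               for (px, py), (ox, oy) in zip(render(theta), observed))

def refine(theta0, observed, lr=0.1, steps=200, eps=1e-6):
    """Gradient descent on the pose, using a finite-difference gradient
    (a learned renderer would backpropagate analytically instead)."""
    theta = theta0
    for _ in range(steps):
        g = (loss(theta + eps, observed) - loss(theta - eps, observed)) / (2 * eps)
        theta -= lr * g
    return theta

true_theta = 0.7
observed = render(true_theta)              # ground-truth correspondences
refined = refine(theta0=0.2, observed=observed)
print(abs(refined - true_theta) < 1e-3)    # refinement converges near the true pose
```

The methods discussed above replace this hand-crafted error with learned correspondence fields and full 3D rasterized rendering, but the refinement loop follows the same pattern: render under the current pose, compare against the image, and step the pose along the gradient.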