2022
DOI: 10.48550/arxiv.2203.07918
Preprint

GPV-Pose: Category-level Object Pose Estimation via Geometry-guided Point-wise Voting

Cited by 3 publications (9 citation statements)
References 0 publications
“…More recent works explored different aspects to improve pose estimation accuracy. A category-level shape prior is found to be beneficial for pose estimation accuracy in [29] and further improved in [30], [31], [32]. DualPoseNet [33], 6D-ViT [34], ACR-Pose [35], and CPPF [36] proposed to incorporate rotation-invariant embeddings, Transformer networks, Generative Adversarial Networks, and deep point-pair features, respectively.…”
Section: B. Opaque Object Category-level Pose Estimation
confidence: 99%
“…Similar to other category-level pose estimation work [32], we fine-tune a Mask R-CNN [26] model to obtain the object's bounding box B, segmentation mask M and category label P_c. Patches of ray direction R_B, RGB I_B and raw depth D_B are extracted according to the bounding box B and serve as input to the first stage of TransNet.…”
Section: B. Object Instance Detection and Segmentation
confidence: 99%
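The statement above describes a common detection-and-crop front end: a detector provides a bounding box, and co-registered RGB, depth and ray-direction patches are cropped from it before pose estimation. Below is a minimal sketch of that cropping step, assuming NumPy-style arrays; the function name, array shapes and toy values are illustrative assumptions and are not taken from TransNet or the cited paper.

```python
import numpy as np

def extract_patches(rgb, depth, rays, box):
    """Crop RGB, depth and ray-direction patches with one bounding box.

    rgb:   (H, W, 3) uint8 color image
    depth: (H, W)    float32 depth map
    rays:  (H, W, 3) float32 per-pixel viewing-ray directions
    box:   (x1, y1, x2, y2) integer pixel coordinates
    (Hypothetical helper; shapes and naming are assumptions.)
    """
    x1, y1, x2, y2 = box
    # All three modalities are cropped with the same box so they stay aligned.
    rgb_patch = rgb[y1:y2, x1:x2]
    depth_patch = depth[y1:y2, x1:x2]
    ray_patch = rays[y1:y2, x1:x2]
    return rgb_patch, depth_patch, ray_patch

if __name__ == "__main__":
    H, W = 480, 640
    rgb = np.zeros((H, W, 3), dtype=np.uint8)
    depth = np.ones((H, W), dtype=np.float32)
    rays = np.zeros((H, W, 3), dtype=np.float32)
    rays[..., 2] = 1.0  # toy example: all rays along +z
    patches = extract_patches(rgb, depth, rays, box=(100, 80, 300, 240))
    print([p.shape for p in patches])
```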
“…Recent instance-specific pose estimators [39,6,32] reconstruct the object model implicitly in the pipeline so that they are model-free. Category-specific pose estimators [61,11,64,13,29,10,57,8,30,9,16,15] can generalize to objects in the same category and also do not require the object model. However, they are still unable to predict poses for objects in unseen categories.…”
Section: Specific Object Pose Estimator
confidence: 99%