2022
DOI: 10.1007/978-3-031-19769-7_14
AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction

Cited by 25 publications (15 citation statements) · References 63 publications
“…DexYCB: On the DexYCB official S0 test split, we compare our method with existing works (Chao et al, 2021; Chen et al, 2022b; Li et al, 2021; Lin et al, 2023; Spurr et al, 2020; Tse et al, 2022) that, like ours, use monocular input. Since DexYCB is a sequential dataset and does not provide ground truth for vertices, we compute only hand joint accuracy and MKA. As shown in Tab.…”

Method | MPJPE (↓) | AUC_J (↑) | MKA (↓) | FPS (↑)
ECCV22-Chen et al (Chen et al, 2022b) | 19.00 | - | - | -
ECCV20-Spurr et al (Spurr et al, 2020) | 17.34 | 0.698 | - | -
CVPR22-Tse et al (Tse et al, 2022) | 16.05 | 0.722 | - | -
CVPR22-Li et al (Li et al, 2021) | 12.80 | - | - | -
CVPR23-Yu et al (Yu et al, 2023) | 8.92 | - | - | -
CVPR21-Chao et al (Chao et al, 2021) | 6.83 | 0.864 | - | -
CVPR23-Lin et al (Lin et al, 2023) | 5.47 | - | - | -
CVPR23-H2ONet | 5 | - | - | -

Section: Quantitative Results
confidence: 99%
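The MPJPE figures quoted above are the standard mean per-joint position error: the Euclidean distance between each predicted and ground-truth 3D hand joint, averaged over joints (and frames). A minimal sketch, assuming 21 hand joints with coordinates in millimetres (the array names are illustrative, not from the cited papers):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance
    between predicted and ground-truth 3D joints, shape (J, 3)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: 21 hand joints, prediction offset by 5 mm along x.
gt = np.zeros((21, 3))
pred = gt + np.array([5.0, 0.0, 0.0])
print(mpjpe(pred, gt))  # 5.0
```

AUC_J is then obtained by sweeping an error threshold over such distances and integrating the fraction of joints below it.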
“…2, the proposed method demonstrates superior computational efficiency compared to other methods. Recent studies evaluated on DexYCB (Chen et al, 2022b; Li et al, 2021; Lin et al, 2023; Tse et al, 2022; Yu et al, 2023) all aim at simultaneous reconstruction of hands and objects, so real-time performance is not guaranteed. Among existing studies, the most recent work, H2ONet, shows a significant improvement in accuracy over previous works.…”
Section: Quantitative Results
confidence: 99%
“…This advantage indicates that implicit functions can generalize to arbitrary hands. Several implicit hand models have been proposed, such as LISA (Corona et al 2022), AlignSDF (Chen et al 2022b), Im2Hands (Lee et al 2023), HandNeRF (Guo et al 2023), and Hand Avatar (Chen, Wang, and Shum 2023). However, implicit models are computationally more expensive than explicit ones.…”
Section: Implicit Hand Models
confidence: 99%
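The cost gap the quote refers to can be sketched in a few lines: an explicit model (e.g. MANO) outputs a fixed set of mesh vertices directly, whereas extracting geometry from an implicit model such as a signed distance field means evaluating the field on a dense 3D grid, so query count grows cubically with resolution. A minimal illustration, using an analytic sphere SDF as a stand-in for a learned hand SDF (the function and resolution here are illustrative assumptions, not from any of the cited methods):

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Stand-in for a learned hand SDF: signed distance to a sphere
    (negative inside the surface, positive outside)."""
    return np.linalg.norm(points, axis=-1) - radius

# Surface extraction from an implicit model queries the field on a
# dense grid before running e.g. Marching Cubes: O(res**3) evaluations.
res = 32
axis = np.linspace(-1.5, 1.5, res)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
values = sphere_sdf(grid.reshape(-1, 3))
print(values.shape)  # 32**3 = 32768 queries for a single shape
```

Doubling the resolution multiplies the number of SDF evaluations by eight, which is why implicit reconstruction methods rarely guarantee real-time performance.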