2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00494

Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in Autonomous Driving

Cited by 28 publications (17 citation statements)
References: 36 publications
“…The workflow of E3Pose is shown in Figure 9(a). A key innovation of our proposed E3Pose system is to predict future 3D poses and then project the prediction results onto each individual camera view to obtain the 2D bounding boxes to enable the IoU calculation. This is because the IoU must be calculated before 2D poses in the scene of interest are estimated.…”
Section: Discussion On The Choice Of Methods (mentioning)
confidence: 99%
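The projection-then-IoU step described in this citation statement can be sketched as follows. This is a minimal illustration under standard pinhole-camera assumptions, not code from E3Pose or from the cited paper; the parameters K, R, t and all function names are illustrative.

```python
import numpy as np

def project_to_2d(joints_3d: np.ndarray, K: np.ndarray,
                  R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project N x 3 world-frame joints into N x 2 pixel coordinates."""
    cam = R @ joints_3d.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ cam                             # camera -> image plane
    return (uvw[:2] / uvw[2]).T               # perspective divide

def bbox_from_joints(joints_2d: np.ndarray) -> np.ndarray:
    """Axis-aligned box [x_min, y_min, x_max, y_max] around projected joints."""
    return np.concatenate([joints_2d.min(axis=0), joints_2d.max(axis=0)])

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```

In this reading, a predicted 3D pose is projected into a given camera view, its projected joints are wrapped in a 2D box, and that box's IoU with a candidate box is computed before any 2D pose is estimated in that view.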
“…The studio is designed to simulate and capture social activities of multiple people. We use the same set of training and testing sequences captured by the same set of five HD cameras (3, 6, 12, 13, 23) as in [10, 37] for evaluation.…”
Section: Evaluation 5.1 Experiments Setup (mentioning)
confidence: 99%
“…Ref. [30] proposes a multi-modal approach that uses 2D labels on RGB images as weak supervision to perform 3D HPE. The multi-modal architecture also incorporates camera and LiDAR with an auxiliary segmentation branch.…”
Section: Related Work (mentioning)
confidence: 99%