2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00601
Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences

Cited by 13 publications (2 citation statements)
References 33 publications
“…Object reconstruction Most existing work on reconstructing 3D objects from RGB [21,46,53,75,78] and RGBD [45,55,82] data does so in isolation, without human involvement or interaction. While challenging, it is arguably more interesting to reconstruct objects in a dynamic setting under severe occlusions from the human.…”
Section: Appearance Modelling: Humans and Objects Without Scene Context
confidence: 99%
“…We compare to competing methods and show that ours can produce better object representations and camera/object trajectories. Note that prior methods often focus on only rigid motion and separated optimization [WLNM21, MWM*21], assume access to geometric priors [MWM*21], a single non‐rigid object with local motion [PSH*21, CFF*22], a global representation [LNSW21, TTG*21, XHKK21], foreground‐background separation and novel‐view rendering without geometric reconstruction [GSKH21, YLSL21, WZT*22, SCL*23], or static scenes with an implicit representation [ZPL*22, SLOD21, YPN*22]. We relax many of these restrictions and demonstrate that our factorized representation naturally enables edits involving object‐level manipulations.…”
Section: Introduction
confidence: 99%