2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00051

Unsupervised Monocular Depth and Ego-Motion Learning With Structure and Semantics

Abstract: We present an approach which takes advantage of both structure and semantics for unsupervised monocular learning of depth and ego-motion. More specifically, we model the motion of individual objects and learn their 3D motion vectors jointly with depth and ego-motion. We obtain more accurate results, especially for challenging dynamic scenes not addressed by previous approaches. This is an extended version of Casser et al. [1]. Code and models have been open sourced at: https
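The mechanism the abstract describes, compositing an ego-motion warp for the static scene with an additional learned 3D translation for each segmented object, can be sketched compactly. Below is a minimal PyTorch illustration, not the authors' released code: warp_image is a hypothetical stand-in for a differentiable inverse-warping routine, and all tensor shapes are assumptions.

```python
# Minimal sketch (not the authors' released code) of the idea in the abstract:
# warp the static scene with camera ego-motion, then re-warp each segmented
# object with its own learned 3D translation and composite the results.
# `warp_image` is a hypothetical differentiable inverse-warping routine.

import torch

def composite_warp(src, depth, ego_pose, obj_translations, obj_masks, K, warp_image):
    """Reconstruct the target frame from a source frame.

    src:              (B, 3, H, W) source RGB frame
    depth:            (B, 1, H, W) predicted target-frame depth
    ego_pose:         (B, 6) camera motion (3 rotation + 3 translation)
    obj_translations: (B, N, 3) learned 3D motion vector per object
    obj_masks:        (B, N, 1, H, W) binary instance masks (target frame)
    K:                (B, 3, 3) camera intrinsics
    """
    # Background: pixels move only because the camera moves.
    warped = warp_image(src, depth, ego_pose, K)

    # Each object is additionally displaced by its own 3D translation,
    # composed with the ego-motion, then pasted over the background.
    num_objects = obj_translations.shape[1]
    for i in range(num_objects):
        obj_pose = torch.cat(
            [ego_pose[:, :3], ego_pose[:, 3:] + obj_translations[:, i]], dim=1)
        obj_warp = warp_image(src, depth, obj_pose, K)
        m = obj_masks[:, i]
        warped = m * obj_warp + (1.0 - m) * warped

    # A photometric loss between `warped` and the real target frame then
    # trains depth, ego-motion, and the object motions jointly.
    return warped
```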

Cited by 91 publications (84 citation statements) | References 21 publications
Citation types: 0 supporting, 75 mentioning, 0 contrasting
“…This mask can be obtained from a pretrained segmentation model. Unlike in prior work [7], instance segmentation and tracking are not required, as we need a single "possibly mobile" mask. In fact, we show that a union of bounding boxes is sufficient (see Fig.…”
Section: Learning Object Motion
confidence: 99%
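The union-of-bounding-boxes mask described in this statement is simple to realize. A hedged NumPy sketch, with the (x1, y1, x2, y2) box convention and the function name assumed rather than taken from the citing paper:

```python
# Hedged sketch of the single "possibly mobile" mask described above: instead
# of per-instance segmentation and tracking, one binary mask covering every
# potentially moving object, built here as a union of detector bounding boxes.
# The (x1, y1, x2, y2) box format is an assumption.

import numpy as np

def union_of_boxes_mask(boxes, height, width):
    """Return an (H, W) uint8 mask that is 1 inside any bounding box."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        # Clamp to image bounds, then mark the whole rectangle as mobile.
        x1, x2 = max(0, int(x1)), min(width, int(x2))
        y1, y2 = max(0, int(y1)), min(height, int(y2))
        mask[y1:y2, x1:x2] = 1
    return mask

# e.g. two detected cars -> one coarse mobility mask
mask = union_of_boxes_mask([(10, 40, 120, 100), (200, 50, 300, 110)], 192, 640)
```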
“…Cityscapes Table 2 summarizes the evaluation metrics of models trained and tested on Cityscapes. We follow the established protocol by previous work, using the disparity for evaluation [7,30]. Since this is a very challenging benchmark with many dynamic objects, very few approaches have evaluated on it.…”
Section: Depth
confidence: 99%
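The disparity-based evaluation protocol referenced here reduces, in common practice, to the standard monocular-depth error metrics computed on valid ground-truth pixels. Below is a sketch under those assumptions; the median scaling and the 80 m depth cap are conventions of this literature, not details confirmed by the excerpt:

```python
# Sketch of the standard monocular-depth evaluation metrics implied by the
# protocol of [7, 30]: median scaling of the prediction, then absolute and
# squared relative error, RMSE, and threshold accuracy. The 80 m depth cap
# is a common convention and an assumption here, not taken from the source.

import numpy as np

def depth_metrics(pred, gt, max_depth=80.0):
    """Compute abs_rel, sq_rel, rmse and delta < 1.25 on valid pixels."""
    valid = (gt > 0) & (gt < max_depth)
    pred, gt = pred[valid], gt[valid]

    # Unsupervised models predict depth only up to scale: align by median.
    pred = pred * np.median(gt) / np.median(pred)
    pred = np.clip(pred, 1e-3, max_depth)

    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    delta = np.maximum(pred / gt, gt / pred)
    a1 = np.mean(delta < 1.25)
    return abs_rel, sq_rel, rmse, a1
```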