2019
DOI: 10.48550/arxiv.1902.02086
Preprint

GEN-SLAM: Generative Modeling for Monocular Simultaneous Localization and Mapping

Cited by 3 publications (3 citation statements)
References 0 publications
“…CodeSLAM [200] proposes a depth map from a single image, which can be optimised jointly and efficiently with pose variables. Mono-stixels [201] uses depth, motion and semantic information in dynamic scenes to estimate depth.…”
Section: D-SIS
confidence: 99%
“…GeoNet [199] is an unsupervised learning framework for joint monocular depth, optical flow and ego-motion estimation from video. CodeSLAM [200] proposes a depth map from a single image, which can be optimised jointly and efficiently with pose variables. Mono-stixels [201] uses depth, motion and semantic information in dynamic scenes to estimate depth.…”
Section: Deep Learning With Visual SLAM
confidence: 99%
“…CodeSLAM [184] proposes a depth map from a single image, which can be optimised jointly and efficiently with pose variables. GEN-SLAM [185] outputs a dense map with the aid of conventional geometric SLAM and a topological constraint in the monocular setting. [186] proposes a training objective that is invariant to changes in depth range and scale.…”
Section: Deep Learning With Visual SLAM
confidence: 99%