2023 · Preprint
DOI: 10.48550/arxiv.2302.01838

vMAP: Vectorised Object Mapping for Neural Field SLAM

Abstract: Figure 1. vMAP automatically builds an object-level scene model from a real-time RGB-D input stream. Each object is represented by a separate MLP neural field model, all optimised in parallel via vectorised training. We use no 3D shape priors, but the MLP representation encourages object reconstruction to be watertight and complete, even when objects are partially observed or are heavily occluded in the input images. See for instance the separate reconstructions of the armchairs, sofas and cushions, which were…
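To make the idea of vectorised per-object training concrete, below is a minimal sketch (not the authors' implementation) of stacking the parameters of several small per-object MLPs and optimising them all in one fused step with JAX's vmap. The network sizes, the synthetic point samples, and the squared-error loss are placeholder assumptions for illustration only.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, in_dim=3, hidden=32, out_dim=1):
    """Initialise one small per-object MLP (placeholder sizes)."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (in_dim, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, out_dim)) * 0.1,
        "b2": jnp.zeros(out_dim),
    }

def field(params, pts):
    """One object field: 3D points -> a scalar (e.g. occupancy-like) value."""
    h = jnp.tanh(pts @ params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"]).squeeze(-1)

def loss(params, pts, target):
    """Placeholder squared-error objective for a single object."""
    return jnp.mean((field(params, pts) - target) ** 2)

num_objects, pts_per_object = 8, 128

# Stack per-object parameters along a leading "object" axis.
keys = jax.random.split(jax.random.PRNGKey(0), num_objects)
params = jax.vmap(init_mlp)(keys)

# Dummy per-object samples and targets (stand-ins for real depth/occupancy supervision).
pts = jax.random.normal(jax.random.PRNGKey(1), (num_objects, pts_per_object, 3))
targets = jnp.zeros((num_objects, pts_per_object))

# vmap over the object axis: gradients for every object MLP in one fused call.
per_object_grads = jax.vmap(jax.grad(loss))

@jax.jit
def train_step(params, pts, targets, lr=1e-2):
    grads = per_object_grads(params, pts, targets)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = train_step(params, pts, targets)  # one vectorised SGD step for all objects
```

Because every object model shares the same architecture, stacking parameters and mapping over the object axis lets a single kernel launch train all objects together rather than looping over them one by one.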

Cited by 3 publications (4 citation statements) · References 35 publications
“…Neural implicit representations infer scene semantics [4], [5], [37]-[42] jointly with geometry using a multi-layer perceptron or similar parametric model. These have been extended to dynamic scenes [43]. Neural feature fields [5], [38], [44], [45] are neural implicit representations which map continuous 3D coordinates to vector-valued features.…”
Section: Semantic Scene Representations
Mentioning confidence: 99%
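As a hedged illustration of the neural feature fields mentioned in this excerpt (not code from any of the cited works), a feature field can be sketched as an MLP that maps continuous 3D coordinates to vector-valued features; all parameter shapes and the 16-dimensional feature size are placeholder assumptions.

```python
import jax
import jax.numpy as jnp

def feature_field(params, xyz):
    """Map continuous 3D coordinates (N, 3) to vector-valued features (N, D)."""
    h = jax.nn.relu(xyz @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

# Placeholder parameter shapes: 3 -> 64 hidden -> 16-dimensional feature.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = {
    "w1": jax.random.normal(k1, (3, 64)) * 0.1, "b1": jnp.zeros(64),
    "w2": jax.random.normal(k2, (64, 16)) * 0.1, "b2": jnp.zeros(16),
}

queries = jnp.zeros((5, 3))                 # five 3D query points
features = feature_field(params, queries)   # -> (5, 16) feature vectors
```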
“…For the time being, our method is limited to static scenes. Dealing with moving objects within scenes remains an open problem, but promising recent research [43] suggests that extending neural implicit representations to dynamic scenes might be feasible.…”
Section: Query Performance
Mentioning confidence: 99%
“…The work most similar to ours is [37], but it focuses on analyzing the effect of observation quality on reconstruction results. In addition, a concurrent work [38] also models objects separately while reconstructing the scene, but it uses RGB-D observations.…”
Section: B. NeRFs and NeRF-Based SLAM
Mentioning confidence: 99%
“…Finally, outside of the point cloud reconstruction realm, the recent popularity of Neural Radiance Fields [Mildenhall et al. 2021] has also given rise to uncertainty-driven approaches for next-best-view planning in RGB multi-view representations (see, e.g., [Jin et al. 2023; Kong et al. 2023; Smith et al. 2022; Sucar et al. 2021]).…”
Section: Next-Best-View Planning
Mentioning confidence: 99%