2018
DOI: 10.1109/jproc.2018.2856739
Navigating the Landscape for Real-Time Localization and Mapping for Robotics and Virtual and Augmented Reality

Abstract: Visual understanding of 3D environments in real time, at low power, is a huge computational challenge. Often referred to as SLAM (Simultaneous Localisation and Mapping), it is central to applications spanning domestic and industrial robotics, autonomous vehicles, virtual and augmented reality. This paper describes the results of a major research effort to assemble the algorithms, architectures, tools, and systems software needed to enable delivery of SLAM, by supporting applications specialists in selecting and…

Cited by 47 publications (21 citation statements)
References 94 publications
“…Similar benchmarking works have been performed in [5] and [6]. As the application of SLAM algorithms in robotics and computer vision is growing, it is becoming apparent that a more sophisticated approach to benchmarking is needed [7]. In this paper we develop ideas from statistics to propose novel metrics to label datasets in terms of the motion, structure, and appearance qualities which are important to SLAM performance.…”
Section: Introduction
confidence: 94%
“…Similar to SemanticFusion, ORB-SLAM2-CNN [53] is based on ORB-SLAM2 [42], projecting the segmentation produced by a modified version of MobileNet [29] to label the keypoints of the ORB-SLAM2-generated map. Thus, the key difference from SemanticFusion is that this algorithm produces a labelled sparse map, rather than a dense one.…”
Section: Semantic SLAM
confidence: 99%
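The label-projection idea in the excerpt above can be sketched as follows. This is a minimal illustration, not the ORB-SLAM2-CNN implementation: the function name, the toy mask, and the keypoint coordinates are all hypothetical, and real systems project 3D map points through the camera model rather than reading raw pixel coordinates.

```python
import numpy as np

def label_keypoints(seg_mask, keypoints):
    """Assign each sparse keypoint the class ID of the segmentation-mask
    pixel it falls on (hypothetical helper, not the paper's code).

    seg_mask:  (H, W) array of per-pixel class IDs from a CNN.
    keypoints: (N, 2) array of (u, v) pixel coordinates.
    """
    labels = []
    for u, v in keypoints:
        # Index the mask at the keypoint's image location (row = v, col = u).
        labels.append(int(seg_mask[int(v), int(u)]))
    return labels

# Toy example: a 4x4 mask where the right half is class 1.
mask = np.zeros((4, 4), dtype=int)
mask[:, 2:] = 1
kps = np.array([[0, 0], [3, 1]])     # (u, v) keypoint coordinates
print(label_keypoints(mask, kps))    # -> [0, 1]
```

The sparse map then carries one label per keypoint, which is far cheaper to store and update than the per-surfel label distributions of a dense system such as SemanticFusion.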
“…• Enhancements that allow joint evaluation of quality for both reconstruction and semantic segmentation, using the NYU RGB-Dv2 [44] and ScanNet [8] datasets. This is demonstrated on two systems: SemanticFusion [40] and ORB-SLAM2-CNN [53], which construct labelled dense and sparse scene maps, respectively.…”
Section: Introduction
confidence: 99%
“…one second by allowing the neural net to detect a product on as many images as it can process in this time. We can use spatial information from algorithms such as SLAM [2,29,37], video object tracking [20,24] or optical flow [15,28] to track the position over multiple frames. The prediction scores from those multiple frames are then average-pooled to choose the maximum-confidence prediction across all frames (within one second).…”
Section: RQ3
confidence: 99%
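The average-pooling scheme described in this last excerpt can be sketched as follows; the function name and the toy scores are illustrative assumptions, not code from the cited paper:

```python
import numpy as np

def pooled_prediction(frame_scores):
    """Average per-class confidence scores over all frames observed in the
    time window, then return the class with the highest pooled score.

    frame_scores: (T, C) array - T frames, C per-class confidences.
    Returns (best_class, pooled_confidence).
    """
    pooled = np.mean(frame_scores, axis=0)   # average-pool across frames
    best = int(np.argmax(pooled))            # maximum-confidence class
    return best, float(pooled[best])

# Three frames, two product classes: per-frame scores are noisy,
# but pooling across frames stabilises the decision.
scores = np.array([[0.6, 0.4],
                   [0.3, 0.7],
                   [0.8, 0.2]])
print(pooled_prediction(scores))  # class 0 wins on the pooled average
```

Pooling over a one-second window trades a little latency for robustness: a single misclassified frame no longer flips the final prediction.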