2018
DOI: 10.1111/cgf.13386

State of the Art on 3D Reconstruction with RGB‐D Cameras

Abstract: The advent of affordable consumer grade RGB‐D cameras has brought about a profound advancement of visual scene reconstruction methods. Both computer graphics and computer vision researchers spend significant effort to develop entirely new algorithms to capture comprehensive shape models of static and dynamic scenes with RGB‐D cameras. This led to significant advances of the state of the art along several dimensions. Some methods achieve very high reconstruction detail, despite limited sensor resolution. Others…


Cited by 256 publications (145 citation statements)
References 226 publications
“…Regarding visual SLAM, many open‐source approaches exist but not many can be easily used on a robot (consult Zollhöfer et al () for a review on 3D reconstruction focused approaches). For navigation, to avoid dealing with scale ambiguities, we limit our review to approaches able to estimate the real scale of the environment while mapping (e.g., with stereo and RGB‐D cameras or with visual–inertial odometry), thus excluding structure from motion or monocular SLAM approaches like parallel tracking and mapping (PTAM) (Klein & Murray, ), semi‐direct visual odometry (SVO) (Forster, Pizzoli, & Scaramuzza, ), REgularized MOnocular Depth Estimation (REMODE) (Pizzoli, Forster, & Scaramuzza, ), DT‐SLAM (Herrera, Kim, Kannala, Pulli, & Heikkilä, ), large‐scale direct monocular SLAM (LSD‐SLAM) (Engel, Schöps, & Cremers, ) or oriented FAST and rotated BRIEF (ORB)‐SLAM (Mur‐Artal, Montiel, & Tardos, ).…”
Section: Popular SLAM Approaches Available on ROS
confidence: 99%
“…As universal 3D representations, 3D point clouds "can represent almost any type of physical object, site, landscape, geographic region, or infrastructure-at all scales and with any precision" as Richter (2018) states, who discusses algorithms and data structures for out-of-core processing, analysing, and classifying of 3D point clouds. To acquire 3D point clouds, various technologies can be applied including airborne or terrestrial laser scanning, mobile mapping, RGB-D cameras (Zollhöfer et al 2018), image matching, or multi-beam echo sounding.…”
Section: 3D Point Clouds
confidence: 99%
“…It still remains a challenge to obtain accurate depth for casual videos using portable devices. A survey on RGBD camera is written by Zollhöfer [Zollhöfer et al 2018]. Current high-end smartphones such as iPhone X supports depth measurement using dual-pixels and dedicated post-processing to generate smooth, edge-preserving depth maps.…”
Section: RVR Evaluation
confidence: 99%