2020
DOI: 10.3390/rs12162624

Abstract: The automated 3D modeling of indoor spaces is a rapidly advancing field, in which recent developments have made the modeling process more accessible to consumers by lowering the cost of instruments and offering highly automated services for 3D model creation. We compared the performance of three low-cost sensor systems: one RGB-D camera, one low-end terrestrial laser scanner (TLS), and one panoramic camera, using a cloud-based processing service to automatically create mesh models and point clouds, evaluating…
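The truncated abstract mentions evaluating the automatically created mesh models and point clouds. A common generic way to quantify the geometric accuracy of such outputs against a higher-grade reference scan is the nearest-neighbour cloud-to-cloud distance. The sketch below (Python with NumPy/SciPy, synthetic data) illustrates the idea; it is a minimal example of this standard metric, not the paper's actual evaluation protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(test_points, reference_points):
    """For each point in the test cloud, find the nearest point in the
    reference cloud and report the RMSE of those distances. A generic
    accuracy metric, not necessarily the protocol used in the paper."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points)  # nearest-neighbour distances
    return np.sqrt(np.mean(distances ** 2))

# Toy example: two samplings of the same 5 m x 5 m plane, one with 1 cm noise
ref = np.random.rand(10000, 3) * [5.0, 5.0, 0.0]
test = ref + np.random.normal(scale=0.01, size=ref.shape)
print(f"cloud-to-cloud RMSE: {cloud_to_cloud_rmse(test, ref):.4f} m")
```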

Cited by 10 publications (6 citation statements)
References 47 publications (81 reference statements)
“…The study aims to evaluate the quality, accuracy and usability of these methods as digital twin models. Terrestrial laser scanners (TLS) provide good-quality point clouds, but data collection requires careful planning and is time-consuming [13]. This is especially true in indoor spaces, where visibility is restricted by walls and other clutter that considerably raises the number of scanning locations needed.…”
Section: Methods (mentioning, confidence: 99%)
“…Current photogrammetric software can automatically reconstruct 3D mesh models (Furukawa, Curless, Seitz, & Szeliski, 2009; Jancosek & Pajdla, 2011; Romanoni, Delaunoy, Pollefeys, & Matteucci, 2016). For indoor modelling, various photogrammetric methods have been used, such as 3D mapping systems (El-Hakim, Boulanger, Blais, & Beraldin, 1997), videogrammetry-based 3D modelling (Haggrén & Mattila, 1997), structured indoor modelling (Ikehata, Yang, & Furukawa, 2015), cloud-based indoor 3D modelling (Ingman, Virtanen, Vaaja, & Hyyppä, 2020) and other indoor measuring methods (Georgantas, Brédif, & Pierrot-Desseilligny, 2012; Lehtola et al., 2017; León-Vega & Rodríguez-Laitón, 2019). In addition, high dynamic range (HDR) photogrammetry has been used for luminance mapping of the sky and the sun (Cai, 2015), and a laser-scanned point cloud has been coloured with luminance values in a nighttime road environment (Vaaja et al., 2015; Vaaja et al., 2018).…”
Section: Related Work (mentioning, confidence: 99%)
“…Moreover, the sensor has been the basis of other RGB-D cameras such as the Asus Xtion [13], Orbbec Astra 3D [14] or Occipital Structure Sensor [15]. All of them are used in many high-accuracy applications, including indoor 3D modeling [16,17,18,19], simultaneous localization and mapping (SLAM) [20,21,22], or augmented reality [23], which require a rigorous calibration and error modeling of RGB-D camera data to produce high-quality information [24,25].…”
Section: Introduction (mentioning, confidence: 99%)
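The statement above notes that high-accuracy uses of RGB-D data require rigorous calibration and error modeling. As background on what that data looks like downstream, the sketch below shows the standard pinhole back-projection that turns a calibrated depth image into a camera-frame point cloud. The intrinsic values are nominal Kinect-v1-style numbers chosen purely for illustration, not parameters from any of the cited works.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to camera-frame XYZ points
    using an ideal pinhole model. Real RGB-D sensors additionally need
    the lens-distortion and depth-error calibration discussed above."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example: a flat 2 m depth map with nominal Kinect-v1-style intrinsics
# (fx = fy = 525, principal point at the image centre; assumed values)
cloud = depth_to_point_cloud(np.full((480, 640), 2.0), 525.0, 525.0, 319.5, 239.5)
print(cloud.shape)  # (307200, 3)
```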
“…However, since the launch of low-price depth cameras in 2011 [3], the number of users of low-price depth cameras has gradually increased, and the application of depth cameras has become increasingly common. The combination of depth and RGB images can be used in action recognition [4], simultaneous localization and mapping (SLAM) [5], 3D reconstruction [6], augmented reality (AR) [7], and other geographic information applications.…”
Section: Introduction (mentioning, confidence: 99%)
“…However, since the launch of Kinect’s low-price depth cameras in 2011 [1], the number of users of low-price depth cameras has gradually increased, and the application of depth cameras has become increasingly common. The combination of depth and RGB images can be used in action recognition [2,3,4], simultaneous localization and mapping (SLAM) [5,6,7], 3D reconstruction [8,9,10,11], augmented reality [12,13,14] and other geographic information applications. The resolution of a depth image collected by the early Kinect depth sensor was only 640 × 480 pixels; hence, the data were not ideal.…”
Section: Introduction (mentioning, confidence: 99%)