2016
DOI: 10.1007/978-3-319-47289-8_17
Multimodal Location Based Services—Semantic 3D City Data as Virtual and Augmented Reality

Cited by 34 publications (27 citation statements)
References 30 publications
“…For example, SolarB performs city-wide computations of the solar radiation received by buildings' wall and roof surfaces [12], CityStats performs socio-demographic clustering for energy usage [13], CityBEM calculates the monthly cooling and heating energy needs of buildings [6], and a heat demand and heat loss model calculates annual heat demand [11]. Furthermore, multiple visualization interfaces have been realized, including 3D web mapping (three.js, Cesium), augmented reality (AR), virtual reality (VR), and mobile 3D mapping (Glob3m API) [14][15]. This paper will focus on the generation of additional semantic 3D city models of other cities from free and open data that can be used by the above-mentioned software infrastructure.…”
Section: Results (mentioning; confidence: 99%)
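The excerpt above mentions per-surface solar-radiation computations such as SolarB's. As an illustrative sketch only (not SolarB's actual method, which also handles diffuse radiation and shadowing), the direct irradiance received by a tilted wall or roof surface can be estimated from the sun position and the surface orientation:

```python
import math

def incident_irradiance(dni, sun_alt_deg, sun_az_deg, tilt_deg, az_deg):
    """Direct irradiance (W/m^2) received by a tilted building surface.

    dni: direct normal irradiance; sun_alt_deg/sun_az_deg: solar position;
    tilt_deg/az_deg: surface tilt from horizontal and facing azimuth.
    Illustrative only -- a full tool would add diffuse and reflected
    components and inter-building shadowing.
    """
    alt = math.radians(sun_alt_deg)
    saz = math.radians(sun_az_deg)
    tilt = math.radians(tilt_deg)
    faz = math.radians(az_deg)
    # Cosine of the angle between the sun direction and the surface normal
    cos_inc = (math.sin(alt) * math.cos(tilt)
               + math.cos(alt) * math.sin(tilt) * math.cos(saz - faz))
    # Surfaces facing away from the sun receive no direct irradiance
    return dni * max(cos_inc, 0.0)
```

For example, a flat roof under a zenith sun receives the full direct normal irradiance, while a south-facing wall receives it only when the sun is low and due south.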
“…For LoD1 models, the average height value from the buffer analysis is taken to define the building height; for LoD2 models, a ridgeline is identified in the LiDAR point cloud and average values along this ridge are taken to define the roof height and shape. Moreover, a custom script13 (genCityGML, available on GitHub14) is developed to generate LoD1 and LoD2 CityGML models from the 2D geometries by adding height information for ground and roof surfaces. The script is based on the Random3Dcity15 tool developed by TU Delft.…”
(13: https://www.altergeosistemas.com/blog/2015/04/30/extrayendo-la-altura-de-losedificios-a-partir-de-datos-LIDAR/)
Section: Fig. 3, Four Methods To Generate CityGML Data From Free and Open Data (mentioning; confidence: 99%)
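The LoD1 step described above (extrude a 2D footprint to a block model whose roof height is the mean of LiDAR samples from a buffer around the footprint) can be sketched as follows. This is a simplified illustration, not the genCityGML script itself; serializing the resulting rings to CityGML XML is omitted:

```python
from statistics import mean

def extrude_lod1(footprint, ground_z, roof_samples):
    """Extrude a 2D footprint to an LoD1 block model.

    footprint: [(x, y), ...] exterior ring of the building footprint;
    roof_samples: LiDAR z-values from a buffer around the footprint,
    whose mean defines the roof height (as in the excerpt).
    Returns (walls, roof): one 3D quad ring per footprint edge, plus
    the roof polygon lifted to the averaged height.
    """
    roof_z = mean(roof_samples)
    walls = []
    n = len(footprint)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        # Vertical quad from ground to roof along this footprint edge
        walls.append([(x1, y1, ground_z), (x2, y2, ground_z),
                      (x2, y2, roof_z), (x1, y1, roof_z)])
    roof = [(x, y, roof_z) for x, y in footprint]
    return walls, roof
```

An LoD2 variant would instead split the roof polygon at the detected ridgeline and lift the ridge and eave vertices to different heights.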
“…Most underground-related applications are thought to be useful only in the context of AR, and when designed for geological purposes (Lee et al. [33]) or for the management of underground power and water utilities [3,27,29,34]. However, for multimodal applications there is the example of [35], in which a single application features multiple environments, including virtual globes and VR or AR viewers. Their research shows that integrating underground visualization is still an open problem.…”
Section: Related Work (mentioning; confidence: 99%)
“…Handheld accelerometers are known as inertial sensors because they exploit the property of inertia, i.e., the resistance to a change in momentum, to sense angular motion (in the case of depth-camera sensing in an AR environment) and changes in linear motion on mobile devices [10]. Furthermore, inclinometers are also inertial sensors; they measure the orientation of the acceleration vector due to gravity [11]. Accordingly, this paper defines inertial sensors as independent of any external references or infrastructure, apart from the ubiquitous gravity field.…”
Section: Real Environment, AR Environment (mentioning; confidence: 99%)
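The inclinometer principle mentioned above (recovering orientation from the gravity vector) can be sketched as follows. When the device is at rest, the accelerometer measures only gravity, so tilt falls out of the components of the acceleration vector; the axis convention below is one common choice, not tied to any specific sensor:

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (degrees) from a static accelerometer reading.

    (ax, ay, az) is the measured acceleration in the device frame; at
    rest this is the gravity vector, so its orientation gives the tilt.
    Assumes x forward, y right, z down -- an illustrative convention.
    """
    # Pitch: rotation about the lateral axis
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Roll: rotation about the longitudinal axis
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

A device lying flat (gravity entirely on z) reads zero pitch and roll; rolling it 90 degrees moves gravity onto the y axis. Note this only works while the device is static: during motion, linear acceleration corrupts the gravity estimate, which is why real AR trackers fuse accelerometer data with gyroscopes.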