2009
DOI: 10.1080/13658810802001313
Creating and delivering augmented scenes

Abstract: An augmented scene delivery system (ASDS) creates and distributes annotated cartographic products in real time from captured still imagery. An ASDS applies augmented reality technology to the visualization of unrestricted exterior viewsheds through an adaptive, perspective-based surface resampling model. This paper describes an ASDS testing platform, introduces a client/server model for distributing augmented scenes over the Internet, presents a linear-time algorithm for resampling dense elevation models for p…
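The abstract's "adaptive, perspective-based surface resampling" idea can be illustrated with a single linear pass over an elevation profile that keeps samples at a density proportional to their projected angular size, i.e. coarser with distance from the viewpoint. This is a minimal sketch of the general idea only; the function name, parameters, and spacing rule are assumptions for illustration, not the paper's algorithm.

```python
def perspective_resample(elevations, cell_size, viewer_dist0, angular_res):
    """Keep one elevation sample whenever the accumulated ground
    distance exceeds the length subtended by `angular_res` radians
    at that range, so sample spacing grows with distance.
    Illustrative sketch only, not the paper's published algorithm."""
    kept = []
    next_keep = 0.0  # ground distance at which the next sample is kept
    for i, z in enumerate(elevations):
        d = viewer_dist0 + i * cell_size  # distance from the viewer
        if i * cell_size >= next_keep:
            kept.append((i, z))
            # spacing grows linearly with distance: d * angular_res
            next_keep = i * cell_size + d * angular_res
    return kept
```

Each input cell is visited exactly once, so the pass is linear in the number of elevation samples, consistent with the linear-time claim in the abstract.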

Cited by 7 publications (2 citation statements)
References 17 publications
“…Step 1: Identification of the objects that require fusion in the view frustum is of fundamental importance in ensuring that the videos are accurately fused with the 3-D GIS scene. First, a virtual depth camera is set up at the camera's position such that the orientation of the virtual depth camera is configured according to the camera's coordinates and the distance from the clipping plane [18]. Second, all rendered objects in the 3-D scene within the user's FOV are processed (this may involve hundreds to thousands of objects) to select objects for fusing that are within the view frustum of the depth camera.…”
Section: Current Video Projection-based Methods for the Fusion of Videos with Virtual Environments
confidence: 99%
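The citing passage above describes culling the scene to the depth camera's view frustum before fusion. The sketch below illustrates that selection step with a simple conical approximation of the frustum; the function names, the cone approximation, and the point-in-frustum test are assumptions for illustration, not the cited paper's implementation.

```python
import math

def in_view_frustum(cam_pos, cam_dir, fov_deg, near, far, point):
    """Return True if `point` lies inside a symmetric conical
    approximation of the depth camera's view frustum.
    `cam_dir` is assumed to be a unit vector."""
    # Vector from the camera to the candidate object's position.
    v = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist < near or dist > far:
        return False  # outside the near/far clipping planes
    # Cosine of the angle between the view direction and the object.
    dot = sum(a * b for a, b in zip(v, cam_dir))
    cos_half_fov = math.cos(math.radians(fov_deg) / 2)
    return dot / dist >= cos_half_fov  # inside the cone of view

def select_fusion_objects(objects, cam_pos, cam_dir, fov_deg, near, far):
    """Cull a scene: keep only objects whose positions fall in the frustum."""
    return [o for o in objects
            if in_view_frustum(cam_pos, cam_dir, fov_deg, near, far, o)]
```

In a real 3-D GIS engine this test would use the camera's full projection matrix and object bounding volumes rather than point positions, but the culling role is the same: only the surviving objects are passed to the video-fusion stage.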
“…The fusion techniques of videos with virtual 3-D scenes generally include methods based on video projection [4,11,19,21], video image deformation [10] and video image reconstruction [30]. In particular, the video projection-based approach has become one of the most common approaches for the fusion of 3-D scenes with real images [3,18] because it requires neither manual intervention and offline fusion [25] nor the predetermination of the vertices and textures to be projected. Moreover, this approach has a high degree of fidelity [9,15].…”
Section: Introduction
confidence: 99%