2010
DOI: 10.1016/j.future.2009.12.007
Giga-stack: A method for visualizing giga-pixel layered imagery on massively tiled displays

Cited by 25 publications (10 citation statements)
References 11 publications
“…Due to the limits on texture size in the main memory and GPU's texture memory, huge images generally need to be segmented into smaller pieces in a preprocess stage, and a collection of the image pieces can also be pre-generated in different resolutions to accelerate rendering speed [20], [55], [56]. An efficient solution should provide ways of distributing texture data, minimizing the amount of data that must be handled by any particular node, and caching it (i.e., out-of-core approaches such as [57]) to reduce the network traffic.…”
Section: Imagery and Multimedia Viewing
confidence: 99%
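The tile-and-pyramid preprocessing described above can be sketched in a few lines. The 256-pixel tile size and the function names are illustrative assumptions, not details taken from any of the cited systems:

```python
import math

def pyramid_levels(width, height, tile=256):
    # Levels needed so the coarsest level fits in a single tile
    # (level 0 is full resolution; each level halves both dimensions).
    return max(1, math.ceil(math.log2(max(width, height) / tile)) + 1)

def tiles_at_level(width, height, level, tile=256):
    # Tile grid (columns, rows) covering the image at a given level.
    w = math.ceil(width / 2 ** level)
    h = math.ceil(height / 2 ** level)
    return math.ceil(w / tile), math.ceil(h / tile)

# A hypothetical 1-gigapixel image: 40000 x 25000 pixels.
print(pyramid_levels(40000, 25000))     # 9 levels
print(tiles_at_level(40000, 25000, 0))  # (157, 98) tiles at full resolution
```

A renderer then fetches only the tiles whose level and indices intersect the current viewport, which is what keeps per-node memory use and network traffic bounded regardless of total image size.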
“…Large images are converted into pyramidal tiled TIFF files through Vips [29]. These images are then stored on, and subsequently accessed via, a network-mounted drive, and data is rendered on a per-node basis in a similar fashion to [31]. The framebuffers described in [29] are now created on a per-monitor basis, allowing for pixel-accurate interrogation on the large display environment as described in the section above.…”
Section: Localized Interrogation
confidence: 99%
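Per-monitor rendering of a tiled image amounts to a viewport-to-tile mapping. The sketch below uses a hypothetical geometry (256-pixel tiles, 1920 × 1080 monitors in global display coordinates), not the configuration of the cited system:

```python
def tiles_for_monitor(mx, my, mon_w, mon_h, tile=256):
    # Inclusive ranges of tile indices a monitor's viewport overlaps,
    # given the monitor's origin (mx, my) in global display coordinates.
    cols = (mx // tile, (mx + mon_w - 1) // tile)
    rows = (my // tile, (my + mon_h - 1) // tile)
    return cols, rows

# Second monitor in the top row of a hypothetical tiled wall.
print(tiles_for_monitor(1920, 0, 1920, 1080))  # ((7, 14), (0, 4))
```

Each node decodes only the tiles inside its own monitors' ranges, so the data any one node handles stays independent of the overall wall resolution.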
“…The recent adoption of 4kUHD (3840 × 2160) video resolution, and its growing popularity, now make it necessary to investigate the possibilities of streaming such video in a wireless environment. Video display at such resolutions can be achieved with the aid of SAGE [1,2] (scalable adaptive graphics environment), tiled displays (using more than one display output unit to produce the required resolution), CAVE [3] (cave automatic virtual environment, a one-to-many presentation system), and, more recently, commercially produced 4kUHD television sets. Streaming at this resolution is currently done using compressed formats [4][5][6][7], as the minimum requirement for uncompressed UHD video starts at 2.39 Gb/s for 8-bit 4:2:0 subsampling at 24 frames per second.…”
Section: Introduction
confidence: 99%
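The 2.39 Gb/s figure quoted above follows directly from the frame geometry; as a quick check (8-bit 4:2:0 carries 4 luma plus 2 chroma samples per 4 pixels, i.e. 12 bits per pixel on average):

```python
def uncompressed_bitrate(width, height, fps, bits_per_pixel):
    # Raw video bitrate in bits per second.
    return width * height * bits_per_pixel * fps

# 4kUHD at 24 fps, 8-bit 4:2:0 subsampling (12 bits/pixel average).
rate = uncompressed_bitrate(3840, 2160, 24, 12)
print(rate / 1e9)  # ~2.39 Gb/s, matching the minimum quoted above
```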