2003
DOI: 10.1109/tvcg.2003.1260746

External memory management and simplification of huge meshes

Abstract: Very large triangle meshes, i.e. meshes composed of millions of faces, are becoming common in many applications. Obviously, processing, rendering, transmission and archival of these meshes are not simple tasks. Mesh simplification and LOD management are rather mature technologies that in many cases can manage complex data efficiently. But only a few available systems can manage meshes characterized by a huge size: RAM size is often a severe bottleneck. In this paper we present a data structure called Octree-based…

Cited by 127 publications (65 citation statements)
References 24 publications
“…Beside this application, processing on the data is also performed – for example Poisson surface reconstruction as in [Bolitho et al, 2007] or mesh simplification [Cignoni et al, 2003]. …”
Section: Out-of-core (mentioning)
confidence: 99%
“…The basic common structures are trees (including B-trees [22] and their derivatives [23]). Cignoni et al [7] proposed an adapted version of the octree [20], [21] devoted to generic out-of-core algorithms for meshes, which they called the Octree-Based External Memory Mesh. This structure is based on recursively decomposing the cube enclosing the triangulation until the desired size of the elementary cubes (in number of vertices per cube, with vertices indexed locally within each cube) is reached.…”
Section: Triangle Soup (mentioning)
confidence: 99%
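The recursive decomposition described in that excerpt can be sketched as follows. This is a minimal illustration of the idea only: the class name, the per-cube vertex budget, and the local index are hypothetical, and the authors' actual Octree-Based External Memory Mesh additionally pages cubes to and from disk, which is omitted here.

```python
# Sketch: recursive octree subdivision with a per-leaf vertex budget.
# MAX_VERTS and OctreeNode are illustrative names, not the paper's API.

MAX_VERTS = 4  # desired maximum number of vertices per elementary cube


class OctreeNode:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # axis-aligned bounding cube corners
        self.children = None       # eight sub-cubes once this node splits
        self.verts = []            # vertices stored locally in this leaf

    def insert(self, p):
        """Route a vertex to its leaf cube, splitting when the budget overflows."""
        if self.children is not None:
            self._child_for(p).insert(p)
        elif len(self.verts) < MAX_VERTS:
            self.verts.append(p)   # local index = position in this list
        else:
            self._split()
            self.insert(p)

    def _split(self):
        """Create the eight sub-cubes and redistribute the local vertices."""
        mid = tuple((a + b) / 2 for a, b in zip(self.lo, self.hi))
        self.children = []
        for ix in range(2):
            for iy in range(2):
                for iz in range(2):
                    bits = (ix, iy, iz)
                    lo = tuple(self.lo[d] if bits[d] == 0 else mid[d] for d in range(3))
                    hi = tuple(mid[d] if bits[d] == 0 else self.hi[d] for d in range(3))
                    self.children.append(OctreeNode(lo, hi))
        old, self.verts = self.verts, []
        for p in old:
            self.insert(p)         # re-route into the sub-cubes

    def _child_for(self, p):
        """Pick the sub-cube containing p (index = ix*4 + iy*2 + iz)."""
        mid = tuple((a + b) / 2 for a, b in zip(self.lo, self.hi))
        idx = 0
        for d in range(3):
            if p[d] >= mid[d]:
                idx |= 4 >> d
        return self.children[idx]


def count_vertices(node):
    """Total vertices stored in the leaves below node."""
    if node.children is None:
        return len(node.verts)
    return sum(count_vertices(c) for c in node.children)
```

Note that this sketch assumes distinct vertex positions; many coincident vertices would recurse indefinitely, which a real implementation guards against with a depth limit.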
“…Algorithms have been designed that guarantee the optimal number of disk I/O operations for specific problems [2,3], at the cost of greater programming complexity. Data layouts have been proposed that are I/O-efficient when accessed with sufficient locality [4,5], at the cost of an expensive initial re-ordering step. Instead, we advocate a streaming paradigm for processing large data sets, wherein the order in which computations are performed is dictated by the data's order [6,7].…”
Section: Introduction (mentioning)
confidence: 99%
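The streaming paradigm mentioned in that excerpt can be illustrated with a small sketch: computations fold over the triangles in whatever order the file delivers them, so only a constant-size accumulator ever resides in memory. The function and triangle representation below are assumptions for illustration, not the API of the streaming-mesh systems cited as [6,7].

```python
# Sketch: streaming computation over a triangle stream.
# The mesh never needs to fit in RAM; we keep only a fixed-size accumulator.

def stream_bbox(triangles):
    """Fold a stream of triangles into (count, bbox_lo, bbox_hi).

    Each triangle is three (x, y, z) vertex tuples; `triangles` may be
    any iterable, e.g. a generator reading lazily from disk.
    """
    lo = [float("inf")] * 3
    hi = [float("-inf")] * 3
    n = 0
    for tri in triangles:
        n += 1
        for v in tri:
            for d in range(3):
                lo[d] = min(lo[d], v[d])
                hi[d] = max(hi[d], v[d])
    return n, tuple(lo), tuple(hi)
```

Because the fold consumes an iterator, the same code works unchanged on a generator that parses a multi-gigabyte triangle soup one record at a time.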