Abstract: In order to provide a highly performant rendering system while maintaining a scene graph structure with a high level of abstraction, we introduce improved rendering caches that can be updated incrementally without any scene graph traversal. The basis of this novel system is a dependency graph, synthesized from the scene graph, that links all sources of changes to the affected parts of the rendering caches. By using and extending concepts from incremental computation we minimize the computatio…
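The core idea of the abstract — a dependency graph that routes changes directly to the affected cache entries, so no scene graph traversal is needed to find stale state — can be illustrated with a minimal sketch. All class and function names here are hypothetical; the actual system described in the abstract is far more elaborate.

```python
# Minimal sketch of dependency-graph-driven cache invalidation (hypothetical
# names, not the paper's API). Each cache entry records which inputs it
# depends on; changing an input dirties only the dependent entries, and
# recomputation happens lazily on the next read.

class Input:
    def __init__(self, value):
        self.value = value
        self.dependents = []          # cache entries affected by this input

    def set(self, value):
        self.value = value
        for entry in self.dependents:
            entry.dirty = True        # incremental invalidation, no traversal

class CacheEntry:
    def __init__(self, compute, inputs):
        self.compute = compute
        self.inputs = inputs
        self.dirty = True
        self.value = None
        for i in inputs:
            i.dependents.append(self)  # synthesize the dependency edges

    def get(self):
        if self.dirty:                 # recompute only when invalidated
            self.value = self.compute(*(i.value for i in self.inputs))
            self.dirty = False
        return self.value

pos = Input((1.0, 2.0))
scale = Input(2.0)
transform = CacheEntry(lambda p, s: (p[0] * s, p[1] * s), [pos, scale])
transform.get()          # computed once: (2.0, 4.0)
scale.set(3.0)           # only this entry is marked dirty
transform.get()          # recomputed incrementally: (3.0, 6.0)
```

The point of the sketch is that invalidation cost is proportional to the number of affected cache entries, not to the size of the scene graph.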
“…Previous iterations of the discussed visualization solution have been presented in [18], [19]. The system has since been completely rebuilt using the latest version of the Aardvark visualization platform [20] having various implications on performance, programmability, and maintainability [21][22][23].…”
Tunnel surveys making use of photogrammetric three-dimensional (3D) tunnel reconstruction reach resolutions in the millimeter range. Classical big-data visualization approaches display point clouds only, neglecting the considerable resolution difference between structure and texture. The article suggests a data structure that separates structural and textural resolution by a regular grid on the unwrapped design surface, combined with a UV-mapping technique as commonly used in computer graphics. For real-time rendering of huge multiscale data sets, the result of photogrammetric commercial-off-the-shelf reconstruction is transformed into a proprietary hierarchical data structure. This makes it possible to load only the currently relevant parts of the tunnel surface from the hard drive, and to upload and render only the currently adequate levels of detail on the graphics card, for seamless exploration of high-resolution geometric and image 3D tunnel data of arbitrary length. The solution allows for smooth interactive analysis and annotation, such as crack identification and mapping, inventory, deformation assessment, and dimensional measurements. Aspects of data generation are addressed and information is given about the data structure, with examples from entire tunnel 3D representations demonstrating the smooth behaviour of real-time rendering of huge data volumes at various scales on standard graphics hardware.
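The load-on-demand idea in this abstract — a regular grid of tiles over the unwrapped tunnel surface, where only nearby tiles are loaded and the level of detail degrades with distance — can be sketched as follows. This is purely illustrative; the tile size, range, and level-selection rule are assumptions, not the article's actual data structure.

```python
import math

# Illustrative sketch (not the article's data structure): tiles on a regular
# grid over the unwrapped tunnel surface. A tile is loaded only if it lies
# within view range, and its level of detail coarsens as distance grows,
# so arbitrarily long tunnels stay cheap to stream and render.

def visible_tiles(viewer_s, tunnel_length, tile_size=10.0, max_level=4,
                  view_range=100.0):
    """Return (tile_index, lod_level) pairs worth loading.

    viewer_s: viewer position along the tunnel axis (meters).
    Level 0 is the finest; each doubling of distance drops one level.
    """
    tiles = []
    n_tiles = math.ceil(tunnel_length / tile_size)
    for i in range(n_tiles):
        center = (i + 0.5) * tile_size
        dist = abs(center - viewer_s)
        if dist > view_range:
            continue                      # outside view range: not loaded
        level = min(max_level, int(math.log2(max(dist / tile_size, 1.0))))
        tiles.append((i, level))
    return tiles

# A viewer at 50 m into a 1000 m tunnel loads only the tiles within 100 m,
# with the two adjacent tiles at the finest level.
tiles = visible_tiles(50.0, 1000.0)
```

In a real system each (tile, level) pair would map to a file region on disk and a texture/mesh resolution on the GPU; the selection logic above is the part that keeps memory use bounded regardless of tunnel length.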
“…Instead of maintaining the number of ways a difference can be obtained, a lazy check is instead made on demand to determine whether a difference can be obtained from the unused integers. Lazy computation has been used in local search and constraint-based local search for quite a long time [14,17,28,10]. A lazy approach computes values only when they are actually needed.…”
All-interval series is a standard benchmark problem for constraint satisfaction search. An all-interval series of size n is a permutation of the integers [0, n) such that the differences between adjacent integers are a permutation of [1, n). Generating every such all-interval series of size n is an interesting challenge for the constraint community, and the problem is very difficult in terms of the size of the search space. Different approaches have been used to date to generate all the solutions of AIS, but the search space that must be explored remains huge. In this paper, we present a constraint-directed backtracking-based tree search algorithm that performs efficient lazy checking rather than immediate constraint propagation. Moreover, we prove several key properties of all-interval series that help prune the search space significantly; the reduced search space results in fewer backtracks. We also present scalable parallel versions of our algorithm that can exploit multi-core processors and even multiple computer systems. Our new algorithm generates all the solutions of size up to 27, while a satisfiability-based state-of-the-art approach generates all solutions only up to size 24.
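The definition in this abstract is compact enough to make executable. A small sketch, taking "differences" to mean absolute differences of adjacent elements (the usual convention for this benchmark); the brute-force generator is for illustration only and bears no resemblance to the paper's constraint-directed search.

```python
from itertools import permutations

# An all-interval series of size n: a permutation of 0..n-1 whose absolute
# adjacent differences form a permutation of 1..n-1.

def is_all_interval(series):
    n = len(series)
    diffs = {abs(a - b) for a, b in zip(series, series[1:])}
    return sorted(series) == list(range(n)) and diffs == set(range(1, n))

def all_interval_series(n):
    """Brute-force enumeration, usable only for tiny n; the paper's lazy
    checking and pruning are what make sizes up to 27 feasible."""
    return [list(p) for p in permutations(range(n)) if is_all_interval(p)]

# For n = 4 there are exactly four solutions, e.g. [0, 3, 1, 2]
# with adjacent differences 3, 2, 1.
```

The factorial growth of the permutation space is visible immediately: even n = 12 (the musically motivated case of all-interval twelve-tone rows) is out of reach for this brute-force form, which is why the pruning properties proved in the paper matter.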
“…However, there are also systems such as ours which seek simpler solutions to domain-specific incrementalization problems. In particular, C3's callsite caching mechanism was inspired in part by recent work in computer graphics on hierarchical render caches [20]. The Venture PPL features an algorithm to incrementally update a probabilistic execution trace in response to a random choice change [5].…”
Lightweight, source-to-source transformation approaches to implementing MCMC for probabilistic programming languages are popular for their simplicity, support of existing deterministic code, and ability to execute on existing fast runtimes [1]. However, they are also slow, requiring a complete re-execution of the program on every Metropolis Hastings proposal. We present a new extension to the lightweight approach, C3, which enables efficient, incrementalized re-execution of MH proposals. C3 is based on two core ideas: transforming probabilistic programs into continuation passing style (CPS), and caching the results of function calls. We show that on several common models, C3 reduces proposal runtime by 20-100x, in some cases reducing runtime complexity from linear in model size to constant. We also demonstrate nearly an order of magnitude speedup on a complex inverse procedural modeling application.
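The callsite-caching half of C3's design can be sketched in a few lines. This is a simplification under assumed names: the real system combines caching with a CPS transform and MH-specific trace bookkeeping, none of which appears here.

```python
# Sketch of callsite caching (hypothetical names, not C3's API). Calls are
# keyed by a stable callsite id plus arguments, so re-executing a program
# after a single proposal changes one value reuses every unchanged result.

class CallCache:
    def __init__(self):
        self.cache = {}
        self.hits = 0

    def call(self, site_id, fn, *args):
        key = (site_id, args)
        if key in self.cache:
            self.hits += 1            # unchanged call: reuse cached result
            return self.cache[key]
        result = fn(*args)
        self.cache[key] = result
        return result

def model(cache, xs):
    # each callsite gets a stable id so results survive re-execution
    return [cache.call(("square", i), lambda v: v * v, x)
            for i, x in enumerate(xs)]

cache = CallCache()
model(cache, [1, 2, 3, 4])   # initial run: four misses
model(cache, [1, 2, 9, 4])   # one "proposal" changes one value:
                             # three of four calls hit the cache
```

This is the mechanism behind the abstract's complexity claim: when a proposal touches one random choice, the work of re-execution scales with the changed portion of the trace rather than with model size.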