Cluster-based tiled display walls provide cost-effective, scalable displays with high resolution and a large display area. The software that drives them must also scale if arbitrarily large displays are to be built. Chromium is a popular software API used to construct such displays. Chromium transparently renders any OpenGL application to a tiled display by partitioning and sending individual OpenGL primitives to each client every frame. Visualization applications often deal with massive geometric data containing millions of primitives. Transmitting them every frame imposes huge network bandwidth requirements that adversely affect the scalability of the system. In this paper, we present Garuda, a client-server display wall framework that uses off-the-shelf hardware and a standard network. Garuda scales to large tile configurations and massive environments. It can transparently render any application built using the Open Scene Graph (OSG) API to a tiled display without any modification by the user. The Garuda server uses an object-based scene structure represented as a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls the scene graph against a hierarchy of frustums. The required parts of the scene graph are transmitted to the clients, which cache them to exploit interframe redundancy. A multicast-based protocol transmits the geometry, exploiting the spatial redundancy present in tiled display systems. A geometry-push philosophy at the server keeps the clients in sync with one another. Neither the server nor any client needs to render the entire scene, making the system suitable for interactive rendering of massive models. Transparent rendering is achieved by intercepting the cull, draw, and swap functions of OSG and replacing them with our own. We demonstrate the performance and scalability of the Garuda system for different display wall configurations.
We also show that the server and network loads grow sublinearly with the number of tiles, which makes our scheme suitable for constructing very large displays.
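The core culling idea described in the abstract above, testing each scene-graph object against a hierarchy of tile frustums so that one rejection prunes a whole subtree of tiles, can be illustrated with a toy sketch. This is not the paper's implementation: the class names are hypothetical, and real view frustums are replaced here by screen-space rectangles for brevity.

```python
# Toy sketch: cull one object against a hierarchy of display-tile "frustums".
# Tiles are modeled as screen-space rectangles; an internal node's rectangle
# is the union of its children's, so one rejection prunes a whole subtree.

class FrustumNode:
    def __init__(self, rect, children=(), tile_id=None):
        self.rect = rect            # (xmin, ymin, xmax, ymax)
        self.children = list(children)
        self.tile_id = tile_id      # set only at leaves (actual display tiles)

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def build_hierarchy(tile_rects):
    """Pair up nodes bottom-up until a single root remains."""
    nodes = [FrustumNode(r, tile_id=i) for i, r in enumerate(tile_rects)]
    while len(nodes) > 1:
        paired = []
        for i in range(0, len(nodes) - 1, 2):
            a, b = nodes[i], nodes[i + 1]
            paired.append(FrustumNode(union(a.rect, b.rect), (a, b)))
        if len(nodes) % 2:
            paired.append(nodes[-1])
        nodes = paired
    return nodes[0]

def overlaps(r, obj):
    return not (obj[2] < r[0] or obj[0] > r[2] or
                obj[3] < r[1] or obj[1] > r[3])

def cull(node, obj_rect, visible):
    if not overlaps(node.rect, obj_rect):
        return                      # rejects the whole subtree at once
    if node.tile_id is not None:
        visible.append(node.tile_id)
    for child in node.children:
        cull(child, obj_rect, visible)
```

For a 2x2 wall, an object overlapping only the bottom-left tile is tested against the root, two internal nodes, and at most a few leaves, rather than against all tiles individually; the saving grows with the tile count, which is the sublinear behavior the abstract claims.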
In this article, we present a parallel prioritized Jacobian-based inverse kinematics algorithm for multithreaded architectures. We solve damped least squares inverse kinematics using a parallel line search by identifying and sampling critical input parameters. Parallel competing execution paths are spawned for each parameter in order to select the optimum that minimizes the error criterion. Our algorithm is highly scalable and can handle complex articulated bodies at interactive frame rates. We show results on complex skeletons consisting of more than 600 degrees of freedom controlled by multiple end effectors. We implement the algorithm on both multicore and GPU architectures and demonstrate how the GPU can further exploit fine-grained parallelism not directly available on a multicore processor. Our implementations are 10 to 150 times faster than a state-of-the-art serial implementation while providing higher accuracy. We also demonstrate the scalability of the algorithm over multiple scenarios and explore the GPU implementation in detail.
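The damped least squares step at the heart of the method above can be sketched on a toy 2-link planar arm. This is a minimal serial illustration, not the article's parallel prioritized solver: the link lengths, damping factor, and function names are all assumptions, and the 2x2 system is inverted in closed form only because the toy Jacobian is 2x2.

```python
import math

# Toy damped-least-squares (DLS) IK step for a 2-link planar arm:
# dtheta = J^T (J J^T + lambda^2 I)^{-1} e,  where e is the end-effector error.

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(t1, t2):
    """Forward kinematics: end-effector position for joint angles t1, t2."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def jacobian(t1, t2):
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def dls_step(t1, t2, target, lam=0.1):
    x, y = fk(t1, t2)
    ex, ey = target[0] - x, target[1] - y          # task-space error e
    J = jacobian(t1, t2)
    # A = J J^T + lam^2 I is symmetric 2x2; invert it in closed form.
    a = J[0][0] ** 2 + J[0][1] ** 2 + lam ** 2
    b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
    d = J[1][0] ** 2 + J[1][1] ** 2 + lam ** 2
    det = a * d - b * b
    fx = ( d * ex - b * ey) / det                  # f = A^{-1} e
    fy = (-b * ex + a * ey) / det
    dt1 = J[0][0] * fx + J[1][0] * fy              # dtheta = J^T f
    dt2 = J[0][1] * fx + J[1][1] * fy
    return t1 + dt1, t2 + dt2
```

The damping term `lam` trades convergence speed for stability near singular poses; the article's line search over such parameters is what its parallel competing execution paths explore.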
Abstract—Displays have remained flat and passive amidst the many changes in their fundamental technologies. One natural step ahead is to create displays that merge seamlessly in shape and appearance with one's natural surroundings. In this paper, we present a system to design, render to, and build view-dependent multiplanar displays of arbitrary shape, built using planar polygonal facets. Our system provides high-quality, interactive rendering of 3D environments to a head-tracked viewer on arbitrary planar display shapes. We develop a novel rendering scheme that creates exact image and depth at each display facet. The facets thus align exactly at boundaries, without the inconsistencies of existing methods. Our approach scales well to large numbers of display facets. This is achieved using a single-pass rendering of all facets based on a parallel, per-frame, view-dependent binning and prewarping of scene triangles. The method places no constraints on the scene or display and allows fully dynamic scenes to be rendered at high resolutions using a single pass of rasterization. These steps are implemented efficiently on the GPU. A general realization of our system envisions a display of arbitrary shape built using polygonal facets, driven using one or more quilt images into which the pixels are packed. We present several prototype displays to establish the scalability of our system to different shapes, form factors, and complexities: from a cube made of LCD panels to spherical/cylindrical projected setups to arbitrarily complex shapes in simulation. Performance is reported in terms of both quality and rendering speed for increasing scene and display facet sizes. A subjective user study is also presented to evaluate the user experience, comparing a walk-around display to a flat panel in a game-like setting.
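The per-frame, view-dependent binning described above can be illustrated with a toy 2D analogue. This is a simplified sketch, not the paper's GPU implementation: facets are reduced to line segments, triangles to vertex lists, and a triangle is conservatively binned to every facet that any of its view rays from the tracked eye hits. All names are hypothetical.

```python
# Toy 2D sketch of view-dependent binning: assign each scene triangle to the
# display facets (here, line segments) its vertices project onto, as seen
# from the tracked eye position.

def ray_hits_segment(eye, v, a, b):
    """Does the ray from eye through vertex v intersect segment a-b?"""
    dx, dy = v[0] - eye[0], v[1] - eye[1]   # ray direction
    ex, ey = b[0] - a[0], b[1] - a[1]       # segment direction
    qx, qy = a[0] - eye[0], a[1] - eye[1]
    denom = ex * dy - dx * ey
    if abs(denom) < 1e-12:
        return False                         # ray parallel to facet
    t = (ex * qy - qx * ey) / denom          # distance along the ray
    s = (dx * qy - dy * qx) / denom          # position along the segment
    return t > 0 and 0.0 <= s <= 1.0

def bin_triangles(eye, facets, triangles):
    """Single pass over triangles; each lands in the bin of every facet hit."""
    bins = {i: [] for i in range(len(facets))}
    for ti, tri in enumerate(triangles):
        hit = set()
        for v in tri:
            for fi, (a, b) in enumerate(facets):
                if ray_hits_segment(eye, v, a, b):
                    hit.add(fi)
        for fi in sorted(hit):
            bins[fi].append(ti)
    return bins
```

In the real system the binning is parallel and per-frame because the eye moves every frame; each facet then rasterizes only the triangles in its own bin, which is what keeps the single rendering pass scalable in the number of facets.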
Index Terms—Non-rectangular displays, fish tank virtual reality, arbitrary shaped displays, 3D visualization, view-dependent rendering, fast culling, user interaction.
Abstract. Visibility culling of a scene is a crucial stage for interactive graphics applications, particularly for scenes with thousands of objects. The culling time must be small for it to be effective. A hierarchical representation of the scene is used for efficient culling tests. However, when there are multiple view frustums (as in a tiled display wall), visibility culling time becomes substantial and cannot be hidden by pipelining it with other stages of rendering. In this paper, we address the problem of culling an object against a hierarchically organized set of frustums, such as those found in tiled displays and shadow volume computation. We present an adaptive algorithm that unfolds the twin hierarchies at every stage of the culling procedure. Our algorithm computes from-point visibility and is conservative. The precomputation required is minimal, allowing our approach to be applied to dynamic scenes as well. We show the performance of our technique against different variants of culling a scene to multiple frustums. We also show results for dynamic scenes.
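The adaptive unfolding of the twin hierarchies described above can be sketched in one dimension, where both the scene hierarchy and the frustum hierarchy become interval trees. This is a toy illustration of the adaptive choice only, not the paper's algorithm: the refine-the-larger-node heuristic and all names are assumptions.

```python
# Toy 1-D sketch of adaptively unfolding two hierarchies: at each
# intersecting (scene node, frustum node) pair, refine whichever node
# currently spans the larger extent, pruning whole subtrees on rejection.

class Node:
    def __init__(self, lo, hi, children=(), name=None):
        self.lo, self.hi = lo, hi
        self.children = list(children)
        self.name = name            # set only at leaves

def cull_pair(obj, fru, out):
    if obj.hi < fru.lo or obj.lo > fru.hi:
        return                      # disjoint: prunes both subtrees at once
    if not obj.children and not fru.children:
        out.append((obj.name, fru.name))   # potentially visible leaf pair
        return
    # Adaptive choice: descend whichever hierarchy is coarser here.
    refine_obj = obj.children and (
        not fru.children or (obj.hi - obj.lo) >= (fru.hi - fru.lo))
    if refine_obj:
        for c in obj.children:
            cull_pair(c, fru, out)
    else:
        for c in fru.children:
            cull_pair(obj, c, out)
```

Compared with culling the full scene hierarchy once per frustum, unfolding both hierarchies together lets a single disjointness test reject many (object, frustum) pairs at once, which is where the savings for large tiled walls come from.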