Coliseum is a multiuser immersive remote teleconferencing system designed to give collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view-synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility (participants may move around the shared space) and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, including sessions spanning two continents.

Coliseum is a complex software system that pushes commodity computing resources to the limit. We set out to measure each aspect of resource usage (network, CPU, memory, and disk) to uncover bottlenecks and to guide enhancement and control of system performance. Latency is a key component of quality of experience for video conferencing. We present how each stage of the system (cameras, image processing, networking, and display) contributes to total latency. Performance measurement is as complex as the system to which it is applied. We describe several techniques for estimating performance, from direct lightweight instrumentation to realistic end-to-end measures that mimic actual user experience, and show how these techniques can be used to improve system performance for Coliseum and other networked applications.
This article summarizes the Coliseum technology and reports on issues related to its performance: its measurement, enhancement, and control.
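The "direct lightweight instrumentation" mentioned above can be illustrated with a small per-stage timer: timestamp each pipeline stage (capture, processing, networking, display), accumulate elapsed time, and report per-stage means. This is only a sketch of the general idea; the StageTimer class and stage names are illustrative, not Coliseum's actual code.

```python
import time
from collections import defaultdict

class StageTimer:
    """Accumulates wall-clock time spent in each named pipeline stage."""
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def measure(self, stage, fn, *args):
        # Wrap one stage of the pipeline and record its elapsed time.
        start = time.perf_counter()
        result = fn(*args)
        self.totals[stage] += time.perf_counter() - start
        self.counts[stage] += 1
        return result

    def mean_ms(self, stage):
        # Mean latency of a stage in milliseconds.
        return 1000.0 * self.totals[stage] / self.counts[stage]

timer = StageTimer()
frame = timer.measure("capture", lambda: "raw-frame")
frame = timer.measure("process", lambda f: f.upper(), frame)
print(frame)  # the frame passed between instrumented stages
```

Summing the per-stage means approximates total pipeline latency, while an end-to-end measure (e.g., photographing a clock through the full camera-to-display path) captures what instrumentation misses.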
Keywords: volume rendering, compositing, ray tracing

Volume rendering creates images from sampled volumetric data. Its compute-intensive nature has driven research in algorithm optimization, and an important speed optimization is the use of preclassification and preshading. We demonstrate an artifact that results when preclassified or preshaded colors and opacity values are interpolated separately; this method is flawed, leading to visible artifacts. We present an improved technique, opacity-weighted color interpolation, and evaluate its RMS-error improvement, its hardware and algorithm efficiency, and the resulting image improvements. We show analytically that opacity-weighted color interpolation exactly reproduces material-based interpolation results for certain volume classifiers, with the efficiencies of preclassification. The proposed technique may also have broad impact on opacity-texture-mapped polygon rendering.
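The artifact and its fix can be sketched in a few lines. Assuming linear interpolation between two classified samples (C0, a0) and (C1, a1), the naive approach interpolates color and opacity independently, so a fully transparent sample with a strong classified color still bleeds into the result; opacity-weighted interpolation instead interpolates the premultiplied quantity a*C. The sample values below are hypothetical, chosen to expose the artifact.

```python
def lerp(a, b, t):
    return a + t * (b - a)

def naive_interp(c0, a0, c1, a1, t):
    # Interpolating color and opacity separately: the source of the artifact.
    return lerp(c0, c1, t), lerp(a0, a1, t)

def opacity_weighted_interp(c0, a0, c1, a1, t):
    # Interpolate the opacity-weighted (premultiplied) color a*C and the
    # opacity a; the result is already in premultiplied form for compositing.
    ac = lerp(a0 * c0, a1 * c1, t)
    a = lerp(a0, a1, t)
    return ac, a

# A fully transparent "red" sample next to an opaque "blue" one, t = 0.5:
c_naive, a_naive = naive_interp(1.0, 0.0, 0.0, 1.0, 0.5)
ac, a = opacity_weighted_interp(1.0, 0.0, 0.0, 1.0, 0.5)
print(c_naive * a_naive)  # naive red contribution: 0.25 (spurious color bleed)
print(ac)                 # opacity-weighted red contribution: 0.0 (correct)
```

The invisible sample contributes color under naive interpolation but correctly contributes nothing under opacity weighting, which is the visible artifact the abstract describes.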
Three-dimensional scenes have become an important form of content delivered through the Internet. Standard formats such as Virtual Reality Modeling Language (VRML) make it possible to dynamically download complex scenes from a server directly to a web browser. However, limited bandwidth between servers and clients presents an obstacle to the availability of more complex scenes, since the geometry and texture maps for a reasonably complex scene may take many minutes to transfer over a typical telephone modem link. This paper addresses one part of the bandwidth bottleneck: texture transmission. Current display methods transmit an entire texture to the client before it can be used for rendering. We present an alternative method that subdivides each texture into tiles and dynamically determines on the client which tiles are visible to the user. Texture tiles are requested by the client in an order determined by the number of screen pixels each tile affects, so that the tiles affecting the greatest number of screen pixels are transmitted first. The client can render images during texture loading using tiles that have already been loaded. The tile visibility calculations take full account of occlusion and multiple texture image resolution levels, and are dynamically recalculated each time a new frame is rendered. We show how a few additions to the standard graphics hardware pipeline can add this capability without radical architecture changes and with only moderate hardware cost. This capability makes it practical to use large textures even over relatively slow network connections.
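The request-ordering policy described above amounts to a priority queue keyed on per-tile screen-pixel coverage, re-evaluated each frame. A minimal sketch, with hypothetical tile identifiers and pixel counts (the paper's actual visibility computation, which accounts for occlusion and mipmap levels, is not reproduced here):

```python
import heapq

def request_order(visible_tiles):
    """visible_tiles: dict mapping tile id -> screen pixels affected this frame.
    Returns tile ids in descending order of pixel coverage, so the tiles
    that affect the most screen pixels are requested first."""
    # Negate the pixel counts so Python's min-heap pops the largest first.
    heap = [(-pixels, tile) for tile, pixels in visible_tiles.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, tile = heapq.heappop(heap)
        order.append(tile)
    return order

coverage = {"wall/0_0": 1200, "floor/1_3": 4800, "ceiling/2_1": 300}
print(request_order(coverage))  # ['floor/1_3', 'wall/0_0', 'ceiling/2_1']
```

Because coverage changes as the viewpoint moves, the queue would be rebuilt (or rekeyed) each rendered frame, matching the dynamic recalculation the abstract describes.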