Figure 1: Screenshots showing third-party applications realized with X3DOM: simulation of the planets and 100,000 of the known 480,000 asteroids of the Solar System (left), 3D visualization of social networks (middle), an animated WoW character with dynamic shadows (right).

Abstract: We present a scalable architecture which implements and further evolves the HTML/X3D integration model X3DOM introduced in [Behr et al. 2009]. The goal of this model is to integrate and update declarative X3D content directly in the HTML DOM tree. The model was previously presented in a very abstract and generic way, only suggesting implementation strategies. The available open-source x3dom.js architecture provides concrete solutions to the previously open points and extends the generic model where necessary. The outstanding feature of the architecture is that it provides a single declarative interface to application developers while supporting various backends through a powerful fallback model. This fallback model does not prescribe a single implementation strategy for the runtime and rendering module but supports different methods transparently. These include native browser implementations and X3D plugins as well as a WebGL-based scene graph, which allows running the content without additional plugins on all browsers that support WebGL. The paper furthermore discusses generic aspects of the architecture, such as encoding and introspection, and provides details concerning two backends: it shows how the system interfaces with X3D plugins and WebGL, and discusses implementation-specific features and limitations.
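The fallback model described above can be pictured as an ordered capability check that tries the preferred backend first and degrades transparently. The sketch below is purely illustrative — the function name, capability flags, and exact probing order are hypothetical, not the real x3dom.js API.

```javascript
// Hypothetical sketch of X3DOM's backend fallback idea: probe for the
// most capable backend first and fall back transparently. The flags and
// ordering here are illustrative, not the actual x3dom.js detection code.
function pickBackend(caps) {
  if (caps.nativeX3D) return "native";  // browser implements X3D itself
  if (caps.x3dPlugin) return "plugin";  // classic X3D browser plugin (SAI)
  if (caps.webgl) return "webgl";       // plugin-free WebGL scene graph
  return "none";                        // no 3D backend available
}
```

A real implementation would probe the environment (e.g. try `canvas.getContext("webgl")` or look for installed plugins) rather than take flags as input; passing flags just keeps the sketch self-contained.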
JSON, XML-based 3D formats (e.g. X3D or Collada) and Declarative 3D approaches share some benefits but also one major drawback: all encoding schemes store the scene graph and vertex data in the same file structure; unstructured raw mesh data is found within descriptive elements of the scene. Web browsers therefore have to download all elements (including every single coordinate) before being able to further process the structure of the document. We therefore separate the structured scene information from the unstructured vertex data to improve the user experience and overall performance of the system, introducing two new referenced containers which encode external mesh data as so-called Sequential Image Geometry (SIG) or Typed-Array-based Binary Geometry (BG). We also discuss compression, rendering and application results and introduce a novel data layout for image geometry data that supports incremental updates, arbitrary input meshes and GPU decoding.
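The core idea — scene structure in markup, vertex data in a separately fetched binary container viewed through typed arrays — can be sketched as follows. The container layout here (a small header followed by position and index buffers) is a made-up example for illustration, not the actual SIG/BG format.

```javascript
// Illustrative decoder for an external binary geometry container.
// Layout (hypothetical, NOT the real X3DOM BG format):
//   bytes 0..7   : Uint32 vertexCount, Uint32 indexCount
//   then         : Float32 positions (3 per vertex)
//   then         : Uint16 indices
function decodeBinaryGeometry(buffer) {
  const header = new Uint32Array(buffer, 0, 2);
  const vertexCount = header[0];
  const indexCount = header[1];
  // Positions start right after the 8-byte header.
  const positions = new Float32Array(buffer, 8, vertexCount * 3);
  // Indices follow the positions block (12 bytes per vertex).
  const indices = new Uint16Array(buffer, 8 + vertexCount * 12, indexCount);
  return { positions, indices }; // typed arrays, GPU-uploadable as-is
}
```

Because the result consists of typed-array views over the downloaded buffer, it can be handed to `gl.bufferData` without any per-element parsing in JavaScript — which is exactly the advantage over coordinates embedded as text in the document.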
Within this paper, we present a novel, straightforward progressive encoding scheme for general triangle soups, which is particularly well-suited for mobile and Web-based environments due to its minimal requirements on the client's hardware and software. Our rapid encoding method uses a hierarchy of quantization to effectively reorder the original primitive data into several nested levels of detail. The resulting stateless buffer can progressively be transferred as-is to the GPU, where clustering is efficiently performed in parallel during rendering. We combine our approach with a crack-free mesh partitioning scheme to obtain a straightforward method for fast streaming and basic view-dependent LOD control.
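The quantization-hierarchy idea can be illustrated on scalar coordinates: a coarser level simply drops low-order bits, so each level's code is a bit-prefix of the next, and a primitive can be assigned to the first level at which its quantized vertices become distinguishable. The functions below are a simplified stand-in for the paper's encoder, not its actual algorithm.

```javascript
// Quantize a value in [0, 1) to the given number of bits.
// Coarser levels use fewer bits; the code at level b is the code at
// level b+1 shifted right by one, which is what makes the levels nest.
function quantize(value, bits) {
  return Math.floor(value * (1 << bits));
}

// Simplified reordering criterion: a set of coordinates "appears" at
// the first level where their quantized values are no longer collapsed
// onto each other. (The real scheme works on full 3D vertices.)
function firstDistinctLevel(xs, maxBits) {
  for (let b = 1; b <= maxBits; b++) {
    const q = xs.map((x) => quantize(x, b));
    if (new Set(q).size === xs.length) return b;
  }
  return maxBits;
}
```

Sorting primitives by this level yields a buffer whose prefix at any cut point is a coarser, self-consistent version of the mesh — which is what allows the stateless buffer to be streamed and rendered as-is while it downloads.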
Despite many advances in mesh compression methods within the past two decades, there is still no consensus about a standardized compact mesh encoding format for 3D Web applications. In order to facilitate the design of a future platform-independent solution, this paper investigates the crucial trade-off between compactness of the compressed representation and decompression time. Our case study evaluates different encoding formats, combined with various transmission bandwidths, using different client devices. Results indicate that good compression rates and, at the same time, fast decompression can be achieved by exploiting existing browser features and by minimizing the complexity of the operations that have to be performed inside the JavaScript layer. Our findings are summarized in concrete recommendations for future standards.
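The trade-off studied here can be captured in a simple cost model: total delivery time is transfer time plus decompression time, so a stronger codec only pays off when the bytes it saves outweigh its decode cost on the client. The function below is an illustrative sketch of that model, not the paper's evaluation methodology.

```javascript
// Total time (seconds) to deliver a mesh to the renderer:
//   download time (size / bandwidth) + client-side decode time.
// Illustrative cost model only; real measurements also include
// parsing, GC pauses, and GPU upload.
function deliveryTime(sizeBytes, bandwidthBps, decodeSecs) {
  return sizeBytes / bandwidthBps + decodeSecs;
}
```

On a slow link the smaller-but-slower codec wins; on a fast link a representation that needs no JavaScript-side decoding can be quicker overall — which matches the paper's recommendation to keep the JavaScript layer's work minimal.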
The previous publications on X3DOM focused on the general integration model [Behr et al. 2009] and implementation strategies [Behr et al. 2010]. The aspects of dynamic and interactive worlds were an essential part, but were not specifically addressed as such. The recent major additions to the system are CSS Animations and CSS 3D-Transforms, as well as various forms of events for user interaction and system monitoring, which complement the existing design to support a large number of interactive and dynamic use cases. This overall design, including scene update mechanisms, animations, and a large number of DOM-based events, is thus presented in this paper as part of a single overall system design.
With the advent of WebGL, plugin-free hardware-accelerated interactive 3D graphics has finally arrived in all major Web browsers. WebGL is an imperative solution that is tied to the functionality of rasterization APIs. Consequently, its usage requires a deeper understanding of the rasterization pipeline. In contrast to this stands a declarative approach with an abstract description of the 3D scene. We strongly believe that such an approach is more suitable for the integration of 3D into HTML5 and related Web technologies, as those concepts are well known by millions of Web developers and therefore crucial for the fast adoption of 3D on the Web. Hence, in this paper we explore the options for new declarative ways of incorporating 3D graphics directly into HTML to enable its use on any Web page. We present declarative 3D principles that guide the work of the Declarative 3D for the Web Architecture W3C Community Group and describe the current state of the fundamentals of this initiative. Finally, we outline an agenda for the next development stages of Declarative 3D for the Web.
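To give a flavor of the declarative style the abstract contrasts with imperative WebGL code: a 3D scene can be embedded directly in the page markup, so it becomes part of the DOM and can be scripted and styled like any other element. The snippet below follows the general shape of X3DOM-style markup and is a minimal illustrative fragment, not a complete page.

```html
<!-- Minimal declarative 3D fragment in X3DOM-style markup.
     The scene lives in the DOM: the <material> color can be changed
     via setAttribute, just like any HTML attribute. Illustrative sketch. -->
<x3d width="400px" height="300px">
  <scene>
    <shape>
      <appearance>
        <material diffuseColor="1 0 0"></material>
      </appearance>
      <box></box>
    </shape>
  </scene>
</x3d>
```

The same red box in raw WebGL would require shader setup, buffer uploads, and an explicit render loop — the gap in required pipeline knowledge that the paper argues against for broad Web adoption.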
Until recently, depth sensing cameras have been used almost exclusively in research due to the high costs of such specialized equipment. With the introduction of the Microsoft Kinect device, real-time depth imaging is now available to the ordinary developer at low expense, and so far it has been received with great interest from both the research and hobby developer communities. The underlying OpenNI framework not only allows extracting the depth image from the camera, but also provides tracking information for gestures or user skeletons. In this paper, we present a framework to include depth sensing devices in X3D in order to enhance the visual fidelity of X3D Mixed Reality applications, introducing some extensions for advanced rendering techniques. We furthermore outline how to calibrate depth and image data in a meaningful way for devices that do not already come with precalibrated sensors, and discuss some of the OpenNI functionality that X3D can benefit from in the future.
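Registering a depth pixel with the color image follows the standard pinhole-camera pipeline: back-project the depth sample to a 3D point, apply the depth-to-color rigid transform, and re-project with the color camera's intrinsics. The sketch below assumes idealized intrinsics and a pure translation between the two sensors; all numeric values and the function name are hypothetical, not real Kinect calibration data.

```javascript
// Map a depth pixel (u, v, depth in meters) into color-image
// coordinates. Intrinsics {fx, fy, cx, cy} and the depth-to-color
// translation t are illustrative, not actual sensor calibration.
function depthToColor(u, v, depth, depthK, colorK, t) {
  // Back-project to a 3D point in the depth camera's frame.
  const X = ((u - depthK.cx) / depthK.fx) * depth;
  const Y = ((v - depthK.cy) / depthK.fy) * depth;
  const Z = depth;
  // Rigid transform to the color camera's frame (translation only here;
  // a full calibration would also apply a rotation).
  const Xc = X + t.x, Yc = Y + t.y, Zc = Z + t.z;
  // Project with the color camera's intrinsics.
  return {
    u: (colorK.fx * Xc) / Zc + colorK.cx,
    v: (colorK.fy * Yc) / Zc + colorK.cy,
  };
}
```

Note that the resulting pixel shift depends on depth (parallax), which is why a fixed 2D offset between the depth and color images is not sufficient and proper calibration is needed.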