Freezing of gait (FOG), an episodic gait disturbance characterized by the inability to generate effective stepping, occurs in more than half of Parkinson's disease patients. It is associated with both executive dysfunction and attention deficits and becomes most evident during dual tasking (performing two tasks simultaneously). This study examined the effect of dual motor-cognitive virtual reality training on dual-task performance in FOG. Twenty community-dwelling participants with Parkinson's disease (13 with FOG, 7 without FOG) completed a pre-assessment, eight 20-minute intervention sessions, and a post-assessment. The intervention consisted of a virtual reality maze (DFKI, Germany) through which participants navigated by stepping in place on a balance board (Nintendo, Japan) under time pressure. This was combined with a cognitive task (Stroop test), which repeatedly divided participants' attention. The primary outcome measures were pre- to post-intervention differences in motor (stepping time, symmetry, rhythmicity) and cognitive (accuracy, reaction time) performance during single and dual tasks. Both assessments consisted of 1) a single cognitive task, 2) a single motor task, and 3) a dual motor-cognitive task. Following the intervention, participants with FOG showed significant improvement in dual-task cognitive performance, in dual-task motor parameters (stepping time and rhythmicity), and in the dual-task effect, along with a noteworthy reduction in FOG episodes. These improvements were less pronounced for those without FOG. This is the first study to show a benefit of a dual motor-cognitive approach on dual-task performance in FOG. Advances in such virtual reality interventions for home use could substantially improve the quality of life for patients who experience FOG.
With the advent of WebGL, plugin-free, hardware-accelerated, interactive 3D graphics has finally arrived in all major Web browsers. WebGL is an imperative solution that is tied to the functionality of rasterization APIs. Consequently, its usage requires a deeper understanding of the rasterization pipeline. In contrast to this stands a declarative approach with an abstract description of the 3D scene. We strongly believe that such an approach is more suitable for the integration of 3D into HTML5 and related Web technologies, as those concepts are well known to millions of Web developers and therefore crucial for the fast adoption of 3D on the Web. Hence, in this paper we explore the options for new declarative ways of incorporating 3D graphics directly into HTML to enable its use on any Web page. We present declarative 3D principles that guide the work of the Declarative 3D for the Web Architecture W3C Community Group and describe the current state of the fundamentals of this initiative. Finally, we draw up an agenda for the next development stages of Declarative 3D for the Web.
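The contrast between an imperative API and a declarative scene description can be illustrated with a short sketch. The element and attribute names below follow the general shape of XML3D-style proposals and are illustrative assumptions, not normative syntax:

```html
<!-- Sketch of a declarative 3D scene embedded directly in an HTML page.
     Element names follow the general style of XML3D; treat this as an
     illustration, not verified markup. -->
<xml3d style="width: 640px; height: 480px;">
  <defs>
    <!-- Geometry is declared as data in the DOM, not built via API calls -->
    <data id="cubeData">
      <float3 name="position">-1 -1 -1  1 -1 -1  1 1 -1</float3>
    </data>
  </defs>
  <view position="0 0 10"></view>
  <light model="urn:xml3d:lightshader:point"></light>
  <group transform="#cubeTransform">
    <mesh src="#cubeData" type="triangles"></mesh>
  </group>
</xml3d>
```

Because the scene lives in the DOM, it can be styled, scripted, and manipulated with the same tools Web developers already use for HTML, which is the accessibility argument the abstract makes against raw WebGL.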
Figure 1: A Virtual World scene featuring several instances of the same robot character, configured to have individual poses and colors.

The Declarative 3D for the Web initiative by the W3C [W3C 2011] connects 3D content to the Web document, intertwining it with other Web technologies known to millions of Web developers. The goal is to make 3D on the Web more accessible compared to low-level APIs such as WebGL. However, all proposals for Declarative 3D for the Web are missing an essential feature: configurable instances of structured 3D models. While instance mechanisms do exist, they all have limited capabilities to configure instances individually. In this paper we present a new approach for configurable instances of 3D models that is integrated into XML3D. Our approach comes with a compact interface, a powerful extension mechanism to handle configurations, and efficient data structures for fast instancing. We demonstrate how our instance mechanism simplifies the handling of 3D models in several different application areas, including Virtual Worlds, and provide several performance results for the instancing process.
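A configurable-instance mechanism of the kind described might look roughly like the sketch below. The `<asset>`/`<model>` element names and the override syntax are assumptions based on the abstract's description, not verified XML3D syntax:

```html
<!-- Sketch: one shared robot asset, instantiated twice with per-instance
     configuration. Element and attribute names are assumed for illustration. -->
<asset id="robot" src="models/robot.xml"></asset>

<group transform="#leftPose">
  <model src="#robot">
    <!-- Per-instance override: change the body color of this copy only -->
    <assetdata name="body">
      <float3 name="diffuseColor">1 0 0</float3>
    </assetdata>
  </model>
</group>

<group transform="#rightPose">
  <!-- Second instance keeps the asset's default configuration -->
  <model src="#robot"></model>
</group>
```

The point of such an interface is that the shared geometry is loaded and stored once, while each `<model>` carries only its configuration deltas (pose, colors), which is what makes instancing both compact to author and efficient to render.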
Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing.
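A dataflow for a dynamic mesh might be declared along these lines. The `compute` expression syntax and the morph operator name are illustrative assumptions about Xflow's style, not verified operators:

```html
<!-- Sketch of an Xflow-style dataflow: the mesh's positions are computed
     declaratively from base geometry, a morph target, and a weight.
     Operator and attribute names are assumed for illustration. -->
<data id="morphedMesh" compute="position = morph(position, morphTarget, weight)">
  <data src="#baseMesh"></data>                     <!-- provides 'position' -->
  <float3 name="morphTarget">0 1 0  0 1 0  0 1 0</float3> <!-- target offsets -->
  <float name="weight">0.5</float>                  <!-- animate this to morph -->
</data>

<mesh src="#morphedMesh" type="triangles"></mesh>
```

The appeal of this split is that the scene graph stays a static description while all dynamics (animation, morphing, post-processing) are expressed as data flows that the runtime can schedule and accelerate.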
Figure 1: Left image: an AR application developed with XML3D and Xflow; the displayed teapot jumps from one visible marker to the other. Center image: another AR application; this time, animated characters are displayed on top of the markers (character models by Valve Software). Right image: several image processing operators implemented with Xflow (from the top left: …).

Recently, modern Web browsers became capable of supporting powerful, interactive 3D graphics, both via the low-level, imperative API of WebGL and via a high-level, declarative approach such as XML3D. The obvious next step (particularly with respect to mobile platforms) is to combine video from the real world with matched virtual content: Augmented or Mixed Reality (AR/MR). However, AR requires extensive image or video processing, feature detection and tracking, and applying the results to 3D rendering, all of which is hard to implement in a Web context. In this paper we present a novel approach that encapsulates low-level image-processing and AR operations into re-usable, high-level XML3D/Xflow components that are part of the HTML5 DOM. A Web developer can then easily and flexibly arrange these components into (possibly complex) processing flow graphs without having to worry about the internal computations and the structure of these modules. Our extended Xflow implementation automatically optimizes, schedules, and synchronizes the processing of the flow graph(s) in the context of real-time 3D rendering. Finally, we provide an integration model that greatly simplifies building AR applications for the browser. We demonstrate this with several simple AR and image-processing applications using a polyfill implementation that works in all modern browsers, and we evaluate the performance. Finally, we show how the declarative framework can be optimized with respect to performance and usability using parallelization with Web Workers and RiverTrail.
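Composing such encapsulated AR components into a flow graph could look like the sketch below. The operator names (`captureVideo`, `grayscale`, `detectMarkers`) are hypothetical stand-ins for the image-processing and tracking components the abstract describes:

```html
<!-- Sketch: a camera stream flows through hypothetical image-processing and
     marker-detection operators; the resulting pose drives virtual content.
     All operator names are assumptions for illustration. -->
<data id="markerPose" compute="transform = detectMarkers(image)">
  <data compute="image = grayscale(image)">
    <data id="camera" compute="image = captureVideo()"></data>
  </data>
</data>

<!-- The virtual teapot follows the tracked marker -->
<group transform="#markerPose">
  <mesh src="#teapot" type="triangles"></mesh>
</group>
```

Since the whole pipeline is declared as data dependencies rather than imperative per-frame code, the runtime can decide when and where to re-evaluate each stage, which is what enables the automatic optimization, scheduling, and synchronization the abstract claims.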