Background: Despite the enormous number of assistive technologies (ATs) in dementia care, the management of challenging behavior (CB) in persons with dementia (PwD) by informal caregivers in home care is widely disregarded. The first-line strategy for managing CB is to support understanding of its underlying causes in order to formulate individualized nonpharmacological interventions. App- and sensor-based approaches that combine multimodal sensors (actimetry and other modalities) with caregiver information are innovative ways to support family caregivers' understanding of CB.

Objective: The main aim of this study is to describe the design of a feasibility study, consisting of an outcome and a process evaluation, of a newly developed app- and sensor-based intervention that helps family caregivers manage the CB of PwD at home.

Methods: In this feasibility study, we perform an outcome and a process evaluation with a pre-post descriptive design over an 8-week intervention period. The Medical Research Council framework guides the design of the feasibility study. Data on 20 dyads (primary caregiver and PwD) are gathered through standardized questionnaires, protocols, and log files, as well as semistructured qualitative interviews. The outcome measures (Neuropsychiatric Inventory and Cohen-Mansfield Agitation Inventory) are analyzed using descriptive statistics and statistical tests appropriate to the individual assessments (eg, chi-square test and Wilcoxon signed-rank test). The process data are analyzed using the Unified Theory of Acceptance and Use of Technology. Log files are analyzed using descriptive statistics, protocols through documentary analysis, and semistructured interviews deductively using content analysis.

Results: The app- and sensor-based AT was developed and evaluated by July 2018. Recruitment of dyads started in September 2017 and concluded in March 2018. Data collection was completed at the end of July 2018.

Conclusions: This study presents the protocol of the first feasibility study to encompass both an outcome and a process evaluation of a complex app- and sensor-based AT combining multimodal actimetry sensors for informal caregivers to manage CB. The feasibility study will provide in-depth information about the study procedure and about how to optimize the design of the intervention and its delivery.

International Registered Report Identifier (IRRID): DERR1-10.2196/11630
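As an illustration of the pre-post comparison named in the Methods, a Wilcoxon signed-rank test on paired baseline and week-8 outcome scores could be sketched as follows. All numbers below are invented placeholders, not study data, and the choice of `scipy.stats.wilcoxon` is an assumption about tooling, not the study's actual analysis code:

```python
# Hedged sketch: pre-post comparison of a behavioral outcome score
# (e.g., Cohen-Mansfield Agitation Inventory) for a set of dyads using
# the Wilcoxon signed-rank test. All scores are illustrative placeholders.
from scipy.stats import wilcoxon

# Hypothetical baseline and week-8 scores for 10 dyads
pre  = [52, 61, 48, 70, 55, 66, 59, 63, 50, 58]
post = [47, 58, 49, 62, 50, 60, 57, 59, 48, 55]

# The test statistic is the smaller of the positive/negative rank sums;
# a small p-value suggests a systematic pre-post shift.
stat, p_value = wilcoxon(pre, post)
print(f"W = {stat}, p = {p_value:.3f}")
```

The signed-rank test suits this design because the 20 dyads yield paired, ordinal-scale measurements for which a normality assumption would be hard to justify.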
This paper summarizes the design and implementation of an audio-visual installation that focuses on the interaction of two casual players with complementarily designed musical sphere interfaces. While one user interacts with a small dome touchscreen and controls the harmonic sound space, the other user stands inside a larger dome, creating melodies through continuous gestures in mid-air. Similar to a theremin, notes are played without touch, using an invisible two-dimensional melodic table of notes with a dedicated layout that allows harmonic expressions to be created by continuous gestures. The generation of musical expressions is visualized on both instruments with several visualization techniques. For prototyping purposes, and to illustrate the idea of the installation without the need to set up the complex physical installation, we also provide an immersive user experience by simulating the system with an Oculus Rift head-mounted display. The goal of this project is to provide and explore new methods of musical interaction and to exploit the complementary character of the installation to create a harmonic sound environment based on cooperative play.
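The abstract does not specify the note layout of the invisible melodic table; purely as an illustrative sketch, a normalized mid-air hand position could be quantized onto a 2D grid of pitches, with columns stepping through a scale and rows shifting the octave. The function name, the C-major layout, and the grid dimensions below are all assumptions, not the installation's actual design:

```python
# Illustrative sketch only: quantize a normalized mid-air hand position
# (x, y in [0, 1)) onto a hypothetical 2D table of notes. Columns step
# through a C-major scale, rows shift the octave; the installation's
# real "dedicated layout" may differ.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the root

def position_to_midi(x: float, y: float, cols: int = 7, rows: int = 3,
                     base_note: int = 60) -> int:
    """Map a normalized (x, y) gesture position to a MIDI note number."""
    col = min(int(x * cols), cols - 1)   # scale degree (left to right)
    row = min(int(y * rows), rows - 1)   # octave band (bottom to top)
    return base_note + 12 * row + C_MAJOR[col]

# A hand near the lower-left corner plays middle C (MIDI 60)
print(position_to_midi(0.05, 0.1))  # -> 60
```

Constraining the grid to one diatonic scale is one simple way such a layout could make arbitrary continuous gestures land on harmonically compatible notes.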
Building context-aware applications is a widely researched topic. We believe that context awareness has the potential to complement the Internet of Things, provided that a suitable methodology, including supporting tools, eases the development of context-aware applications. We believe that a meta-model-based approach can be key to achieving this goal. In this paper, we present our meta-model-based methodology, which allows us to define and build application-specific context models and to integrate sensor data without any programming. We describe how the methodology was applied in the implementation of a relatively simple context-aware COVID-safe navigation app. The outcome showed that programmers with no experience in context awareness were able to understand the concepts easily and to use the methodology effectively after a short training, so context awareness can be implemented within a short amount of time. We conclude that this can also be the case for the development of other context-aware applications with the same context-awareness characteristics. We have also identified further optimization potential, which we discuss at the end of this article.
Figure 1: Different prototypes of the indoor airship made with MiReAS.

Abstract: Virtual prototyping has become an established design tool in complex interdisciplinary development processes using state-of-the-art virtual reality (VR) techniques. Owing to its numerous benefits, virtual prototyping has seen increasing acceptance in recent years, especially in the development of systems that involve complex interactions between components or require the integration of newly developed hardware. Mixed reality, considered an extension of VR, has high potential to support the development of complex systems that operate in a real-world environment even further. In this paper, we demonstrate that the principles of mixed reality prototyping can be effectively applied to the development of complex systems if a structured process is defined and supported by a proper software framework. We present a system that supports an iterative approach to prototyping complex dynamical systems, allowing the designer to progress seamlessly from an initial virtual prototype to the final system along the mixed reality continuum. We describe MiReAS, a mixed reality software framework that supports an iterative design evolution with arbitrary combinations of real and virtual elements. The use of the system is demonstrated by the development of interaction techniques and control strategies for an unmanned aerial vehicle.