Virtual Reality (VR) is becoming a ubiquitous technology by leveraging smartphones and screenless cases to provide highly immersive experiences at a low price point. The result of this paradigm shift is now known as mobile VR (mVR). Although mVR offers numerous advantages over conventional immersive VR methods, one of its biggest limitations concerns the interaction pathways available to mVR experiences. Using physiological computing principles, we created the PhysioVR framework, an open-source software tool developed to facilitate the integration of physiological signals measured through wearable devices into mVR applications. PhysioVR includes heart rate (HR) signals from Android wearables, electroencephalography (EEG) signals from a low-cost brain-computer interface, and electromyography (EMG) signals from a wireless armband. The physiological sensors are connected to a smartphone via Bluetooth, and PhysioVR streams the data using the UDP communication protocol, thus allowing multicast transmission to third-party applications such as the Unity3D game engine. Furthermore, the framework provides bidirectional communication with the VR content, allowing external event triggering through real-time control as well as data recording. We developed a demo game called EmoCat Rescue, which encourages players to modulate their HR in order to successfully complete the in-game mission. EmoCat Rescue is included in the PhysioVR project, which can be freely downloaded. This framework simplifies the acquisition, streaming, and recording of multiple physiological signals and parameters from wearable consumer devices, providing a single, efficient interface for creating novel physiologically responsive mVR applications.
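The sensor-to-engine streaming described above can be sketched in a few lines. This is a minimal illustration of UDP datagram streaming, not the actual PhysioVR implementation: the port number, message format, and function names are assumptions made for the example.

```python
import json
import socket

# Illustrative sketch of PhysioVR-style UDP streaming of physiological samples.
# The port and the JSON message layout are assumptions, not the real protocol.
PORT = 5005  # hypothetical port

def encode_sample(signal: str, value: float, timestamp: float) -> bytes:
    """Serialize one physiological sample (e.g. HR, EEG, EMG) as JSON."""
    return json.dumps({"signal": signal, "value": value, "t": timestamp}).encode()

def decode_sample(payload: bytes) -> dict:
    """Parse a sample on the receiving side (e.g. a game-engine bridge)."""
    return json.loads(payload.decode())

def send_sample(sock: socket.socket, sample: bytes, host: str = "127.0.0.1") -> None:
    """Fire-and-forget UDP datagram; UDP also supports one-to-many multicast."""
    sock.sendto(sample, (host, PORT))

if __name__ == "__main__":
    # Local loopback round trip standing in for phone -> game engine.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.settimeout(2.0)
    recv.bind(("127.0.0.1", PORT))
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sample(send, encode_sample("HR", 72.0, 0.0))
    msg = decode_sample(recv.recv(1024))
    print(msg["signal"], msg["value"])
```

Because UDP is connectionless, a smartphone can broadcast the same sample stream to several listeners at once, which is what makes the multicast delivery to a third-party engine such as Unity3D possible.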
The growing number of people with cognitive deficits creates an urgent need for new cognitive training solutions. Paper-and-pencil tasks are still widely used for cognitive rehabilitation despite the proliferation of computer-based methods such as VR-based simulations of activities of daily living (ADLs). Health professionals' resistance to adopting new tools might be explained by the small number of validation trials. Studies have established the construct validity of VR assessment tools against their paper-and-pencil versions by demonstrating significant associations with traditional construct-driven measures. However, adaptive rehabilitation tools for intervention are mostly not equivalent to their paper-and-pencil counterparts, which makes comparative studies difficult to carry out. Here we present a 12-session intervention study with 31 stroke survivors who underwent different rehabilitation protocols based on the same content and difficulty adaptation progression framework: 17 performed paper-and-pencil training with the Task Generator and 14 performed VR-based training with the Reh@City. Results show that both groups performed at the same level and that training methodology had no effect on overall performance. However, the Reh@City enabled more intensive training, which may translate into greater cognitive improvements.
Background: Virtual Reality (VR) based methods for stroke rehabilitation have mainly focused on motor rehabilitation, but there is increasing interest in integrating motor and cognitive training to increase similarity to real-world settings. However, more research is needed to define which type of content should be used in the design of these tools. One possibility is the use of emotional stimuli, which are known to enhance attentional processes. According to the Socioemotional Selectivity Theory, as people age, emotional salience arises for positive and neutral, but not for negative, stimuli. Methods: For this study we developed a cognitive-motor VR task involving attention and short-term memory, and we investigated the impact of using emotional images of varying valence. The task consisted of finding a target image, shown for only two seconds, among fourteen neutral distractors, and selecting it through arm movements. After performing the VR task, a recall task took place in which the patients had to identify the target images among a valence-matched number of distractors. Ten stroke patients participated in a within-subjects experiment with three conditions based on the valence of the images: positive, negative, and neutral. Eye movements were recorded during VR task performance with an eye-tracking system. Results: Our results show decreased attention to negative stimuli in VR task performance compared with neutral stimuli. The recall task showed significantly more wrongly identified images (false memories) for negative stimuli than for neutral ones. Regression and correlation analyses with the Montreal Cognitive Assessment and the Geriatric Depression Scale revealed differential effects of cognitive function and depressive symptomatology on the encoding and recall of positive, negative, and neutral images. Further, eye movement data show reduced search patterns for wrongly selected stimuli containing emotional content. Conclusions: The results of this study suggest that it is feasible to use emotional content in a VR-based cognitive-motor task for attention and memory training after stroke. Stroke survivors showed less attention towards negative information, exhibiting reduced visual search patterns and more false memories. We have also shown that the use of emotional stimuli in a VR task can provide additional information regarding the patient's mood and cognitive status.
Exergames are exercise-oriented games that offer opportunities to increase motivation for exercising and to improve health. However, exergames need to be adaptive and provide accurate feedback for physiologically correct exercising, sustained motivation, and better personalized experiences. To investigate the role of physiological computing in these aspects, we employed a repeated-measures design exploring changes in physiological responses caused by the gaming and exercising components of an exergame intervention. Seventeen older adults (64.5 ± 6.4 years) interacted with a videogame in two modes (control, exergaming) at different difficulty levels. Electrocardiographic, electrodermal, and kinematic data were gathered synchronously with game data. Findings show that exercise intensities and heart rate changes were largely modulated by game difficulty, and that positive feedback was more likely than negative feedback to produce arousal responses during exergaming. A heart rate variability analysis revealed strong influences of the interaction mode, showing that exergaming has the potential to enhance cardiac regulation. Our results bring new insights into the usefulness of psychophysiological methods for sustaining exercising motivation and personalizing gameplay to the individual needs of users in exergaming experiences.
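The abstract above mentions a heart rate variability analysis without specifying the metric. A common time-domain HRV index that such analyses often include is RMSSD, the root mean square of successive differences between RR intervals; the sketch below computes it under that assumption (the sample RR values are illustrative, not study data):

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (in ms).
    RMSSD is a standard time-domain HRV index commonly associated with
    parasympathetic (vagal) cardiac regulation."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR intervals (ms) from a hypothetical gameplay segment
rr = [812, 798, 805, 790, 820, 801]
print(round(rmssd(rr), 2))  # ≈ 18.61
```

Comparing such an index between the control and exergaming modes is one straightforward way to quantify the influence of interaction mode on cardiac regulation.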
Introduction: Action observation neurorehabilitation systems are usually based on the observation of a virtual limb performing different kinds of actions. In this way, activity in the frontoparietal Mirror Neuron System is enhanced, which can help rehabilitate stroke patients. However, the presence of limbs in such systems might not be necessary to produce mirror activity; for example, frontoparietal mirror activity can be produced simply by the observation of virtual tool movements. The objective of this work was to explore to what extent the presence of a virtual limb impacts Mirror Neuron System activity in neurorehabilitation systems. Methods: The study was conducted using an action observation neurorehabilitation task during a functional magnetic resonance imaging (fMRI) experiment with healthy volunteers, comparing two action observation conditions that either (1) included or (2) did not include a virtual limb. Results: Activity in the Mirror Neuron System was similar in both conditions (i.e., virtual limb present or absent). Conclusions: These results open up the possibility of using tasks that do not include virtual limbs in action observation neurorehabilitation environments, which can give more freedom to develop such systems.
Cognitive impairments are among the most common age-related disabilities worldwide. The literature has shown that cognitive training using Virtual Reality (VR) systems can be a valid and effective solution for cognitive rehabilitation. Virtual environments can be easily customized to deliver very specific training by controlling the presentation of stimuli and keeping track of user responses. Reh@City (RC) is a virtual reality simulation of a city where patients can train a variety of cognitive skills while performing simulated activities of daily living. An initial prototype of this city with four environments was clinically validated with a stroke sample, and the encouraging results motivated further iterations and improvements in the RC in terms of its tasks, interaction with the content, and task adaptation. This paper presents the development of RC v2.0, a VR-based software system for cognitive rehabilitation that offers cognitive training tasks set in eight realistically modeled 3D environments, personalizes them to the patient's clinical profile, and implements automatic difficulty adaptation.
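The abstract does not describe how RC v2.0's automatic difficulty adaptation works. A common pattern for such systems is a performance-based staircase, sketched below purely as an illustration; the thresholds, step size, and level bounds are assumptions, not Reh@City parameters:

```python
def adapt_difficulty(level, accuracy, step=1,
                     upper=0.8, lower=0.5, min_level=1, max_level=10):
    """Simple staircase adaptation: raise difficulty when the patient
    performs well, lower it when performance drops, and clamp the level
    to its bounds. All thresholds here are illustrative."""
    if accuracy >= upper:
        level += step          # good performance -> harder task
    elif accuracy < lower:
        level -= step          # poor performance -> easier task
    return max(min_level, min(max_level, level))

# After a session with 90% correct responses at level 3, move to level 4.
print(adapt_difficulty(3, 0.9))  # 4
```

Keeping task accuracy inside a target band like this is one way to hold the challenge at a level matched to each patient's clinical profile.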
VR-based methods for stroke rehabilitation have mainly focused on motor rehabilitation, but there is increasing interest in integrating cognitive training to provide more ecologically valid solutions. However, more studies are needed, especially to define which type of content should be used in the design of these tools. One possibility is the use of emotional stimuli, which are known to enhance attentional processes. According to the Socio-emotional Selectivity Theory, as people age, emotional salience arises for positive and neutral, but not for negative, stimuli. Conversely, negative stimuli can be better remembered. In this study, we investigated the impact of using emotional stimuli with positive, negative, and neutral valence in a VR cognitive and motor attention task. Ten stroke patients participated in a within-subjects experiment with four conditions based on the type of stimuli: abstract (control condition), positive, negative, and neutral. The main task consisted of finding a target stimulus, shown for only two seconds, among fourteen neutral distractors. Eye movements were recorded with an eye-tracking system to investigate differences between conditions and in search patterns. Subsequently, a recall task took place in which the patients had to identify all the target images among a valence-matched number of distractors. Our results corroborate the attention salience effect of positive and neutral stimuli in the VR task performance. Although we found no statistically significant differences between conditions in the recall task, there was a trend towards recalling more negative images. This negative advantage comes at the expense of significantly more wrongly identified images (false memories) for negative stimuli. Finally, we performed an analysis relating performance scores to well-established cognitive assessment instruments, which supports the use of this approach both for assessment and rehabilitation purposes.