Exposure Therapy (ET) has demonstrated its efficacy in the treatment of phobias, anxiety and Post-Traumatic Stress Disorder (PTSD); however, it suffers from a high drop-out rate when patient engagement in treatment is too low or too high. Virtual Reality Exposure Therapy (VRET) is comparably effective at reducing symptoms and offers an alternative tool to facilitate engagement for avoidant participants. Neuroimaging studies have demonstrated that both ET and VRET normalize brain activity within a fear circuit. However, previous studies have employed brain imaging technology that restricts people's movement and hides their body, surroundings and therapist from view, which is at odds with the way engagement is typically controlled. To avoid these limitations, we used a novel combination of neural imaging and VR technology: Functional Near-Infrared Spectroscopy (fNIRS) and Immersive Projection Technology (IPT). Although a few studies have investigated the effect of VRET on brain function after treatment, the present study utilized technologies that promote ecological validity to measure brain changes after VRET treatment. Furthermore, no studies have measured brain activity within a VRET session. In this study, brain activity within the prefrontal cortex (PFC) was measured during three consecutive exposure sessions. N = 13 acrophobic volunteers were asked to walk on a virtual plank with a 6 m drop below. Changes in oxygenated hemoglobin (HbO) concentrations in the PFC were measured in three blocks using fNIRS. Consistent with previous functional magnetic resonance imaging (fMRI) studies, the analysis showed decreased activity in the dorsolateral prefrontal cortex (DLPFC) and medial prefrontal cortex (MPFC) during the first exposure. This activity increased toward normal levels across the three sessions.
The study demonstrates the potential efficacy of a method for measuring within-session neural response to virtual stimuli that could be replicated within clinics and research institutes, with equipment better suited to an ET session and at a fraction of the cost compared to fMRI. This has applications in widening access to, and increasing the ecological validity of, immersive neuroimaging for the understanding, diagnosis, assessment and treatment of a range of mental disorders, such as phobias, anxiety, PTSD and addictions.
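The HbO concentration changes described above are conventionally recovered from raw optical signals via the modified Beer-Lambert law. The following is a minimal sketch of that conversion, not the study's own pipeline; the extinction coefficients, source-detector distance and differential pathlength factor below are representative values from the fNIRS literature, not values taken from the paper.

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD(lambda) =
#   (eps_HbO(lambda)*dHbO + eps_HbR(lambda)*dHbR) * distance * DPF.
# With OD changes at two wavelengths, solve the 2x2 system for dHbO and dHbR.

# Approximate extinction coefficients [1/(mM*cm)], rows = wavelengths,
# columns = [HbO, HbR]; illustrative values at ~760 nm and ~850 nm.
eps = np.array([[1.4866, 3.8437],   # 760 nm
                [2.5264, 1.7986]])  # 850 nm

def mbll(delta_od, source_detector_cm=3.0, dpf=6.0):
    """Convert optical density changes at two wavelengths to (dHbO, dHbR) in mM."""
    path = source_detector_cm * dpf          # effective optical pathlength
    return np.linalg.solve(eps * path, delta_od)

# Example: hypothetical OD changes at the two wavelengths
d_hbo, d_hbr = mbll(np.array([0.01, 0.02]))
```

In practice, toolboxes apply this channel-by-channel after filtering and motion-artifact correction; the sketch shows only the core linear inversion.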
User-centred evaluation of brain-training and coaching applications is discussed, with a focus on dementia. A brief outline of outcome measures used for cognitive training is presented. The design of a set of four patient and public involvement workshops is described, intended to examine user aspects relevant to brain-training, including motivation, attitudes to learning, trust in technology, and cultural relationships to the playing of games and their content. The groups involved researchers, facilitators, three people living with dementia and three care-givers, two of these being dyads. Data were audio-recorded and field notes were taken. Initial results are given from the ongoing qualitative study.
Testing the potential of combining functional near-infrared spectroscopy with different virtual reality displays: Oculus Rift and oCtAVE
Removing the mask: do people overtrust avatars reconstructed from video? Abstract. This experiment compared the detection of deceit across video conferencing and a fixed-viewpoint, 3D video-based computer graphic medium. The purpose was to determine whether the process of 3D reconstruction influenced trust by reducing the detail of facial expression. Comparison with the literature investigates the impact of facial expression on trust. Inspiration comes from previous studies in the natural and virtual worlds that suggest a stronger tendency to overtrust a person when their facial expression is hidden. A virtual avatar that copies head and eye movement, but not that of the face, could be argued to be akin to a person wearing a mask. Thus, our opening research question is: would a 3D medium that removed this mask result in a truth bias similar to that of video, and therefore the real world? Two confederates each gave a set of accounts, of which half were true. These were captured and transmitted simultaneously in real time using 2D and full-3D video-based communication media. Recordings of these sessions were later examined by two sets of participants. Twenty-one participants were asked to determine which accounts were true. Measures included accuracy at detecting truth and deceit, the tendency to overtrust derived from this, and the cognitive effort involved in determining truthfulness. Results show that participants performed and worked to a similar degree in both media. Findings are of interest to those developing 3D telepresence technologies and virtual humans, and to those concerned with the trustworthiness of a medium.
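The two derived measures named above can be made concrete with a small sketch: accuracy is the proportion of correct true/false calls, while truth bias is the proportion of accounts judged true regardless of ground truth. The judgements below are synthetic illustrations, not the experiment's data.

```python
# Synthetic ground truth for eight accounts (True = truthful account)
truth = [True, True, False, False, True, False, True, False]
# Synthetic rater judgements for the same accounts
judged = [True, True, True, False, True, True, True, False]

# Detection accuracy: fraction of judgements matching ground truth
accuracy = sum(j == t for j, t in zip(judged, truth)) / len(truth)

# Truth bias: fraction of accounts judged true, irrespective of truth
truth_bias = sum(judged) / len(judged)

print(accuracy, truth_bias)  # both 0.75 for this synthetic example
```

A truth bias above 0.5 indicates a tendency to accept accounts as truthful, which is the "overtrust" tendency the study compares across media.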
Background Cognitive training and assessment technologies offer the promise of dementia risk reduction and a more timely diagnosis of dementia, respectively. Cognitive training games may help reduce the lifetime risk of dementia by helping to build cognitive reserve, whereas cognitive assessment technologies offer the opportunity for a more convenient approach to early detection or screening. Objective This study aims to elicit perspectives of potential end users on factors related to the acceptability of cognitive training games and assessment technologies, including their opinions on the meaningfulness of measurement of cognition, barriers to and facilitators of adoption, motivations to use games, and interrelationships with existing health care infrastructure. Methods Four linked workshops were conducted with the same group, each focusing on a specific topic: meaningful improvement, learning and motivation, trust in digital diagnosis, and barriers to technology adoption. Participants in the workshops included local involvement team members acting as facilitators and those recruited via Join Dementia Research through a purposive selection and volunteer sampling method. Group activities were recorded, and transcripts were analyzed using thematic analysis with a combination of a priori and data-driven themes. Using a mixed methods approach, we investigated the relationships between the categories of the Capability, Opportunity, and Motivation–Behavior change model along with data-driven themes by measuring the φ coefficient between coded excerpts and ensuring the reliability of our coding scheme by using independent reviewers and assessing interrater reliability. Finally, we explored these themes and their relationships to address our research objectives. 
Results In addition to discussions around the capability, motivation, and opportunity categories, several important themes emerged during the workshops: family and friends, cognition and mood, work and hobbies, and technology. Group participants mentioned the importance of functional and objective measures of cognitive change, the social aspect of activities as a motivating factor, and the opportunities and potential shortcomings of digital health care provision. Our quantitative results indicated at least moderate agreement on all but one of the coding schemes and good independence of our coding categories. Positive and statistically significant φ coefficients were observed between several coding themes across categories, including a relatively strong positive φ coefficient between capability and cognition (0.468; P<.001). Conclusions The implications for researchers and technology developers include assessing how cognitive training and screening pathways would integrate into existing health care systems; however, further work needs to be undertaken to address barriers to adoption and the potential real-world impact of cognitive training and screening technologies. International Registered Report Identifier (IRRID) RR2-10.1007/978-3-030-49065-2_4
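The φ coefficient reported above is the Pearson correlation between two binary variables, computable from the 2×2 contingency table of their co-occurrence. A minimal sketch follows; the coded excerpts are synthetic and unrelated to the study's transcripts.

```python
import numpy as np

# Synthetic binary codings: 1 = excerpt carries the code, 0 = it does not
capability = np.array([1, 1, 0, 1, 0, 0, 1, 1])
cognition  = np.array([1, 1, 0, 1, 0, 1, 1, 0])

def phi_coefficient(x, y):
    """Phi coefficient from the 2x2 contingency table of two binary codes."""
    a = np.sum((x == 1) & (y == 1))   # both codes present
    b = np.sum((x == 1) & (y == 0))   # only x present
    c = np.sum((x == 0) & (y == 1))   # only y present
    d = np.sum((x == 0) & (y == 0))   # neither present
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

print(round(phi_coefficient(capability, cognition), 3))  # 0.467
```

Values near 0 indicate independent codes; positive values indicate themes that tend to be applied to the same excerpts, which is how the capability-cognition relationship above would be quantified.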
UNSTRUCTURED This paper reports on a series of Patient and Public Involvement (PPI) workshops with people living with dementia and carers, in which they discussed cognitive training and screening technologies designed to reduce the risk of dementia and identify changes in cognition. Little is known about the factors influencing the acceptance of such technologies. Four linked workshops were conducted with the same group, each focusing on a specific topic: meaningful improvement, learning and motivation, trust in digital diagnosis, and barriers to technology adoption. Participants in the workshops included local Involvement Team members as well as those recruited via Join Dementia Research, and researchers took part in some activities. The group activities were recorded, and transcripts were analyzed using thematic analysis with a combination of a priori and data-driven themes. Several important findings emerged, including the importance the group placed on maintaining good cognitive health, the importance of community activities in dementia and self-care, and the need for more support after a dementia diagnosis. The implications for researchers and technology developers are discussed. INTERNATIONAL REGISTERED REPORT RR2-10.1007/978-3-030-49065-2_4
Online walkthrough interviews were conducted via internet video-calling as part of wider Patient and Public Involvement activities investigating perceptions of digital and gamified cognitive assessment and training/coaching applications. Participants were invited to play a series of mobile mini-games developed for training executive functions and assessing memory, while verbalizing their thought processes using a process based on Think-Aloud Protocol and Cognitive Walkthrough principles, before concluding with a semi-structured interview. The enquiry was particularly interested in wider motivational aspects surrounding these technologies, including identifying potential barriers to engagement and facilitators of adoption. In general, there was broad acceptance of digital cognitive assessment and training, although issues of data handling and trust were raised by participants. Several usability issues were also captured.
Background While efforts to establish best practices for functional near-infrared spectroscopy (fNIRS) signal processing have been published, there are still no community standards for applying machine learning to fNIRS data. Moreover, the lack of open-source benchmarks and standard expectations for reporting means that published works often claim high generalisation capabilities, but with poor practices or missing details in the paper. These issues make it hard to evaluate the performance of models when choosing them for brain-computer interfaces. Methods We present an open-source benchmarking framework, BenchNIRS, to establish a best-practice machine learning methodology for evaluating models applied to fNIRS data, using five open-access datasets for brain-computer interface (BCI) applications. The BenchNIRS framework, using a robust methodology with nested cross-validation, enables researchers to optimise models and evaluate them without bias. The framework also enables us to produce useful metrics and figures to detail the performance of new models for comparison. To demonstrate the utility of the framework, we present a benchmarking of six baseline models [linear discriminant analysis (LDA), support-vector machine (SVM), k-nearest neighbours (kNN), artificial neural network (ANN), convolutional neural network (CNN), and long short-term memory (LSTM)] on the five datasets and investigate the influence of different factors on classification performance, including the number of training examples and the size of the time window of each fNIRS sample used for classification.
We also present results with a sliding window, as opposed to simple classification of epochs, and with a personalised approach (within-subject data classification), as opposed to a generalised approach (unseen-subject data classification). Results and discussion Results show that performance is typically lower than the scores often reported in the literature, and without great differences between models, highlighting that predicting unseen data remains a difficult task. Our benchmarking framework provides future authors who achieve notably high classification scores with a tool to demonstrate their advances in a comparable way. To complement our framework, we contribute a set of recommendations on methodology decisions and paper writing when applying machine learning to fNIRS data.
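The nested cross-validation methodology described above can be sketched briefly: an inner loop selects hyperparameters, and a separate outer loop estimates generalisation performance, so that test folds never influence model selection. The data below are synthetic stand-ins for featurised fNIRS epochs, and only three of the six baseline model families are included for brevity; this illustrates the general technique, not the BenchNIRS implementation itself.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))      # synthetic feature vectors per epoch
y = rng.integers(0, 2, size=120)    # synthetic binary task labels

# Candidate models with (possibly empty) hyperparameter grids
models = {
    "LDA": (LinearDiscriminantAnalysis(), {}),
    "SVM": (SVC(), {"C": [0.1, 1, 10]}),
    "kNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=0)  # performance estimate
inner = KFold(n_splits=3, shuffle=True, random_state=0)  # hyperparameter search

scores = {}
for name, (estimator, grid) in models.items():
    # Inner CV tunes hyperparameters; outer CV scores the tuned pipeline
    search = GridSearchCV(estimator, grid, cv=inner)
    scores[name] = cross_val_score(search, X, y, cv=outer).mean()
```

Because the labels here are random, scores should hover near chance (0.5), which is exactly the kind of honest baseline the unbiased nested procedure is designed to reveal.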