Background Delivery of behavioral health interventions on the internet offers many benefits, including accessibility, cost-effectiveness, convenience, and anonymity. In recent years, an increased number of internet interventions have been developed, targeting a range of conditions and behaviors, including depression, pain, anxiety, sleep disturbance, and eating disorders. Human support (coaching) is a common component of internet interventions that is intended to boost engagement; however, little is known about how participants interact with coaches and how this may relate to their experience with the intervention. By examining the data that participants produce during an intervention, we can characterize their interaction patterns and refine treatments to address different needs. Objective In this study, we employed text mining and visual analytics techniques to analyze messages exchanged between coaches and participants in an internet-delivered pain management intervention for adolescents with chronic pain and their parents. Methods We explored the main themes in coaches’ and participants’ messages using an automated textual analysis method, topic modeling. We then clustered participants’ messages to identify subgroups of participants with similar engagement patterns. Results First, we performed topic modeling on coaches’ messages. The themes in coaches’ messages fell into 3 categories: Treatment Content, Administrative and Technical, and Rapport Building. Next, we employed topic modeling to identify topics from participants’ message histories. Similar to the coaches’ topics, these were subsumed under 3 high-level categories: Health Management and Treatment Content, Questions and Concerns, and Activities and Interests. Finally, the cluster analysis identified 4 clusters, each with a distinguishing characteristic: Assignment-Focused, Short Message Histories, Pain-Focused, and Activity-Focused. 
The name of each cluster exemplifies the main engagement patterns of that cluster. Conclusions In this secondary data analysis, we demonstrated how automated text analysis techniques could be used to identify messages of interest, such as questions and concerns from users. In addition, we demonstrated how cluster analysis could be used to identify subgroups of individuals who share communication and engagement patterns, and in turn facilitate personalization of interventions for different subgroups of patients. This work makes 2 key methodological contributions. First, this study is innovative in its use of topic modeling to provide a rich characterization of the textual content produced by coaches and participants in an internet-delivered behavioral health intervention. Second, to our knowledge, this is the first example of the use of a visual analysis method to cluster participants and identify similar patterns of behavior based on intervention message content.
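The two-stage pipeline described above (topic modeling over message text, then clustering participants by their topic profiles) can be sketched with scikit-learn. This is a minimal illustration only: the toy messages, topic count, and cluster count below are placeholders, not the study's actual data, model, or parameters.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

# Toy stand-ins for participants' message histories (one string per participant).
messages = [
    "completed the relaxation assignment and breathing exercise",
    "my pain flared up this week and I could not sleep",
    "question about logging in to the next module",
    "went hiking with friends and practiced the coping skills",
]

# Stage 1: topic modeling with LDA over a bag-of-words representation.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(messages)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_profiles = lda.fit_transform(counts)  # per-participant topic proportions

# Stage 2: cluster participants by their topic profiles to find
# subgroups with similar engagement patterns.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(topic_profiles)

for message, label in zip(messages, labels):
    print(label, message)
```

Each participant's topic-proportion vector serves as a compact engagement fingerprint, so clustering operates on themes rather than raw word counts.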
Recent advances in distributed language modeling have led to large performance increases on a variety of natural language processing (NLP) tasks. However, it is not well understood how these methods may be augmented by knowledge-based approaches. This paper compares the performance and internal representations of an Enhanced Sequential Inference Model (ESIM) across three experimental conditions defined by the representation method: Bidirectional Encoder Representations from Transformers (BERT), Embeddings of Semantic Predications (ESP), or Cui2Vec. The methods were evaluated on the Medical Natural Language Inference (MedNLI) subtask of the MEDIQA 2019 shared task, which relies heavily on semantic understanding and thus serves as a suitable evaluation set for comparing these representation methods.
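The experimental design here, holding the downstream inference model fixed while swapping only the representation method, can be illustrated with a small sketch. The embedding tables, vocabulary, and mean-pooled cosine scoring below are made-up stand-ins for illustration; they are not the actual ESIM, BERT, ESP, or Cui2Vec pipelines.

```python
import numpy as np

# Toy embedding tables standing in for two representation methods
# (e.g., contextual vs. knowledge-based vectors); values are random.
rng = np.random.default_rng(0)
vocab = ["patient", "fever", "afebrile", "temperature", "normal"]
method_a = {w: rng.normal(size=8) for w in vocab}  # stand-in for BERT-style vectors
method_b = {w: rng.normal(size=8) for w in vocab}  # stand-in for ESP/Cui2Vec-style vectors

def encode(tokens, table):
    """Mean-pool token vectors: a crude sentence representation."""
    return np.mean([table[t] for t in tokens if t in table], axis=0)

def similarity(premise, hypothesis, table):
    """Cosine similarity between premise and hypothesis encodings,
    a stand-in for the inference model's semantic-matching step."""
    p, h = encode(premise, table), encode(hypothesis, table)
    return float(p @ h / (np.linalg.norm(p) * np.linalg.norm(h)))

premise = ["patient", "temperature", "normal"]
hypothesis = ["patient", "afebrile"]

# Vary only the representation; keep the scoring function fixed.
for name, table in [("method_a", method_a), ("method_b", method_b)]:
    print(name, round(similarity(premise, hypothesis, table), 3))
```

Differences in the scores are then attributable to the representation alone, which is the logic behind comparing BERT, ESP, and Cui2Vec under one fixed model.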
Background Health dialog systems have seen increased adoption by patients, hospitals, and universities due to the confluence of advancements in machine learning and the ubiquity of high-performance hardware that supports real-time speech recognition, high-fidelity text-to-speech, and semantic understanding of natural language. Objectives This review seeks to enumerate opportunities to apply dialog systems toward the improvement of health outcomes while identifying both gaps in the current literature that may impede their implementation and recommendations that may improve their success in medical practice. Methods A search over PubMed and the ACM Digital Library was conducted on September 12, 2017, to collect all articles related to dialog systems within the domain of health care. These results were screened for eligibility, with the main criteria being a peer-reviewed study of a system that includes both a natural language interface and either end-user testing or practical implementation. Results Forty-six studies met the inclusion criteria, including 24 quasi-experimental studies, 16 randomized controlled trials, 2 case–control studies, 2 prospective cohort studies, 1 system description, and 1 human–computer conversation analysis. These studies evaluated dialog systems in five application domains: medical education (n = 20), clinical processes (n = 14), mental health (n = 5), personal health agents (n = 5), and patient education (n = 2). Conclusion We found that dialog systems have been widely applied to health care; however, most studies are not reproducible, making direct comparison between systems and independent confirmation of findings difficult. Widespread adoption will also require standard evaluation and reporting methods for health dialog systems to demonstrate clinical significance.
Introduction Until recently, understanding one’s sleep activity relied on technology only available in sleep labs with data analyzed by experts. Transitioning this technology from the lab to natural environments results in noisy data. Fortunately, advances in signal processing through Artificial Intelligence (AI) have made these technologies accessible to consumers. This study seeks to provide recommendations that address user preferences and concerns related to sleep self-management devices and software that leverage AI, as they have the potential to increase both the quantity and quality of sleep data available to researchers. Methods We assigned adult participants (N=25) with Pittsburgh Sleep Quality Index scores ≥ 5 (indicating low sleep quality) to one of four focus group sessions based on their self-reported prior use of sleep technologies. After a short demonstration, the moderator solicited participant feedback on devices and software in each of the following four categories:
• headbands (Beddr, Dreem 2, Muse S)
• sleep tracking mats (Withings)
• snoring detectors (Smart Nora)
• mobile applications (Sleep Cycle Alarm Clock, Sleep Score, Do I Snore, Sleep Rate)
Results Participants anticipated discomfort from wearing headbands and placing snoring detectors under their pillow, although a subset of participants indicated that they would be willing to sacrifice comfort in exchange for improved accuracy. Conversely, participants were interested in sleep tracking pads since they could passively collect sleep data without additional burden. Similarly, participants viewed mobile applications positively due to their ability to collect sleep data from a nightstand rather than being attached to the participant; however, there were concerns about remembering to activate these applications.
Conclusion Based on these results, we recommend using sleep tracking mats to collect patient-generated sleep data due to their ease of use and relative comfort, which address the main concerns raised about lab-based sleep study participation. As passive sensors, these mats require the least setup and support consistent data collection. Other devices run the risk of being forgotten or removed during the night, resulting in missing data. By leveraging these existing technologies for remote sleep studies, researchers can increase recruitment and accessibility to promote participant diversity in sleep research.
This study aims to assess the perspectives and usability of different consumer sleep technologies (CSTs) that leverage artificial intelligence (AI). We answer the following research questions: (1) what are user perceptions and ideations of CSTs (phase 1), (2) what are users’ actual experiences with CSTs (phase 2), and (3) what are the design recommendations from participants (phases 1 and 2)? In this two-phase qualitative study, we conducted focus groups and usability testing to describe users’ desired features and experiences with different AI sleep technologies and to identify ways to improve them. Results showed that focus group participants prioritized comfort, actionable feedback, and ease of use. Participants desired customized suggestions about their habitual sleeping environments and were interested in CSTs+AI that could integrate with tools and CSTs they already use. Usability study participants felt CSTs+AI provided an accurate picture of the quantity and quality of their sleep. Participants identified room for improvement in the usability, accuracy, and design of the technologies. We conclude that CSTs can be a valuable, affordable, and convenient tool for people who have issues or concerns with sleep and want more information. They provide objective data that can be discussed with clinicians.