Background
Current standards of psychiatric assessment and diagnostic evaluation rely primarily on a clinician's subjective interpretation of a patient's outward manifestations of their internal state. While psychometric tools can help evaluate these behaviors more systematically, they still rely on the clinician's interpretation of frequently nuanced speech and behavior patterns. With advances in computing power, increased availability of clinical data, and improving resolution of recording and sensor hardware (including acoustic, video, accelerometer, infrared, and other modalities), researchers have begun to demonstrate the feasibility of cutting-edge technologies for aiding the assessment of psychiatric disorders.

Objective
We present a research protocol that uses facial expression, eye gaze, voice and speech, locomotor, heart rate, and electroencephalography monitoring to assess schizophrenia symptoms and to distinguish patients with schizophrenia from those with other psychiatric disorders and from control subjects.

Methods
We plan to recruit three outpatient groups: (1) 50 patients with schizophrenia, (2) 50 patients with unipolar major depressive disorder, and (3) 50 individuals with no psychiatric history. Using an internally developed semistructured interview, psychometrically validated clinical outcome measures, and a multimodal sensing system with video, acoustic, actigraphic, heart rate, and electroencephalographic sensors, we aim to evaluate the system's capacity to classify subjects (schizophrenia, depression, or control), to evaluate its sensitivity to within-group symptom severity, and to determine whether such a system can further classify variations in disorder subtypes.

Results
Data collection began in July 2020 and is expected to continue through December 2022.

Conclusions
If successful, this study will help advance current progress in developing state-of-the-art technology to aid clinical psychiatric assessment and treatment. If our findings suggest that these technologies can resolve diagnoses and symptoms to the level of current psychometric testing and clinician judgment, we would be among the first to develop a system that clinicians could eventually use to diagnose and assess schizophrenia and depression more objectively, with potentially less risk of bias. Such a tool could improve accessibility to care; aid clinicians in objectively evaluating diagnoses, symptom severity, and treatment efficacy over time; and reduce treatment-related morbidity.

International Registered Report Identifier (IRRID): DERR1-10.2196/36417
Background
The wide adoption of social media in daily life renders it a rich and effective resource for conducting near real-time assessments of consumers' perceptions of health services. However, its use in these assessments can be challenging because of the vast amount of data and the diversity of content in social media chatter.

Objective
This study aims to develop and evaluate an automatic system involving natural language processing and machine learning to characterize user-posted Twitter data about health services, using Medicaid, the single largest source of health coverage in the United States, as an example.

Methods
We collected data from Twitter in two ways: via the public streaming application programming interface using Medicaid-related keywords (Corpus 1) and via the website's search option for tweets mentioning agency-specific handles (Corpus 2). We manually labeled a sample of tweets into 5 predetermined categories or "other" and artificially increased the number of training posts from specific low-frequency categories. Using the manually labeled data, we trained and evaluated several supervised learning algorithms, including support vector machine, random forest (RF), naïve Bayes, shallow neural network (NN), k-nearest neighbor, bidirectional long short-term memory, and bidirectional encoder representations from transformers (BERT). We then applied the best-performing classifier to the collected tweets for postclassification analyses to assess the utility of our methods.

Results
We manually annotated 11,379 tweets (Corpus 1: 9179; Corpus 2: 2200) and used 7930 (69.7%) for training, 1449 (12.7%) for validation, and 2000 (17.6%) for testing. A classifier based on BERT obtained the highest accuracies (81.7%, Corpus 1; 80.7%, Corpus 2) and F1 scores on consumer feedback (0.58, Corpus 1; 0.90, Corpus 2), outperforming the second-best classifiers in terms of accuracy (74.6%, RF on Corpus 1; 69.4%, RF on Corpus 2) and F1 score on consumer feedback (0.44, NN on Corpus 1; 0.82, RF on Corpus 2). Postclassification analyses revealed differing intercorpora distributions of tweet categories, with political (400,778/628,411, 63.78%) and consumer feedback (15,073/27,337, 55.14%) tweets being the most frequent for Corpus 1 and Corpus 2, respectively.

Conclusions
The broad and variable content of Medicaid-related tweets necessitates automatic categorization to identify topic-relevant posts. Our proposed system presents a feasible solution for automatic categorization and can be deployed and generalized for health service programs other than Medicaid. Annotated data and methods are available for future studies.
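The Methods describe a standard supervised text-classification pipeline: vectorize labeled tweets, fit a classifier, and score accuracy and per-class F1 on a held-out split. A minimal sketch of one of the compared baselines (random forest over TF-IDF features) is shown below; the toy tweets, labels, and category names are invented for illustration and are not the study's data or code.

```python
# Sketch of a supervised baseline like those compared in the study
# (TF-IDF features + random forest), scored with accuracy and F1 on
# one category. All example texts/labels here are hypothetical.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.pipeline import make_pipeline

train_texts = [
    "my medicaid claim was denied again, so frustrating",
    "the clinic accepted my medicaid card without any issue",
    "waited two hours and the office still lost my paperwork",
    "senator pushes new bill to expand medicaid coverage",
    "governor vows to block medicaid expansion in the state",
    "lawmakers debate medicaid funding in today's session",
]
train_labels = ["feedback", "feedback", "feedback",
                "political", "political", "political"]

test_texts = [
    "my doctor finally got my medicaid approval sorted out",
    "congress votes on medicaid budget amendment tomorrow",
]
test_labels = ["feedback", "political"]

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
clf.fit(train_texts, train_labels)
preds = clf.predict(test_texts)

accuracy = accuracy_score(test_labels, preds)
# Per-class F1 restricted to "consumer feedback"-style tweets, mirroring
# the abstract's per-category reporting.
feedback_f1 = f1_score(test_labels, preds, pos_label="feedback",
                       average="binary")
```

In the study, the same evaluation protocol is applied across all seven algorithms, with the BERT-based classifier ultimately chosen for the postclassification analyses.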
Introduction
Medications such as buprenorphine and methadone are effective for treating opioid use disorder (OUD), but many patients face barriers related to treatment and access. We analyzed two sources of data (social media and published literature) to categorize and quantify such barriers.

Methods
In this mixed methods study, we analyzed social media (Reddit) posts from three OUD-related forums (subreddits): r/suboxone, r/Methadone, and r/naltrexone. We applied natural language processing to identify posts relevant to treatment barriers, categorized them as insurance- or non-insurance-related, and manually subcategorized them into fine-grained topics. For comparison, we used substance use-, OUD-, and barrier-related keywords to identify relevant articles from PubMed published between 2006 and 2022. We searched publications for language expressing fear of barriers, and hesitation or disinterest in medication treatment because of barriers, paying particular attention to the affected population groups described.

Results
On social media, the top three insurance-related barriers were general difficulties of using insurance for OUD treatment (38.2%), insurance not covering OUD treatment (24.7%), and having no insurance (22.5%), while the top two non-insurance-related barriers were stigma (47.6%) and financial difficulties (26.2%). In the published literature, stigma was the most prominently reported barrier, occurring in 78.9% of the publications reviewed, followed by financial and/or logistical issues in receiving medication treatment (73.7%), gender-specific barriers (36.8%), and fear (31.5%).

Conclusion
The stigma associated with OUD and/or seeking treatment, and insurance/cost, are the two most common types of barriers reported across the two sources combined. Harm reduction efforts addressing barriers to recovery may benefit from leveraging multiple data sources.
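The coarse categorization step described in this abstract (bucketing barrier-relevant posts into insurance-related versus non-insurance-related) can be sketched as a simple keyword-matching pass. The keyword lists and example posts below are invented for illustration; the study's actual NLP pipeline and lexicon are not published in the abstract.

```python
# Illustrative keyword-based bucketing of posts into insurance-related,
# non-insurance barrier, or other. Term lists and posts are hypothetical.
import re

INSURANCE_TERMS = re.compile(
    r"\b(insurance|copay|coverage|medicaid|deductible)\b", re.IGNORECASE)
BARRIER_TERMS = re.compile(
    r"\b(stigma|afford|cost|waitlist|denied)\b", re.IGNORECASE)

def categorize(post: str) -> str:
    """Assign a post to a coarse barrier category by keyword match."""
    if INSURANCE_TERMS.search(post):
        return "insurance-related"
    if BARRIER_TERMS.search(post):
        return "non-insurance barrier"
    return "other"

posts = [
    "My insurance won't cover suboxone and the copay is huge",
    "The stigma at the pharmacy makes me dread pickup day",
    "Week 3 on methadone, feeling steady",
]
categories = [categorize(p) for p in posts]
# → ["insurance-related", "non-insurance barrier", "other"]
```

In the study itself, the fine-grained topics within each bucket were assigned manually, so a pass like this would only serve as the initial filter.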