Background
PatientsLikeMe is an online quantitative personal research platform for patients with life-changing illnesses to share their experience using patient-reported outcomes, find other patients like them matched on demographic and clinical characteristics, and learn from the aggregated data reports of others to improve their outcomes. The goal of the website is to help patients answer the question: “Given my status, what is the best outcome I can hope to achieve, and how do I get there?”

Objective
Using a cross-sectional online survey, we sought to describe the potential benefits of PatientsLikeMe in terms of treatment decisions, symptom management, clinical management, and outcomes.

Methods
Almost 7,000 members from six PatientsLikeMe communities (amyotrophic lateral sclerosis [ALS], multiple sclerosis [MS], Parkinson's disease, human immunodeficiency virus [HIV], fibromyalgia, and mood disorders) were sent a survey invitation using an internal survey tool (PatientsLikeMe Lens).

Results
Complete responses were received from 1,323 participants (19% of invited members). Demographics varied across the disease communities. Users perceived the greatest benefit in learning about a symptom they had experienced; 72% (952 of 1,323) rated the site “moderately” or “very” helpful. Patients also found the site helpful for understanding the side effects of their treatments (n = 757, 57%). More than two-fifths of patients (n = 559, 42%) agreed that the site had helped them find another patient who helped them understand what it was like to take a specific treatment for their condition. More patients found the site helpful with decisions to start a medication (n = 496, 37%) than to change a medication (n = 359, 27%), change a dosage (n = 336, 25%), or stop a medication (n = 290, 22%). Almost all participants (n = 1,249, 94%) had been diagnosed when they joined the site. Most (n = 824, 62%) experienced no change in their confidence in that diagnosis, or reported increased confidence (n = 456, 34%). Use of the site was associated with increasing comfort in sharing personal health information among those who had initially been uncomfortable. Overall, 12% of patients (n = 151 of 1,320) changed their physician as a result of using the site; this proportion was almost double among patients with fibromyalgia (21%, n = 33 of 150). Patients reported community-specific benefits: 41% of HIV patients (n = 72 of 177) agreed they had reduced risky behaviors, and 22% of mood disorders patients (n = 31 of 141) agreed they needed less inpatient care as a result of using the site. Analysis of the Web access logs showed that participants who used more features of the site (eg, posted in the online forum) perceived greater benefit.

Conclusions
Members of the community reported a range of benefits, and these may be related to the extent of site use. Third-party validation and longitudinal evaluation are important next steps in evaluating the potential of online data-sharing platforms.
Background
Treatment burden refers to the workload imposed by healthcare on patients, and the effect this has on quality of life. The Treatment Burden Questionnaire (TBQ) aims to assess treatment burden in different condition and treatment contexts. Here, we aimed to evaluate the validity and reliability of an English version of the TBQ, a scale that was originally developed in French.

Methods
The TBQ was translated into English by a forward–backward translation method. Wording and possible missing items were assessed during a pretest involving 200 patients with chronic conditions. Measurement properties of the instrument were assessed online with a patient network, using the PatientsLikeMe website. The dimensional structure of the questionnaire was assessed by factor analysis. Construct validity was assessed by associating the TBQ global score with clinical variables, adherence to medication assessed by the Morisky Medication Adherence Scale (MMAS-8), quality of life (QOL) assessed by the PatientsLikeMe Quality of Life Scale (PLMQOL), and patients' confidence in their knowledge of their conditions and treatments. Reliability was determined by a test–retest method.

Results
In total, 610 patients with chronic conditions, mainly from the USA, UK, Canada, Australia, or New Zealand, completed the TBQ between September and October 2013. The English TBQ showed a unidimensional structure, with a Cronbach α of 0.90. The TBQ global score was negatively correlated with the PLMQOL score (rs = −0.50; P < 0.0001). Low, rather than moderate or high, adherence to medication was associated with a high TBQ score (mean ± SD TBQ score 61.8 ± 30.5 vs. 37.7 ± 27.5; P < 0.0001). Treatment burden was higher for patients who had insufficient knowledge than for those who had sufficient knowledge about their treatments (mean ± SD TBQ score 62.3 ± 31.3 vs. 47.8 ± 30.4; P < 0.0001) and conditions (63.0 ± 31.6 vs. 49.3 ± 30.7; P < 0.0001). The intraclass correlation coefficient for the retest (n = 282) was 0.77 (95% CI 0.70 to 0.82).

Conclusions
The English TBQ is a reliable instrument in this population, and our findings support the construct validity of its use to assess treatment burden for patients with one or more chronic conditions in English-speaking countries.
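The abstract above reports the scale's internal consistency as a Cronbach α of 0.90. As an illustrative sketch only (using a small hypothetical item-score matrix, not the study's data), α can be computed from respondents' per-item scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents, 4 questionnaire items
scores = np.array([
    [3, 4, 3, 4],
    [1, 1, 2, 1],
    [4, 5, 5, 4],
    [2, 2, 1, 2],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
])
alpha = cronbach_alpha(scores)
```

α approaches 1 as items co-vary strongly; a value around 0.9, as reported for the English TBQ, is conventionally read as the items consistently measuring a single underlying construct.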
Objectives
To compare the breadth of condition coverage, accuracy of suggested conditions, and appropriateness of urgency advice of eight popular symptom assessment apps.

Design
Vignettes study.

Setting
200 primary care vignettes.

Intervention/comparator
For eight apps and seven general practitioners (GPs): breadth of coverage and condition-suggestion and urgency advice accuracy, measured against the vignettes' gold standard.

Primary outcome measures
(1) Proportion of conditions 'covered' by an app, that is, not excluded because the user was too young/old or pregnant, or not modelled; (2) proportion of vignettes with the correct primary diagnosis among the top 3 conditions suggested; (3) proportion of 'safe' urgency advice (ie, at the gold-standard level, more conservative, or no more than one level less conservative).

Results
Condition-suggestion coverage was highly variable, with some apps not offering a suggestion for many users: in alphabetical order, Ada: 99.0%; Babylon: 51.5%; Buoy: 88.5%; K Health: 74.5%; Mediktor: 80.5%; Symptomate: 61.5%; WebMD: 93.0%; Your.MD: 64.5%. Top-3 suggestion accuracy was: GPs (average): 82.1% ± 5.2%; Ada: 70.5%; Babylon: 32.0%; Buoy: 43.0%; K Health: 36.0%; Mediktor: 36.0%; Symptomate: 27.5%; WebMD: 35.5%; Your.MD: 23.5%. Some apps excluded certain user demographics or conditions, and their performance was generally higher when the corresponding vignettes were excluded. For safe urgency advice, tested GPs averaged 97.0% ± 2.5%. Among the vignettes with advice provided, only three apps had safety performance within 1 SD of the GPs (Ada: 97.0%; Babylon: 95.1%; Symptomate: 97.8%). One app had safety performance within 2 SDs of the GPs (Your.MD: 92.6%). Three apps had safety performance outside 2 SDs of the GPs (Buoy: 80.0%, p < 0.001; K Health: 81.3%, p < 0.001; Mediktor: 87.3%, p = 1.3 × 10⁻³).

Conclusions
The utility of digital symptom assessment apps relies on coverage, accuracy, and safety. While no digital tool outperformed GPs, some came close, and the iterative nature of software improvement offers scalable improvements to care.
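The two accuracy metrics described above can be sketched in a few lines. This is a minimal illustration with made-up vignette data, not the study's evaluation code; it assumes urgency is encoded as an integer where lower numbers mean more urgent (ie, more conservative) advice, and `None` means the app declined to assess:

```python
def top3_accuracy(suggestions, gold):
    """Share of all vignettes whose gold-standard condition appears
    in the app's top-3 suggestions; None counts as a miss."""
    hits = sum(1 for s, g in zip(suggestions, gold)
               if s is not None and g in s[:3])
    return hits / len(gold)

def safe_advice_rate(app_levels, gold_levels, margin=1):
    """Share of 'safe' urgency advice among vignettes the app advised on:
    at the gold level, more conservative (lower number = more urgent),
    or at most `margin` levels less conservative."""
    pairs = [(a, g) for a, g in zip(app_levels, gold_levels) if a is not None]
    return sum(1 for a, g in pairs if a <= g + margin) / len(pairs)

# Hypothetical vignettes: only the first is a top-3 hit
gold = ["migraine", "appendicitis", "cystitis"]
suggestions = [["tension headache", "migraine", "sinusitis"],
               ["gastroenteritis", "constipation", "IBS"],
               None]
```

Note that scoring safety only "for the vignettes with advice provided", as the study does, means an app that declines difficult cases is not penalised on the safety metric, which is why coverage is reported separately.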
Patients with serious diseases may experiment with drugs that have not received regulatory approval. Online patient communities structured around quantitative outcome data have the potential to provide an observational environment to monitor such drug usage and its consequences. Here we describe an analysis of data reported on the website PatientsLikeMe by patients with amyotrophic lateral sclerosis (ALS) who experimented with lithium carbonate treatment. To reduce potential bias owing to lack of randomization, we developed an algorithm to match 149 treated patients to multiple controls (447 total) based on the progression of their disease course. At 12 months after treatment, we found no effect of lithium on disease progression. Although observational studies using unblinded data are not a substitute for double-blind randomized controlled trials, this study reached the same conclusion as subsequent randomized trials, suggesting that data reported by patients over the internet may be useful for accelerating clinical discovery and evaluating the effectiveness of drugs already in use.
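The study's matching algorithm is more elaborate (it matched on the course of disease progression), but the core idea, pairing each treated patient with controls whose pre-treatment progression rate is similar, can be sketched as follows. The slope values, tolerance, and number of matches per patient here are hypothetical, not taken from the paper:

```python
import numpy as np

def match_controls(treated_slopes, control_slopes, tol=0.1, k=3):
    """For each treated patient's pre-treatment progression slope
    (eg, ALSFRS points lost per month), pick up to k untreated
    controls whose slopes lie within tol of it."""
    control_slopes = np.asarray(control_slopes, dtype=float)
    matches = {}
    for i, slope in enumerate(treated_slopes):
        diffs = np.abs(control_slopes - slope)   # distance in slope space
        order = np.argsort(diffs)                # closest controls first
        matches[i] = [int(j) for j in order if diffs[j] <= tol][:k]
    return matches

matched = match_controls(treated_slopes=[-0.5, -1.2],
                         control_slopes=[-0.45, -1.0, -0.55, -1.25, -2.0])
```

Comparing outcomes within such matched sets, rather than against all non-users, is what reduces confounding by baseline disease severity in a non-randomized setting.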
Objective: To determine whether providing remote neurologic care into the homes of people with Parkinson disease (PD) is feasible, beneficial, and valuable.

Methods: In a 1-year randomized controlled trial, we compared usual care to usual care supplemented by 4 virtual visits via video conferencing from a remote specialist into patients' homes. Primary outcome measures were feasibility, as measured by the proportion who completed at least one virtual visit and the proportion of virtual visits completed on time; and efficacy, as measured by the change in the Parkinson's Disease Questionnaire-39, a quality of life scale. Secondary outcomes included quality of care, caregiver burden, and time and travel savings.

Results: A total of 927 individuals indicated interest, 210 were enrolled, and 195 were randomized. Participants had recently seen a specialist (73%) and were largely college-educated (73%) and white (96%). Ninety-five (98% of the intervention group) completed at least one virtual visit, and 91% of 388 virtual visits were completed. Quality of life did not improve in those receiving virtual house calls (0.3 points worse on a 100-point scale; 95% confidence interval [CI] −2.0 to 2.7 points; p = 0.78), nor did quality of care or caregiver burden. Each virtual house call saved patients a median of 88 minutes (95% CI 70–120; p < 0.0001) and 38 miles per visit (95% CI 36–56; p < 0.0001).

Conclusions: Providing remote neurologic care directly into the homes of people with PD was feasible and was neither more nor less efficacious than usual in-person care. Virtual house calls generated great interest and provided substantial convenience.

ClinicalTrials.gov identifier: NCT02038959.