Despite growing interest in remote patient monitoring, limited evidence exists to substantiate claims of its ability to improve outcomes. Our aim was to evaluate randomized controlled trials (RCTs) that assess the effects of using wearable biosensors (e.g., activity trackers) for remote patient monitoring on clinical outcomes. We expanded upon prior reviews by assessing effectiveness across indications and presenting quantitative summary data. We searched PubMed for articles published from January 2000 to October 2016, reviewed 4,348 titles, and selected 777 for abstract review and 64 for full-text review. A total of 27 RCTs from 13 different countries focused on a range of clinical outcomes and were retained for final analysis; of these, we identified 16 high-quality studies. We performed a difference-in-differences random-effects meta-analysis on select outcomes, weighting studies by sample size and constructing 95% confidence intervals (CI) around point estimates. Difference-in-differences point estimation revealed no statistically significant impact of remote patient monitoring on any of six reported clinical outcomes: body mass index (−0.73; 95% CI: −1.84, 0.38), weight (−1.29; −3.06, 0.48), waist circumference (−2.41; −5.16, 0.34), body fat percentage (0.11; −1.56, 1.34), systolic blood pressure (−2.62; −5.31, 0.06), and diastolic blood pressure (−0.99; −2.73, 0.74). Studies were highly heterogeneous in their design, device type, and outcomes. Interventions based on health behavior models and personalized coaching were most successful. We found substantial gaps in the evidence base that should be considered before implementing remote patient monitoring in the clinical setting.
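The sample-size-weighted pooling of study-level difference-in-differences estimates described above can be sketched as follows. This is a minimal illustration, not the review's analysis: the study estimates and sample sizes are made-up placeholders, and the dispersion-based standard error is a rough stand-in for a full random-effects variance model.

```python
import math

# Hypothetical study-level DiD estimates for one outcome
# (e.g., systolic blood pressure): (did_estimate, sample_size).
# These numbers are illustrative, not data from the review.
studies = [(-3.1, 120), (-1.8, 85), (-4.0, 200), (0.5, 60)]

def pooled_did(studies):
    """Sample-size-weighted pooled DiD estimate with a normal-theory 95% CI."""
    total_n = sum(n for _, n in studies)
    weights = [n / total_n for _, n in studies]
    pooled = sum(w * d for w, (d, _) in zip(weights, studies))
    # Weighted dispersion of study estimates around the pooled mean,
    # used here as a crude between-study variance term.
    var = sum(w * (d - pooled) ** 2 for w, (d, _) in zip(weights, studies))
    se = math.sqrt(var / len(studies))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

estimate, (lo, hi) = pooled_did(studies)
print(f"pooled DiD: {estimate:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

As in the abstract's results, an effect is read as non-significant when the 95% CI spans zero.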
Objectives: Therapeutic virtual reality (VR) has emerged as an effective, drug-free tool for pain management, but there is a lack of randomized, controlled data evaluating its effectiveness in hospitalized patients. We sought to measure the impact of on-demand VR versus “health and wellness” television programming on pain in hospitalized patients. Methods: We performed a prospective, randomized, comparative effectiveness trial in hospitalized patients with an average pain score of ≥3 out of 10 points. Patients in the experimental group received a library of 21 VR experiences administered using the Samsung Gear Oculus headset; control patients viewed specialized television programming to promote health and wellness. Clinical staff followed usual care; study interventions were not protocolized. The primary outcome was patient-reported pain using a numeric rating scale, as recorded by nursing staff during usual care. Pre- and post-intervention pain scores were compared immediately after initial treatment and after 48 and 72 hours. Results: There were 120 subjects (61 VR; 59 control). The mean within-subject difference in immediate pre- and post-intervention pain scores was larger in the VR group (−1.72 points; SD 3.56) than in the control group (−0.46 points; SD 3.01); this difference was significant in favor of VR (P < .04). When limited to the subgroup of patients with severe baseline pain (≥7 points), the effect of VR was more pronounced versus control (−3.04, SD 3.75 vs. −0.93, SD 2.16 points; P = .02). In regression analyses adjusting for pre-intervention pain, time, age, gender, and type of pain, VR yielded a 0.59-point (P = .03) and 0.56-point (P = .04) incremental reduction in pain versus control during the 48- and 72-hour post-intervention periods, respectively. Conclusions: VR significantly reduced pain versus an active control condition in hospitalized patients and was most effective for severe pain. Future trials should evaluate standardized order sets that interpose VR as an early non-drug option for analgesia.
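The trial's primary comparison, the mean within-subject change in pain score in each arm, reduces to simple per-patient arithmetic. A minimal sketch, with made-up pain scores rather than the trial's data:

```python
# Hypothetical pre/post pain scores (0-10 numeric rating scale)
# for four patients per arm; illustrative only.
vr_pre, vr_post = [7, 5, 8, 6], [5, 4, 5, 6]
tv_pre, tv_post = [7, 6, 8, 5], [7, 5, 8, 5]

def mean_within_subject_change(pre, post):
    """Average of per-patient (post - pre) pain score differences."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

print(mean_within_subject_change(vr_pre, vr_post))  # more negative = larger relief
print(mean_within_subject_change(tv_pre, tv_post))
```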
Social media reveals a dynamic range of themes governing AS patients' experience and choice with biologics. The complexity of selecting among biologics and navigating their risk-benefit profiles suggests merit in creating online tailored decision-tools to support patients' decision-making with AS biologic therapies.
Background: Health care consumers are increasingly using online ratings to select providers, but differences in the distribution of scores across specialties and skew of the data have the potential to mislead consumers about the interpretation of ratings. Objective: The objective of our study was to determine whether distributions of consumer ratings differ across specialties and to provide specialty-specific data to assist consumers and clinicians in interpreting ratings. Methods: We sampled 212,933 health care providers rated on the Healthgrades consumer ratings website, representing 29 medical specialties (n=128,678), 15 surgical specialties (n=72,531), and 6 allied health (nonmedical, nonnursing) professions (n=11,724) in the United States. We created boxplots depicting distributions and tested the normality of overall patient satisfaction scores. We then determined the specialty-specific percentile rank for scores across groupings of specialties and individual specialties. Results: Allied health providers had higher median overall satisfaction scores (4.5, interquartile range [IQR] 4.0-5.0) than physicians in medical specialties (4.0, IQR 3.3-4.5) and surgical specialties (4.2, IQR 3.6-4.6, P<.001). Overall satisfaction scores were highly left skewed (normal between –0.5 and 0.5) for all specialties, but skewness was greatest among allied health providers (–1.23, 95% CI –1.280 to –1.181), followed by surgical (–0.77, 95% CI –0.787 to –0.755) and medical specialties (–0.64, 95% CI –0.648 to –0.628). As a result of the skewness, the percentages of overall satisfaction scores less than 4 were only 23% for allied health, 37% for surgical specialties, and 50% for medical specialties.
Percentile ranks for overall satisfaction scores varied across specialties; percentile ranks for scores of 2 (0.7%, 2.9%, 0.8%), 3 (5.8%, 16.6%, 8.1%), 4 (23.0%, 50.3%, 37.3%), and 5 (63.9%, 89.5%, 86.8%) differed for allied health, medical specialties, and surgical specialties, respectively. Conclusions: Online consumer ratings of health care providers are highly left skewed, fall within narrow ranges, and differ by specialty, which precludes meaningful interpretation by health care consumers. Specialty-specific percentile ranks may help consumers to more meaningfully assess online physician ratings.
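The two descriptive statistics this abstract turns on, distribution skewness and a score's percentile rank within a specialty, can be sketched in a few lines. The ratings below are illustrative placeholders, not Healthgrades data:

```python
# A hypothetical specialty's overall satisfaction scores (1-5 scale).
ratings = [5.0, 4.8, 4.5, 4.5, 4.2, 4.0, 3.5, 2.5, 1.5]

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness (negative = left-skewed)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return (n * (n - 1)) ** 0.5 / (n - 2) * (m3 / m2 ** 1.5)

def percentile_rank(xs, score):
    """Share of ratings strictly below `score`, as a percentage."""
    return 100 * sum(1 for x in xs if x < score) / len(xs)

print(f"skewness: {skewness(ratings):.2f}")          # negative for this sample
print(f"percentile rank of 4.0: {percentile_rank(ratings, 4.0):.1f}%")
```

A left-skewed (negative-skew) distribution like this one is why a seemingly ordinary score of 4 can sit at a low percentile rank.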
Objective The number of therapies for axial spondyloarthritis (axSpA) is increasing. Thus, it has become more challenging for patients and physicians to navigate the risk‐benefit profiles of the various treatment options. In this study, we used conjoint analysis—a form of trade‐off analysis that elucidates how people make complex decisions by balancing competing factors—to examine patient decision‐making surrounding medication options for axSpA. Methods We conducted an adaptive choice‐based conjoint analysis survey for patients with axSpA to assess the relative importance of medication attributes (e.g., chance of symptom improvement, risk of side effects, route of administration, etc.) in their decision‐making. We also performed logistic regression to explore whether patient demographics and disease characteristics predicted decision‐making. Results Overall, 397 patients with axSpA completed the conjoint analysis survey. Patients prioritized medication efficacy (importance score 26.8%), cost (26.3%), and route of administration (13.9%) as most important in their decision‐making. These were followed by risk of lymphoma (9.5%), dosing frequency (7.2%), risk of serious infection (6.0%), tolerability of side effects (5.3%), and clinic visit and laboratory test frequency (4.8%). In regression analyses, there were few significant associations between patients’ treatment preferences and sociodemographic and axSpA characteristics. Conclusions Treatment decision‐making in axSpA is highly individualized, and demographics and baseline disease characteristics are poor predictors of individual preferences. This calls for the development of online shared decision‐making tools for patients and providers, with the goal of selecting a treatment that is consistent with patients’ preferences.
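Attribute "importance scores" like those reported above are conventionally derived in conjoint analysis as each attribute's share of the total part-worth utility range. A minimal sketch, with invented part-worth utilities rather than the study's survey data:

```python
# Hypothetical per-level part-worth utilities for a few attributes;
# illustrative only, not estimates from the axSpA survey.
part_worths = {
    "efficacy": [-1.2, 0.1, 1.5],
    "cost": [-1.4, 0.3, 1.2],
    "route of administration": [-0.6, 0.7],
    "risk of lymphoma": [-0.5, 0.4],
}

def importance_scores(part_worths):
    """Normalize each attribute's utility range (max - min) to a percentage."""
    ranges = {a: max(u) - min(u) for a, u in part_worths.items()}
    total = sum(ranges.values())
    return {a: 100 * r / total for a, r in ranges.items()}

for attr, score in importance_scores(part_worths).items():
    print(f"{attr}: {score:.1f}%")
```

The scores sum to 100%, which is why the abstract can rank attributes directly by their percentages.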