Background: General consumers can now easily access drug information and quickly check for potential drug-drug interactions (PDDIs) through mobile health (mHealth) apps. With an aging population in Canada, more people have chronic diseases and comorbidities, leading to an increasing number of medications. Using mHealth apps to check for PDDIs can help ensure patient safety and empowerment.

Objective: The aim of this study was to review the characteristics and quality of publicly available mHealth apps that check for PDDIs.

Methods: The Apple App Store and Google Play were searched to identify apps with PDDI functionality. The apps’ general and feature characteristics were extracted, and the Mobile App Rating Scale (MARS) was used to assess quality.

Results: A total of 23 apps were included in the review: 12 from the Apple App Store and 11 from Google Play. Only 5 of these were paid apps, with an average price of CAD $7.19. The mean MARS score was 3.23 out of 5 (interquartile range 1.34). The mean MARS scores for apps from Google Play and the Apple App Store were not statistically different (P=.84). The information dimension received the highest score (3.63), whereas the engagement dimension received the lowest (2.75). The total number of features per app, average rating, and price were significantly associated with the total MARS score.

Conclusions: Some apps provided accurate and comprehensive information about potential adverse drug effects from PDDIs. Given the potentially severe consequences of incorrect drug information, oversight is needed to eliminate low-quality and potentially harmful apps. Because managing PDDIs is complex in the absence of complete information, secondary features such as medication reminders, refill reminders, medication history tracking, and pill identification could enhance the effectiveness of PDDI apps.
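The MARS scoring described above can be sketched as follows: each dimension (engagement, functionality, aesthetics, information) is rated on a 1-5 scale, and the app-quality score is the mean of the dimension means. The item ratings below are hypothetical, not data from the study.

```python
from statistics import mean

# Hypothetical item ratings for one app; all values are illustrative only.
ratings = {
    "engagement":    [3, 2, 3, 3, 3],
    "functionality": [4, 3, 4, 4],
    "aesthetics":    [3, 3, 4],
    "information":   [4, 4, 3, 4],
}

def mars_score(ratings):
    """App-quality score: mean of the objective dimension means (1-5 scale)."""
    return mean(mean(items) for items in ratings.values())

print(round(mars_score(ratings), 2))  # → 3.41
```

Averaging per dimension first, then across dimensions, keeps a dimension with many items (such as information) from dominating the overall score.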
Background: Nursing notes have not been widely used in prediction models for clinical outcomes, despite containing rich information. Advances in natural language processing have made it possible to extract information from large-scale unstructured data such as nursing notes. This study extracted the sentiment (impressions and attitudes) of nurses and examined how sentiment relates to 30-day mortality and survival.

Methods: This study applied a sentiment analysis algorithm to nursing notes extracted from MIMIC-III, a public intensive care unit (ICU) database. A multiple logistic regression model was fitted to correlate measured sentiment with 30-day mortality while controlling for gender, type of ICU, and SAPS-II score. The association between measured sentiment and 30-day mortality was further examined by assessing the predictive performance of the sentiment score as a classifier feature, and by a survival analysis across levels of measured sentiment.

Results: Nursing notes from 27,477 ICU patients, with an overall 30-day mortality of 11.02%, were extracted. In the presence of known predictors of 30-day mortality, mean sentiment polarity was a highly significant predictor in a multiple logistic regression model (adjusted OR = 0.4626, p < 0.001, 95% CI: [0.4244, 0.5041]) and led to improved predictive accuracy (AUROC = 0.8189 versus 0.8092, 95% BCI of the difference: [0.0070, 0.0126]). Kaplan-Meier survival curves showed that mean sentiment polarity quartiles were positively correlated with patient survival (log-rank test: p < 0.001).

Conclusions: This study showed that quantitative measures derived from unstructured clinical notes, such as clinician sentiment, correlate with 30-day mortality and survival, and thus can serve as predictors of patient outcomes in the ICU. Further research is warranted to study and make use of the wealth of data that clinical notes have to offer.
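The AUROC comparison reported above (0.8189 versus 0.8092 after adding the sentiment feature) can be illustrated with a minimal sketch: AUROC equals the Mann-Whitney rank statistic, the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. The risk scores and labels below are toy values, not MIMIC-III data.

```python
def auroc(scores, labels):
    """AUROC as the Mann-Whitney statistic: P(random positive outranks random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Each tie counts as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels         = [1, 1, 0, 0, 0]                 # 1 = died within 30 days (toy labels)
risk_baseline  = [0.90, 0.40, 0.50, 0.30, 0.20]  # hypothetical risks, known predictors only
risk_with_sent = [0.90, 0.60, 0.50, 0.30, 0.20]  # hypothetical risks with sentiment added

print(round(auroc(risk_baseline, labels), 3))   # → 0.833
print(round(auroc(risk_with_sent, labels), 3))  # → 1.0
```

In the toy example, adding the sentiment feature lifts the second positive case above the highest-scored negative case, which is exactly the kind of re-ranking that raises AUROC.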
Background: Public health interventions have aimed to reduce barriers to health care access by extending the opening hours of health care facilities. However, the impact of opening hours from the patient’s perspective is not well understood.

Objective: This study aims to investigate the relationship between the temporal accessibility of health care services and how patients rate providers on Yelp, an online review website that is popular in the United States. Crowdsourced open Internet data, such as Yelp reviews, can help circumvent the traditional survey method.

Methods: Using Yelp’s limited academic dataset, this study examined the pattern of visits to health care providers and performed a secondary analysis of the association between patient rating (measured by Yelp rating) and temporal accessibility of health care services (measured by opening hours) using ordinal logistic regression models. Other covariates included whether an appointment was required, the type of health care service, the region of the provider, the number of reviews the provider had received, the number of nearby competitors, the mean rating of competitors, and the standard deviation of competitors’ ratings.

Results: Among the 2085 health care service providers identified, opening hours during certain periods, the type of health care service, and the variability of competitors’ ratings were associated with patient rating. Most visits to providers took place during normal working hours (9 AM-5 PM) from Sunday to Thursday, and the fewest on Saturday. A model fitted to the entire sample showed that increasing hours during normal working hours on Monday (OR 0.926, 95% CI 0.880-0.973, P=0.03), Saturday (OR 0.897, 95% CI 0.860-0.935, P<0.001), and Sunday (OR 0.904, 95% CI 0.841-0.970, P=0.005), and outside normal working hours on Friday (OR 0.872, 95% CI 0.760-0.998, P=0.048), was associated with receiving lower ratings. In contrast, increasing hours outside normal working hours on Sunday was associated with receiving higher ratings (OR 1.400, 95% CI 1.036-1.924, P=0.03). Patient ratings also differed among health care service types, but not by region or appointment requirement.

Conclusions: This study shows that public health interventions, especially those involving opening hours, could use crowdsourced open Internet data to enhance the evidence base for decision making and evaluation. It illustrates one way Yelp data can be used to understand patient experiences with health care services, making a case for future research exploring online reviews as a health dataset.
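In an ordinal (proportional-odds) logistic model like the one above, an odds ratio below 1 per additional opening hour means lower odds of a higher rating category, and the effect compounds multiplicatively across hours. A small arithmetic sketch, using the Monday working-hours OR from the results (the 3-hour extension is hypothetical):

```python
import math

or_per_hour = 0.926  # Monday, normal working hours: OR per additional hour open
extra_hours = 3      # hypothetical scenario: extend Monday opening by three hours

# ORs compound multiplicatively, which is a constant shift on the log-odds scale.
combined_or = or_per_hour ** extra_hours
log_odds_shift = extra_hours * math.log(or_per_hour)

print(round(combined_or, 3))  # → 0.794
```

So under this model, three extra Monday hours would cut the odds of a higher rating category by roughly 21%, all else held equal.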