Objective To synthesise the findings from individual qualitative studies on patients' understanding and experiences of hypertension and drug taking; to investigate whether views differ internationally by culture or ethnic group, and whether the research could inform interventions to improve adherence.
Design Systematic review and narrative synthesis of qualitative research using the 2006 UK Economic and Social Research Council research methods programme guidance.
Data sources Medline, Embase, the British Nursing Index, Social Policy and Practice, and PsycInfo from inception to October 2011.
Study selection Qualitative interviews or focus groups among people with uncomplicated hypertension (studies principally in people with diabetes, established cardiovascular disease, or pregnancy related hypertension were excluded).
Results 59 papers reporting on 53 qualitative studies were included in the synthesis. These studies came from 16 countries (United States, United Kingdom, Brazil, Sweden, Canada, New Zealand, Denmark, Finland, Ghana, Iran, Israel, Netherlands, South Korea, Spain, Tanzania, and Thailand). A large proportion of participants thought hypertension was principally caused by stress and produced symptoms, particularly headache, dizziness, and sweating. Many participants intentionally reduced or stopped treatment without consulting their doctor. Participants commonly perceived that their blood pressure improved when symptoms abated or when they were not stressed, and that treatment was not needed at these times. Participants disliked treatment and its side effects and feared addiction. These findings were consistent across countries and ethnic groups.
Participants also reported various external factors that prevented adherence, including being unable to find time to take the drugs or to see the doctor; having insufficient money to pay for treatment; the cost of appointments and healthy food; a lack of health insurance; and forgetfulness.
Conclusions Non-adherence to hypertension treatment often resulted from patients' understanding of the causes and effects of hypertension, particularly their reliance on the presence of stress or symptoms to determine whether blood pressure was raised. These beliefs were remarkably similar across ethnic and geographical groups; calls for culturally specific education for individual ethnic groups may therefore not be justified. To improve adherence, clinicians and educational interventions must better understand and engage with patients' ideas about causality, experiences of symptoms, and concerns about drug side effects.
Introduction
Hypertension is a major health problem in both developed and developing countries and is estimated to cause more than 13% of deaths annually.1 Despite national and international guidelines and initiatives for hypertension, population based studies have found that around two thirds of people with hypertension are either untreated or inadequately controlled, including a substantial number who remain undiagnosed.2 3 4 Among those with a diagnosis of...
Technologies and methods to speed up the production of systematic reviews by reducing the manual labour involved have recently emerged. Automation has been proposed or used to expedite most steps of the systematic review process, including search, screening, and data extraction. However, how these technologies work in practice and when (and when not) to use them is often not clear to practitioners. In this practical guide, we provide an overview of current machine learning methods that have been proposed to expedite evidence synthesis. We also offer guidance on which of these are ready for use, their strengths and weaknesses, and how a systematic review team might go about using them in practice.
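One of the most mature of these applications is screening prioritisation: a lightweight text classifier is trained on the reviewers' earlier include/exclude decisions and used to push likely-relevant records to the top of the screening queue. The sketch below is a minimal, hypothetical illustration of that idea using a smoothed log-odds term model in plain Python; it is not the implementation of any specific published tool.

```python
from collections import Counter
import math

def train_relevance_model(labelled):
    """Fit a tiny Naive-Bayes-style term model from (text, is_relevant) pairs."""
    rel, irr = Counter(), Counter()
    n_rel = n_irr = 0
    for text, is_relevant in labelled:
        tokens = text.lower().split()
        if is_relevant:
            rel.update(tokens)
            n_rel += 1
        else:
            irr.update(tokens)
            n_irr += 1
    vocab_size = len(set(rel) | set(irr))

    def score(text):
        # Log-odds of relevance with add-one smoothing over the vocabulary.
        s = math.log((n_rel + 1) / (n_irr + 1))
        for t in text.lower().split():
            s += math.log((rel[t] + 1) / (sum(rel.values()) + vocab_size))
            s -= math.log((irr[t] + 1) / (sum(irr.values()) + vocab_size))
        return s

    return score

def prioritise(score, unscreened):
    """Order unscreened records so likely-relevant ones are screened first."""
    return sorted(unscreened, key=score, reverse=True)
```

In practice the classifier is retrained as screening proceeds, so the ranking improves with each batch of human decisions.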
The latest international evidence on socio-economic status (SES) and stroke shows that stroke disproportionately affects not only low- and middle-income countries but also socio-economically deprived populations within countries of all income categories. These disparities are found at every stage: from stroke prevention through acute care and rehabilitation to long-term outcomes. Higher average levels of 'traditional' risk factors (hypertension, hyperlipidaemia, excess alcohol intake, smoking, obesity, sedentary lifestyle) in populations with lower SES appear to explain around half of the effect. In many countries there is evidence that people with lower SES are less likely to receive good quality acute hospital and rehabilitation care. For practice, better implementation of well-established treatments, namely management of traditional risk factors and equitable access to high quality acute stroke care and rehabilitation, seems likely to reduce inequality substantially. Overcoming barriers and adapting evidence-based interventions to different countries and healthcare settings remains a research priority.
We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the ‘PICO’ elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.
We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions.
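The weighting scheme described above can be sketched in a few lines: each sentence vector receives a rationale probability from a scorer, and the document representation is the probability-weighted combination of its sentence vectors. In this illustrative sketch the scorer is an assumed linear layer plus sigmoid rather than the paper's convolutional sentence model, and the weights are normalised for readability; it shows the aggregation idea only, not the published architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rationale_weight(sent_vec, w, b):
    """Estimated probability that a sentence is a rationale (linear scorer + sigmoid)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, sent_vec)) + b)

def document_vector(sent_vecs, w, b):
    """Combine sentence vectors, each scaled by its rationale probability."""
    weights = [rationale_weight(v, w, b) for v in sent_vecs]
    total = sum(weights)
    dim = len(sent_vecs[0])
    return [sum(wt * v[i] for wt, v in zip(weights, sent_vecs)) / total
            for i in range(dim)]
```

With an untrained (all-zero) scorer every sentence gets weight 0.5 and the document vector reduces to the mean of the sentence vectors; training sharpens the weights toward the rationale sentences.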
Objective To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments.
Methods We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR.
Results By retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% of ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, −3.2% to +10.9%). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR).
Conclusion Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses.
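The "top3 recall" evaluation above can be made concrete: take the three sentences the model scores highest for a bias domain, and count the document as a hit if any human-marked supporting sentence is among them. A minimal sketch of that metric, with hypothetical function names, in plain Python:

```python
def top_k_sentences(sentences, scores, k=3):
    """Return the k sentences the model scores most likely to support the RoB judgment."""
    ranked = sorted(zip(scores, sentences), reverse=True)
    return [s for _, s in ranked[:k]]

def top_k_hit(true_support, sentences, scores, k=3):
    """1.0 if any human-marked supporting sentence appears in the model's top k."""
    top = set(top_k_sentences(sentences, scores, k))
    return 1.0 if top & set(true_support) else 0.0
```

Averaging `top_k_hit` over a labelled collection of documents gives the top-k recall figure reported in evaluations of this kind.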
While it is important for the evidence supporting practice guidelines to be current, that is often not the case. The advent of living systematic reviews has made the concept of "living guidelines" realistic, with the promise to provide timely, up-to-date and high-quality guidance to target users. We define living guidelines as an optimization of the guideline development process to allow updating individual recommendations as soon as new relevant evidence becomes available. A major implication of that definition is that the unit of update is the individual recommendation and not the whole guideline. We then discuss when living guidelines are appropriate, the workflows required to support them, the collaboration between living systematic reviews and living guideline teams, the thresholds for changing recommendations, and potential approaches to publication and dissemination. The success and sustainability of the concept of living guideline will depend on those of its major pillar, the living systematic review. We conclude that guideline developers should both experiment with and research the process of living guidelines.
Machine learning (ML) algorithms have proven highly accurate for identifying Randomized Controlled Trials (RCTs) but are not used much in practice, in part because the best way to make use of the technology in a typical workflow is unclear. In this work, we evaluate ML models for RCT classification (support vector machines, convolutional neural networks, and ensemble approaches). We trained and optimized support vector machine and convolutional neural network models on the titles and abstracts of the Cochrane Crowd RCT set. We evaluated the models on an external dataset (Clinical Hedges), allowing direct comparison with traditional database search filters. We estimated area under receiver operating characteristics (AUROC) using the Clinical Hedges dataset. We demonstrate that ML approaches better discriminate between RCTs and non-RCTs than widely used traditional database search filters at all sensitivity levels; our best-performing model also achieved the best results to date for ML in this task (AUROC 0.987, 95% CI, 0.984–0.989). We provide practical guidance on the role of ML in (1) systematic reviews (high-sensitivity strategies) and (2) rapid reviews and clinical question answering (high-precision strategies) together with recommended probability cutoffs for each use case. Finally, we provide open-source software to enable these approaches to be used in practice.
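The two recommended modes of use above come down to threshold selection on a labelled validation set: for systematic reviews, pick the cutoff that preserves a target sensitivity (recall of true RCTs); for rapid reviews and question answering, pick the lowest cutoff that reaches a target precision. The sketch below illustrates both selection rules; the function names and target values are illustrative assumptions, not the paper's published cutoffs.

```python
import math

def cutoff_for_sensitivity(probs, labels, target=0.99):
    """Largest cutoff that still classifies at least `target` of true RCTs as RCTs."""
    positives = sorted((p for p, y in zip(probs, labels) if y), reverse=True)
    n_keep = math.ceil(target * len(positives))
    return positives[n_keep - 1]

def cutoff_for_precision(probs, labels, target=0.9):
    """Smallest cutoff at which precision among records scoring >= cutoff meets `target`."""
    for cut in sorted(set(probs)):
        kept = [y for p, y in zip(probs, labels) if p >= cut]
        if kept and sum(kept) / len(kept) >= target:
            return cut
    return max(probs)
```

A review team would calibrate the chosen cutoff on held-out labelled data, then apply it to the predicted probabilities for new, unscreened records.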