Background
Various types of framing can influence risk perceptions, which may affect treatment decisions and adherence. One form of framing is the use of verbal terms to communicate the probabilities of treatment effects. We systematically reviewed the comparative effects of words versus numbers in communicating the probability of adverse effects to consumers in written health information.
Methods
Nine electronic databases were searched up to November 2012. Teams of two reviewers independently assessed studies. Inclusion criteria: randomised controlled trials; verbal versus numerical presentation; context: written consumer health information.
Results
Ten trials were included. Participants perceived probabilities presented in verbal terms as higher than those presented numerically: commonly used verbal descriptors systematically led to an overestimation of the absolute risk of adverse effects (range of means: 3% to 54%). Numbers also led to an overestimation of probabilities, but the overestimation was smaller (2% to 20%). The difference in means ranged from 3.8% to 45.9%, with all but one comparison showing significant results. Use of numbers increased satisfaction with the information (MD: 0.48 [CI: 0.32 to 0.63], p < 0.00001, I² = 0%) and the likelihood of medication use (MD for very common side effects: 1.45 [CI: 0.78 to 2.11], p = 0.0001, I² = 68%; MD for common side effects: 0.90 [CI: 0.61 to 1.19], p < 0.00001, I² = 1%; MD for rare side effects: 0.39 [CI: 0.02 to 0.76], p = 0.04, I² not applicable). Outcomes were measured on a 6-point Likert scale, suggesting small to moderate effects.
Conclusions
Verbal descriptors such as "common", "uncommon" and "rare" lead to an overestimation of the probability of adverse effects compared with numerical information when used as previously suggested by the European Commission. Numbers result in more accurate estimates and increase satisfaction and the likelihood of medication use. Our review suggests that providers of consumer health information should quantify treatment effects numerically. Future research should focus on the impact of personal and contextual factors, use representative samples or be conducted in real-life settings, measure behavioural outcomes, and address whether benefit information can be described verbally.
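The pooled mean differences reported above (MD with a confidence interval and an I² heterogeneity statistic) come from standard inverse-variance meta-analysis. As a minimal sketch of how such figures are derived, the following uses a fixed-effect inverse-variance model; the per-study estimates in the example are hypothetical and are not the review's data.

```python
import math

def pooled_md(mds, ses):
    """Fixed-effect inverse-variance pooling of mean differences.
    mds: per-study mean differences; ses: their standard errors."""
    weights = [1 / se**2 for se in ses]          # inverse-variance weights
    md = sum(w * m for w, m in zip(weights, mds)) / sum(weights)
    se = math.sqrt(1 / sum(weights))             # SE of the pooled estimate
    ci = (md - 1.96 * se, md + 1.96 * se)        # 95% confidence interval
    # Cochran's Q and I² quantify between-study heterogeneity
    q = sum(w * (m - md) ** 2 for w, m in zip(weights, mds))
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return md, ci, i2

# Illustrative (hypothetical) per-study estimates on a Likert-scale outcome
md, ci, i2 = pooled_md([0.45, 0.50], [0.10, 0.12])
```

With two studies this close in effect size, Q falls below its degrees of freedom and I² is truncated to 0%, mirroring the low-heterogeneity pattern reported for the satisfaction outcome.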
Background
Data extraction forms link systematic reviews with primary research and provide the foundation for appraising, analysing, summarising and interpreting a body of evidence. This makes their development, pilot testing and use a crucial part of the systematic review process. Several studies have shown that data extraction errors are frequent in systematic reviews, especially regarding outcome data.
Methods
We reviewed guidance on the development and pilot testing of data extraction forms and the data extraction process. We reviewed four types of sources: 1) methodological handbooks of systematic review organisations (SRO); 2) textbooks on conducting systematic reviews; 3) method documents from health technology assessment (HTA) agencies and 4) journal articles. HTA documents were retrieved in February 2019 and database searches conducted in December 2019. One author extracted the recommendations and a second author checked them for accuracy. Results are presented descriptively.
Results
Our analysis includes recommendations from 25 documents: 4 SRO handbooks, 11 textbooks, 5 HTA method documents and 5 journal articles. Across these sources the most common recommendations on form development are to use customised or adapted standardised extraction forms (14/25); provide detailed instructions on their use (10/25); ensure clear and consistent coding and response options (9/25); plan in advance which data are needed (9/25); obtain additional data if required (8/25); and link multiple reports of the same study (8/25). The most frequent recommendations on piloting extraction forms are that forms should be piloted on a sample of studies (18/25) and that data extractors should be trained in the use of the forms (7/25). The most frequent recommendations on data extraction are that extraction should be conducted by at least two people (17/25); that independent parallel extraction should be used (11/25); and that procedures to resolve disagreements between data extractors should be in place (14/25).
Conclusions
Overall, our results suggest a lack of comprehensiveness of recommendations. This may be particularly problematic for less experienced reviewers. Limitations of our method are the scoping nature of the review and that we did not analyse internal documents of health technology agencies.
Background
Evidence syntheses provide the basis for evidence-based decision making in healthcare. To judge the certainty of findings for a specific decision context, evidence syntheses should consider context suitability (i.e., generalizability, external validity, applicability or transferability). Our objective was to determine the status quo and to provide a comprehensive overview of existing methodological recommendations of Health Technology Assessment (HTA) and Systematic Review (SR) producing organizations on assessing the context suitability of evidence on the effectiveness of healthcare interventions. Additionally, we analyzed similarities and differences between the recommendations.
Methods
In this integrative review we performed a structured search for methods documents from evidence synthesis producing organizations that include recommendations on appraising context suitability in effectiveness assessments. Two reviewers independently selected documents according to predefined eligibility criteria. Data were extracted into standardized, piloted tables by one reviewer and verified by a second reviewer. We performed a thematic analysis to identify and summarize the main themes and categories regarding recommended context suitability assessments.
Results
We included 14 methods documents from 12 organizations in our synthesis. Assessment approaches are very heterogeneous, both regarding the general concepts (e.g., integration into the evidence synthesis preparation process) and the content of the assessments (e.g., assessment criteria).
Conclusion
Some heterogeneity seems justified by the need to tailor the assessment to different settings and medical areas. However, most differences were inexplicable. More harmonization is desirable and appears possible.
Background
Overviews of systematic reviews (overviews) attempt to systematically retrieve and summarize the results of multiple systematic reviews (SRs) for a given condition or public health problem. Two prior descriptive analyses of overviews found substantial variation in the methodological approaches used in overviews and deficiencies in the reporting of key methodological steps. Since then, new methods have been developed, so it is timely to update the prior descriptive analyses. The objectives are to (1) investigate the epidemiological, descriptive, and reporting characteristics of a random sample of 100 overviews published from 2012 to 2016 and (2) compare these recently published overviews (2012–2016) to those published prior to 2012 (based on the prior descriptive analyses).
Methods
MEDLINE, EMBASE, and CDSR will be searched for overviews published 2012–2016, using a validated search filter for overviews. Only overviews written in English will be included. All titles and abstracts will be screened by one review author; those deemed not relevant will be verified by a second person for exclusion. Full texts will be assessed for inclusion by two reviewers independently. Of those deemed relevant, a random sample of 100 overviews will be selected for inclusion. Data extraction will be performed either by one reviewer with verification by a second reviewer or by one reviewer only, depending on the complexity of the item. Discrepancies at any stage will be resolved by consensus or by consulting a third person. Data will be extracted on the epidemiological, descriptive, and reporting characteristics of each overview and analyzed descriptively. When data are available for both time points (up to 2011 vs. 2012–2016), we will compare characteristics by calculating risk ratios or applying the Mann-Whitney test.
Discussion
Overviews are becoming increasingly valuable evidence syntheses, and the number of published overviews is increasing. However, previous analyses found limitations in the conduct and reporting of overviews. This update based on a recent sample of overviews will show whether this has changed, while also identifying areas for further improvement.
Systematic review registration
The review will not be registered in PROSPERO as it does not meet the eligibility criterion of dealing with health-related outcomes.
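The planned comparison of dichotomous characteristics between the two time periods uses risk ratios. As a minimal sketch, the following computes a risk ratio with a 95% confidence interval via the standard log-normal approximation; the event counts are hypothetical, purely for illustration (e.g., how many overviews in each period report a given methods step).

```python
import math

def risk_ratio(a_events, a_total, b_events, b_total):
    """Risk ratio comparing two proportions, with a 95% CI
    from the log-normal approximation for ln(RR)."""
    rr = (a_events / a_total) / (b_events / b_total)
    # SE of ln(RR): sqrt(1/a - 1/n1 + 1/b - 1/n2)
    se_log = math.sqrt(1 / a_events - 1 / a_total
                       + 1 / b_events - 1 / b_total)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical counts: 60/100 recent overviews vs. 40/100 older
# overviews reporting a given characteristic
rr, ci = risk_ratio(60, 100, 40, 100)
```

A confidence interval excluding 1 would indicate a statistically significant change between the two periods; ordinal or continuous characteristics fall back to the Mann-Whitney test named in the protocol.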
Aims
We investigated the involvement of first-time mothers who had a planned Caesarean section in the decision to have a Caesarean section, taking into account their different educational levels.
Subjects and methods
A self-assessment questionnaire was sent in July 2005 to women who had undergone a Caesarean section in 2004. Participants were 2,685 members of a statutory health insurance fund who had given birth by Caesarean section (response rate: 48.0%). Included were primiparae with a planned Caesarean section (n=352).
Results
The women in this cross-sectional study felt well informed about the procedure of a Caesarean section but not about its consequences. They used several sources of information and were most satisfied with the information provided by doctors and midwives. Twenty percent of the women in this study did not have a midwife. No major differences were observed between educational levels.
Conclusion
Although most women were satisfied with their decision, they felt that they did not receive enough information about the consequences of a Caesarean section. This information need could be met by further involving midwives in maternity care.