Loss to follow-up can introduce bias into research, making it difficult to develop inclusive evidence-based health policies and practice guidelines. We aimed to deepen understanding of the reasons why participants leave or remain in longitudinal health studies. We interviewed 59 researchers and current and former research participants in six focus groups (n = 55) or interviews (n = 4) at three study centers in a large academic research institution. We used minimally structured interview guides and inductive thematic analysis to explore participant-level, study-level, and contextual barriers to and facilitators of participation. Four main themes emerged: transportation, incentives and motivation, caregiver concerns, and the social and physical environment. The themes shared crosscutting issues involving funding, flexibility, and relationships between researchers and research participants. Study-level and contextual factors appear to interact with participant characteristics, particularly socioeconomic status and disease severity, to affect participant retention. Participants' characteristics do not seem to be the main cause of study dropout. Researchers and funders might be able to address contextual and study factors in ways that reduce barriers to participation.
Background. Strong opinions for and against the use of systematic reviews to inform policymaking have been published in the medical literature. The purpose of this paper was to examine whether funding sources and authors' financial conflicts of interest were associated with whether an opinion article was supportive or critical of the use of systematic reviews for policymaking. We also examined the nature of the arguments within each article, the types of disclosures present, and whether these articles are cited in the academic literature.

Methods. We searched for articles that expressed opinions about the use of systematic reviews for policymaking, included those that did, and categorized each article as supportive or critical of such use. We extracted all arguments regarding the use of systematic reviews from each article and inductively coded each as an internal- or external-validity argument; categorized disclosed funding sources, conflicts of interest, and article types; and systematically searched for undisclosed financial ties. We counted the number of times each article had been cited in Web of Science and report descriptive statistics.

Results. Articles that were critical of the use of systematic reviews for policymaking (n = 25) had disclosed or undisclosed industry ties 2.3 times more often than articles that were supportive of such use (n = 34). Editorials, comments, letters, and perspectives lacked published disclosures nearly twice as often as other types of articles (60% vs 33%) and were cited less frequently in the academic literature than other article types (median number of citations, 5 vs 19).

Conclusions. It is important to consider whether an article has industry ties when evaluating the strength of its argument for or against the use of systematic reviews for policymaking.
We found that journal conflict of interest disclosures are often inadequate, particularly for editorials, comments, letters, and perspectives, and that these articles are nevertheless cited as evidence in the academic literature. Our results further suggest the need for more consistent and complete disclosure across all article types.
Recognizing bias in health research is crucial for evidence-based decision making. We worked with eight community groups to develop materials for nine modular, individualized critical appraisal workshops we conducted with 102 consumers (four workshops), 43 healthcare providers (three workshops), and 33 journalists (two workshops) in California. We presented workshops using a “cycle of bias” framework, and developed a toolbox of presentations, problem-based small group sessions, and skill-building materials to improve participants’ ability to evaluate research for financial and other conflicts of interest, bias, validity, and applicability. Participant feedback indicated that the adaptability of the toolbox and our focus on bias were critical elements in the success of our workshops.
Objectives. We sought to determine whether failure to locate hard-to-reach respondents in longitudinal studies causes biased and inaccurate study results.

Methods. We performed a nonresponse simulation in a survey of 498 low-income women who received cash aid in a California county. Our simulation was based on a previously published analysis that found that women without children who applied for General Assistance experienced more violence than did women with children who applied for Temporary Assistance to Needy Families. We compared hard-to-reach respondents, whom we reinterviewed only after extended follow-up effort 12 months after baseline, with other respondents. We then removed these hard-to-reach respondents from our analysis.

Results. Other than having a greater prevalence of substance dependence (14% vs 6%), there were no significant differences between hard- and easy-to-reach respondents. However, excluding the hard to reach would have decreased response rates from 89% to 71% and nullified the findings, a result that did not stem primarily from reduced statistical power.

Conclusions. The effects of failure to retain hard-to-reach respondents are not predictable from respondent characteristics. Retention of these respondents should be a priority in public health research.

Respondents who participate in all phases of longitudinal studies are likely to differ from those who are lost to follow-up. [1][2][3][4][5] Differential attrition, or nonrandom loss of respondents, can bias a study's findings by changing the composition of the sample so that it no longer represents the study population, especially when response rates are low and there are large differences between responders and nonresponders. 6 Attrition also reduces sample sizes, increasing the risk of type 2 error by decreasing statistical power to detect effects. 7

Requests for reprints should be sent to Donna H. Odierna, Department of Clinical Pharmacy, University of California, San Francisco, 3333 California St, Suite 420, San Francisco, CA 94118 (donna.odierna@ucsf.edu or dodierna@gmail.com).

Contributors. D. H. Odierna developed the concept for the article, carried out the statistical analysis, and wrote versions of the article. L. A. Schmidt contributed substantially to the overall themes, writing, and editing at each stage; she also originated the Welfare Client Longitudinal Study (WCLS) and supervised all aspects of its implementation.

Human Participant Protection. The survey design, survey instrument, and consent documents were approved by the institutional review boards at the University of California, San Francisco, and the Public Health Institute. Participants were protected by a federal Certificate of Confidentiality from the US Department of Health and Human Services. This secondary analysis was conducted after appropriate review and exemption by the institutional review board at the University of California, Berkeley.

Am J Public Health. Author manuscript; available in PMC 2010 August ...
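The nonresponse-simulation approach described above — comparing hard- and easy-to-reach respondents, then re-running the analysis with the hard to reach excluded — can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the group labels, the outcome rates, and the share of hard-to-reach respondents are assumed for demonstration and are not the study's actual data or result.

```python
import random

random.seed(0)

# Hypothetical cohort of 498 respondents. Each has an aid group
# (GA = applied without children, TANF = applied with children),
# a binary violence outcome, and a hard-to-reach flag.
def make_respondent():
    group = random.choice(["GA", "TANF"])
    base = 0.45 if group == "GA" else 0.30   # assumed outcome rates, illustrative only
    hard = random.random() < 0.20            # assume ~20% need extended follow-up effort
    p = min(base + (0.15 if hard else 0.0), 1.0)  # assume hard-to-reach carry more risk
    return {"group": group, "violence": random.random() < p, "hard": hard}

sample = [make_respondent() for _ in range(498)]

def violence_rate(rows, group):
    """Proportion of respondents in `group` reporting the outcome."""
    in_group = [r for r in rows if r["group"] == group]
    return sum(r["violence"] for r in in_group) / len(in_group)

# Re-run the comparison on the full sample, then simulate nonresponse
# by dropping everyone who was reached only after extended effort.
easy_only = [r for r in sample if not r["hard"]]
for label, rows in [("all respondents", sample), ("easy-to-reach only", easy_only)]:
    gap = violence_rate(rows, "GA") - violence_rate(rows, "TANF")
    print(f"{label}: GA-TANF violence gap = {gap:.2%} (n = {len(rows)})")
```

The second line of output shows how both the sample size and the estimated group difference shift once the hard to reach are excluded; in the actual study, that exclusion was large enough to nullify the published finding.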