The Internet has become an important mass medium for consumers seeking health information and health care services online.1 A recent concern and public health issue has been the quality of health information on the World Wide Web. However, the scale of the problem and the "epidemiology" (distribution and determinants) of poor health information on the Web are still unclear, as is their impact on public health and the question of whether poor health information on the Web is a problem at all.2

Many studies have been conducted to describe, critically appraise, and analyze consumer health information on the Web. These typically report proportions of inaccurate or imperfect information as estimates of the prevalence of flawed information or of the risk of encountering misinformation on the Web. However, to date no systematic and comprehensive synthesis of the methodology and evidence has been attempted. Two previous systematic reviews focused on compiling quality criteria and rating instruments, but did not synthesize evaluation results. Jadad and Gagliardi3 reviewed nonresearch-based rating systems (eg, criteria …).

Context: The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed.

Objectives: To establish a methodological framework for how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research.

Data Sources: We searched MEDLINE and PREMEDLINE (1966 through September 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search.

Study Selection: We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages …
Meta-analyses are typically used to estimate the overall (mean) effect for an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that, for both dichotomous and continuous data, the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute the corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations would require an extensive simulation study in which all methods are compared under the same scenarios.
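The quantities recommended above are all computable from study-level effect estimates and their within-study variances. The following is a minimal Python sketch, on hypothetical log odds ratios, of the DerSimonian-Laird moment estimator, the Paule-Mandel estimator (found by root-finding on the generalised Cochran Q statistic), and the Q-profile confidence interval; it illustrates the standard formulas and is not code from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def generalised_q(tau2, y, v):
    """Generalised Cochran Q statistic at a candidate between-study variance."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of tau^2, truncated at zero."""
    w = 1.0 / v
    q = generalised_q(0.0, y, v)                  # classical Cochran Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def paule_mandel(y, v):
    """Paule-Mandel estimator: choose tau^2 so the generalised Q equals
    its expectation, k - 1. Q is decreasing in tau^2."""
    k = len(y)
    if generalised_q(0.0, y, v) <= k - 1:
        return 0.0
    hi = 1.0
    while generalised_q(hi, y, v) > k - 1:        # bracket the root
        hi *= 2.0
    return brentq(lambda t: generalised_q(t, y, v) - (k - 1), 0.0, hi)

def q_profile_ci(y, v, level=0.95):
    """Q-profile confidence interval for tau^2, pivoting on chi-square."""
    k = len(y)
    def solve(target):
        if generalised_q(0.0, y, v) <= target:
            return 0.0
        hi = 1.0
        while generalised_q(hi, y, v) > target:
            hi *= 2.0
        return brentq(lambda t: generalised_q(t, y, v) - target, 0.0, hi)
    # Q decreases in tau^2, so the upper chi-square quantile gives the lower bound.
    return (solve(chi2.ppf(1 - (1 - level) / 2, k - 1)),
            solve(chi2.ppf((1 - level) / 2, k - 1)))

# Hypothetical log odds ratios and within-study variances.
y = np.array([0.30, -0.10, 0.55, 0.20, 0.41])
v = np.array([0.04, 0.09, 0.05, 0.02, 0.07])
print("DL tau^2:", dersimonian_laird(y, v))
print("PM tau^2:", paule_mandel(y, v))
print("Q-profile 95% CI:", q_profile_ci(y, v))
```

Note that the Paule-Mandel root exists only when Q at tau^2 = 0 exceeds its degrees of freedom; otherwise the estimate is truncated at zero, as in the sketch.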
Background: C-reactive protein (CRP) is a heritable marker of chronic inflammation that is strongly associated with cardiovascular disease. We aimed to identify genetic variants that are associated with CRP levels.

Methods and Results: We performed a genome-wide association (GWA) analysis of CRP in 66,185 participants from 15 population-based studies. We sought replication for the genome-wide significant and suggestive loci in a replication panel comprising 16,540 individuals from ten independent studies. We found 18 genome-wide significant loci and provided evidence of replication for eight of them. Our results confirm seven previously known loci and introduce 11 novel loci that are implicated in pathways related to the metabolic syndrome (APOC1, HNF1A, LEPR, GCKR, HNF4A, and PTPN2) or the immune system (CRP, IL6R, NLRP3, IL1F10, and IRF1), or that reside in regions previously not known to play a role in chronic inflammation (PPP1R3B, SALL1, PABPC4, ASCL1, RORA, and BCL7B). We found a significant interaction of body mass index (BMI) with LEPR (P < 2.9 × 10⁻⁶). A weighted genetic risk score that was developed to summarize the effect of risk alleles was strongly associated with CRP levels and explained approximately 5% of the trait variance; however, there was no evidence for these genetic variants explaining the association of CRP with coronary heart disease.

Conclusion: We identified 18 loci that were associated with CRP levels. Our study highlights immune response and metabolic regulatory pathways involved in the regulation of chronic inflammation.
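As a concrete illustration of how a weighted genetic risk score of the kind described above is computed: each person's score is the sum of their risk-allele counts weighted by per-allele effect sizes from the discovery analysis. The weights and genotypes in this small Python sketch are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical per-allele weights (e.g., betas on ln-CRP from a discovery
# GWAS) for four risk variants; real weights would come from the study.
weights = np.array([0.18, 0.12, 0.09, 0.07])

# Risk-allele counts (0, 1, or 2) per person (rows) and variant (columns).
genotypes = np.array([
    [2, 1, 0, 1],
    [1, 1, 2, 0],
    [0, 2, 1, 1],
])

# Weighted genetic risk score: allele counts times per-allele effects.
grs = genotypes @ weights
print(grs)  # one score per person
```

Regressing the trait on this score then yields the proportion of trait variance explained (about 5% for CRP in the study above).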
Meta-analyses with rare events, especially those that include studies with no event in one ('single-zero') or even both ('double-zero') treatment arms, are still a statistical challenge. In the case of double-zero studies, researchers generally delete these studies or use continuity corrections to avoid them. A number of arguments against both options have been given, and statistical methods that use the information from double-zero studies without continuity corrections have been proposed. In this paper, we collect these methods and compare them by simulation. The simulation study tries to mirror real-life situations as closely as possible by deriving the true underlying parameters from empirical data on actually performed meta-analyses. It is shown that, for each of the commonly encountered effect estimators, valid statistical methods are available that use the information from double-zero studies without continuity corrections. Interestingly, all of them are truly random-effects models, and so even the current standard method for very sparse data recommended by the Cochrane Collaboration, the Yusuf-Peto odds ratio, can be improved upon. For actual analyses, we recommend beta-binomial regression methods to arrive at summary estimates for the odds ratio, the relative risk, or the risk difference. Methods that ignore the information from double-zero studies or use continuity corrections should no longer be used. We illustrate the situation with an example in which the original analysis ignores 35 double-zero studies and a superior analysis discovers a clinically relevant advantage of off-pump surgery in coronary artery bypass grafting.
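To make the recommended approach concrete, here is a minimal Python sketch of one common beta-binomial regression formulation: arm-level event counts are modelled as beta-binomial with a logit-linked mean and a shared overdispersion parameter, so double-zero studies still contribute to the likelihood. The parameterization and the sparse data below are illustrative assumptions, not the authors' exact model or data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import betabinom

# Hypothetical sparse (events, patients) data per arm; studies 1 and 5
# are double-zero but still enter the likelihood below.
control   = np.array([[0, 50], [1, 40], [0, 60], [2, 100], [0, 30]])
treatment = np.array([[0, 50], [0, 42], [1, 55], [1, 95],  [0, 33]])

y = np.concatenate([control[:, 0], treatment[:, 0]])       # event counts
n = np.concatenate([control[:, 1], treatment[:, 1]])       # arm sizes
x = np.concatenate([np.zeros(len(control)),                 # 0 = control
                    np.ones(len(treatment))])               # 1 = treatment

def neg_loglik(theta):
    """Negative log-likelihood of a logit-linked beta-binomial model."""
    b0, b1, logit_rho = theta
    p = expit(b0 + b1 * x)           # arm-level mean event probability
    rho = expit(logit_rho)           # shared overdispersion in (0, 1)
    a = p * (1 - rho) / rho          # beta-binomial shape parameters
    b = (1 - p) * (1 - rho) / rho
    return -betabinom.logpmf(y, n, a, b).sum()

fit = minimize(neg_loglik, x0=[-3.0, 0.0, -2.0], method="Nelder-Mead")
print("summary odds ratio:", np.exp(fit.x[1]))
```

Because the probability of observing zero events is less than one under this model, the double-zero studies still carry information about the event probabilities and the overdispersion, which is exactly what deletion and continuity corrections throw away.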
Despite a significant increase in endothelial progenitor cells (EPCs) and the release of cytokines and chemokines during coronary artery bypass grafting (CABG), age is a major limiting factor for the mobilization of EPCs. Further studies are necessary to improve strategies for the mobilization, ex vivo expansion, and retransplantation of EPCs in aging patients.
Parental eczema was the major risk factor for eczema in this study, but each additional month of breastfeeding also increased the risk.
Meta-analyses are an important tool within systematic reviews to estimate the overall effect size and its confidence interval for an outcome of interest. If heterogeneity between the results of the relevant studies is anticipated, then a random-effects model is often preferred for analysis. In this model, a prediction interval for the true effect in a new study provides additional useful information. However, the DerSimonian and Laird method, frequently used as the default method for random-effects meta-analyses, has long been challenged because of its unfavorable statistical properties. Several alternative methods have been proposed that may have better statistical properties in specific scenarios. In this paper, we aim to provide a comprehensive overview of the available methods for calculating point estimates, confidence intervals, and prediction intervals for the overall effect size under the random-effects model. We indicate whether some methods are preferable to others by considering the results of comparative simulation studies and studies of real-life data.
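For orientation, the sketch below shows the simplest of the quantities the overview covers: inverse-variance random-effects pooling for a given tau^2, a Wald confidence interval, and a Higgins-Thompson-Spiegelhalter-style prediction interval. The data are hypothetical, and the Wald interval is only one choice among the methods compared; alternatives (e.g., Hartung-Knapp-type adjustments) may perform better in specific scenarios.

```python
import numpy as np
from scipy.stats import norm, t

def random_effects_summary(y, v, tau2):
    """Random-effects pooling given a tau^2 estimate, with a Wald CI and
    a prediction interval for the true effect in a new study."""
    k = len(y)
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)      # pooled effect estimate
    se = np.sqrt(1.0 / np.sum(w))       # its standard error
    z = norm.ppf(0.975)
    ci = (mu - z * se, mu + z * se)
    # The prediction interval is wider than the CI: it adds the
    # between-study variance and uses a t distribution on k - 2 df.
    half = t.ppf(0.975, k - 2) * np.sqrt(tau2 + se**2)
    pi = (mu - half, mu + half)
    return mu, ci, pi

# Hypothetical standardized mean differences and within-study variances.
y = np.array([0.12, 0.35, 0.20, 0.51, 0.28])
v = np.array([0.010, 0.030, 0.015, 0.040, 0.020])
mu, ci, pi = random_effects_summary(y, v, tau2=0.02)
print(mu, ci, pi)
```

The contrast between the confidence interval (uncertainty about the mean effect) and the prediction interval (the plausible range for a single new study's true effect) is exactly why the prediction interval is recommended as an additional summary under heterogeneity.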