Analogous to checklists of recommendations such as the CONSORT statement (for randomized trials) or the QUOROM statement (for systematic reviews), which are designed to ensure the quality of reports in the medical literature, the Journal of Medical Internet Research (JMIR) presents a checklist of recommendations for authors in an effort to ensure complete descriptions of Web-based surveys. Papers on Web-based surveys reported according to the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) statement will give readers a better understanding of the sample (self-)selection and its possible differences from a “representative” sample. It is hoped that author adherence to the checklist will increase the usefulness of such reports.
The Internet has become an important mass medium for consumers seeking health information and health care services online.1 A recent concern and public health issue has been the quality of health information on the World Wide Web. However, the scale of the problem and the "epidemiology" (distribution and determinants) of poor health information on the Web are still unclear, as is the impact on public health and whether poor health information on the Web is a problem at all.2 Many studies have been conducted to describe, critically appraise, and analyze consumer health information on the Web. These typically report proportions of inaccurate or imperfect information as estimates of the prevalence of flawed information or the risk of encountering misinformation on the Web. However, to date no systematic and comprehensive synthesis of the methodology and evidence has been attempted. Two previous systematic reviews focused on compiling quality criteria and rating instruments, but did not synthesize evaluation results. Jadad and Gagliardi3 reviewed nonresearch-based rating systems (eg, criteria …).

Author Affiliations: Unit for Cybermedicine and eHealth, Department of Clinical Social Medicine, University of Heidelberg, Heidelberg, Germany (Dr Eysenbach); Health Services Research Unit, London School of Hygiene and Tropical Medicine, London, England (Dr Powell); Department of Medical Epidemiology, Biometry and Informatics, University of Halle-Wittenberg, Halle/Saale, Germany (Dr Kuss); and Global Health Network Group, Department of Epidemiology, University of Pittsburgh, Pittsburgh, Pa (Ms Sa). Dr Eysenbach is now with the Centre for Global eHealth Innovation, Toronto General Hospital, Toronto, Ontario. Corresponding Author and Reprints: Gunther Eysenbach, MD, Centre for Global eHealth Innovation, Toronto General Hospital, 190 Elizabeth St, Toronto, Ontario.

Context: The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed.

Objectives: To establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research.

Data Sources: We searched MEDLINE and PREMEDLINE (1966 through September …). We also conducted hand searches, general Internet searches, and a personal bibliographic database search.

Study Selection: We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages …
… and assuring the quality of medical information on the internet. JAMA 1997;277:1244-5.
4. Wyatt JC. Measuring quality and impact of the world wide web [commentary]. BMJ 1997;314:1879-81.
5. Shon J, Marshall J, Musen MA. The impact of displayed awards on the credibility and retention of web site information.

Abstract

Objectives: To describe techniques for retrieval and appraisal used by consumers when they search for health information on the internet.

Design: Qualitative study using focus groups, naturalistic observation of consumers searching the world wide web in a usability laboratory, and in-depth interviews.

Participants: A total of 21 users of the internet took part in three focus group sessions; 17 participants were given a series of health questions and observed in a usability laboratory setting while retrieving health information from the web, followed by in-depth interviews.

Setting: Heidelberg, Germany.

Results: Although their search technique was often suboptimal, internet users successfully found health information to answer questions in an average of 5 minutes 42 seconds (median 4 minutes 18 seconds) per question. Participants in focus groups said that when assessing the credibility of a website they primarily looked for the source, a professional design, a scientific or official touch, language, and ease of use. However, in the observational study, no participants checked any "about us" sections of websites, disclaimers, or disclosure statements. In the post-search interviews, it emerged that very few participants had noticed and remembered which websites they had retrieved information from.

Conclusions: Further observational studies are needed to design and evaluate educational and technological innovations for guiding consumers to high quality health information on the web.

Introduction
Little is known about how consumers retrieve and assess health information on the world wide web. Some surveys have elicited data by using semistructured questionnaires or focus groups,1-3 but little (if any) unobtrusive observational research has been done to explore how consumers actually surf the web. Although several criteria for quality of health websites have been proposed, including disclosure of site owners, authors, and update cycle,4 5 little or nothing is known about whether and to what degree such markers are recognised or even looked at by consumers, or what other credibility markers consumers look for. We aimed to obtain qualitative and semiquantitative data to generate hypotheses on how consumers might search for and appraise health information.

Methods
We used multiple methods of data collection that are commonly used in studies of human-computer interaction,6 combining focus groups,7 naturalistic observation of consumers searching the internet, and post-search in-depth interviews. Two researchers independently analysed transcripts using N5 (NUD*IST 5.0; QSR International, Melbourne) with the grounded theory approach.8 Participants in the focus groups and the observational study …
Background: Surveys are a popular method to measure public perceptions in emergencies but can be costly and time-consuming. We suggest and evaluate a complementary “infoveillance” approach using Twitter during the 2009 H1N1 pandemic. Our study aimed to: 1) monitor the use of the terms “H1N1” versus “swine flu” over time; 2) conduct a content analysis of “tweets”; and 3) validate Twitter as a real-time content, sentiment, and public attention trend-tracking tool.

Methodology/Principal Findings: Between May 1 and December 31, 2009, we archived over 2 million Twitter posts containing the keywords “swine flu,” “swineflu,” and/or “H1N1,” using Infovigil, an infoveillance system. The proportion of tweets using “H1N1” increased from 8.8% to 40.5% (R² = .788; P < .001), indicating a gradual adoption of World Health Organization-recommended terminology. A total of 5,395 tweets were randomly selected from 9 days, 4 weeks apart, and coded using a tri-axial coding scheme. To track tweet content and to test the feasibility of automated coding, we created database queries for keywords and correlated these results with manual coding. Content analysis indicated that resource-related posts were most commonly shared (52.6%), and 4.5% of cases were identified as misinformation. News websites were the most popular sources (23.2%), while government and health agencies were linked only 1.5% of the time. Seven of 10 automated queries correlated with manual coding. Several Twitter activity peaks coincided with major news stories, and our results correlated well with H1N1 incidence data.

Conclusions: This study illustrates the potential of using social media to conduct “infodemiology” studies for public health. H1N1-related tweets in 2009 were primarily used to disseminate information from credible sources, but were also a source of opinions and experiences. Tweets can be used for real-time content analysis and knowledge translation research, allowing health authorities to respond to public concerns.
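To make the analysis pipeline above concrete, here is a minimal Python sketch of two of its steps: tracking the monthly share of archived tweets that use “H1N1” rather than “swine flu,” and validating a keyword-based automated coding query against manual coding counts. The tweet texts, month buckets, keyword list, and manual counts are invented for illustration; they are not the Infovigil queries, the study's coding scheme, or its data.

```python
# Sketch: terminology tracking and keyword-query validation on made-up tweets.
from collections import Counter
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Hypothetical archive: (month, tweet text) pairs.
tweets = [
    ("2009-05", "swine flu is spreading, stay safe"),
    ("2009-05", "new swine flu resource from the health agency"),
    ("2009-05", "H1N1 case counts rising, see this news article"),
    ("2009-08", "H1N1 vaccine trial results, link in bio"),
    ("2009-08", "my opinion: swine flu coverage is overblown"),
    ("2009-12", "H1N1 shot available at local clinics, more info here"),
    ("2009-12", "H1N1 update from the news tonight"),
]

# 1) Terminology adoption: share of tweets per month that mention "H1N1".
totals, h1n1 = Counter(), Counter()
for month, text in tweets:
    totals[month] += 1
    if "h1n1" in text.lower():
        h1n1[month] += 1
for month in sorted(totals):
    print(month, f"{h1n1[month] / totals[month]:.0%} of tweets used 'H1N1'")

# 2) Automated coding via a keyword query (here: "resource-related" posts),
#    validated against hypothetical manual-coding counts for the same months.
RESOURCE_KEYWORDS = ("resource", "link", "article", "info", "update")
auto = Counter(month for month, text in tweets
               if any(k in text.lower() for k in RESOURCE_KEYWORDS))
manual = {"2009-05": 2, "2009-08": 1, "2009-12": 2}  # invented manual codes
months = sorted(totals)
r = correlation([auto[m] for m in months], [manual[m] for m in months])
print(f"automated vs manual coding, Pearson r = {r:.2f}")
```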