In the scientific literature, spin refers to reporting practices that distort the interpretation of results and mislead readers so that results are viewed in a more favourable light. The presence of spin in biomedical research can negatively affect the development of further studies, clinical practice, and health policies. This systematic review aims to explore the nature and prevalence of spin in the biomedical literature. We searched MEDLINE, PreMEDLINE, Embase, and Scopus, and hand-searched reference lists, for all reports that included the measurement of spin in the biomedical literature for at least one outcome. Two independent coders extracted data on the characteristics of reports and their included studies and on all spin-related outcomes. Results were grouped inductively into themes by spin-related outcome and are presented as a narrative synthesis. We used meta-analyses to analyse the association of spin with industry sponsorship of research. We included 35 reports, which investigated spin in clinical trials, observational studies, diagnostic accuracy studies, systematic reviews, and meta-analyses. The nature of spin varied according to study design, and the highest, but also most variable, prevalence of spin was found in trials. Common practices used to spin results included detracting from statistically nonsignificant results and inappropriately using causal language. Source of funding was hypothesised by a few authors to be a factor associated with spin; however, results were inconclusive, possibly owing to the heterogeneity of the included papers. Further research is needed to assess the impact of spin on readers’ decision-making. Editors and peer reviewers should be familiar with the prevalence and manifestations of spin in their area of research in order to ensure accurate interpretation and dissemination of research.
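The review above mentions using meta-analyses to examine the association of spin with industry sponsorship. As a rough illustration of how such pooling works, here is a minimal fixed-effect inverse-variance sketch in Python; the odds ratios and confidence intervals below are invented for illustration only, not data from the review.

```python
import math

# Hypothetical per-study odds ratios for spin given industry sponsorship,
# each with a 95% confidence interval: (OR, lower, upper).
# These numbers are made up; they are not results from the review.
studies = [
    (1.8, 0.9, 3.6),
    (1.2, 0.7, 2.1),
    (2.5, 1.1, 5.7),
]

def pool_fixed_effect(studies):
    """Inverse-variance (fixed-effect) pooling of log odds ratios."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        # Recover the standard error from the width of the 95% CI.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

or_pooled, ci_lo, ci_hi = pool_fixed_effect(studies)
print(round(or_pooled, 2), round(ci_lo, 2), round(ci_hi, 2))
```

In practice a random-effects model would usually be preferred when, as the review notes, the included papers are heterogeneous; the fixed-effect version is shown only because it keeps the arithmetic transparent.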
Objectives: To investigate whether and how user data are shared by top rated medicines related mobile applications (apps) and to characterise privacy risks to app users, both clinicians and consumers.
Design: Traffic, content, and network analysis.
Setting: Top rated medicines related apps for the Android mobile platform available in the Medical store category of Google Play in the United Kingdom, United States, Canada, and Australia.
Participants: 24 of 821 apps identified by an app store crawling program. Included apps pertained to medicines information, dispensing, administration, prescribing, or use, and were interactive.
Interventions: Laboratory based traffic analysis of each app downloaded onto a smartphone, simulating real world use with four dummy scripts. The app’s baseline traffic related to 28 different types of user data was observed. To identify privacy leaks, one source of user data was modified and deviations in the resulting traffic observed.
Main outcome measures: Identities and characterisation of entities directly receiving user data from sampled apps. Secondary content analysis of company websites and privacy policies identified data recipients’ main activities; network analysis characterised their data sharing relations.
Results: 19/24 (79%) of sampled apps shared user data. 55 unique entities, owned by 46 parent companies, received or processed app user data, including developers and parent companies (first parties) and service providers (third parties). 18 (33%) provided infrastructure related services such as cloud services. 37 (67%) provided services related to the collection and analysis of user data, including analytics or advertising, suggesting heightened privacy risks. Network analysis revealed that first and third parties received a median of 3 (interquartile range 1-6, range 1-24) unique transmissions of user data. Third parties advertised the ability to share user data with 216 “fourth parties”; within this network (n=237), entities had access to a median of 3 (interquartile range 1-11, range 1-140) unique transmissions of user data. Several companies occupied central positions within the network with the ability to aggregate and re-identify user data.
Conclusions: Sharing of user data is routine, yet far from transparent. Clinicians should be conscious of privacy risks in their own use of apps and, when recommending apps, explain the potential for loss of privacy as part of informed consent. Privacy regulation should emphasise the accountabilities of those who control and process user data. Developers should disclose all data sharing practices and allow users to choose precisely what data are shared and with whom.
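The network analysis above summarises, for each entity, the number of unique transmissions of user data it receives. The core computation can be sketched in a few lines of Python; the apps, companies, and transmissions below are hypothetical placeholders, not entities from the study.

```python
from statistics import median

# Hypothetical sharing network as (sender, receiver, data_type) triples.
# In the study the nodes were apps and first/third/fourth parties; these
# names and data types are invented purely for illustration.
transmissions = [
    ("app_A", "analytics_co", "device_id"),
    ("app_A", "analytics_co", "drug_list"),
    ("app_A", "cloud_host", "device_id"),
    ("app_B", "ad_network", "email"),
    ("app_B", "analytics_co", "device_id"),
]

def unique_transmissions_per_entity(edges):
    """Count the distinct (sender, data_type) pairs each receiver sees."""
    seen = {}
    for sender, receiver, data_type in edges:
        seen.setdefault(receiver, set()).add((sender, data_type))
    return {receiver: len(pairs) for receiver, pairs in seen.items()}

counts = unique_transmissions_per_entity(transmissions)
print(counts)                    # unique transmissions per receiver
print(median(counts.values()))   # the summary statistic reported in the study
```

A full reproduction would also need the interquartile range and centrality measures over the n=237 network, but the per-entity counting step is the same.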
Objective: To compare results reporting and the presence of spin in COVID-19 study preprints with their finalised journal publications.
Design: Cross-sectional study.
Setting: International medical literature.
Participants: Preprints and final journal publications of 67 interventional and observational studies of COVID-19 treatment or prevention from the Cochrane COVID-19 Study Register published between 1 March 2020 and 30 October 2020.
Main outcome measures: Study characteristics and discrepancies in (1) results reporting (number of outcomes, outcome descriptor, measure, metric, assessment time point, data reported, reported statistical significance of result, type of statistical analysis, subgroup analyses (if any), whether outcome was identified as primary or secondary) and (2) spin (reporting practices that distort the interpretation of results so they are viewed more favourably).
Results: Of 67 included studies, 23 (34%) had no discrepancies in results reporting between preprints and journal publications. Fifteen (22%) studies had at least one outcome that was included in the journal publication but not the preprint; eight (12%) had at least one outcome that was reported in the preprint only. For outcomes reported in both preprints and journals, common discrepancies were differences in numerical values and statistical significance, additional statistical tests and subgroup analyses, and longer follow-up times for outcome assessment in journal publications. At least one instance of spin occurred in both preprints and journals in 23/67 (34%) studies, the preprint only in 5 (7%), and the journal publication only in 2 (3%). Spin was removed between the preprint and journal publication in 5/67 (7%) studies but added in 1/67 (1%) study.
Conclusions: The COVID-19 preprints and their subsequent journal publications were largely similar in their reporting of study characteristics, outcomes, and spin. All COVID-19 studies published as preprints and journal publications should be critically evaluated for discrepancies and spin.
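The preprint-versus-publication comparison described above starts by classifying each outcome as reported in both versions, in the preprint only, or in the journal only. That bookkeeping step can be sketched as simple set arithmetic; the outcome names below are hypothetical, not outcomes from any included study.

```python
def classify_outcomes(preprint, journal):
    """Partition outcome names by where they were reported."""
    preprint, journal = set(preprint), set(journal)
    return {
        "both": sorted(preprint & journal),
        "preprint_only": sorted(preprint - journal),
        "journal_only": sorted(journal - preprint),
    }

# Hypothetical outcome lists for one study, for illustration only.
result = classify_outcomes(
    preprint=["mortality", "viral clearance", "ICU admission"],
    journal=["mortality", "viral clearance", "adverse events"],
)
print(result)
```

In the actual study this matching had to account for outcomes that were renamed or re-measured between versions, so exact string matching is an oversimplification; the partition into both/preprint-only/journal-only is the part this sketch captures.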
Objectives: To identify and calculate the prevalence of spin in studies of spin.
Design: Meta-research analysis (research on research).
Setting: 35 studies of spin in the scientific literature.
Main outcome measures: Spin, categorised as: reporting practices that distort the presentation and interpretation of results, creating misleading conclusions; discordance between results and their interpretation, with presentation of favourable conclusions that are not supported by the data or results; attribution of causality when study design does not support it; and over-interpretation or inappropriate extrapolation of results.
Results: Five (14%) of 35 spin studies contained spin, categorised as reporting practices that distort the presentation and interpretation of results (n=2) or as over-interpretation or inappropriate extrapolation of results (n=3).
Conclusion: Spin occurs in research on spin. Although researchers on this topic should be sensitive to spinning their findings, our study does not undermine the need for rigorous interventions to reduce spin across various research fields.
Conclusion with spin: Our hypothesis that spin would be less prevalent in spin studies than in studies on other topics has been proven. Spin scholars are less likely to spin their conclusions than other researchers, and they should receive substantial resources to launch and test interventions to reduce spin and research waste in reporting.
BACKGROUND: Developers of medicines-related apps collect a variety of technical, health-related, and identifying user information to improve and tailor services. User data may also be used for promotional purposes. Apps, for example, may be used to skirt regulation of direct-to-consumer advertising of medicines. Researchers have documented routine and extensive sharing of user data with third parties for commercial purposes, but little is known about the ways that app developers, or "first" parties, employ user data.
OBJECTIVE: We aimed to investigate the nature of user data collection and commercialization by developers of medicines-related apps.
APPROACH: We conducted a content analysis of apps' store descriptions, linked websites, policies, and sponsorship prospectuses for prominent medicines-related apps found in the USA, Canada, Australia, and UK Google Play stores in late 2017. Apps were included if they pertained to the prescribing, administration, or use of medicines, and were interactive. Two independent coders extracted data from documents using a structured, open-ended instrument. We performed open, inductive coding to identify the range of promotional strategies involving user data for commercial purposes and wrote descriptive memos to refine and detail these codes.
KEY RESULTS: Ten of 24 apps primarily provided medication adherence services; 14 primarily provided medicines information. The majority (71%, 17/24) outlined at least one promotional strategy involving users' data for commercial purposes, including personalized marketing of the developer's related products and services, highly tailored advertising, third-party sponsorship of targeted content or messaging, and sale of aggregated customer insights to stakeholders.
CONCLUSIONS: App developers may employ users' data in a feedback loop to deliver highly targeted promotional messages from developers and commercial sponsors, including the pharmaceutical industry. These practices call into question developers' claims about the trustworthiness and independence of purportedly evidence-based medicines information and may create a risk of mis- or overtreatment.