This paper presents a case study of long-term post-retraction citation of falsified clinical trial data (Matsuyama et al. in Chest 128(6):3817–3827, 2005. 10.1378/chest.128.6.3817), demonstrating problems with how the current digital library environment communicates retraction status. Eleven years after its retraction, the paper continues to be cited positively and uncritically to support a medical nutrition intervention, without mention of its 2008 retraction for falsified data. To date, no high-quality clinical trials reporting on the efficacy of omega-3 fatty acids in reducing inflammatory markers have been published. Our paper uses network analysis, citation context analysis, and retraction status visibility analysis to illustrate the potential for extended propagation of misinformation over a citation network, updating and extending a case study of the first 6 years of post-retraction citation (Fulton et al. in Publications 3(1):7–26, 2015. 10.3390/publications3010017). The current study covers 148 direct citations from 2006 through 2019 and their 2542 second-generation citations, and assesses retraction status visibility of the case study paper and its retraction notice on 12 digital platforms as of 2020. The retraction is not mentioned in 96% (107/112) of the direct post-retraction citations for which we were able to conduct citation context analysis. Over 41% (44/107) of direct post-retraction citations that do not mention the retraction describe the case study paper in detail, creating a risk of diffusing misinformation from the case paper. We analyze 152 second-generation citations to the most recent 35 direct citations (2010–2019) that do not mention the retraction but do mention the methods or results of the case paper, finding 23 possible diffusions of misinformation through these indirect citations to the case paper. Link resolution errors in databases pose a significant challenge to a reader reaching the retraction notice via a database search.
Only 1/8 databases (and 1/9 database records) consistently resolved the retraction notice to its full text correctly in our tests. Although limited to the evaluation of a single case (N = 1), this work demonstrates how retracted research can continue to spread and how the current information environment contributes to this problem.
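The two-generation citation network described above can be modeled as a simple directed graph; a minimal sketch (with hypothetical paper IDs, not the study's actual data) that collects second-generation citations by traversing one level beyond the direct citers:

```python
from collections import defaultdict

# Hypothetical citation edges: citer -> cited (IDs are illustrative only).
edges = [
    ("A", "CASE"), ("B", "CASE"), ("C", "CASE"),  # direct citations of the case paper
    ("D", "A"), ("E", "A"), ("F", "B"),           # second-generation citations
]

cited_by = defaultdict(set)
for citer, cited in edges:
    cited_by[cited].add(citer)

direct = cited_by["CASE"]               # papers citing the case paper directly
second_gen = set()
for paper in direct:
    second_gen |= cited_by[paper]       # papers citing the direct citers
second_gen -= direct | {"CASE"}         # keep only strictly second-generation papers

print(sorted(direct), sorted(second_gen))
```

The same traversal, repeated per generation, yields the citation layers over which misinformation can propagate.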
Argumentation is the study of the views and opinions that humans express with the goal of reaching a conclusion through logical reasoning. Since the 1950s, several models have been proposed to capture the essence of informal argumentation in different settings. With the emergence of the Web, and then the Semantic Web, this modeling shifted towards ontologies, while from the development perspective, we witnessed an important increase in Web 2.0 human-centered collaborative deliberation tools. Through a review of more than 150 scholarly papers, this article provides a comprehensive and comparative overview of approaches to modeling argumentation for the Social Semantic Web. We start from theoretical foundational models and investigate how they have influenced Social Web tools. We also look into Semantic Web argumentation models. Finally, we end with Social Web tools for argumentation, including online applications combining Web 2.0 and Semantic Web technologies, following the path to a global World Wide Argument Web.
We present the first database-wide study on the citation contexts of retracted papers, which covers 7,813 retracted papers indexed in PubMed, 169,434 citations collected from iCite, and 48,134 citation contexts identified from the XML version of the PubMed Central Open Access Subset. Compared with previous citation studies that focused on comparing citation counts across two time frames (i.e., pre-retraction and post-retraction), our analyses show the longitudinal trends of citations to retracted papers over the past 60 years (1960–2020). Our temporal analyses show that retracted papers continued to be cited, but that old retracted papers stopped being cited as time progressed. Analysis of the text progression of pre- and post-retraction citation contexts shows that retraction did not change the way the retracted papers were cited. Furthermore, among the 13,252 post-retraction citation contexts, only 722 (5.4%) acknowledged the retraction. In these 722 citation contexts, the retracted papers were most commonly cited as related work or as an example of problematic science. Our findings deepen the understanding of why retraction does not stop citation and demonstrate that the vast majority of post-retraction citations in biomedicine do not document the retraction.
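Screening whether a citation context acknowledges a retraction can be approximated with a keyword match; a minimal sketch whose cue list is an assumption for illustration, not the study's actual (richer) coding scheme:

```python
import re

# Illustrative cue words only; a real coding scheme would be manually validated.
RETRACTION_CUES = re.compile(r"\b(retract(ed|ion)?|withdrawn)\b", re.IGNORECASE)

def acknowledges_retraction(context: str) -> bool:
    """Return True if a citation context explicitly mentions the retraction."""
    return bool(RETRACTION_CUES.search(context))

# Hypothetical citation contexts for the two coding outcomes.
contexts = [
    "Omega-3 supplementation reduced inflammatory markers [12].",
    "This finding was later retracted due to data falsification [12].",
]
flags = [acknowledges_retraction(c) for c in contexts]
print(flags)  # [False, True]
```

A screen like this flags candidate acknowledgments; distinguishing "cited as related work" from "cited as an example of problematic science" still requires reading the context.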
Background: Preventing drug interactions is an important goal in maximizing patient benefit from medications. Summarizing potential drug-drug interactions (PDDIs) for clinical decision support is challenging, and there is no single repository of PDDI evidence. Additionally, inconsistencies across compendia and other sources have been well documented. Standard search strategies for complete and current evidence about PDDIs have not heretofore been developed or validated. Objective: This study aimed to identify common methods for conducting PDDI literature searches used by experts who routinely evaluate such evidence. Methods: We invited a convenience sample of 70 drug information experts, including compendia editors, knowledge-base vendors, and clinicians, via email to complete a survey on identifying PDDI evidence. We created a Web-based survey that included questions regarding (1) the development and conduct of searches; (2) the resources used, for example, databases, compendia, and search engines; (3) the types of keywords used to search for specific PDDI information; (4) the study types included in and excluded from searches; and (5) the search terms used. Search strategy questions focused on 6 attributes of PDDI information: (1) that a PDDI exists; (2) seriousness; (3) clinical consequences; (4) management options; (5) mechanism; and (6) health outcomes. Results: Twenty participants (response rate: 20/70, 29%) completed the survey. The majority (17/20, 85%) were drug information specialists, drug interaction researchers, compendia editors, or clinical pharmacists, with 60% (12/20) having more than 10 years' experience. Over half (11/20, 55%) worked for clinical solutions vendors or knowledge-base vendors. Most participants developed (18/20, 90%) and conducted (19/20, 95%) search strategies without librarian assistance. PubMed (20/20, 100%) and Google Scholar (11/20, 55%) were the most commonly searched sources for papers, followed by Google Web Search (7/20, 35%) and EMBASE (3/20, 15%).
No respondents reported using Scopus. A variety of subscription and open-access databases were used, most commonly Drugs@FDA (17/20, 85%), DailyMed (13/20, 65%), Lexicomp (9/20, 45%), and Micromedex (8/20, 40%). Facts and Comparisons was the most commonly used compendium (8/20, 40%). Across the 6 attributes of interest, generic drug name was the most common keyword used. Respondents reported using more types of keywords when searching to identify the existence of PDDIs and determine their mechanism than when searching for the other 4 attributes (seriousness, consequences, management, and health outcomes). Regarding the types of evidence useful for evaluating a PDDI, clinical trials, case reports, and systematic reviews were considered relevant, while animal and in vitro studies were not. Conclusions: This study suggests that drug interaction experts use various keyword strategies and various database and Web resources depending on the...
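A keyword strategy of the kind respondents describe, anchored on generic drug names plus attribute terms, can be composed programmatically; a sketch for building a PubMed-style boolean query string (the term choices are illustrative, not a validated search strategy):

```python
def pddi_query(drug_a: str, drug_b: str, attribute_terms: list[str]) -> str:
    """Build a boolean query for evidence on a potential drug-drug interaction."""
    attrs = " OR ".join(f'"{t}"' for t in attribute_terms)
    return f'("{drug_a}") AND ("{drug_b}") AND ({attrs})'

# Hypothetical drug pair and attribute terms for the "existence" attribute.
query = pddi_query("simvastatin", "clarithromycin",
                   ["drug interaction", "adverse effect"])
print(query)
```

Varying `attribute_terms` per attribute (seriousness, mechanism, etc.) mirrors the survey finding that keyword sets differ by the PDDI attribute being sought.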
Taxonomy alignment is a way to integrate two or more taxonomies. Semantic interoperability between datasets, information systems, and knowledge bases is facilitated by combining the different input taxonomies into merged taxonomies that reconcile apparent differences or conflicts. We show how alignment problems can be solved with a logic-based region connection calculus (RCC-5) approach, using five base relations to compare concepts: congruence, inclusion, inverse inclusion, overlap, and disjointness. To illustrate this method, we use different "geo-taxonomies", which organize the United States into several, apparently conflicting, geospatial hierarchies. For example, we align T_CEN, a taxonomy derived from the Census Bureau's regions map, with T_NDC, from the National Diversity Council (NDC), and with T_TZ, a taxonomy capturing the U.S. time zones. Using these case studies, we show how this logic-based approach can reconcile conflicts between taxonomies. We have implemented these case studies with an open source tool called Euler/X, which has been applied primarily for solving complex alignment problems in biological classification. In this paper, we demonstrate the feasibility and broad applicability of this approach to other domains and alignment problems in support of semantic interoperability.
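The five RCC-5 base relations can be illustrated by modeling each taxonomy concept as a set of atomic units; a minimal sketch using hypothetical state groupings, not the Euler/X reasoner itself:

```python
def rcc5(a: set, b: set) -> str:
    """Classify the RCC-5 relation between two concepts modeled as sets."""
    if a == b:
        return "congruence"
    if a < b:
        return "inclusion"          # a is strictly contained in b
    if a > b:
        return "inverse inclusion"  # a strictly contains b
    if a & b:
        return "overlap"            # shared members, neither contains the other
    return "disjointness"

# Hypothetical alignment: a Census-style "West" region vs. a Pacific-time concept.
west = {"WA", "OR", "CA", "NV", "AZ"}
pacific_tz = {"WA", "OR", "CA", "NV"}
print(rcc5(west, pacific_tz))   # inverse inclusion
print(rcc5(west, {"NY", "NJ"})) # disjointness
```

Euler/X goes further: given pairwise RCC-5 constraints, it uses logic reasoning to check consistency and derive merged taxonomies, rather than merely classifying set pairs.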
Social media have enabled a revolution in user-generated content. They allow users to connect, build community, produce and share content, and publish opinions. To better understand online users' attitudes and opinions, we use stance classification. Stance classification is a relatively new and challenging approach that deepens opinion mining by classifying a user's stance in a debate. Our stance classification use case is tweets related to the spring 2016 debate over the FBI's request that Apple decrypt a user's iPhone. In this "encryption debate," public opinion was polarized between advocates for individual privacy and advocates for national security. We propose a machine learning approach to classifying stance in the debate, along with a topic classification, using lexical, syntactic, Twitter-specific, and argumentative features as predictors. Models trained on these feature sets showed significant increases in accuracy relative to the unigram baseline.
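Lexical and Twitter-specific features of the kind named above can be extracted with a small function; a sketch with an illustrative feature set and tiny lexicons (assumptions for demonstration, the study's actual features are richer):

```python
import re

# Tiny illustrative lexicons; a real system would use much larger resources.
PRIVACY_TERMS = {"privacy", "encryption", "rights"}
SECURITY_TERMS = {"security", "terrorism", "safety"}

def extract_features(tweet: str) -> dict:
    """Extract simple lexical and Twitter-specific features from one tweet."""
    tokens = re.findall(r"[#@]?\w+", tweet.lower())
    return {
        "n_hashtags": sum(t.startswith("#") for t in tokens),
        "n_mentions": sum(t.startswith("@") for t in tokens),
        "privacy_hits": sum(t.lstrip("#@") in PRIVACY_TERMS for t in tokens),
        "security_hits": sum(t.lstrip("#@") in SECURITY_TERMS for t in tokens),
    }

feats = extract_features("Stand with @Apple! #privacy matters more than fear")
print(feats)
```

Feature vectors like this, combined with syntactic and argumentative features, would feed a standard supervised classifier trained on stance-labeled tweets.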
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.