Scientists learn early on how to cite scientific sources to support their claims. Sometimes, however, scientists have challenges determining where a citation should be situated, or, even worse, fail to cite a source altogether. Automatically detecting sentences that need a citation (i.e., citation worthiness) could solve both of these issues, leading to more robust and well-constructed scientific arguments. Previous researchers have applied machine learning to this task but have used small datasets and models that do not take advantage of recent algorithmic developments, such as attention mechanisms in deep learning. We hypothesize that we can develop highly accurate deep learning architectures that learn from large supervised datasets constructed from open access publications. In this work, we propose a Bidirectional Long Short-Term Memory (BiLSTM) network with an attention mechanism and contextual information to detect sentences that need citations. We also produce a new, large dataset (PMOA-CITE) based on the PubMed Open Access Subset, which is orders of magnitude larger than previous datasets. Our experiments show that our architecture achieves state-of-the-art performance on the standard ACL-ARC dataset (F1 = 0.507) and exhibits high performance (F1 = 0.856) on the new PMOA-CITE dataset. Moreover, we show that it can transfer learning across these datasets. We further use interpretable models to illuminate how specific language is used to promote and inhibit citations. We discover that section information and surrounding sentences are crucial for our improved predictions. We further examined purported mispredictions of the model and uncovered systematic human mistakes in citation behavior and source data. This opens the door for our model to check documents during pre-submission and pre-archival procedures. We discuss the limitations of our work and make this new dataset, the code, and a web-based tool available to the community.
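As a concrete illustration of the architecture named above, the following is a minimal PyTorch sketch of a BiLSTM sentence classifier with an attention pooling layer. The layer sizes and the specific attention formulation are illustrative assumptions, not the paper's exact configuration, which additionally incorporates contextual information such as the section and surrounding sentences.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Sentence classifier: BiLSTM encoder + attention pooling.

    Illustrative sketch only; hyperparameters are assumptions, and the
    contextual features used in the paper (section, neighboring
    sentences) are omitted here for brevity.
    """
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)        # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        h, _ = self.bilstm(self.embedding(token_ids))   # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)    # (batch, seq, 1)
        context = (weights * h).sum(dim=1)              # attention-weighted average
        return self.classifier(context)                 # cite / no-cite logits
```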
Modeling citation worthiness by using attention-based Bidirectional Long Short-Term Memory networks and interpretable models

Data sources and data pre-processing

ACL Anthology Reference Corpus. The ACL Anthology Reference Corpus (ACL-ARC) is a collection of scientific articles in computational linguistics. The ACL-ARC 1.0 dataset consists of 10,921 articles published up to February 2007, including the source PDFs, the automatically extracted full text, and the article metadata. To use the ACL-ARC dataset, we need to remove noisy sentences, such as footnotes, mathematical equations, and URLs (a rough illustration of such filtering appears in the sketch below). Bonab et al. (2018) carried out all of these pre-processing steps and made the data available on the Internet¹. The dataset consists of 85,778 sentences with citations and 1,142,275 sentences without citations. More statistics are presented in Table 2.

PubMed Central Open Access Subset. The PubMed Central Open Access Subset (PMOAS) is a full-text collection of scientific literature in the biomedical and life sciences, created by the US National Institutes of Health. We obtain a snapshot of PMOAS on August, 2...
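As referenced above, a heuristic filter for noisy ACL-ARC sentences might look like the sketch below. The regular expressions and the footnote-length threshold are our own illustrative assumptions; the actual pre-processing pipeline is the one described by Bonab et al. (2018).

```python
import re

# Illustrative heuristics only; Bonab et al. (2018) describe the actual
# pre-processing applied to the released ACL-ARC sentences.
URL_RE = re.compile(r"https?://\S+|www\.\S+")
MATH_RE = re.compile(r"[=<>±×÷∑∏∫]|\\[a-zA-Z]+")   # crude cue for equations

def is_noisy(sentence: str) -> bool:
    """Flag sentences that look like URLs, equations, or footnote fragments."""
    s = sentence.strip()
    if URL_RE.search(s):
        return True
    if MATH_RE.search(s):
        return True
    if s and s[0].isdigit() and len(s.split()) < 6:  # short numbered fragment
        return True
    return False

sentences = ["See http://aclweb.org for details.",
             "We propose a new attention mechanism."]
clean = [s for s in sentences if not is_noisy(s)]    # keeps only the 2nd one
```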