2020
DOI: 10.1007/s11192-020-03421-9

Modeling citation worthiness by using attention-based bidirectional long short-term memory networks and interpretable models

Abstract: Scientists learn early on how to cite scientific sources to support their claims. Sometimes, however, scientists have challenges determining where a citation should be situated, or, even worse, fail to cite a source altogether. Automatically detecting sentences that need a citation (i.e., citation worthiness) could solve both of these issues, leading to more robust and well-constructed scientific arguments. Previous researchers have applied machine learning to this task but have used small datasets and models th…

Cited by 10 publications (11 citation statements) · References 50 publications
“…Table 2 summarizes the results in terms of the precision, recall, F1 score for l_c, and overall weighted F1 score. The baseline numbers reported here are either from prior works (Färber et al., 2018; Bonab et al., 2018; Zeng and Acuna, 2020) or based on architectures very similar to those used in these prior works. On the SEPID-cite dataset, our SC model obtained significantly better performance than the state-of-the-art results from Zeng and Acuna (2020). The results on the ACL-cite dataset clearly show the importance of context in this domain.…”
Section: Results — citation type: mentioning (confidence: 99%)
“…This vector is then passed through a feed-forward layer to obtain the class label. This approach is similar to Zeng and Acuna (2020), where the authors used GloVe embeddings (Pennington et al., 2014) to obtain sentence representations, and BiLSTMs for context representations. This formulation has also been used previously for question answering (Devlin et al., 2019) and passage re-ranking (Nogueira and Cho, 2019).…”
Section: Sentence-pair Classification — citation type: mentioning (confidence: 99%)
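The architecture these statements describe — encode a sentence with a bidirectional LSTM over word embeddings, then pass the resulting vector through a feed-forward softmax layer — can be sketched in plain numpy. This is a minimal illustration, not the authors' trained model: the weights are random, the dimensions (`d_in`, `d_h`) are toy stand-ins for the GloVe embedding size and hidden size, and a real system would use a framework such as PyTorch and learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b):
    """Run a single-direction LSTM over xs of shape (T, d_in);
    returns the hidden states, shape (T, d_h)."""
    d_h = U.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    hs = []
    for x in xs:
        z = W @ x + U @ h + b              # stacked gate pre-activations
        i = sigmoid(z[:d_h])               # input gate
        f = sigmoid(z[d_h:2 * d_h])        # forget gate
        o = sigmoid(z[2 * d_h:3 * d_h])    # output gate
        g = np.tanh(z[3 * d_h:])           # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        hs.append(h)
    return np.stack(hs)

def init_lstm(d_in, d_h):
    """Random toy parameters (W, U, b) for one LSTM direction."""
    return (rng.normal(scale=0.1, size=(4 * d_h, d_in)),
            rng.normal(scale=0.1, size=(4 * d_h, d_h)),
            np.zeros(4 * d_h))

def classify_sentence(xs, fwd, bwd, Wc, bc):
    """BiLSTM sentence encoder followed by a feed-forward softmax layer."""
    hf = lstm_forward(xs, *fwd)              # left-to-right pass
    hb = lstm_forward(xs[::-1], *bwd)[::-1]  # right-to-left pass
    v = np.concatenate([hf[-1], hb[0]])      # final state of each direction
    logits = Wc @ v + bc
    p = np.exp(logits - logits.max())
    return p / p.sum()  # [P(no citation needed), P(citation-worthy)]

d_in, d_h = 50, 32                           # toy embedding / hidden sizes
fwd, bwd = init_lstm(d_in, d_h), init_lstm(d_in, d_h)
Wc = rng.normal(scale=0.1, size=(2, 2 * d_h))
bc = np.zeros(2)
sentence = rng.normal(size=(12, d_in))       # 12 tokens of pretend embeddings
probs = classify_sentence(sentence, fwd, bwd, Wc, bc)
```

With random weights the two class probabilities are near 0.5 each; the point is only the data flow: token embeddings → two LSTM passes → concatenated sentence vector → feed-forward classifier.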
“…Subsequent works (Färber et al., 2018; Bonab et al., 2018) use a similar approach but employ deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). More recently, Zeng and Acuna (2020) proposed a Bidirectional Long Short-Term Memory (BiLSTM) based architecture and demonstrated that context, specifically the two adjacent sentences, can help improve the prediction of citation worthiness.…”
Section: Introduction — citation type: mentioning (confidence: 99%)
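The "two adjacent sentences" context mentioned above amounts to pairing each candidate sentence with its immediate neighbors before classification. A minimal sketch of that preprocessing step, assuming a hypothetical helper name and empty-string padding at document boundaries:

```python
def make_context_windows(sentences, k=1):
    """Pair each sentence with its k preceding and k following
    sentences, padding with empty strings at document edges."""
    padded = [""] * k + list(sentences) + [""] * k
    return [
        (padded[i - k:i], padded[i], padded[i + 1:i + k + 1])
        for i in range(k, k + len(sentences))
    ]

windows = make_context_windows(["A.", "B.", "C."])
# each entry: ([previous sentences], target sentence, [following sentences])
```

Each `(previous, target, following)` triple would then be encoded and fed to the classifier, so the model can use surrounding discourse when judging whether the target sentence needs a citation.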