Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1057
TutorialBank: A Manually-Collected Corpus for Prerequisite Chains, Survey Extraction and Resource Recommendation

Abstract: The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education…

Cited by 29 publications (23 citation statements)
References 15 publications
“…Modeling approaches in scientific document summarization include models that exploit citation contexts (Qazvinian et al., 2013; Goharian, 2015, 2017; Zerva et al., 2020), automated survey generation (Mohammad et al., 2009; Jha et al., 2015; Fabbri et al., 2018; Wang et al., 2018), and other techniques focusing on exploiting the unique properties of scientific documents, such as long length and structure (Conroy and Davis, 2017; Nikolov et al., 2018; Cohan et al., 2018; Xiao and Carenini, 2019). Yet, such methods have not been studied in the setting of extreme summarization (i.e.…”
Section: TLDR-PR
confidence: 99%
“…A prerequisite relation is defined as follows: if concept A can help in understanding concept B, then there is a prerequisite relation from A to B (Gordon et al., 2016). Prerequisite relations have received much attention in recent years (Pan et al., 2017a; Fabbri et al., 2018) and directly benefit teaching applications. To build prerequisite chains, we first reduce the number of candidate concept pairs by utilizing taxonomy information (Liang et al., 2015) and video dependencies (Roy et al., 2019), and then conduct manual annotation.…”
Section: Concept and Concept Graph
confidence: 99%
“…Consequently, they are not flexible enough to support ideas that demand more types of information. Moreover, these datasets contain only a small number of specific entity or relation instances; e.g., the prerequisite relations of TutorialBank (Fabbri et al., 2018) comprise only 794 cases, making them insufficient for advanced models (such as graph neural networks).…”
Section: Introduction
confidence: 99%
“…To support the ACL community, CL Scholar (Singh et al., 2018) presents a graph mining tool on top of the ACL Anthology and enables exploration of research progress. TutorialBank (Fabbri et al., 2018) helps researchers learn or stay up-to-date in the NLP field. Recently, paperswithcode 4 has become an open resource for ML papers, code and leaderboards.…”
Section: Related Work
confidence: 99%