2020
DOI: 10.1016/j.knosys.2020.106492

Joint event extraction along shortest dependency paths using graph convolutional networks

Cited by 30 publications (23 citation statements)
References 53 publications

“…(Rao et al., 2017) proposes a subgraph-matching-based method to extract biomedical events from AMR graphs, while another work uses an additional GCN-based encoder for obtaining better word representations. Besides, graph neural networks are also widely used for event extraction (Liu et al., 2018; Balali et al., 2020; Zhang et al., 2021) and relation and entity extraction (Zhang et al., 2018; Sun et al., 2020). Graph neural networks also demonstrate effectiveness in encoding other types of intrinsic structures of a sentence, such as knowledge graphs (Zhang et al., 2019a; ...), document-level relations (Sahu et al., 2019; Lockard et al., 2020; ...), and self-constructed graphs (Kim and Lee, 2012; Zhu et al., 2019; Qian et al., 2019; Sahu et al., 2020).…”
Section: Related Work
confidence: 99%
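
The statement above credits GCN-based encoders with building word representations over sentence graphs (dependency parses, AMR). As a rough illustration only, and not the cited paper's actual architecture, the sketch below shows one degree-normalized graph-convolution layer applied over a dependency-parse adjacency matrix; the class name, tensor shapes, and normalization choice are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer over a sentence's dependency graph (illustrative sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, word_repr: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # word_repr: (batch, seq_len, dim) token vectors, e.g. from a BiLSTM or BERT encoder.
        # adj: (batch, seq_len, seq_len) float adjacency matrix of the dependency parse,
        #      assumed symmetrized and with self-loops already added.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # node degrees, for normalization
        neighborhood = torch.bmm(adj, word_repr) / deg       # average over graph neighbours
        return torch.relu(self.linear(neighborhood))         # linear transform + nonlinearity

# Minimal usage with random tensors (hypothetical shapes):
# layer = GCNLayer(128)
# h = layer(torch.randn(2, 10, 128), torch.eye(10).repeat(2, 1, 1))
```
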
“…Initial attempts at event extraction relied on hand-crafted features and a pipeline architecture (Ahn, 2006; Gupta and Ji, 2009; Li et al., 2013). Later studies gained significant improvement from neural approaches, especially large pre-trained language models (Wadden et al., 2019; Nguyen et al., 2016; Lin et al., 2020; Balali et al., 2020). Recently, event extraction at the document level has gained more attention.…”
Section: Document-level Event Extraction
confidence: 99%
“…(Ahn, 2006; Gupta and Ji, 2009; ...). Later studies gained significant improvement from neural approaches, especially large pre-trained language models (Wadden et al., 2019; Nguyen et al., 2016; Lin et al., 2020; Balali et al., 2020). Recently, event extraction at the document level has gained more attention.…”
Section: Document-level Information Extraction
confidence: 99%