Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.126
Context Tracking Network: Graph-based Context Modeling for Implicit Discourse Relation Recognition

Abstract: Implicit discourse relation recognition (IDRR) aims to identify logical relations between two adjacent sentences in a discourse. Existing models fail to fully utilize the contextual information, which plays an important role in interpreting each local sentence. In this paper, we thus propose a novel graph-based Context Tracking Network (CT-Net) to model the discourse context for IDRR. The CT-Net first converts the discourse into a paragraph association graph (PAG), where each sentence tracks its closely …

Cited by 13 publications (10 citation statements)
References 20 publications
“…Some other hybrid neural networks have been designed through combining various models such as simplified topic model, gated convolutional network, factored tensor network, graph convolutional network, directed graphic model, sequence to sequence model, etc. [122,155,166,168]. For example, Xu et al. [155] proposed a Topic Tensor Network (TTN) model which combines a simplified topic model, a gated convolutional network and a factored tensor network.…”
Section: Hybrid Neural Network Models
confidence: 99%
“…Zhang et al [168] proposed a Semantic Graph Convolutional Network (SGCN) to enhance the inter-argument semantic interaction. They first encoded each argument into a representation vector by a BiLSTM network.…”
Section: Hybrid Neural Network Models
confidence: 99%
“…More recent work on implicit relation classification has adopted a graph-based context tracking network to model the necessary context for interpreting the discourse and has gained better performance on PDTB-2 (Zhang et al, 2021). In addition, the increase in the number of implicit relation instances in PDTB-3 (Prasad et al, 2019) has sparked more interest in exploring their recognition, such as Kim et al (2020) and Liang et al (2020).…”
Section: Previous Work
confidence: 99%
“…Kido and Aizawa 2016) and implicit (e.g. Liu et al. 2016; Wang and Lan 2016; Rutherford et al. 2017; Kim et al. 2020; Liang et al. 2020; Zhang et al. 2021) discourse relation classification in English PDTB-2 (Prasad et al., 2008) and PDTB-3 (Prasad et al., 2019), as well as on the PDTB-style Chinese newswire corpus (CDTB; Zhou and Xue 2012; Zhou et al. 2014), such as Schenk et al. (2016) and Weiss and Bajec (2016).…”
Section: Introduction
confidence: 99%
“…The main difference between KG-augmented models is how they use message passing (Schlichtkrull et al., 2018) or edge/path aggregation (Lin et al., 2019; Bosselut and Choi, 2019; Ma et al., 2019; Wang et al., 2020a) used in NLP. Some works use GNNs to model text structure (Yasunaga and Liang, 2020; Zhang et al., 2021), extract relations and entities (Xu et al., 2021a), solve multi-hop question answering (Saxena et al., 2020; Fang et al., 2020), and so on. Our work is based on the Graph Attention Networks (GATs) (Veličković et al., 2018) framework.…”
Section: Commonsense Question Answering
confidence: 99%