2017
DOI: 10.1007/978-3-319-69005-6_34

Memory Augmented Attention Model for Chinese Implicit Discourse Relation Recognition

Cited by 5 publications (4 citation statements)
References 23 publications
“…Fan et al [21] proposed a BiLSTM-based model combining a self-attention mechanism and syntactic information. Liu and Li [83] […] Some other neural models also adopt attention mechanisms for the IDRR task [3,28,29,85,113,115]. For example, Bai and Zhao [3] proposed to learn representations at different granularity levels via an attention-augmented neural model.…”
Section: Attention Mechanism
confidence: 99%
“…Rönnqvist et al (2017) proposed a Bi-LSTM model with an attention mechanism that links the two arguments by inserting special labels. Liu et al (2017) provided a memory augmented attention model that uses memory slots to store the interactions between the two input arguments.…”
Section: Related Work
confidence: 99%
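
The two statements above describe the cited paper's mechanism only briefly: memory slots store interactions between the two input arguments, and attention reads from that memory. As a rough illustration only, the following minimal PyTorch sketch shows one plausible way such a read path could look. The class name, dimensions, mean-pooling, and the use of a learned parameter matrix as the memory are all assumptions for illustration, not the authors' actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedAttention(nn.Module):
    """Illustrative memory-augmented attention over two discourse arguments."""

    def __init__(self, hidden_dim: int, num_slots: int):
        super().__init__()
        # External memory: each slot can hold one interaction pattern.
        self.memory = nn.Parameter(torch.randn(num_slots, hidden_dim))
        # Projects the concatenated argument pair into a query over the memory.
        self.key_proj = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, arg1: torch.Tensor, arg2: torch.Tensor) -> torch.Tensor:
        # arg1, arg2: (batch, seq_len, hidden_dim) encodings of the two arguments,
        # e.g. from a Bi-LSTM encoder (assumed, not shown here).
        a1 = arg1.mean(dim=1)                   # pooled representation of argument 1
        a2 = arg2.mean(dim=1)                   # pooled representation of argument 2
        pair = torch.cat([a1, a2], dim=-1)      # interaction of the two arguments
        query = self.key_proj(pair)             # (batch, hidden_dim) query into memory
        scores = query @ self.memory.t()        # (batch, num_slots) slot relevance
        weights = F.softmax(scores, dim=-1)     # attention over memory slots
        read = weights @ self.memory            # (batch, hidden_dim) memory read-out
        return torch.cat([query, read], dim=-1) # feature for relation classification

# Example usage with toy tensors:
model = MemoryAugmentedAttention(hidden_dim=64, num_slots=8)
arg1 = torch.randn(2, 10, 64)   # batch of 2, first argument of length 10
arg2 = torch.randn(2, 12, 64)   # second argument of length 12
features = model(arg1, arg2)    # shape: (2, 128)

In this sketch the memory is a fixed learned parameter and only the read path is shown; in the cited model the memory contents reportedly store the interactions between the argument pair, so the write/update path would differ.
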
“…In the literature, most previous studies have focused on English, with only a few on Chinese. Compared with traditional feature-based methods (Pitler et al, 2009; Lin et al, 2009; Kong and Zhou, 2017) that rely directly on feature engineering, recent neural network models (Liu et al, 2017; Qin et al, 2017; Guo et al, 2018; Bai and Zhao, 2018) can capture deeper semantic cues and learn better representations (Zhang et al, 2015). In particular, most neural network-based methods encode arguments using variants of Bi-LSTM or CNN (Qin et al, 2016; Guo et al, 2018) and propose various models (e.g., the gated relevance network, the encoder-decoder model, and interactive attention) to measure semantic relevance (Chen et al, 2016; Cianflone and Kosseim, 2018; Guo et al, 2018). Due to the large differences between the hypotactic English language and the paratactic Chinese language, English-based models, which rely heavily on sentence-level representations, may not function well on Chinese.…”
Section: Introduction
confidence: 99%
“…As mentioned above, various methods have been proposed for English (PDTB), while the Chinese task has received little attention in the literature. Liu et al [47] utilized an attention-based neural network to represent arguments and employed an external memory network to preserve crucial information for the Chinese task. We argue that Chinese requires word segmentation and other pre-processing, making it more sensitive to multi-granularity information.…”
Section: Hybrid Neural Network
confidence: 99%