Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.225
Discourse Self-Attention for Discourse Element Identification in Argumentative Student Essays

Abstract: This paper proposes to adapt self-attention to the discourse level for modeling discourse elements in argumentative student essays. Specifically, we focus on two issues. First, we propose structural sentence positional encodings to explicitly represent sentence positions. Second, we propose to use inter-sentence attentions to capture sentence interactions and enhance sentence representation. We conduct experiments on two datasets: a Chinese dataset and an English dataset. We find that (i) sentence positional encod…

Cited by 11 publications (8 citation statements)
References 31 publications (43 reference statements)
“…Following Song et al. (2020), we also used the position information as a feature input. The sentence, part-of-speech tags, and position information were then input to a BiLSTM for feature extraction as a whole, to better capture the contextual information.…”
Section: The Model and Results
Mentioning confidence: 99%
“…They also designed an effective coarse-to-fine argument fusion mechanism to further improve the precision rate. Song et al (2020) took a step further and identified four argument components (main claim, claim, premise, and other) in two datasets (a Chinese dataset and an English dataset) with structural sentence positional encodings to explicitly represent sentence positions and inter-sentence attentions to capture sentence interactions and enhance sentence representation. To the best of our knowledge, most existing models are rule-based or feature-based ( Persing and Ng, 2016 ; Stab and Gurevych, 2017 ), which require considerable manual efforts and are not flexible or robust in cross-domain scenarios.…”
Section: Related Work
Mentioning confidence: 99%
“…We also propose to use sentence position (spos) embedding as an input feature because it has been shown to be useful in other studies (e.g., Song et al., 2020). The sentence position encoding is…”
Section: Multi-task Learning With Structural Signal
Mentioning confidence: 99%
“…Note that ICNALE-AS2R was annotated at the sentence level while PEC was annotated at the segment level. Therefore, we convert the PEC annotation to the sentence level, following the strategy described by Song et al (2020). If a sentence contains only one AC, we use the whole sentence as an AC; if a sentence contains two or more ACs, we split it into multiple sentences while including the preceding tokens into each AC.…”
Section: Multi-corpora Training
Mentioning confidence: 99%
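The conversion strategy described above (one AC keeps the whole sentence; multiple ACs split the sentence, folding preceding tokens into the following AC) can be sketched as below. The function name and the character-offset representation of AC spans are hypothetical; this is one plausible reading of the strategy, not the paper's exact code.

```python
def to_sentence_level(sentence, ac_spans):
    # ac_spans: list of (start, end) character offsets of argument
    # components (ACs) inside `sentence`, in order of appearance.
    # One AC -> keep the whole sentence as that AC; several ACs ->
    # split so each piece ends where its AC ends, with any tokens
    # preceding an AC folded into that AC's piece.
    if len(ac_spans) <= 1:
        return [sentence]
    pieces, prev_end = [], 0
    for _start, end in ac_spans:
        pieces.append(sentence[prev_end:end].strip())
        prev_end = end
    tail = sentence[prev_end:].strip()
    if tail:  # trailing tokens after the last AC stay with it
        pieces[-1] = pieces[-1] + " " + tail
    return pieces

s = "Because phones distract, schools should ban them."
print(to_sentence_level(s, [(0, 24), (25, 49)]))
```

Splitting at AC boundaries like this keeps sentence-level and segment-level corpora compatible for joint training, at the cost of producing sentence fragments.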
“…The existing neural methods obtain a generic representation of the text through a hierarchical model using convolutional neural networks (CNN) for word-level representation and long short-term memory (LSTM) for sentence-level representation [ 4 ], which is not specific to different features. To enhance the representation of the essay, some studies have attempted to incorporate features such as prompt [ 3 , 13 ], organization [ 14 ], coherence [ 2 ], and discourse structure [ 15 , 16 , 17 ] into the neural model. These features are critical for the AES task because they help the model understand the essay while also making the essay scoring more interpretable.…”
Section: Introduction
Mentioning confidence: 99%
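The hierarchical scheme in that snippet — a word-level CNN producing sentence vectors, which then feed a sentence-level LSTM — can be sketched at the word level as follows. This is a single-filter, stdlib-only toy (real systems use many filters, learned embeddings, and an actual LSTM, which is omitted here); all names and dimensions are illustrative.

```python
import math
import random

random.seed(0)
EMB = 4    # toy word-embedding size
WIDTH = 2  # convolution window, in words

def conv_max_pool(word_vecs, filt):
    # One CNN filter slid over word windows, followed by max-over-time
    # pooling: the word-level half of the hierarchical model.
    scores = []
    for i in range(len(word_vecs) - WIDTH + 1):
        window = [x for v in word_vecs[i:i + WIDTH] for x in v]
        scores.append(math.tanh(sum(w * x for w, x in zip(filt, window))))
    return max(scores) if scores else 0.0

def sentence_vectors(essay, filters):
    # Each sentence (a list of word vectors) maps to one feature per
    # filter; in the full model these sentence vectors would then feed
    # a sentence-level (Bi)LSTM.
    return [[conv_max_pool(sent, f) for f in filters] for sent in essay]

# Toy essay: 3 sentences of 5 random word vectors each, 6 filters.
essay = [[[random.uniform(-1, 1) for _ in range(EMB)] for _ in range(5)]
         for _ in range(3)]
filters = [[random.uniform(-1, 1) for _ in range(EMB * WIDTH)]
           for _ in range(6)]
vecs = sentence_vectors(essay, filters)
print(len(vecs), len(vecs[0]))  # 3 6
```

The point of the hierarchy is that the CNN captures local word patterns per sentence while the recurrent layer models inter-sentence context; the features listed in the snippet (prompt, organization, coherence, discourse structure) are injected on top of this generic representation.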