Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1002
Representation Learning for Text-level Discourse Parsing

Abstract: Text-level discourse parsing is notoriously difficult, as distinctions between discourse relations require subtle semantic judgments that are not easily captured using standard features. In this paper, we present a representation learning approach, in which we transform surface features into a latent space that facilitates RST discourse parsing. By combining the machinery of large-margin transition-based structured prediction with representation learning, our method jointly learns to parse discourse while at t…
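The abstract describes projecting sparse surface features into a latent space and scoring shift-reduce parsing actions with a large-margin model. The Python sketch below only illustrates that data flow and is not the authors' implementation; the vocabulary size, latent dimensionality, action inventory, and all names (`A`, `W`, `score_actions`) are assumptions made for the example.

```python
# Minimal sketch (not the paper's code): surface features are projected into a
# latent space by a matrix A, and shift-reduce actions are scored by weights W.
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 5000      # assumed size of the surface-feature vocabulary
LATENT_DIM = 50        # assumed latent dimensionality
ACTIONS = ["shift", "reduce-NS", "reduce-SN", "reduce-NN"]  # illustrative action set

A = rng.normal(scale=0.01, size=(LATENT_DIM, VOCAB_SIZE))   # feature projection
W = rng.normal(scale=0.01, size=(len(ACTIONS), LATENT_DIM)) # action scoring weights

def score_actions(surface_features: np.ndarray) -> np.ndarray:
    """Project a sparse surface-feature vector and score each parser action."""
    latent = A @ surface_features    # representation-learning step
    return W @ latent                # large-margin scoring step

# one parser state, encoded as a bag of surface features
x = np.zeros(VOCAB_SIZE)
x[[10, 42, 999]] = 1.0
print(ACTIONS[int(np.argmax(score_actions(x)))])
```

In the paper itself the projection and the action weights are learned jointly under a large-margin objective; here they are random, purely to show how a latent representation feeds the shift-reduce decision.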

Cited by 216 publications (261 citation statements)
References 28 publications (42 reference statements)
“…a Scores reported from (Li et al., 2014) and DPLP (Ji and Eisenstein, 2014). b For Brazilian Portuguese, inter-annotator agreement scores are only available for the CST-news corpus; for Spanish, only precision scores are reported; for Basque, the scores reported are different (Iruskieta et al., 2015).…”
Section: Results
confidence: 99%
“…Note that we do not use all the words in the EDUs as features, contrary to (Li et al., 2014; Ji and Eisenstein, 2014). Our only word features are the words in the head set and at the boundaries, i.e., 7 words per EDU.…”
Section: Lexical Features
confidence: 99%
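As a rough illustration of the feature scheme described in that statement, the sketch below collects boundary tokens and head-set words for a single EDU. The exact split (two tokens from each boundary plus three head words) and the function name `lexical_features` are assumptions for the example, not the cited paper's code.

```python
# Illustrative sketch only: 2 leading + 2 trailing tokens plus an assumed 3 head
# words give the 7 word features per EDU mentioned in the quoted passage.
def lexical_features(edu_tokens, head_words):
    boundary = edu_tokens[:2] + edu_tokens[-2:]   # words at the EDU boundaries
    heads = sorted(head_words)[:3]                # words in the head set (assumed size 3)
    return boundary + heads

print(lexical_features(["although", "the", "plan", "was", "approved", "early"],
                       {"approved", "plan", "was"}))
```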
“…One of the advantages is the ability to learn representations automatically. Representing words or relations with continuous vectors (Mikolov et al., 2013; Ji and Eisenstein, 2014) embeds semantics in the same space, which helps alleviate the data sparseness problem and enables end-to-end and multi-task learning. Recurrent neural networks (RNNs) (Graves, 2012) and variants such as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) networks perform well at capturing long-distance dependencies on tasks such as Named Entity Recognition (NER) (Chiu and Nichols, 2016; Ma and Hovy, 2016), dependency parsing (Dyer et al., 2015), and semantic composition of documents (Tang et al., 2015).…”
Section: Neural Sequence Modeling
confidence: 99%
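To make the quoted point about recurrent models concrete, here is a minimal sequence-tagger sketch. It assumes PyTorch, uses illustrative sizes and names (e.g. `LSTMTagger`), and is not drawn from any of the cited papers.

```python
# Minimal sketch, assuming PyTorch: an LSTM tagger of the kind the quoted
# passage refers to (NER-style per-token tagging); all sizes are illustrative.
import torch
import torch.nn as nn

class LSTMTagger(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))  # carries context along the sequence
        return self.out(hidden)                       # per-token tag scores

tagger = LSTMTagger()
scores = tagger(torch.randint(0, 1000, (2, 12)))      # batch of 2 sentences, 12 tokens each
print(scores.shape)                                   # torch.Size([2, 12, 9])
```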
“…Most work on discourse parsing has focused on English and on the RST-DT (Ji and Eisenstein, 2014; Feng and Hirst, 2014; Li et al., 2014; Joty et al., 2013), and so has work on discourse segmentation (Xuan Bach et al., 2012; Fisher and Roark, 2007; Subba and Di Eugenio, 2007). And while discourse parsing is a document-level task, discourse segmentation is done at the sentence level, assuming that sentence boundaries are known.…”
Section: Introduction
confidence: 99%