Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d15-1263

Better Document-level Sentiment Analysis from RST Discourse Parsing

Abstract: Discourse parsing is an integral part of understanding information flow and argumentative structure in documents. Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. However, discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. The treebanks share the same underlying linguistic theory, but differ slightly in the way documents are annotated. In this paper, we present (a) a new discourse parse…

Cited by 135 publications (132 citation statements)
References 35 publications (35 reference statements)
“…This theory guided the annotation of the RST Discourse Treebank (RST-DT) for English, from which several text-level discourse parsers have been induced (Hernault et al., 2010; Joty et al., 2012; Feng and Hirst, 2014; Li et al., 2014; Ji and Eisenstein, 2014). Such parsers have proven to be useful for various downstream applications (Daumé III and Marcu, 2009; Burstein et al., 2003; Higgins et al., 2004; Thione et al., 2004; Sporleder and Lapata, 2005; Taboada and Mann, 2006; Louis et al., 2010; Bhatia et al., 2015).…”
Section: Introduction
confidence: 99%
“…In the Positive/Negative setting, if the prediction and the target had the same sign, they were considered equal. Notice that this is different from training a classifier for binary classification, which is a much easier task (see Bhatia et al., 2015). The difference in accuracy between these two settings signals that distinguishing between very positive and positive and distinguishing between very negative and negative is rather hard.…”
Section: Training and Evaluating the Models
confidence: 99%
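The sign-agreement evaluation described in the statement above can be sketched in a few lines. This is an illustrative reconstruction, not code from the citing paper; the function name and the score scale centred at zero are assumptions.

```python
# Minimal sketch (illustrative, not from the cited paper): scoring fine-grained
# sentiment predictions under a coarse Positive/Negative setting, where a
# prediction counts as correct if it has the same sign as the target.

def sign_agreement_accuracy(predictions, targets):
    """Fraction of items whose predicted and target scores share a sign.

    `predictions` and `targets` are assumed to be sequences of real-valued
    sentiment scores on a scale centred at 0 (e.g. -2 .. +2).
    """
    correct = sum(
        1 for p, t in zip(predictions, targets)
        if (p > 0) == (t > 0)  # same polarity counts as a match
    )
    return correct / len(targets)


# Example: a model trained on a fine-grained scale, scored only on polarity.
preds = [1.4, -0.3, 2.0, -1.1]
gold = [2, -1, 1, 1]
print(sign_agreement_accuracy(preds, gold))  # 0.75
```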
“…However, these approaches usually rely exclusively on the number of occurrences of certain relevant words or phrases. As such, these methods are not capable of taking into account the actual semantic relationships between parts of the document, individual sentences or even subclauses [2], [3]. Accordingly, these methods often struggle to achieve a favorable performance for longer documents and, hence, new approaches are desired [4].…”
Section: Introduction
confidence: 99%
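The occurrence-counting approaches the statement above criticises can be illustrated with a minimal lexicon baseline. This sketch is not from any of the cited works; the lexicon entries and example sentence are invented for illustration.

```python
# Minimal sketch of an occurrence-counting sentiment baseline: polarity is
# decided purely by counting lexicon hits, with no notion of discourse
# structure between clauses. Lexicon entries here are illustrative only.

POSITIVE = {"good", "great", "excellent", "enjoyable"}
NEGATIVE = {"bad", "poor", "boring", "terrible"}

def lexicon_polarity(document: str) -> str:
    tokens = document.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A concessive structure defeats pure counting: a human reads the overall
# sentiment as positive, but negative words outnumber positive ones.
print(lexicon_polarity("The acting was poor and the pacing terrible , "
                       "but overall the film was great"))  # -> "negative"
```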
Proceedings of the 50th Hawaii International Conference on System Sciences, 2017. URI: http://hdl.handle.net/10125/41288
“…The only investigation of machine learning for rhetoric-structure-based sentiment analysis is given in [2], which utilizes recursive neural networks (RNN). As such, it struggles with small datasets, as in this case, where we need to rely on hand-crafted feature engineering.…”
confidence: 99%
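For readers unfamiliar with how a recursive neural network composes sentiment over a rhetorical-structure tree, the following is a minimal, self-contained sketch. The tree encoding, dimensions, and weights are illustrative assumptions; it does not reproduce the model from Bhatia et al. (2015) or the work cited as [2].

```python
# Hedged sketch: recursive composition of sentiment over a binary RST-style
# discourse tree. Leaves are elementary discourse unit (EDU) vectors; internal
# nodes combine their two children with a shared composition function.

import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8
W = rng.normal(scale=0.1, size=(HIDDEN, 2 * HIDDEN))   # composition weights
b = np.zeros(HIDDEN)
w_out = rng.normal(scale=0.1, size=HIDDEN)              # sentiment readout

def compose(left_vec, right_vec):
    """Combine the representations of two child discourse units."""
    return np.tanh(W @ np.concatenate([left_vec, right_vec]) + b)

def tree_encode(node):
    """Recursively encode a discourse tree; leaves carry EDU vectors."""
    if isinstance(node, np.ndarray):          # leaf: an EDU embedding
        return node
    left, right = node                        # internal node: (left, right)
    return compose(tree_encode(left), tree_encode(right))

# Toy document with three EDUs, structured as ((edu1, edu2), edu3).
edus = [rng.normal(size=HIDDEN) for _ in range(3)]
doc_vec = tree_encode(((edus[0], edus[1]), edus[2]))
print(float(w_out @ doc_vec))                 # document-level sentiment score
```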