Proceedings of the 6th Workshop on Argument Mining 2019
DOI: 10.18653/v1/w19-4517
Ranking Passages for Argument Convincingness

Abstract: In data ranking applications, pairwise annotation is often more consistent than cardinal annotation for learning ranking models. We examine this in a case study on ranking text passages for argument convincingness. Our task is to choose text passages that provide the highest-quality, most-convincing arguments for opposing sides of a topic. Using data from a deployed system within the Bing search engine, we construct a pairwise-labeled dataset for argument convincingness that is substantially more comprehensive …

Cited by 8 publications (7 citation statements)
References 28 publications
“…Following Simpson and Gurevych (2018), we report average Pearson (r) and Spearman (ρ) correlations, and compare the results of our methods to the Bi-LSTM and GPPL methods published there, as well as to the EviConvNet method, the best result from Gleize et al (2019), and to the SWE+FFNN method, the best result from Potash, Ferguson, and Hazen (2019).…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…An alternative approach to assess arguments is to focus on their relative convincingness, by comparing pairs of arguments with similar stance. This approach is introduced in Habernal and Gurevych (2016) and Simpson and Gurevych (2018), and further assessed in Potash, Bhattacharya, and Rumshisky (2017), Gleize et al (2019), and Potash, Ferguson, and Hazen (2019). As part of their work, Habernal and Gurevych (2016) introduce two datasets: UKPConvArgRank (henceforth, UKPRank) and UKPConvArgAll, which contain 1k and 16k arguments and argument-pairs, respectively.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…They can be expressed explicitly or implicitly [146]. Fragments can be messages such as tweets or posts [55,86], paragraphs [144] or complete articles [70]. Joseph et al [86] see stances as latent properties of users rather than text fragments.…”
Section: Claims vs Stances vs Viewpoints
Citation type: mentioning (confidence: 99%)
“…The more convincing argument is then predicted using a feature-rich SVM and a simple bidirectional LSTM. Other approaches to the same task map passage representations to real-valued scores using Gaussian Process Preference Learning (Simpson and Gurevych, 2018) or represent arguments by the sum of their token embeddings (Potash et al, 2017), later extended by a Feed Forward Neural Network (Potash et al, 2019). Recently, Gleize et al (2019) employed a Siamese neural network to rank arguments by the convincingness of evidence.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)