Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1564

Automatic Argument Quality Assessment - New Datasets and Methods

Abstract: We explore the task of automatic assessment of argument quality. To that end, we actively collected 6.3k arguments, more than a factor of five compared to previously examined data. Each argument was explicitly and carefully annotated for its quality. In addition, 14k pairs of arguments were annotated independently, identifying the higher quality argument in each pair. In spite of the inherent subjective nature of the task, both annotation schemes led to surprisingly consistent results. We release the labeled d…
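The abstract describes two annotation schemes: point-wise quality labels for individual arguments and pairwise comparisons that identify the higher-quality argument in each pair. As a rough illustration of how pairwise judgments can be aggregated into per-argument scores, here is a minimal Python sketch using simple win rates; the function name and toy data are hypothetical, and this is not necessarily the aggregation method used in the paper.

```python
# Illustrative sketch (not the paper's method): turning pairwise
# "which argument is better?" annotations into per-argument scores
# via simple win rates. Argument IDs and the `pairs` data are toy examples.
from collections import defaultdict

def win_rate_scores(pairs):
    """pairs: iterable of (winner_id, loser_id) annotation outcomes."""
    wins = defaultdict(int)
    total = defaultdict(int)
    for winner, loser in pairs:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    # Score each argument by the fraction of comparisons it won.
    return {arg: wins[arg] / total[arg] for arg in total}

# Hypothetical comparisons between three arguments.
pairs = [("a1", "a2"), ("a1", "a3"), ("a2", "a3"), ("a3", "a2")]
print(win_rate_scores(pairs))  # {'a1': 1.0, 'a2': 0.333..., 'a3': 0.333...}
```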

Cited by 46 publications (76 citation statements).
References 20 publications (23 reference statements).
“…Arguments in their datasets, collected in the context of Speech by Crowd experiments, are suited to the use-case of civic engagement platforms, giving premium to the usability of an argument in oral communication. Our dataset differs in three respects: (1) our dataset is larger by a factor of 5 compared to previous datasets annotated for point-wise quality; (2) our data were collected mainly from crowd contributors that presumably better represent the general population compared to targeted audiences such as debate clubs; (3) we performed an extensive analysis of argument scoring methods and introduce superior scoring methods that consider annotators credibility without removing them entirely from the labeled data, as is done in Toledo et al (2019).…”
Section: Related Work
confidence: 99%
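The citing paper contrasts its scoring approach, which weights annotators by an estimated credibility rather than discarding low-agreement annotators, with the filtering used in Toledo et al. (2019). The following is a minimal sketch of credibility-weighted label aggregation, assuming binary point-wise quality judgments; the helper name, data structures, and weights are hypothetical and drawn from neither paper.

```python
# Minimal sketch of credibility-weighted aggregation: each annotator's
# quality judgment for an argument is weighted by a credibility estimate
# instead of being dropped. All names and values here are hypothetical.
def weighted_quality(labels, credibility):
    """labels: {annotator_id: 0/1 quality judgment for one argument}
    credibility: {annotator_id: weight in [0, 1]}"""
    num = sum(credibility[a] * y for a, y in labels.items())
    den = sum(credibility[a] for a in labels)
    return num / den if den else 0.0

labels = {"ann1": 1, "ann2": 0, "ann3": 1}             # hypothetical judgments
credibility = {"ann1": 0.9, "ann2": 0.4, "ann3": 0.7}  # hypothetical weights
print(round(weighted_quality(labels, credibility), 3))  # 0.8
```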
“…More recently, IBM also introduced Speech by Crowd, a service which supports the collection of free-text arguments from large audiences on debatable topics to generate meaningful narratives (Toledo et al 2019). An important sub-task of this service is automatic assessment of argument quality, which is the focus of the present work.…”
Section: Introduction
confidence: 99%