Proceedings of the 29th ACM International Conference on Information & Knowledge Management 2020
DOI: 10.1145/3340531.3411960
Quality-Aware Ranking of Arguments

Abstract: Argument search engines identify, extract, and rank the most important arguments for and against a given controversial topic. A number of such systems have recently been developed, usually focusing on classic information retrieval ranking methods that are based on frequency information. An important aspect that has been ignored so far by search engines is the quality of arguments. We present a quality-aware ranking framework for arguments already extracted from texts and represented as argument graphs, conside…
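The truncated abstract leaves the exact model unspecified, but the core idea of quality-aware ranking can be illustrated with a minimal sketch: interpolate a relevance score with a quality score. The linear combination, the `alpha` weight, and the field names below are illustrative assumptions, not the paper's actual framework.

```python
def rank_arguments(arguments, alpha=0.5):
    """Sort arguments by a convex combination of relevance and quality.

    NOTE: the interpolation and the alpha weight are illustrative
    assumptions, not the model from the paper.
    """
    return sorted(
        arguments,
        key=lambda a: alpha * a["relevance"] + (1 - alpha) * a["quality"],
        reverse=True,
    )

# Hypothetical arguments with pre-computed scores.
args = [
    {"id": "a", "relevance": 0.9, "quality": 0.2},
    {"id": "b", "relevance": 0.5, "quality": 0.9},
    {"id": "c", "relevance": 0.1, "quality": 0.1},
]
ranked = rank_arguments(args)
```

With equal weighting, argument "b" overtakes the more relevant but lower-quality "a", which is the kind of reordering a quality-aware ranker is meant to produce.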


Cited by 5 publications (5 citation statements)
References 31 publications (77 reference statements)
“…Dumani and Schenkel [24] applied clustering with NLP by creating a quality-aware ranking framework for arguments extracted from texts and represented in graphs. To achieve this, they used a (claim, premise) dataset based on debates from online portals, in which they used SBERT instead of BERT, previously used in [25], to obtain the embeddings of the claims and premises.…”
Section: Related Work
confidence: 99%
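The matching step described in the statement above — embedding claims and premises, then comparing them — can be sketched with cosine similarity. Real vectors would come from SBERT (the sentence-transformers library); the three-dimensional toy vectors here are invented stand-ins so the sketch stays self-contained.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d stand-ins for SBERT embeddings; in practice these would be
# produced by sentence-transformers, e.g. model.encode(texts).
claim = [0.9, 0.1, 0.0]
premises = {
    "p1": [0.8, 0.2, 0.1],  # semantically close to the claim
    "p2": [0.0, 0.1, 0.9],  # unrelated
}
ranked = sorted(premises, key=lambda p: cosine(claim, premises[p]), reverse=True)
```

Sorting premises by similarity to the claim yields "p1" before "p2", mirroring how embedding-based retrieval ranks candidates.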
“…For ranking, they utilize ElasticSearch together with the scoring model Okapi BM25 [11]. In our work, we also use the corpus from Ajjour et al. [2], as it is the official dataset of Touché. In contrast to the previously mentioned models, ours works more strictly with a two-step retrieval and does not take textual similarity between query and premise into account, because convincing premises do not need to have much textual overlap with the query.…”
Section: Related Work
confidence: 99%
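The Okapi BM25 scoring that ElasticSearch applies can be sketched in plain Python. This is a simplified version of the standard formula with common defaults for `k1` and `b`; the tokenized toy corpus is invented for illustration.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Okapi BM25 score of each tokenized document against the query."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # Document frequency of every term in the corpus.
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores

# Toy tokenized premises (invented for illustration).
docs = [
    ["nuclear", "energy", "is", "clean"],
    ["ban", "nuclear", "weapons"],
    ["solar", "energy", "works"],
]
scores = bm25_scores(["nuclear", "energy"], docs)
```

The first document matches both query terms and scores highest; this term-overlap dependence is exactly what the statement contrasts with its overlap-free two-step retrieval.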
“…In this section, we examine the process of finding arguments that runs in the background when a query comes in. For this GUI, we implemented the probabilistic framework described in detail in our groundwork [6,7] and presented in the CLEF lab Touché. Due to space limitations, we review only the most important points below.…”
Section: Backend
confidence: 99%