2020
DOI: 10.1007/978-3-030-58219-7_26

Overview of Touché 2020: Argument Retrieval

Cited by 32 publications (24 citation statements)
References 26 publications
“…In retrieval tasks, human relevance judgments of retrieval results for a fixed set of topics allow for evaluating the effectiveness of competing retrieval models. Known as the Cranfield paradigm or TREC-style evaluation (Voorhees, 2001), it is also employed in textual argument retrieval within the Touché shared tasks (Bondarenko et al., 2020).…”
Section: Crowdsourcing Relevance Judgements (citation type: mentioning)
confidence: 99%
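The Cranfield/TREC-style setup described in the quote boils down to scoring each system's ranked result list against pooled human relevance judgments, topic by topic. Below is a minimal sketch of one such scoring step (nDCG@k) in Java; the class name, method names, and toy judgment values are assumptions for illustration, not the Touché lab's actual evaluation code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: score one topic's ranked results against graded human
// relevance judgments with nDCG@k (Cranfield/TREC-style evaluation).
public class NdcgSketch {

    // Discounted cumulative gain over the first k graded relevance labels
    // of a ranked list (gains listed in rank order, rank 1 first).
    static double dcg(List<Integer> gains, int k) {
        double score = 0.0;
        for (int i = 0; i < Math.min(k, gains.size()); i++) {
            score += (Math.pow(2, gains.get(i)) - 1)   // graded gain 2^rel - 1
                    / (Math.log(i + 2) / Math.log(2)); // log2(rank + 1) discount
        }
        return score;
    }

    // nDCG@k: the system ranking's DCG normalized by the DCG of an ideal
    // ranking built from all judged documents for the topic.
    static double ndcg(List<Integer> systemGains, List<Integer> judgedGains, int k) {
        List<Integer> ideal = new ArrayList<>(judgedGains);
        ideal.sort(Collections.reverseOrder());
        double idealDcg = dcg(ideal, k);
        return idealDcg == 0.0 ? 0.0 : dcg(systemGains, k) / idealDcg;
    }

    public static void main(String[] args) {
        // Hypothetical graded judgments (0 = not relevant, 2 = highly relevant)
        // for the documents one system returned for a single topic, in rank order.
        List<Integer> systemGains = List.of(2, 0, 1, 2, 0);
        // Judgments for the topic's full judgment pool.
        List<Integer> pool = List.of(2, 2, 2, 1, 1, 0, 0, 0);
        System.out.printf("nDCG@5 = %.3f%n", ndcg(systemGains, pool, 5));
    }
}
```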
“…In this section, we briefly review the most popular argument search engines and explain both similarities and differences to our GUI. For work in the area of argument retrieval, we refer to the CLEF lab Touché [3-5].…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…Wachsmuth et al. [15] present Args, one of the first argument search engine prototypes. Args runs on the dataset from Ajjour et al. [2], which is now also the official dataset of the CLEF lab Touché [3-5]. The dataset draws its arguments from five debate portals indexed by the Java framework Apache Lucene.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
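Since the quoted passage notes that the underlying argument corpus is indexed with Apache Lucene, the following sketch shows what such an index-and-retrieve loop can look like with Lucene's standard API (an in-memory directory, a single text field, and the default BM25 ranking). The field name, toy argument texts, and query are assumptions for illustration; this is not the actual Args or Touché indexing code.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class ArgumentIndexSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();   // in-memory index for the sketch
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index a couple of toy "arguments" in a single stored text field.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            for (String text : new String[] {
                    "School uniforms reduce peer pressure and bullying.",
                    "Nuclear energy is a low-carbon option for base-load power."}) {
                Document doc = new Document();
                doc.add(new TextField("argument", text, Field.Store.YES));
                writer.addDocument(doc);
            }
        }

        // Retrieve arguments for a free-text query with Lucene's default ranking.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(
                    new QueryParser("argument", analyzer).parse("school uniforms"), 10);
            for (ScoreDoc hit : hits.scoreDocs) {
                System.out.printf("%.3f  %s%n", hit.score,
                        searcher.doc(hit.doc).get("argument"));
            }
        }
    }
}
```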
“…The input to this system is not structured but rather a query in free textual form. The Touché shared task on argument retrieval at CLEF (Bondarenko et al., 2020b, 2021) featured a related track. The task was to retrieve, from a large web corpus, documents answering comparative question queries like "What IDE is better for Java: NetBeans or Eclipse?".…”
Section: Related Work (citation type: mentioning)
confidence: 99%