2022
DOI: 10.1007/978-3-031-13643-6_21
Overview of Touché 2022: Argument Retrieval

Cited by 16 publications (10 citation statements)
References 58 publications
“…Spacerini can also be leveraged by organizers of shared tasks such as MIRACL (Zhang et al, 2022) and Touché (Bondarenko et al, 2022), who want to help participants explore the datasets without forcing them to download large volumes of data or giving participants full access to the data: it is indeed possible to host the index privately on the Hugging Face Hub and only expose access to it through a search interface. Spacerini can also be used as a platform for participants to deploy working prototypes of their submissions with a unified interface provided by the organizer as a cookiecutter template.…”
Section: Shared Task Organizers (mentioning)
confidence: 99%
“…Ajjour and Al-Khatib (2021) analyzed several stance classifiers for textual arguments, which achieved an accuracy between 0.50 and 0.77, and identified as challenges classifiers' inadequate topic knowledge and arguments that only partially agree or disagree. Similarly, Carnot et al (2023) identified several challenges for detecting the stance expressed in images when analyzing the submissions to the Touché 2022 shared task on image retrieval for argumentation (Bondarenko et al, 2022): bridging the semantic gap for diagrams, ambiguity arising from diverse valuations leading to varied interpretations, the dependence of image understanding on background knowledge, regional relevance, the presence of both stances in one image, irony, and more. All of these also apply here, but maybe to a lesser degree as classifiers were trained for each topic.…”
Section: Related Work (mentioning)
confidence: 99%
“…In recent years, the analysis of the argumentative stance of images and texts has gained significant attention. Several shared tasks have been conducted in this area, like the same-side stance classification (Körner et al, 2021) on texts, and the image retrieval for arguments (Bondarenko et al, 2022, 2023) on images. However, especially for images, the task of stance detection is far from being solved (Carnot et al, 2023).…”
Section: Introduction (mentioning)
confidence: 99%
“…Such information needs were in the focus of the comparative argument retrieval task at the Touché 2022 lab (Bondarenko et al, 2022b). Given a query with two comparison objects (e.g., the London vs. Paris example), the goal was to retrieve results that contain arguments for or against either object.…”
Section: Introduction (mentioning)
confidence: 99%
“…For our experiments, we use the 26 runs (ranked lists of results) submitted to the task, as well as the relevance and quality assessments and the stance labels that the task organizers provided (Bondarenko et al, 2022b). In the task, the retrieval effectiveness of the submitted runs was evaluated using nDCG@5 (Järvelin and Kekäläinen, 2002) for topical relevance and for argument quality, and the stance detection effectiveness was evaluated using macro-avg.…”
Section: Introduction (mentioning)
confidence: 99%
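The nDCG@5 measure mentioned in the statement above can be sketched as follows — a minimal illustration of the standard discounted-cumulative-gain formula from Järvelin and Kekäläinen (2002), not the task organizers' evaluation code; the toy relevance grades are invented for demonstration:

```python
import math

def dcg_at_k(gains, k=5):
    # Discounted cumulative gain: each graded gain is discounted
    # by log2(rank + 1), summed over the top-k ranked results.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(run_gains, judged_gains, k=5):
    # Normalize by the DCG of the ideal ranking, i.e. all judged
    # gains sorted in descending order.
    ideal = dcg_at_k(sorted(judged_gains, reverse=True), k)
    return dcg_at_k(run_gains, k) / ideal if ideal > 0 else 0.0

# Toy example: grades (0-2) of a run's top-5 results vs. all judged grades.
print(round(ndcg_at_k([2, 1, 0, 2, 1], [2, 2, 2, 1, 1, 0, 0]), 3))  # → 0.764
```

A perfect top-5 ordering of the judged documents would score 1.0; the cutoff at rank 5 reflects the shallow pooling typical of such shared-task evaluations.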