Proceedings of the 4th Workshop on Argument Mining 2017
DOI: 10.18653/v1/w17-5106
Building an Argument Search Engine for the Web

Abstract: Computational argumentation is expected to play a critical role in the future of web search. To make this happen, many search-related questions must be revisited, such as how people query for arguments, how to mine arguments from the web, or how to rank them. In this paper, we develop an argument search framework for studying these and further questions. The framework allows for the composition of approaches to acquiring, mining, assessing, indexing, querying, retrieving, ranking, and presenting arguments while…
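
The abstract describes the framework as a composition of interchangeable processing stages. As a rough illustration only, and not the paper's actual code, a minimal sketch of such stage composition might look as follows; the Argument fields and the compose() helper are assumptions for illustration:

```python
# Minimal sketch of composable argument search stages; the Argument
# fields and the compose() helper are illustrative assumptions, not
# the framework code from the paper.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Argument:
    claim: str
    premises: List[str]
    stance: str = "unknown"  # e.g. "pro" or "con" with respect to a topic
    score: float = 0.0       # set by an assessing or ranking stage

# Each stage maps a list of arguments to a list of arguments, so mining,
# assessing, indexing, retrieving, and ranking approaches can be swapped.
Stage = Callable[[List[Argument]], List[Argument]]

def compose(*stages: Stage) -> Stage:
    def pipeline(arguments: List[Argument]) -> List[Argument]:
        for stage in stages:
            arguments = stage(arguments)
        return arguments
    return pipeline

# Example: a trivial ranking stage ordering arguments by score.
def rank(arguments: List[Argument]) -> List[Argument]:
    return sorted(arguments, key=lambda a: a.score, reverse=True)
```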

Cited by 118 publications (101 citation statements)
References 29 publications

“…As part of the development of Speech by Crowd, 6.3k arguments were collected from contributors of various levels and are released as part of this work. An important sub-task of such a service is the automatic assessment of argument quality, which has already shown its importance for prospective applications such as automated decision making (Bench-Capon et al., 2009), argument search (Wachsmuth et al., 2017b), and writing support (Stab and Gurevych, 2014). Identifying argument quality in the context of Speech by Crowd allows the top-quality arguments to surface out of many contributions.…”
Section: Introduction
confidence: 99%
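
As a toy illustration of how quality assessment lets the best arguments surface, consider the following sketch; quality_score() is a stand-in heuristic, not the trained quality model the cited work uses:

```python
# Toy sketch of surfacing top-quality arguments; quality_score() is a
# placeholder heuristic, not a trained argument-quality model.
from typing import List, Tuple

def quality_score(argument: str) -> float:
    # Assumption for illustration: longer contributions score higher,
    # capped at 1.0. Real systems learn this from annotated data.
    return min(len(argument.split()) / 50.0, 1.0)

def top_arguments(arguments: List[str], k: int = 5) -> List[Tuple[float, str]]:
    ranked = sorted(((quality_score(a), a) for a in arguments), reverse=True)
    return ranked[:k]
```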
“…Therefore, the authors concluded that certain argument components (backing, warrant), as introduced in [37], and other argumentation schemes are often only stated implicitly in typical argumentative documents on the web. In more recent work, argumentation schemes have become simpler and more flexible [34,42]. This enables broader applicability and topic-dependent argument search across multiple text types.…”
Section: Related Work
confidence: 99%
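
A simpler, more flexible scheme of the kind this snippet refers to can be pictured as a plain claim-premise record keyed by topic; the field names and the keyword-overlap filter below are illustrative assumptions:

```python
# Sketch of a simple claim-premise scheme with topic-dependent search;
# field names and the keyword-overlap filter are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class SimpleArgument:
    topic: str
    claim: str
    premises: List[str]

def topic_search(arguments: List[SimpleArgument], query: str) -> List[SimpleArgument]:
    # Topic-dependent retrieval reduced to keyword overlap for illustration.
    query_terms = set(query.lower().split())
    return [a for a in arguments if query_terms & set(a.topic.lower().split())]
```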
“…The DiGAT tool [109] has been developed alongside an annotation scheme and a graph-based inter-annotator agreement measure based on semantic similarity. Like GraPAT, DiGAT relies on graph structures for the annotation process, aiming at simple and accurate annotation of relations between entities in long texts. The establishment of the TextCoop platform alongside the Dislog language is presented in [110].…”

Tool                | Web UI | Manual Annotation | Arg Retrieval | Arg Evaluation
WebAnno [112]       | Yes    | Yes               |               |
BRAT [102]          | Yes    | Yes               |               |
GraPAT [105]        | Yes    | Yes               |               |
DiGAT [109]         | Yes    | Yes               |               |
MARGOT [113]        | Yes    |                   | Yes           | Yes
OVA+ [114]          | Yes    | Yes               |               |
TOAST [115]         | Yes    |                   | Yes           | Yes
GATE Teamware [108] | Yes    | Yes               |               |
Args [116]          | Yes    |                   | Yes           | Yes
ArgumenText [117]   | Yes    |                   | Yes           | Yes
Rationale [118]     | Yes    | Yes               |               |

Section: General-Purpose NLP Tools
confidence: 99%
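
To make the idea of a similarity-based agreement measure concrete, here is a rough sketch that matches two annotators' relation edges when the connected text spans are sufficiently similar; SequenceMatcher stands in for a real semantic similarity model, and this is not DiGAT's actual measure:

```python
# Sketch of a graph-based agreement measure in the spirit described above:
# edges are matched when their spans are similar and labels agree.
# SequenceMatcher is a stand-in for a semantic similarity model.
from difflib import SequenceMatcher
from typing import List, Tuple

Edge = Tuple[str, str, str]  # (source span, target span, relation label)

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def agreement(ann1: List[Edge], ann2: List[Edge]) -> float:
    # Fraction of annotator 1's edges that annotator 2 also marked,
    # allowing approximate span matches with identical labels.
    if not ann1:
        return 1.0
    matched = sum(
        1 for (s1, t1, l1) in ann1
        if any(l1 == l2 and similar(s1, s2) and similar(t1, t2)
               for (s2, t2, l2) in ann2)
    )
    return matched / len(ann1)
```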
“…The existing tools can be classified into three categories: tools that aid the manual annotation process, general-purpose NLP tools, and tools that offer an entire mechanism for argument search, retrieval, and evaluation. It has to be underlined that argument evaluation differs across the approaches: in [113] the number of claims and premises is presented, in [115] the weight of the argument is calculated, and in [116,117] the arguments are categorized as pro and con.…”
Section: Argument Search, Retrieval and Automatic Annotation
confidence: 99%
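
As a minimal sketch of the pro/con categorization mentioned for [116,117], a lexicon-based stance heuristic could look like this; the cue lists are assumptions, and real argument search engines train stance classifiers instead:

```python
# Toy lexicon-based stance heuristic; cue lists are assumptions, not the
# classifiers used by the cited argument search engines.
PRO_CUES = {"support", "benefit", "improve", "advantage"}
CON_CUES = {"oppose", "harm", "risk", "disadvantage"}

def stance(argument: str) -> str:
    words = set(argument.lower().split())
    pro, con = len(words & PRO_CUES), len(words & CON_CUES)
    if pro > con:
        return "pro"
    if con > pro:
        return "con"
    return "neutral"
```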