IEEE/WIC/ACM International Conference on Web Intelligence 2019
DOI: 10.1145/3350546.3352506
TACAM: Topic And Context Aware Argument Mining

Abstract: In this work, we address the problem of argument search. The purpose of argument search is to distill pro and contra arguments for requested topics from large text corpora. In previous works, the usual approach is to use a standard search engine to extract text parts relevant to the given topic and subsequently apply an argument recognition algorithm to select arguments from them. The main challenge in the argument recognition task, also known as argument mining, is that often sentence…
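The two-stage pipeline described in the abstract (topic-based retrieval followed by argument recognition) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's method: the retrieval step is plain keyword matching in place of a search engine, and the classifier is a toy cue-word heuristic in place of the TACAM model; all function names here are hypothetical.

```python
# Hypothetical two-stage argument-search pipeline:
# (1) retrieve topic-relevant sentences, (2) label each as a pro
# argument, a contra argument, or a non-argument.

def retrieve(sentences, topic):
    """Stage 1: keep sentences mentioning the topic (stand-in for a search engine)."""
    return [s for s in sentences if topic.lower() in s.lower()]

def classify(sentence):
    """Stage 2: toy cue-word argument recognizer (stand-in for a trained model)."""
    lowered = sentence.lower()
    if any(cue in lowered for cue in ("benefit", "improves", "supports")):
        return "pro"
    if any(cue in lowered for cue in ("harm", "risk", "threatens")):
        return "contra"
    return "none"

def argument_search(sentences, topic):
    """Return (sentence, stance) pairs for topic-relevant argumentative sentences."""
    return [(s, classify(s)) for s in retrieve(sentences, topic)
            if classify(s) != "none"]

corpus = [
    "Nuclear energy improves grid stability.",
    "Nuclear energy poses a long-term waste risk.",
    "The weather was pleasant yesterday.",
]
print(argument_search(corpus, "nuclear energy"))
```

The point of the sketch is the decomposition: the abstract argues that the second stage (argument recognition) is the hard part, which is where the paper's topic- and context-aware models come in.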

Cited by 21 publications (24 citation statements)
References 35 publications (59 reference statements)
“…As the experiments in our previous work ([9], see also Table 1) showed, there is still a huge gap of 16% Macro-F1 score between the two-class and the three-class cross-topic scenario, and of 8% in the in-topic scenario. The reason is that stance detection is a complex task.…”
Section: Same-side Stance Classification
confidence: 78%
“…In previous work [9], some of us addressed the problem of topic-focused argument extraction on the sentence level. Examples of the type of sentences that we extract can be seen in Fig.…”
Section: Sentence-level Models
confidence: 99%
“…We investigate two approaches to obtain vector representations, on which we compute similarities using l1, l2, or cosine similarity. Previous work demonstrated that BERT models pre-trained on the task of language modeling can capture argumentative context [10]. Thus, our first BERT similarity function employs a BERT model without fine-tuning to encode the premises.…”
Section: Premise Representation
confidence: 99%
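The premise-similarity step quoted above can be sketched without any model dependency. Here, short hand-written vectors stand in for frozen BERT sentence embeddings, and the three similarity functions (l1, l2, cosine) are implemented directly; `rank_premises` is a hypothetical helper name, not an API from the cited work.

```python
# Sketch of premise ranking by embedding similarity. Toy 3-d vectors
# stand in for frozen BERT sentence embeddings; distances are negated
# so that a larger score always means "more similar".
import math

def l1(u, v):
    """Negative Manhattan (l1) distance."""
    return -sum(abs(a - b) for a, b in zip(u, v))

def l2(u, v):
    """Negative Euclidean (l2) distance."""
    return -math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cos(u, v):
    """Cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_premises(query_vec, premise_vecs, sim=cos):
    """Return premise indices sorted by similarity to the query, best first."""
    scored = sorted(enumerate(premise_vecs),
                    key=lambda iv: sim(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored]

query = [1.0, 0.0, 1.0]
premises = [[0.9, 0.1, 1.1], [-1.0, 0.5, 0.0], [1.0, 0.0, 0.9]]
print(rank_premises(query, premises))  # → [2, 0, 1]
```

In the cited setting, the vectors would come from a BERT encoder applied without fine-tuning, and the choice among l1, l2, and cosine is a hyperparameter of the similarity function.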
“…The query can be defined as a topic, e.g. Energy, in which case the ARS retrieves all possible arguments without further specification [10,15,17]. Our work deals with a more advanced case, where a query is formulated in the form of a claim and the user expects premises attacking or supporting this query claim.…”
Section: Introduction
confidence: 99%