Proceedings of the 2nd Workshop on Argumentation Mining 2015
DOI: 10.3115/v1/w15-0501
Linking the Thoughts: Analysis of Argumentation Structures in Scientific Publications

Abstract: This paper presents the results of an annotation study focused on the fine-grained analysis of argumentation structures in scientific publications. Our new annotation scheme specifies four types of binary argumentative relations between sentences, resulting in the representation of arguments as small graph structures. We developed an annotation tool that supports the annotation of such graphs and carried out an annotation study with four annotators on 24 scientific articles from the domain of educational resea…
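The abstract describes arguments as small graphs whose nodes are sentences and whose edges carry one of four binary relation types. A minimal sketch of such a representation follows; the relation names are placeholders, since the abstract does not list the four types, and the class and field names are this sketch's own, not the paper's tool.

```python
from dataclasses import dataclass, field

# Placeholder relation labels -- the paper defines four binary relation
# types between sentences, but their names are not given in the abstract.
RELATION_TYPES = {"support", "attack", "detail", "sequence"}


@dataclass
class ArgumentGraph:
    """An argument as a small directed graph over sentence IDs."""
    sentences: dict                      # sentence_id -> sentence text
    edges: list = field(default_factory=list)  # (source, target, relation)

    def add_relation(self, source: int, target: int, relation: str) -> None:
        """Add a binary argumentative relation between two sentences."""
        if relation not in RELATION_TYPES:
            raise ValueError(f"unknown relation: {relation}")
        self.edges.append((source, target, relation))


g = ArgumentGraph(sentences={1: "Method X improves accuracy.",
                             2: "We observed a five-point gain on dataset Y."})
g.add_relation(2, 1, "support")  # sentence 2 supports sentence 1
print(len(g.edges))  # 1
```

An annotation over a full article would simply accumulate more nodes and edges in one such graph per argument.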

Cited by 59 publications (83 citation statements); references 24 publications.
“…[24,142,157,163], and (2) connecting said arguments through attack and support relations, as in e.g. [105,163]. Continuing our above example, say we wanted to analyse the following excerpt of a fictional news article:…”
Section: Relation-Based Argumentation Mining (RBAM)
confidence: 99%
“…As far as agreement on the detection of argumentative components is concerned, Kirschner et al. (2015) point out that measures such as kappa and the F1 score may inappropriately penalise slight differences in annotation between annotators, and propose a graph-based metric based on pairwise comparison of predefined argument components. This metric, while addressing some of the problems of kappa and F1, is not directly applicable to our annotation, where annotators can freely choose the beginnings and ends of spans.…”
Section: Related Work
confidence: 99%
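The citing paper describes Kirschner et al.'s metric only at a high level: pairwise comparison over a predefined set of argument components. One plausible reading, sketched below under that assumption (this is not the paper's exact formula), scores agreement as the fraction of component pairs to which two annotators assign the same relation label, counting unannotated pairs as "none":

```python
from itertools import combinations


def pairwise_agreement(components, ann_a, ann_b):
    """Fraction of component pairs labelled identically by two annotators.

    ann_a / ann_b map an ordered component pair to a relation label;
    pairs absent from a mapping count as "none". A sketch of a
    pairwise graph-based metric, not Kirschner et al.'s exact definition.
    """
    pairs = list(combinations(sorted(components), 2))
    if not pairs:
        return 1.0
    same = sum(ann_a.get(p, "none") == ann_b.get(p, "none") for p in pairs)
    return same / len(pairs)


# Annotator A links 1->2 (support) and 2->3 (attack); B only links 1->2.
a = {(1, 2): "support", (2, 3): "attack"}
b = {(1, 2): "support"}
print(pairwise_agreement([1, 2, 3], a, b))  # 2 of 3 pairs agree
```

Because every pair is compared, including pairs neither annotator linked, the metric avoids the all-or-nothing penalty that kappa or F1 would impose on a single differing edge; it does, however, presuppose a fixed component segmentation, which is exactly the limitation the citing authors note.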
“…(Sardianos et al., 2015; Oraby et al., 2015)) and three others used different methods (cf. (Kirschner et al., 2015; Yanase et al., 2015)). To calculate the results of argument mining, four papers used accuracy (cf.…”
Section: Related Work
confidence: 99%
“…In the comparative statistics module we look to extend the solution in (Kirschner et al., 2015) in seven ways, by:
(i) calculating the segmentation differences between two annotations;
(ii) calculating propositional content relations using confusion matrices, accounting for all the nodes within an argument map and for a differing segmentation;
(iii) calculating dialogical content relations (if they are contained in an argument map) using confusion matrices, accounting for all the nodes within an argument map and for a differing segmentation;
(iv) defining the CASS technique to allow calculation scores to be combined;
(v) allowing the use of any confusion-matrix-based metric with the CASS technique, to give consistency to the area of argument mining;
(vi) providing results not only for inter-annotator agreement, but also for the comparison of manually annotated corpora against corpora automatically created by argument mining;
(vii) allowing the comparison of analyses given in different annotation schemes but migrated to AIF (e.g. comparing text annotated in IAC to the annotation scheme from the Microtext corpus).…”
Section: Comparing Analysis
confidence: 99%
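The CASS quote above revolves around computing agreement scores from confusion matrices so that any matrix-based metric can be plugged in. As a concrete instance of one such metric, here is Cohen's kappa computed directly from a square confusion matrix (rows: annotator A, columns: annotator B); this illustrates the kind of metric CASS can consume, not CASS itself:

```python
def cohen_kappa(matrix):
    """Cohen's kappa from a square confusion matrix of label counts.

    observed = proportion of items on the diagonal (raw agreement);
    expected = chance agreement from the row/column marginals.
    """
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(k)) / n
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (observed - expected) / (1 - expected)


# Two annotators, two labels; 85 of 100 items on the diagonal.
m = [[45, 5],
     [10, 40]]
print(round(cohen_kappa(m), 3))  # 0.7
```

In a CASS-style pipeline, the same confusion matrix could equally feed an F1 or percentage-agreement computation, which is the consistency point the quoted passage makes.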