RANLP 2017 - Recent Advances in Natural Language Processing Meet Deep Learning 2017
DOI: 10.26615/978-954-452-049-6_015
Inter-Annotator Agreement in Sentiment Analysis: Machine Learning Perspective

Abstract: Manual text annotation is an essential part of Big Text analytics. Although annotators work with limited parts of data sets, their results are extrapolated by automated text classification and affect the final classification results. Reliability of annotations and adequacy of assigned labels are especially important in the case of sentiment annotations. In the current study we examine inter-annotator agreement in multiclass, multi-label sentiment annotation of messages. We used several annotation agreement mea…

Cited by 54 publications (29 citation statements). References 10 publications.
“…Each document was labeled manually and independently by three Italian native speakers, who are researchers in the e-health domain, with the agreement among the annotators calculated by majority. The global agreement for the entire annotation procedure was measured using the Observed Agreement index [71] which provides a good approximation in multi-annotator contexts, also offering robustness against imperfect (textual) data [72]. In addition to the Observed Agreement index, in order to take into account the level of Inter Annotator Agreement (IAA) in terms of excess over the agreement obtained by chance, the Krippendorff coefficient α [73] was also calculated.…”
Section: B: Annotation Procedures (mentioning)
confidence: 99%
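The Observed Agreement index and Krippendorff's α mentioned in this statement are straightforward to compute. Below is a minimal sketch, assuming three annotators and illustrative label values that are not taken from the cited study; observed agreement is computed here as the average pairwise agreement per document.

```python
# Minimal sketch: Observed Agreement as average pairwise agreement across annotators.
# The labels and variable names are illustrative assumptions, not data from the cited work.
from itertools import combinations

def observed_agreement(annotations):
    """annotations: list of per-item label lists, one label per annotator (>= 2 per item)."""
    per_item = []
    for labels in annotations:
        pairs = list(combinations(labels, 2))
        # Fraction of annotator pairs that assigned the same label to this item.
        per_item.append(sum(a == b for a, b in pairs) / len(pairs))
    return sum(per_item) / len(per_item)

# Three annotators labeling four documents.
docs = [
    ["pos", "pos", "pos"],   # full agreement -> 1.0
    ["pos", "neg", "pos"],   # one of three pairs agrees -> 1/3
    ["neu", "neu", "pos"],   # 1/3
    ["neg", "neg", "neg"],   # 1.0
]
print(round(observed_agreement(docs), 3))  # (1 + 1/3 + 1/3 + 1) / 4 ≈ 0.667
```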
“…The Cohen kappa coefficient [36] and percent of agreement [37] were calculated to measure the interobserver agreement between the pediatric radiologists in all investigated ROIs. Statistical analyses were performed using SPSS Statistics (version 24; IBM Corp).…”
Section: Methods (mentioning)
confidence: 99%
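For two raters, as in the radiology study quoted above, percent agreement and Cohen's kappa reduce to simple closed forms. The sketch below uses hypothetical label lists (rater1, rater2) purely for illustration; it is not the cited study's data or code.

```python
# Minimal sketch: percent agreement and Cohen's kappa for exactly two raters.
from collections import Counter

def percent_agreement(a, b):
    """Share of items on which the two raters assigned the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    # Expected chance agreement from the raters' marginal label distributions.
    marg_a, marg_b = Counter(a), Counter(b)
    p_e = sum(marg_a[c] * marg_b[c] for c in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

rater1 = ["normal", "abnormal", "normal", "normal", "abnormal"]
rater2 = ["normal", "abnormal", "abnormal", "normal", "abnormal"]
print(percent_agreement(rater1, rater2))          # 0.8
print(round(cohens_kappa(rater1, rater2), 3))     # ≈ 0.615
```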
“…The tweets analyzed were related to emotional responses of individuals to urban green space. Bobicev and Sokolova [Bobicev and Sokolova 2017] used a limited set of three annotators to analyze texts extracted from an online health forum. Each text can be labeled with one or more sentiment labels: gratitude, encouragement, confusion and facts (this last one indicating neutral content).…”
Section: Literature Review (mentioning)
confidence: 99%
“…Measuring divergence among annotators - Inter-rater Agreement: We adopted Krippendorff's alpha (α) [Krippendorff 2011] to measure the general agreement level among the independent annotators for each of the manual labeling tasks, namely SA classification, OS classification and CA classification. The Krippendorff's alpha (α) agreement coefficient looks at the overall distribution of annotations/labels, without considering which annotators produced these annotations [Bobicev and Sokolova 2017]. Unlike other metrics such as Cohen's kappa [Cohen 1960] (which computes the agreement level between a pair of annotators) and Fleiss' kappa [Fleiss et al 1981] (a generalization of Cohen's kappa that allows more than two annotators), Krippendorff's alpha can be applied to evaluate labeling agreement among multiple annotators even when there are missing values.…”
Section: Divergence Analysis (mentioning)
confidence: 99%
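The property highlighted in this statement, that Krippendorff's α handles any number of annotators and missing values, follows from its definition over a coincidence matrix of pairable values within each unit. The sketch below is one possible implementation for nominal labels, assuming ratings are stored as a dict from unit to the labels actually assigned; the function name and example data are illustrative, not taken from the cited papers.

```python
# Minimal sketch: nominal Krippendorff's alpha for multiple annotators with missing values.
from collections import Counter

def krippendorff_alpha_nominal(ratings):
    """ratings: dict mapping unit -> list of labels from the raters who labeled it."""
    # Keep only units labeled by at least two raters; single labels are not pairable.
    units = {u: labels for u, labels in ratings.items() if len(labels) >= 2}

    # Coincidence matrix o[(c, k)]: pairs of values within a unit, weighted by 1/(m_u - 1).
    o = Counter()
    for labels in units.values():
        m = len(labels)
        counts = Counter(labels)
        for c in counts:
            for k in counts:
                pairs = counts[c] * (counts[k] - 1) if c == k else counts[c] * counts[k]
                o[(c, k)] += pairs / (m - 1)

    # Marginal totals of the coincidence matrix.
    n_c = Counter()
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())

    # Observed and expected disagreement; nominal metric: delta = 1 iff c != k.
    D_o = sum(v for (c, k), v in o.items() if c != k)
    D_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - D_o / D_e if D_e > 0 else 1.0

# Three annotators; one annotator skipped the last message (missing value).
example = {
    "msg1": ["gratitude", "gratitude", "gratitude"],
    "msg2": ["confusion", "facts", "confusion"],
    "msg3": ["encouragement", "encouragement"],
}
print(round(krippendorff_alpha_nominal(example), 3))  # ≈ 0.696
```

Because disagreement is aggregated over all pairable values rather than over a fixed pair of raters, the same computation works unchanged whether a unit has two, three, or more labels, which is the advantage over Cohen's kappa noted in the quoted passage.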