Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
DOI: 10.18653/v1/p19-1265
Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains

Abstract: Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse…
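To make the abstract's notion of "annotation suggestions" concrete, here is a minimal hypothetical sketch, not the authors' implementation: it assumes a generic BIO sequence tagger that segments a diagnostic-reasoning text and classifies each segment as an epistemic activity, and converts its per-token predictions into span suggestions with confidences that an annotation tool could show to experts for acceptance or correction. The label names, tagger output, and helper functions are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the paper's implementation):
# turn per-token BIO predictions into span-level annotation suggestions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Suggestion:
    start: int          # token index where the suggested segment starts
    end: int            # token index one past the segment end
    label: str          # suggested epistemic-activity class (placeholder names below)
    confidence: float   # mean model confidence, shown to annotators as a hint


def bio_to_suggestions(bio_tags: List[Tuple[str, float]]) -> List[Suggestion]:
    """Convert (BIO tag, confidence) pairs into span suggestions."""
    suggestions, start, label, confs = [], None, None, []
    # Append an "O" sentinel so the final open span is flushed.
    for i, (tag, conf) in enumerate(bio_tags + [("O", 1.0)]):
        if tag.startswith("B-") or tag == "O":
            if start is not None:  # close the currently open span
                suggestions.append(Suggestion(start, i, label, sum(confs) / len(confs)))
                start, label, confs = None, None, []
            if tag.startswith("B-"):  # open a new span
                start, label, confs = i, tag[2:], [conf]
        elif tag.startswith("I-") and start is not None:
            confs.append(conf)  # extend the open span
    return suggestions


# Usage with a fabricated tagger output for a short diagnostic-reasoning snippet.
tokens = ["The", "student", "first", "suspects", "ADHD", "then", "checks", "the", "criteria"]
tags = [("O", .9), ("O", .9), ("B-HypothesisGeneration", .8), ("I-HypothesisGeneration", .8),
        ("I-HypothesisGeneration", .7), ("O", .9), ("B-EvidenceEvaluation", .6),
        ("I-EvidenceEvaluation", .6), ("I-EvidenceEvaluation", .6)]
for s in bio_to_suggestions(tags):
    print(" ".join(tokens[s.start:s.end]), "->", s.label, f"({s.confidence:.2f})")
```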

Cited by 16 publications (26 citation statements) · References 26 publications
“…Annotation suggestions. To evaluate the effects of providing annotation suggestions, we have conducted an extensive study (Schulz et al., 2019b) considering annotation time, annotation quality, potential biases, and the ease of use. To this end, we asked five Med and four TEd instructors to annotate diagnostic texts.…”
Section: Discussion; citation type: mentioning
confidence: 99%
“…Whereas existing work reports no measurable bias for expert annotators (Fort and Sagot, 2010; Lingren et al., 2014; Schulz et al., 2019), it remains unclear for annotators who have no prior experience in similar annotation tasks, especially for scenarios where, besides annotation guidelines, no further training is provided. However, the use of novice annotators is common for scenarios where no linguistic or domain expertise is required.…”
Section: Related Work; citation type: mentioning
confidence: 99%
“…The last group of students receives expert label suggestions in the first round and interactively updated label suggestions in the second round. In contrast to existing work (Schulz et al., 2019), this setup allows us to directly quantify effects of bias amplification that may occur with interactive label suggestions.…”
Section: Static Label Suggestions (G2); citation type: mentioning
confidence: 99%
“…Zhang et al. (2020) investigate a human-in-the-loop approach for image segmentation and annotation. Schulz et al. (2019) examine the use of suggestion models to support human experts with segmentation and classification of epistemic activities in diagnostic reasoning texts. Zhang and Chaudhuri (2015) suggest active learning from weak and strong labelers, where these labelers can be humans with different levels of expertise in the labelling task.…”
Section: Related Work; citation type: mentioning
confidence: 99%