2012
DOI: 10.1016/j.langsci.2012.04.010
Circularity effects in corpus studies – why annotations sometimes go round in circles

Cited by 5 publications (2 citation statements)
References 5 publications (5 reference statements)
“…The annotated data gained in this way can be used as (a part of) a training corpus for large-scale ML methods for automatic text processing (Ahmed et al, 2019), methods which can by now be regarded as a standard in bioinformatic contexts (Blaschke et al, 2002). The development of an annotation scheme has to be put to the test and should be regimented by annotation guidelines, since annotation is a data-generating rather than a data-documenting process (Consten & Loll, 2012). Accordingly, in BIOfid, annotation guidelines are collected as part of an annotation manual (Lücking et al, 2020).…”
Section: Developing the BIOfid Annotation Scheme
confidence: 99%
“…as [A_A1 V_A2] with an underscore separating the two analytical alternatives of each element), and the CLU is categorised as indefinite until such a time when further annotation of other tiers may (or may not) help to disambiguate the analysis, at a structural level. It is difficult (if not impossible) to differentiate between ambiguity that arises from the acts of interpretation required of annotators as they identify and tag corpus data at this structural level, and ambiguity that may have been perceived and experienced by interactants in the discourse event as it occurred in real time (Consten & Loll 2012). For now, it would be true to say that these indefinite CLUs appear to lack clearly defined structure at this level with respect to the categories used in our initial analysis.…”
Section: Identifying and Annotating Core Elements of Clause-like Units
confidence: 99%
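The underscore notation described in the statement above (elements tagged as, e.g., A_A1 to keep two analytical alternatives open, with the whole clause-like unit counted as indefinite until another annotation tier resolves it) can be made concrete with a small data-structure sketch. The following Python snippet is purely illustrative and is not taken from the cited annotation scheme or from Consten & Loll (2012); the class names and the parse_clu/resolve helpers are hypothetical.

```python
# Illustrative sketch only: a minimal representation of ambiguous element tags
# of the form "A_A1" (two analytical alternatives separated by an underscore),
# as described in the citation statement above. Names and helpers are
# hypothetical, not part of the cited annotation scheme.

from dataclasses import dataclass, field


@dataclass
class Element:
    """One element of a clause-like unit (CLU) with its analytical alternatives."""
    alternatives: list[str]

    @property
    def ambiguous(self) -> bool:
        return len(self.alternatives) > 1


@dataclass
class CLU:
    """A clause-like unit; 'indefinite' while any element is still ambiguous."""
    elements: list[Element] = field(default_factory=list)

    @property
    def category(self) -> str:
        return "indefinite" if any(e.ambiguous for e in self.elements) else "definite"


def parse_clu(tag_string: str) -> CLU:
    """Parse a bracketed tag string like '[A_A1 V_A2]' into a CLU."""
    body = tag_string.strip().strip("[]")
    return CLU([Element(token.split("_")) for token in body.split()])


def resolve(element: Element, choice: str) -> None:
    """Narrow an element to one reading once another annotation tier supplies evidence."""
    if choice in element.alternatives:
        element.alternatives = [choice]


if __name__ == "__main__":
    clu = parse_clu("[A_A1 V_A2]")
    print(clu.category)           # indefinite: both elements still ambiguous
    resolve(clu.elements[0], "A")
    resolve(clu.elements[1], "V")
    print(clu.category)           # definite: all alternatives resolved
```

The only point of the sketch is that ambiguity is stored rather than forced to a single label at annotation time, so a later tier can narrow the alternatives without re-annotating the unit.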