2019
DOI: 10.1007/978-3-030-15742-5_18

Toward Three-Stage Automation of Annotation for Human Values

Cited by 4 publications (2 citation statements)
References 12 publications
“…The experiments evaluated four well-known supervised learning methods, namely, support vector machines, random forest, multi-layer perceptron, and logistic regression. This study used these methods because an earlier study on the identification of human values in text documents reported that a deep learning approach performs less well on smaller datasets and 'achieve[s] good results in data-rich settings' (Ishita et al. 2019). All of these methods have been used in previous studies to classify the content of GitHub repositories (Golzadeh et al. 2021; Arya et al. 2019; Fan et al. 2017; Eluri et al. 2019; Trockman et al. 2019; Munaiah et al. 2017; Kikas et al. 2016; Song and Chaparro 2020).…”
Section: Classification Experiments
confidence: 99%
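The statement above names four supervised methods but not their concrete configuration. The following is a minimal sketch of such a comparison in scikit-learn; the TF-IDF features, the toy documents and labels, and the choice of LinearSVC as the SVM variant are illustrative assumptions, not the cited study's actual setup.

# Hedged sketch: comparing the four supervised methods named above
# on a tiny illustrative text-classification task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data (assumption): 1 = value-laden text, 0 = not.
docs = [
    "this project protects user privacy by design",
    "we value accessibility for all contributors",
    "security audits keep our users safe",
    "fix the off-by-one bug in the parser",
    "bump the dependency version in the lockfile",
    "refactor the build script for clarity",
]
labels = [1, 1, 1, 0, 0, 0]

# Sparse TF-IDF representation of the documents.
X = TfidfVectorizer().fit_transform(docs)

classifiers = {
    "support vector machine": LinearSVC(),
    "random forest": RandomForestClassifier(n_estimators=100),
    "multi-layer perceptron": MLPClassifier(max_iter=500),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    # 3-fold cross-validated accuracy on the toy corpus.
    scores = cross_val_score(clf, X, labels, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")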
“…In our research, we follow the three-stage annotation process introduced by Ishita et al. (2019), in which the first step is to determine which documents in the corpus are actually on topic. That binary (yes/no) annotation task is our focus for the experiments that we report in this paper.…”
Section: Corpus and Annotation Task
confidence: 99%
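Only the first stage of the three-stage process, a binary "on topic?" filter, is specified in the quoted statement. The sketch below shows that gating step alone; the training documents, the TF-IDF features, and the use of logistic regression as the filter are illustrative assumptions.

# Hedged sketch: stage 1 of the annotation process, a binary on-topic
# filter that decides which documents proceed to the later stages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled examples (assumption): 1 = on topic, 0 = off topic.
train_docs = [
    "the policy discusses protection of personal data",
    "users should control how their information is shared",
    "the meeting is rescheduled to next week",
    "please submit your travel receipts by friday",
]
train_labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(train_docs), train_labels)

# Only documents the filter marks on topic move on to later annotation stages.
corpus = [
    "new rules govern how personal data may be collected",
    "lunch will be served in the main hall",
]
on_topic = [d for d in corpus if clf.predict(vec.transform([d]))[0] == 1]
print(on_topic)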