Proceedings of the First ACL Workshop on Ethics in Natural Language Processing 2017
DOI: 10.18653/v1/w17-1608
Ethical Considerations in NLP Shared Tasks

Abstract: Shared tasks are increasingly common in our field, and new challenges are suggested at almost every conference and workshop. However, as this has become an established way of pushing research forward, it is important to discuss how we researchers organise and participate in shared tasks, and make that information available to the community to allow further research improvements. In this paper, we present a number of ethical issues along with other areas of concern that are related to the competitive nature of …

Cited by 27 publications (27 citation statements)
References 10 publications
“…For future work, since the Socio Demographic performed best, we could apply methods such as User-Factor Adaptation which focus on the author of the content in addition to the content (Lynn et al., 2017; Zhu et al., 2018). It would also be interesting to investigate if word clusters trained on historical sources (for e.g.…”
Section: Discussion
confidence: 99%
“…For example, Homan et al. (2014) found that two novice annotators were more likely to assign their expert's "low distress" tweets to the "no distress" category. Conversely, on a related but coarser-grained categorization task, Liu et al. (2017) find "some evidence that multiple crowdsourcing workers, when they reach high inter-annotator agreement, can provide reliable quality of annotations". …”
Section: Long Experts, Short Experts, CrowdFlower
confidence: 95%
“…Replicability has started to appear as a topic in NLP and machine learning, for example as an IJCAI 2015 Workshop on Replicability and Reproducibility in Natural Language Processing, and it has been described as one of the potentially negative factors of shared tasks in NLP by Parra Escartín et al. (2017). However, there is little work on more specific areas such as parsing.…”
Section: State of the Art
confidence: 99%