CHI Conference on Human Factors in Computing Systems 2022
DOI: 10.1145/3491102.3502004

Jury Learning: Integrating Dissenting Voices into Machine Learning Models


Cited by 69 publications (74 citation statements)
References 50 publications
“…social norm prediction (Jiang et al., 2021) and toxicity detection (Halevy et al., 2021). We encourage conscious efforts to recruit diverse pools of annotators so that multiple perspectives are considered, and future work on modeling reaction frames can consider learning algorithms that mitigate the harmful effects of biases, depending on the use case (Khalifa et al., 2021; Gordon et al., 2022).…”
Section: Future Directions and Limitations of Reaction Frames
confidence: 99%
“…For Simplicity and Misunderstanding, one could maintain a list of everyday concepts and a list of concepts with multiple interpretations for ease of automatic checking. Recent work on jury learning (Gordon et al. 2022) proposed a method to conduct automatic pseudo-human value judgement with machine learning models, which can be an alternative to expert-based quality evaluation while accounting for the subjectivity of each dimension.…”
Section: Discussion
confidence: 99%
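To make the jury-learning idea referenced above concrete: rather than training on a single aggregate label, the approach predicts each annotator's individual judgment and then aggregates the predictions of a sampled "jury" of annotators. Below is a minimal Python sketch of the sample-and-aggregate step only; `predict_label`, `annotator_pool`, and the majority-vote aggregation are illustrative assumptions, not the paper's actual implementation (Gordon et al. 2022 train a recommender-style deep model to predict individual annotators' judgments).

```python
import random
from collections import Counter

def jury_verdict(predict_label, example, annotator_pool,
                 jury_size=12, n_trials=100, seed=0):
    """Estimate a jury-style verdict for one example.

    predict_label(example, annotator) -> that annotator's predicted label
    (hypothetical interface standing in for a per-annotator model).
    Repeatedly samples a jury from the pool, takes each sampled jury's
    majority vote, and returns the modal verdict with its frequency
    across trials as a rough confidence estimate.
    """
    rng = random.Random(seed)
    verdicts = []
    for _ in range(n_trials):
        jury = rng.sample(annotator_pool, jury_size)
        votes = Counter(predict_label(example, juror) for juror in jury)
        verdicts.append(votes.most_common(1)[0][0])
    tally = Counter(verdicts)
    label, count = tally.most_common(1)[0]
    return label, count / n_trials
```

Because the verdict depends on who sits on the jury, the pool can be stratified before sampling (e.g., by demographic group), which is the mechanism jury learning uses to give dissenting voices explicit representation in the model's output.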
“…Training on soft instead of hard labels can improve robustness and generalization (Pereyra et al. 2017; Müller, Kornblith, and Hinton 2019). Soft labels have been constructed using smoothing mechanisms (Szegedy et al. 2016), auxiliary teacher networks as in knowledge distillation (Hinton et al. 2015; Gou et al. 2021), and aggregate human annotations (Sharmanska et al. 2016; Peterson et al. 2019; Recht et al. 2019; Uma et al. 2020; Gordon et al. 2021, 2022; Uma, Almanea, and Poesio 2022; Koller, Kauermann, and Zhu 2022). While the first two methods have led to significant advances in model performance, hand-crafted or learned soft labels often rely on hard labels, which tend to be impoverished representations of human percepts over datapoints.…”
Section: Related Work
confidence: 99%
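As a concrete illustration of the soft-label training this passage describes, the sketch below (PyTorch, assumed here only for brevity) computes cross-entropy against a full label distribution, with targets built either by label smoothing (Szegedy et al. 2016) or from aggregate human annotations; the `annotation_counts` tensor and the random logits are hypothetical placeholders standing in for real data and a real model.

```python
import torch
import torch.nn.functional as F

def soft_label_loss(logits, soft_targets):
    """Cross-entropy against a distribution over classes (soft labels)."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

def smoothed_targets(hard_labels, num_classes, eps=0.1):
    """Label smoothing: mix one-hot targets with the uniform distribution."""
    one_hot = F.one_hot(hard_labels, num_classes).float()
    return (1.0 - eps) * one_hot + eps / num_classes

# Soft labels from aggregate human annotations (hypothetical counts):
# annotation_counts[i, c] = number of annotators who chose class c for item i.
annotation_counts = torch.tensor([[7., 2., 1.],
                                  [1., 1., 8.]])
soft_targets = annotation_counts / annotation_counts.sum(dim=-1, keepdim=True)

logits = torch.randn(2, 3, requires_grad=True)  # stand-in for model(x)
loss = soft_label_loss(logits, soft_targets)
loss.backward()
```

Normalizing the raw annotation counts preserves inter-annotator disagreement that a majority-vote hard label would discard, which is exactly the "impoverished representation" concern the quoted passage raises.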