Interspeech 2020
DOI: 10.21437/interspeech.2020-2862
Uncertainty-Aware Machine Support for Paper Reviewing on the Interspeech 2019 Submission Corpus

Abstract: The evaluation of scientific submissions through peer review is both the most fundamental component of the publication process and the most frequently criticised and questioned. Academic journals and conferences request reviews from multiple reviewers per submission, which an editor or area chair aggregates into the final acceptance decision. Reviewers are often in disagreement due to varying levels of domain expertise, confidence, and motivation, as well as the heavy workload and the dif…

Cited by 12 publications (16 citation statements)
References 28 publications
“…In this short paper, we offer solutions to three particularities of this task that the above approaches do not address: a) Often, the recommendations given by the area chair and the reviewers are in disagreement. Whereas previous studies have used either the former (Kang et al., 2018; Wang and Wan, 2018; Ghosal et al., 2019) or a soft label average of the latter (Stappen et al., 2020) for supervision, we show that both signals comprise complementary information. b) Whereas soft labels de-emphasise subjective articles with disagreeing reviews during training (Stappen et al., 2020), we manage to outperform the latter study by explicitly modelling aleatory uncertainty as an auxiliary prediction task.…”
Section: Contributions (citation type: contrasting)
confidence: 77%
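The statement above contrasts soft-label supervision with an explicit aleatoric-uncertainty objective. As a rough illustration of both ideas, here is a minimal PyTorch sketch; the function names, the 1-5 recommendation scale, and the accept threshold are hypothetical assumptions, not taken from the cited papers.

import torch

def soft_label(review_scores, threshold=3.0):
    # Average of per-reviewer accept votes -> a soft label in [0, 1].
    # Disagreeing reviewers yield a label away from 0 and 1, which
    # implicitly down-weights such papers under a squared-error loss.
    return (review_scores >= threshold).float().mean()

def aleatoric_loss(pred_mean, pred_log_var, target):
    # Heteroscedastic loss in the style of Kendall & Gal (2017):
    # samples with high predicted variance contribute a down-weighted
    # squared error, while the log-variance term penalises the model
    # for claiming unbounded uncertainty everywhere.
    precision = torch.exp(-pred_log_var)
    return (precision * (pred_mean - target) ** 2 + pred_log_var).mean()

# Three disagreeing reviewers on a hypothetical 1-5 recommendation scale.
scores = torch.tensor([2.0, 4.0, 5.0])
target = soft_label(scores)              # ~0.67: a soft, not hard, label
pred_mean = torch.tensor([0.6])          # model's acceptance estimate
pred_log_var = torch.tensor([0.1])       # predicted aleatoric uncertainty
loss = aleatoric_loss(pred_mean, pred_log_var, target)

Predicting the log-variance as an auxiliary output is what lets the model treat reviewer disagreement as a signal to learn, rather than merely as noise to average away.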
“…Whereas previous studies have used either the former (Kang et al., 2018; Wang and Wan, 2018; Ghosal et al., 2019) or a soft label average of the latter (Stappen et al., 2020) for supervision, we show that both signals comprise complementary information. b) Whereas soft labels de-emphasise subjective articles with disagreeing reviews during training (Stappen et al., 2020), we manage to outperform the latter study by explicitly modelling aleatory uncertainty as an auxiliary prediction task. c) A model that aims to support the editorial decision process should only assume the availability of human review text during training, and be able to make recommendations in their absence.…”
Section: Contributions (citation type: contrasting)
confidence: 77%
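Point c) argues that review text should be a training-time-only signal. One way such a setup could look is a multi-task model whose main head never sees review features; the sketch below assumes PyTorch, and the architecture, dimensions, and names are illustrative assumptions rather than the cited paper's actual design.

import torch
import torch.nn as nn

class PaperScorer(nn.Module):
    # Hypothetical multi-task setup: the main head predicts acceptance
    # from paper features alone, while an auxiliary head regresses
    # review-derived targets and is only supervised during training.
    def __init__(self, paper_dim=768, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(paper_dim, hidden), nn.ReLU())
        self.accept_head = nn.Linear(hidden, 1)  # main task
        self.review_head = nn.Linear(hidden, 1)  # auxiliary, train-time only

    def forward(self, paper_feats):
        h = self.encoder(paper_feats)
        return self.accept_head(h), self.review_head(h)

model = PaperScorer()
accept_logit, review_pred = model(torch.randn(4, 768))
# At inference only accept_logit is needed, so recommendations can be
# made without any human review text being available.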