Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1236

The Myth of Double-Blind Review Revisited: ACL vs. EMNLP

Abstract: The review and selection process for scientific paper publication is essential for the quality of scholarly publications in a scientific field. The double-blind review system, which enforces author anonymity during the review period, is widely used by prestigious conferences and journals to ensure the integrity of this process. Although the notion of anonymity in the double-blind review has been questioned before, the availability of full text paper collections brings new opportunities for exploring the question…

Citations: cited by 10 publications (8 citation statements)
References: 17 publications

“…To illustrate the latter point: a quick search in the ACL anthology revealed only four conference papers on peer review from a meta-research perspective: a paper-reviewer matching tool (Anjum et al., 2019), a corpus of reviews (Kang et al., 2018), and two experimental studies using NLP to explain the observed reviews (Caragea et al., 2019; Gao et al., 2019). We could not find any ACL-published […] (2019) offers an actionable insight: ACL reviewers appear to be victims of conformity bias, converging to the mean of reviews.…”
Section: So What Can We Do? (mentioning)
confidence: 94%
“…To illustrate the latter point: a quick search in the ACL anthology revealed only four conference papers on peer review from a meta-research perspective: a paper-reviewer matching tool (Anjum et al., 2019), a corpus of reviews (Kang et al., 2018), and two experimental studies using NLP to explain the observed reviews (Caragea et al., 2019; Gao et al., 2019). We could not find any ACL-published […]…”
Section: So What Can We Do? (mentioning)
confidence: 98%
“…Therefore, debiasing and attribute removal are techniques to mitigate the undesired discriminatory effects arising from unfair models. Examples of use case tasks covering bias and fairness encompass learning gender-neutral word embeddings (Zhao et al 2018; Bolukbasi et al 2016), analysis and reduction of gender bias in multi-lingual word embeddings (Font and Costa-jussà 2019), text rewriting (Xu et al 2019), analysis of biases in contextualized word representations (Tan and Celis 2019; Gonen and Goldberg 2019), detection, reduction and evaluation of biases for demographic attributes in word embeddings (Papakyriakopoulos et al 2020; Najafian 2020, 2019; Kaneko and Bollegala 2019), analogy detection (Nissim et al 2020), cyberbullying text detection (Gencoglu 2020), fair representation learning, protected attributes removal (Elazar and Goldberg 2018), analysis of racial disparity in NLP (Blodgett and O'Connor 2017), and prediction of scientific papers authorship during double-blind review (Caragea et al 2019).…”
Section: Privacy Terminology (mentioning)
confidence: 99%