Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.340

On the Gap between Adoption and Understanding in NLP

Abstract: There are some issues with current research trends in NLP that can hamper the free development of scientific research. We identify five of particular concern: 1) the early adoption of methods without sufficient understanding or analysis; 2) the preference for computational methods regardless of risks associated with their limitations; 3) the resulting bias in the papers we publish; 4) the impossibility of re-running some experiments due to their cost; 5) the dangers of unexplainable methods. If these issues ar…

Cited by 21 publications (7 citation statements)
References 34 publications
“…They also seek to understand and explain how people behave, which means they also need to understand what drives the results of their statistical models. Interpretability allows researchers to scrutinize their models so that they might improve them and think about how well they might generalize to new contexts (Bianchi & Hovy, 2021). Improving interpretability can also improve fairness by allowing users (including regulatory bodies) to evaluate the model's strengths and failings in detail (Doshi-Velez et al., 2017; Rudin, 2019), and users generally trust models more when they understand them (Gilpin et al., 2018; Yeomans, Shah, et al., 2019).…”
Section: Feature-extraction Objectives
confidence: 99%
“…The Cost of the Hyperparameter Optimization. Although optimizing the hyperparameters of a topic model guarantees a fair comparison with other models, this approach is computationally expensive, possibly making it difficult to replicate the results (Bianchi and Hovy, 2021). In our work, we used BO because it is more efficient than other methods (Snoek et al., 2012; Bergstra and Bengio, 2012).…”
Section: Multi-objective Hyperparameter Optimization (Rs #1)
confidence: 99%
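
The excerpt above points to why such tuning is expensive: every configuration the optimizer tries means retraining the topic model from scratch. A minimal sketch of a Bayesian-optimization loop of this kind is shown below, using scikit-optimize's gp_minimize and scikit-learn's LDA; the toy corpus, the two-parameter search space, and the log-likelihood objective are illustrative assumptions, not the setup used in the cited work.

```python
# Minimal sketch (illustrative, not the cited paper's setup): Bayesian
# optimization of two topic-model hyperparameters with scikit-optimize.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from skopt import gp_minimize
from skopt.space import Integer, Real

# Toy corpus standing in for a real dataset (assumption for illustration).
docs = [
    "neural topic models learn document representations",
    "bayesian optimization tunes hyperparameters efficiently",
    "reproducibility requires reporting the full search budget",
    "topic coherence is a common evaluation metric",
] * 25

X = CountVectorizer(stop_words="english").fit_transform(docs)

def objective(params):
    n_topics, learning_decay = params
    lda = LatentDirichletAllocation(
        n_components=n_topics,
        learning_decay=learning_decay,
        learning_method="online",
        random_state=0,
    )
    lda.fit(X)            # each evaluation retrains the model: the costly step
    return -lda.score(X)  # gp_minimize minimizes, so negate the log-likelihood

result = gp_minimize(
    objective,
    dimensions=[Integer(2, 20), Real(0.5, 0.9)],  # number of topics, learning decay
    n_calls=15,                                   # total training runs in the budget
    random_state=0,
)
print("best (n_topics, learning_decay):", result.x)
```

Even in this toy version, the budget (n_calls) multiplies the cost of a single training run, which is the reproducibility concern the quoted passage raises for large models and datasets.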
“…In keeping with the spirit of the first version of OCTIS, the framework extension is open-source and easily accessible, in order to guarantee researchers and practitioners a fairer, more accessible, and reproducible comparison between the models (Bianchi and Hovy, 2021). OCTIS 2.0 is available as an extension of the original library, at the following link: https://github.com/mind-Lab/octis.…”
Section: OCTIS 2.0
confidence: 99%