2022
DOI: 10.1109/access.2022.3169897
NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing

Abstract: Neural Architecture Search (NAS) is a promising and rapidly evolving research area. Training a large number of neural networks requires an exceptional amount of computational power, which makes NAS unreachable for researchers who have limited or no access to high-performance clusters and supercomputers. A few benchmarks with precomputed performances of neural architectures have recently been introduced to overcome this problem and ensure reproducible experiments. However, these benchmarks are only for the c…

Cited by 42 publications (23 citation statements)
References 40 publications
“…However, there are a plethora of other applications and tasks where robustness quantification of DANNs is important [51][52][53][54] . A possible future direction is to extend the presented analysis to more complex applications (e.g., natural language processing, graph data, and Deep Chip industry [55][56][57][58][59] ) and larger models (e.g., Transformers, and Vision Transformers 60,61 ). Given our current analysis, we anticipate that for the larger datasets, complex tasks, and huge models, the graph robustness measures will be even more relevant and will help users/autoML/NAS algorithms find robust DANN architectures.…”
Section: Discussion
confidence: 99%
“…This highlights a problem also discussed in Elsken et al (2019b), namely that factors other than a network's architecture affect its performance. This indicates a need for a common benchmark for NAS on HAR datasets, following the examples of Dong and Yang (2020) for computer vision and Klyuchnikov et al (2020) for natural language processing, which would allow us to test NAS methods on a search space of pre-trained and pre-evaluated models.…”
Section: Discussion
confidence: 99%
“…NAS Benchmarks. Several benchmarks [9], [23] have been released to facilitate research on Neural Architecture Search. A NAS benchmark contains a set of trained architectures from a search space, together with detailed training information.…”
Section: Related Work
confidence: 99%
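The excerpt above describes the core idea behind tabular NAS benchmarks such as NAS-Bench-NLP: every architecture in the search space is trained once in advance, so a search algorithm can query precomputed metrics instead of training networks itself. A minimal sketch of that lookup pattern, with entirely illustrative encodings and metrics (not the actual NAS-Bench-NLP API or data):

```python
# Minimal sketch of a precomputed ("tabular") NAS benchmark.
# All names, encodings, and numbers below are illustrative,
# not the real NAS-Bench-NLP API or its recorded results.

# The benchmark table maps an architecture encoding to metrics
# that were recorded when the architecture was trained ahead of time.
BENCHMARK_TABLE = {
    "lstm|h256|l2": {"val_perplexity": 98.4, "train_seconds": 5400},
    "gru|h512|l1": {"val_perplexity": 102.1, "train_seconds": 3100},
}

def query(arch_encoding: str) -> dict:
    """Return precomputed metrics for an architecture (no training)."""
    return BENCHMARK_TABLE[arch_encoding]

def best_architecture() -> str:
    """With a tabular benchmark, 'search' reduces to a table lookup."""
    return min(
        BENCHMARK_TABLE,
        key=lambda arch: BENCHMARK_TABLE[arch]["val_perplexity"],
    )

print(best_architecture())  # -> lstm|h256|l2
```

This is what makes such benchmarks cheap to use and experiments reproducible: every method queries the same fixed table, so results differ only by search strategy, not by training noise or hardware.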