Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop (2021)
DOI: 10.18653/v1/2021.acl-srw.4
AutoRC: Improving BERT Based Relation Classification Models via Architecture Search

Abstract: Although BERT-based relation classification (RC) models have achieved significant improvements over traditional deep learning models, no consensus seems to have been reached on the optimal architecture, since many design choices are available. In this work, we design a comprehensive search space for BERT-based RC models and employ a modified version of the efficient neural architecture search (ENAS) method to automatically discover the design choices mentioned above. Experiments on eight benchmark…
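To give a feel for what searching over such design choices involves, the following Python toy defines a hypothetical design-choice space for a BERT-based RC head and runs a plain random search over it. This is a minimal illustrative sketch: the dimension names, options, and stub evaluator are assumptions, not the paper's actual search space, and the paper uses a modified ENAS controller rather than random search.

import random

# Hypothetical design choices for a BERT-based relation classification head.
# These dimensions are illustrative assumptions, not the paper's search space.
SEARCH_SPACE = {
    "entity_representation": ["cls_token", "entity_start", "max_pool", "mean_pool"],
    "use_entity_markers": [True, False],
    "feature_combination": ["concat", "sum", "elementwise_mul"],
    "classifier_layers": [1, 2],
    "dropout": [0.0, 0.1, 0.3],
}

def sample_architecture(rng):
    # Sample one candidate configuration, as a controller might.
    return {dim: rng.choice(options) for dim, options in SEARCH_SPACE.items()}

def evaluate(config, rng):
    # Stand-in for fine-tuning BERT under `config` and returning dev-set F1.
    return rng.random()

def random_search(trials=20, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = sample_architecture(rng)
        score = evaluate(config, rng)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print("best config:", config)
    print("proxy dev score: %.3f" % score)

An ENAS-style search replaces the uniform sampler with a learned controller that is updated from the evaluation signal, so later samples concentrate on promising regions of the space.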

Cited by 8 publications (4 citation statements); references 18 publications.
“…Luo et al. [26] proposed a BERT-based approach for single-relation question answering (SR-QA), which consists of two models: entity linking and relation detection. Zhu et al. [27] designed a comprehensive search space for BERT-based relation classification models and employed a neural architecture search method to automatically discover the design choices. However, the best-performing model differs across situations.…”
Section: Related Work
confidence: 99%
“…Luo et al. [19] proposed a BERT-based approach for single-relation question answering (SR-QA), which consists of two models: entity linking and relation detection. Zhu [20] designed a comprehensive search space for BERT-based relation classification models and employed a neural architecture search method to automatically discover the design choices. However, the best-performing model differs across situations.…”
Section: Related Work
confidence: 99%
“…
    if relation in R2 and subject not in S1 then S1.append(subject)
    end for
    for s in S1 do
        Calculate Score_III(q, s) by Pruning Model III
        if Score_III(q, s) > α then S2.append(s)
    end for
    for s in S2 do
        Calculate Score_IV(q, s) by Pruning Model IV
        X.append(Score_IV)
    end for
    s ← argmax(X)
    return s
Algorithm 1: The core algorithm of MGPM.
… performance on some specific types of datasets as the comparison for our model, and experimental results show that our model achieves better performance, which demonstrates the effectiveness of the model.…”
Section: Experiments Setting
confidence: 99%
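To make the quoted pseudocode easier to follow, here is a minimal runnable Python sketch of the two-stage pruning cascade it describes. The containers S1/S2/X, the threshold α (alpha), and the final argmax follow the excerpt; the stub scorers are placeholder assumptions, since the excerpt does not define Pruning Models III and IV.

import random

def score_iii(q, s):
    return random.random()  # placeholder for the excerpt's Pruning Model III

def score_iv(q, s):
    return random.random()  # placeholder for the excerpt's Pruning Model IV

def prune_and_select(q, s1, alpha=0.5):
    # First stage: keep subjects whose Score_III exceeds the threshold alpha.
    s2 = [s for s in s1 if score_iii(q, s) > alpha]
    if not s2:
        return None  # nothing survived the first pruning stage
    # Second stage: score survivors with Score_IV and return the argmax.
    x = [score_iv(q, s) for s in s2]
    return s2[x.index(max(x))]  # s = argmax(X)

if __name__ == "__main__":
    random.seed(0)
    candidates = ["subject_a", "subject_b", "subject_c"]
    print(prune_and_select("example question", candidates))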
“…Luo et al. [19] proposed a BERT-based approach for single-relation question answering (SR-QA), which consists of two models: entity linking and relation detection. Zhu et al. [20] designed a comprehensive search space for BERT-based relation classification models and employed the neural architecture search method to automatically discover the design choices. However, the best-performing model differs across situations.…”
Section: Related Work
confidence: 99%