2022
DOI: 10.48550/arxiv.2203.10326
Preprint

Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models

Abstract: We investigate what kind of structural knowledge learned in neural network encoders is transferable to processing natural language. We design artificial languages with structural properties that mimic natural language, pretrain encoders on the data, and see how much performance the encoder exhibits on downstream tasks in natural language. Our experimental results show that pretraining with an artificial language with a nesting dependency structure provides some knowledge transferable to natural language. A fol…
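
Below is a minimal illustrative sketch, not the authors' actual data-generation procedure, of what an artificial language with a nesting dependency structure can look like: paired tokens open and close like balanced brackets, loosely mimicking hierarchical dependencies in natural language. The vocabulary size, token naming, and branching probability are assumptions made only for this example.

```python
import random

VOCAB_SIZE = 50  # hypothetical number of paired token types


def nested_sentence(length: int) -> list[str]:
    """Generate 2 * length tokens whose head/dependent pairs nest like brackets."""
    stack, tokens = [], []
    opened = 0
    while opened < length or stack:
        # Either open a new dependency or close the most recently opened one.
        if opened < length and (not stack or random.random() < 0.5):
            tok = random.randrange(VOCAB_SIZE)
            tokens.append(f"a{tok}")   # head token
            stack.append(tok)
            opened += 1
        else:
            tok = stack.pop()
            tokens.append(f"b{tok}")   # matching dependent token
    return tokens


if __name__ == "__main__":
    random.seed(0)
    # Example corpus: sentences such as "a12 a7 b7 b12 ..."
    for _ in range(5):
        print(" ".join(nested_sentence(random.randint(3, 10))))
```

A corpus of such sequences could then serve as pretraining data for an encoder before fine-tuning on natural-language downstream tasks, which is the kind of transfer setup the abstract describes.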

Cited by 0 publications
References 30 publications