2022
DOI: 10.48550/arxiv.2203.08979
Preprint

Speaker Information Can Guide Models to Better Inductive Biases: A Case Study On Predicting Code-Switching

Abstract: Natural language processing (NLP) models trained on people-generated data can be unreliable because, without any constraints, they can learn from spurious correlations that are not relevant to the task. We hypothesize that enriching models with speaker information in a controlled, educated way can guide them to pick up on relevant inductive biases. For the speaker-driven task of predicting code-switching points in English-Spanish bilingual dialogues, we show that adding sociolinguistically-grounded speaker fea…

Cited by 0 publications
References 30 publications