Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1542

Personalizing Dialogue Agents via Meta-Learning

Abstract: Existing personalized dialogue models use human-designed persona descriptions to improve dialogue consistency. Collecting such descriptions from existing dialogues is expensive and requires hand-crafted feature designs. In this paper, we propose to extend Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to personalized dialogue learning without using any persona descriptions. Our model learns to quickly adapt to new personas by leveraging only a few dialogue samples collected from the same user, which i…
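To make the adaptation scheme in the abstract concrete, here is a minimal first-order MAML-style training sketch for a dialogue model: an inner loop fine-tunes a copy of the model on a persona's few dialogue samples, and an outer loop updates the shared initialization based on the adapted copy's performance. The model callable, batch fields (`input_ids`, `labels`), and hyperparameters are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch of first-order MAML for persona dialogue learning.
# Assumes `model` maps a batch of token ids to per-token logits and that each
# persona contributes a (support, query) pair of batches; these names and the
# hyperparameters are illustrative, not taken from the paper's code.
import copy
import torch
import torch.nn.functional as F

def inner_adapt(model, support_batch, inner_lr=0.01, steps=1):
    """Clone the shared model and take a few gradient steps on one persona's dialogues."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        logits = adapted(support_batch["input_ids"])
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               support_batch["labels"].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted

def meta_train(model, persona_tasks, meta_lr=1e-3, epochs=10):
    """Outer loop: improve the shared initialization so that a few inner steps suffice."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for _ in range(epochs):
        for support_batch, query_batch in persona_tasks:
            adapted = inner_adapt(model, support_batch)
            adapted.zero_grad()
            logits = adapted(query_batch["input_ids"])
            query_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                         query_batch["labels"].reshape(-1))
            query_loss.backward()
            # First-order approximation: copy the query-set gradients from the
            # adapted copy back onto the shared parameters before stepping.
            meta_opt.zero_grad()
            for p, p_adapted in zip(model.parameters(), adapted.parameters()):
                if p_adapted.grad is not None:
                    p.grad = p_adapted.grad.clone()
            meta_opt.step()
    return model
```

The first-order shortcut avoids differentiating through the inner optimization; the original MAML formulation backpropagates through it instead.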

Cited by 133 publications (115 citation statements); References 31 publications.

Citation statements (ordered by relevance):
“…By applying such methods on NLG, the model can get better results in a low-resource setting and show better domain generalization [90,91]. Madotto et al [92] further extended this method for personalized dialog systems by leveraging only a few dialogue samples collected from the target user without using the persona-specific descriptions.…”
Section: Transfer Learning
confidence: 99%
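As a complement to the statement above, a hedged sketch of what few-shot personalization at deployment time might look like: starting from the meta-learned initialization, the model is briefly fine-tuned on the few dialogue samples available for the target user. `meta_model`, `user_dialogues`, and the optimizer settings are placeholders, not values from the cited work.

```python
# Hedged sketch of few-shot personalization at deployment time.
# `meta_model` is the meta-learned initialization; `user_dialogues` is the
# small set of batches collected from the target user (placeholder names).
import copy
import torch
import torch.nn.functional as F

def personalize(meta_model, user_dialogues, lr=0.01, steps=3):
    """Return a user-specific copy adapted from the shared initialization."""
    user_model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(user_model.parameters(), lr=lr)
    for _ in range(steps):
        for batch in user_dialogues:  # only a handful of batches per user
            logits = user_model(batch["input_ids"])
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   batch["labels"].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return user_model
```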
“…Persona-based neural conversation models can be categorized into two major research directions. One is to directly train a model from conversational data by considering the persona information (Li et al., 2016b; Kottur et al., 2017; Madotto et al., 2019), while the other approach makes use of the profiles or side-information of users to generate the aligned responses (Chu et al., 2018; Qian et al., 2018; Mazare et al., 2018; Song et al., 2019). The work described in this paper belongs to the first research direction.…”
Section: Persona-based Neural Models
confidence: 99%
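The second direction mentioned in the statement above, generating responses aligned with explicit profiles, is commonly implemented by prepending persona facts to the dialogue history before encoding. The sketch below shows that input construction only; the separator tokens and the helper name are hypothetical, not a specific published model's format.

```python
# Illustrative input construction for profile-conditioned response generation.
# Separator tokens (<persona>, <turn>, <response>) are assumptions, not taken
# from a particular published model.
from typing import List

def build_persona_input(persona: List[str], history: List[str]) -> str:
    """Concatenate persona facts and dialogue turns into a single model input."""
    persona_part = " ".join(f"<persona> {fact}" for fact in persona)
    history_part = " ".join(f"<turn> {utterance}" for utterance in history)
    return f"{persona_part} {history_part} <response>"

example = build_persona_input(
    persona=["i love hiking.", "i have two dogs."],
    history=["hi, what do you do for fun?"],
)
print(example)
```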
“…Our prior work (Welch et al., 2019a; Welch et al., 2019b) explored predicting response time, common messages, and author relationships from personal conversation data. Zhang et al. (2018) conditioned dialog systems on artificially constructed personas and Madotto et al. (2019) used meta-learning to improve this process. Goal-oriented dialog has used demographics (i.e., age, gender) to condition system response generation, showing that this relatively coarse-grained personalization improves system performance (Joshi et al., 2017).…”
Section: Related Work
confidence: 99%