Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue 2021
DOI: 10.18653/v1/2021.sigdial-1.12

Generative Conversational Networks

Alexandros Papangelis, Karthik Gopalakrishnan, Aishwarya Padmakumar, et al.

Abstract: Inspired by recent work in meta-learning and generative teaching networks, we propose a framework called Generative Conversational Networks, in which conversational agents learn to generate their own labelled training data (given some seed data) and then train themselves from that data to perform a given task. We use reinforcement learning to optimize the data generation process where the reward signal is the agent's performance on the task. The task can be any language-related task, from intent detection to f…
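
The abstract describes a meta-learning loop: a generator produces labelled training examples from seed data, a fresh task model (the learner) is trained on those examples, and the learner's task performance is fed back to the generator as a reinforcement-learning reward. Below is a minimal sketch of that loop, assuming a REINFORCE-style policy gradient; `generator.sample`, `train_learner`, and `evaluate` are hypothetical stand-ins, not the authors' actual implementation:

```python
# Sketch of one Generative Conversational Networks meta-step, as described in
# the abstract. All helper names below are hypothetical stand-ins.
import torch

def gcn_meta_step(generator, gen_optimizer, seed_data, val_data, n_samples=256):
    """Generate labelled data, train a learner on it, and reward the
    generator with the learner's downstream task performance."""
    # 1. The generator proposes labelled examples conditioned on seed data
    #    (hypothetical API: returns samples and their log-probabilities).
    samples, log_probs = generator.sample(seed_data, n_samples)

    # 2. Train a fresh task model ("learner") on the generated data
    #    (hypothetical helper).
    learner = train_learner(samples)

    # 3. The reward signal is the learner's score on held-out validation
    #    data (hypothetical helper), per the abstract.
    reward = evaluate(learner, val_data)

    # 4. REINFORCE update: scale the summed log-probabilities of the
    #    generated data by the reward (a baseline would reduce variance).
    loss = -reward * log_probs.sum()
    gen_optimizer.zero_grad()
    loss.backward()
    gen_optimizer.step()
    return reward
```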

Cited by 1 publication (1 citation statement)
References 13 publications
“…However, prompt-based augmentation strategies are uncontrolled forms of generation, which may result in generation mistakes for labeled datasets (Sahu et al., 2022; Chen et al., 2022; Meng et al., 2022). In contrast, other recent studies have instead proposed language augmentation strategies that use complex, highly-controlled frameworks that often involve fine-tuning generators (Papangelis et al., 2021; Kulhánek et al., 2021). Such complex augmentation frameworks require larger amounts of seed data to maintain a ground-truth language distribution (Rosenbaum et al., 2022b; Kim et al., 2021b), and are more costly than prompting PLMs (Chen et al., 2022)…”
Section: Related Work
confidence: 99%