2021
DOI: 10.48550/arxiv.2104.05218
Preprint

FUDGE: Controlled Text Generation With Future Discriminators

Kevin Yang,
Dan Klein

Abstract: We propose Future Discriminators for Generation (FUDGE), a flexible and modular method for controlled text generation. Given a preexisting model G for generating text from a distribution of interest, FUDGE enables conditioning on a desired attribute a (for example, formality) while requiring access only to G's output logits. FUDGE learns an attribute predictor operating on a partial sequence, and uses this predictor's outputs to adjust G's original probabilities. We show that FUDGE models terms corresponding t…
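The abstract describes FUDGE's core mechanism: at each decoding step, the base model's next-token probabilities are reweighted by an attribute predictor scoring each candidate partial sequence, i.e. P(x_t | x_<t, a) ∝ P_G(x_t | x_<t) · P(a | x_≤t). The sketch below is an illustrative reimplementation of that idea, not the authors' code; the function and variable names, and the toy predictor, are assumptions for illustration.

```python
# Illustrative FUDGE-style decoding step (a sketch, not the authors' code).
# Combines the base model's next-token log-probs with an attribute
# predictor's score on each candidate partial sequence:
#   log P(x_t | x_<t, a) ∝ log P_G(x_t | x_<t) + log P(a | x_<=t)
import math

def fudge_step(base_logprobs, prefix, attribute_logprob, vocab):
    """Return renormalized log-probs after attribute adjustment.

    base_logprobs: dict token -> log P_G(token | prefix)
    attribute_logprob: callable(sequence) -> log P(a | sequence),
        applied to the prefix extended by each candidate token.
    """
    combined = {
        tok: base_logprobs[tok] + attribute_logprob(prefix + [tok])
        for tok in vocab
    }
    # Renormalize over the vocabulary with log-sum-exp for stability.
    m = max(combined.values())
    log_z = m + math.log(sum(math.exp(v - m) for v in combined.values()))
    return {tok: v - log_z for tok, v in combined.items()}

# Toy example: a uniform base model over a two-token vocabulary, and a
# hypothetical formality predictor that strongly prefers "formal".
vocab = ["formal", "casual"]
base = {tok: math.log(0.5) for tok in vocab}
attr = lambda seq: math.log(0.9) if seq[-1] == "formal" else math.log(0.1)
adjusted = fudge_step(base, [], attr, vocab)
```

Because only G's output logits and the predictor's scores are needed, this composes with any autoregressive generator, which is the modularity the abstract emphasizes.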


Cited by 8 publications (10 citation statements)
References 7 publications
“…Instead, we want to control action generation to consistently produce actions of highly-rewarding behavior. This mirrors the problem of discriminator-guided generation in language models, for which a variety of methods have been proposed [39,76,56].…”
Section: Expert Action Inference
confidence: 95%
“…DEXPERTS [Liu et al 2021c] re-ranks the predictions of the PLM based on expert (and anti-expert) opinions during the decoding stage to steer the language model towards the desired generation. FUDGE [Yang and Klein 2021] learns an attribute predictor operating on a partial sequence to adjust the original PLM's probabilities, and obtain an improved performance on the tasks of couplet completion in poetry, topic control in language generation, and formality change in machine translation. Plug-and-Blend [Lin and Riedl 2021] extends the GEDI model to controlled story generation by introducing a planner module.…”
Section: Post-processing
confidence: 99%
“…[1502] surveys in detail the methods to improve dialogue safety. The roadmap of the methods include toxicity detection [1516,1501,1502], generation detoxifying [1517,1518,1519], topic avoidance [1502], and bias mitigation [1520,1521]. [1502] also proposes a bot-adversarial dialogue framework to collect unsafe samples in conversational testing, which would be modified and used to re-train conversational models as "safety layer".…”
Section: Safety and Ethical Risk
confidence: 99%