Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d16-1126

Generating Topical Poetry

Abstract: We describe Hafez, a program that generates any number of distinct poems on a user-supplied topic. Poems obey rhythmic and rhyme constraints. We describe the poetry-generation algorithm, give experimental data concerning its parameters, and show its generality with respect to language and poetic form.
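The abstract states that generated poems obey rhythmic and rhyme constraints. The toy sketch below (not the authors' implementation; the vocabulary, stress dictionary, and scoring function are hypothetical stand-ins) illustrates the general idea of constraining a language model's decoding with a finite-state rhythm check, so that every emitted word keeps the line on a valid path through a target meter.

```python
# Illustrative sketch: a greedy decoder whose candidate words are
# filtered by a finite-state rhythm constraint (iambic meter).
# All data below is a hypothetical toy stand-in.

# Toy stress patterns: "0" = unstressed syllable, "1" = stressed.
STRESS = {
    "the": "0", "a": "0", "night": "1", "moon": "1",
    "above": "01", "silent": "10", "rises": "10", "delight": "01",
}

# Toy unigram "language model" scores (higher = more likely).
LM_SCORE = {w: s for s, w in enumerate(sorted(STRESS))}

def allowed(word, position, meter="01010101"):
    """A word is allowed if its stress pattern matches the meter
    starting at the current syllable position (a tiny FSA check)."""
    pattern = STRESS[word]
    end = position + len(pattern)
    return end <= len(meter) and meter[position:end] == pattern

def generate_line(meter="01010101"):
    """Greedy decoding: at each step pick the highest-scoring word
    whose stress pattern keeps the line on a valid path through
    the meter; stop when the meter is filled."""
    line, pos = [], 0
    while pos < len(meter):
        candidates = [w for w in STRESS if allowed(w, pos, meter)]
        if not candidates:
            break
        best = max(candidates, key=LM_SCORE.get)
        line.append(best)
        pos += len(STRESS[best])
    return line

line = generate_line()
# The generated line scans: concatenated stress patterns equal the meter.
assert "".join(STRESS[w] for w in line) == "01010101"
```

Hafez itself compiles rhyme and meter constraints into a finite-state acceptor intersected with an RNN language model; the hard per-step filter above is only the simplest version of that idea.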

Cited by 118 publications (139 citation statements)
References 16 publications
“…One main reason is that the generic Seq2Seq baseline model tends to generate short sentences, often less than five words, which decreases the meaningfulness and increases repeatability of generated sentences. While with our generation model, two encoders of structure and content will mutually promote the effect of decoding referred to (Ghazvininejad 2016). Moreover, the KG model owns higher scores than SG, because the encoded keyword represents global context of next generation, resulting in the increase of meaningfulness and diversity scores.…”
Section: Human Evaluation Results (mentioning)
confidence: 99%
“…We focus here on an encoder-decoder architecture. Originally applied to machine translation, encoder-decoder models have been extended to other sequence modeling tasks like dialogue generation (Serban et al., 2016; Shang et al., 2015) and poetry generation (Ghazvininejad et al., 2016). We propose that this technique could be similarly useful for our task in establishing a mapping between cause-effect sequence pairs.…”
Section: Neural Network Approach (mentioning)
confidence: 99%
“…The work of Zhang and Lapata (2014) is one such example, where they were able to outperform all other classical Chinese poetry generation systems with both manual and automatic evaluation. Ghazvininejad et al. (2016) and Goyal et al. (2016) apply neural language models with regularising finite state machines. However, in the former case the rhythm of the output cannot be defined at sample time, and in the latter case the finite state machine is not trained on rhythm at all, as it is trained on dialogue acts.…”
Section: Related Work (mentioning)
confidence: 99%