Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1637
Generating Classical Chinese Poems from Vernacular Chinese

Abstract: Classical Chinese poetry is a jewel in the treasure house of Chinese culture. Previous poem generation models only allow users to employ keywords to influence the meaning of generated poems, leaving control of generation largely to the model. In this paper, we propose a novel task of generating classical Chinese poems from vernacular Chinese, which allows users to have more control over the semantics of generated poems. We adapt the approach of unsupervised machine translation (UMT) to our task. We use segmentation-based …
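The abstract describes adapting unsupervised machine translation to vernacular-to-classical poem generation. As a rough illustration of that framing, the sketch below shows the standard UMT training loop (denoising autoencoding plus iterative back-translation) over two monolingual corpora. The corpora, the add_noise corruption, and the identity placeholder translators are hypothetical stand-ins, not the authors' released code; the segmentation-based padding and reinforcement-learning components mentioned in the abstract are omitted here.

```python
# Schematic back-translation loop in the spirit of unsupervised MT (UMT)
# for vernacular <-> classical poem generation.  The "models" below are
# identity placeholders; in the real system they would be neural
# encoder-decoder networks trained with denoising and back-translation
# losses.  All names are illustrative assumptions.
import random

vernacular_corpus = ["月亮明亮照在床前", "思念让我抬头望月"]   # monolingual side A
classical_corpus  = ["床前明月光", "举头望明月"]              # monolingual side B

def add_noise(sentence: str) -> str:
    """Denoising-autoencoder corruption: randomly drop characters."""
    return "".join(ch for ch in sentence if random.random() > 0.1)

def translate_v2c(sentence: str) -> str:
    """Placeholder vernacular -> classical model (identity stand-in)."""
    return sentence

def translate_c2v(sentence: str) -> str:
    """Placeholder classical -> vernacular model (identity stand-in)."""
    return sentence

def train_step(src: str, tgt: str, objective: str) -> None:
    """Stand-in for one supervised update on a (pseudo-)parallel pair."""
    print(f"[{objective}] train on: {src!r} -> {tgt!r}")

for epoch in range(2):
    # 1) Denoising: reconstruct each sentence from a corrupted copy.
    for c in classical_corpus:
        train_step(add_noise(c), c, "denoise classical")
    for v in vernacular_corpus:
        train_step(add_noise(v), v, "denoise vernacular")
    # 2) Back-translation: the reverse model produces a pseudo source,
    #    and the forward model learns to map it back to the original.
    for c in classical_corpus:
        train_step(translate_c2v(c), c, "back-translate v->c")
    for v in vernacular_corpus:
        train_step(translate_v2c(v), v, "back-translate c->v")
```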

Cited by 15 publications (8 citation statements); references 16 publications.
“…Reinforcement learning has been applied in various natural language generation tasks, including image captioning (Rennie et al., 2017), automatic summarization (Paulus et al., 2018), machine translation (Kang et al., 2020) and poem generation (Yang et al., 2019). Specifically, when applying reinforcement learning to dialogue generation (Li et al., 2016; Zhao et al., 2019; Shi et al., 2019; Yamazaki and Aizawa, 2021), self-play is often used to enable scoring of multi-turn dialogues.…”
Section: RL in Text Generation
confidence: 99%
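The statement above groups this paper with other RL-based generation work. As a rough illustration of how a sequence-level reward can drive a text generator, here is a minimal REINFORCE-with-baseline sketch in PyTorch; the toy generator and the distinct-token reward are hypothetical placeholders and do not correspond to the reward design of any cited paper.

```python
# Minimal REINFORCE-with-baseline sketch for sequence generation
# (hypothetical toy reward; not the exact setup of any cited paper).
import torch
import torch.nn as nn

VOCAB = 20          # toy vocabulary size
SEQ_LEN = 5         # fixed toy sequence length

class ToyGenerator(nn.Module):
    """Unconditional toy generator: a GRU cell that emits token logits."""
    def __init__(self, vocab: int = VOCAB, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab)

    def sample(self, seq_len: int = SEQ_LEN):
        h = torch.zeros(1, self.rnn.hidden_size)
        tok = torch.zeros(1, dtype=torch.long)        # start token = 0
        log_probs, tokens = [], []
        for _ in range(seq_len):
            h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens.append(tok.item())
        return tokens, torch.stack(log_probs).sum()

def toy_reward(tokens):
    """Hypothetical sequence-level reward: fraction of distinct tokens."""
    return len(set(tokens)) / len(tokens)

model = ToyGenerator()
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
baseline = 0.0
for step in range(200):
    tokens, log_prob = model.sample()
    reward = toy_reward(tokens)
    baseline = 0.9 * baseline + 0.1 * reward          # moving-average baseline
    loss = -(reward - baseline) * log_prob            # policy-gradient loss
    optim.zero_grad()
    loss.backward()
    optim.step()
print("sampled sequence:", tokens, "reward:", round(reward, 2))
```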
“…Our preliminary experiments on English showed that character-level models easily learn to generate acrostics by themselves, but do not follow the topic as coherently as word-level models. Zhang and Lapata (2014); Zhang et al. (2017); Yi et al. (2018a,c); Yang et al. (2019) are other examples of work on generating Chinese poems, but they did not focus on acrostics. Ghazvininejad et al. (2016, 2017) built a model to generate poems based on topics in a fashion similar to ours.…”
Section: Related Work
confidence: 99%
“…They report relatively high scores (between 3.6 and 4.06 on average) for their best model. Yang et al. (2019) model poem generation as an unsupervised machine translation problem. Their system takes text written in vernacular Chinese as input and produces a poem in classical Chinese as output.…”
Section: Related Work on Poem Generation
confidence: 99%