Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019)
DOI: 10.18653/v1/P19-1208
Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards

Abstract: Generating keyphrases that summarize the main points of a document is a fundamental task in natural language processing. Although existing generative models are capable of predicting multiple keyphrases for an input document as well as determining the number of keyphrases to generate, they still suffer from the problem of generating too few keyphrases. To address this problem, we propose a reinforcement learning (RL) approach for keyphrase generation, with an adaptive reward function that encourages a model to…

Cited by 59 publications (67 citation statements), 2019–2024
References 35 publications
“…We acknowledge that the F1@O scores of Chan et al. (2019) and Chen et al. (2019a) might not be completely comparable with ours, because additional post-processing and filtering methods might have been applied in different works.…”
contrasting
confidence: 64%
“…Chen et al. (2018b) and Ye and Wang (2018) proposed using structural information (e.g., the title of the source text) to improve keyphrase generation performance. Chan et al. (2019) introduced RL to the keyphrase generation task. Chen et al. (2019a) retrieved similar documents from the training data to help produce more accurate keyphrases.…”
Section: Keyphrase Extraction and Generation
mentioning
confidence: 99%
“…Given that the above catSeq model tends to generate fewer keyphrases than the ground truth, the authors of [79] reformulate it from a reinforcement learning (RL) perspective, which has also been applied recently in several text summarization works such as [80], [81], and [82], and in similar seq2seq applications described in [82]. The model is encouraged to generate enough keyphrases by employing an adaptive reward function that is based on recall (which does not penalize incorrect predictions) in under-generation scenarios and F1 (which penalizes incorrect predictions) in over-generation scenarios, as sketched below.…”
Section: Reinforcement Learning Perspective
mentioning
confidence: 99%
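
The adaptive reward quoted above can be made concrete with a short sketch. The following Python function is a minimal illustration, not the authors' implementation: the function name adaptive_reward, the lowercase exact-string matching of phrases, and the set-based counting of correct predictions are all assumptions for demonstration. It returns recall while the model has generated fewer keyphrases than the ground truth, and F1 once it has generated at least as many:

def adaptive_reward(predicted, gold):
    # Recall while the model under-generates, F1 once it has produced at
    # least as many keyphrases as the ground truth. Phrases are matched by
    # exact string equality after lowercasing (a simplifying assumption).
    pred_set = {p.lower() for p in predicted}
    gold_set = {g.lower() for g in gold}
    num_correct = len(pred_set & gold_set)
    recall = num_correct / len(gold_set) if gold_set else 0.0
    if len(predicted) < len(gold):
        # Under-generation: reward recall only, so the model is not
        # punished for wrong extras and is pushed to emit more phrases.
        return recall
    precision = num_correct / len(pred_set) if pred_set else 0.0
    if precision + recall == 0.0:
        return 0.0
    # Over-generation (or an exact count): F1, whose precision term
    # penalizes incorrect predictions.
    return 2.0 * precision * recall / (precision + recall)

# Example: three gold keyphrases but only one (correct) prediction is an
# under-generation case, so the reward is recall = 1/3.
print(adaptive_reward(["neural networks"],
                      ["neural networks", "reinforcement learning",
                       "keyphrase generation"]))

Because the under-generation branch ignores precision, emitting additional plausible keyphrases can only raise the reward until the target count is reached, after which F1 takes over and discourages padding with incorrect phrases.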