Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1439

Keyphrase Generation with Correlation Constraints

Abstract: In this paper, we study automatic keyphrase generation. Although conventional approaches to this task show promising results, they neglect correlation among keyphrases, resulting in duplication and coverage issues. To solve these problems, we propose a new sequence-to-sequence architecture for keyphrase generation named CorrRNN, which captures correlation among multiple keyphrases in two ways. First, we employ a coverage vector to indicate whether the word in the source document has been summarized by previous …
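The coverage vector mentioned in the abstract can be illustrated with a short attention module. The following is a minimal sketch of the generic coverage-attention idea, assuming PyTorch; the layer sizes, scoring function, and variable names are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class CoverageAttention(nn.Module):
    """Bahdanau-style attention augmented with a coverage vector.

    The coverage vector accumulates past attention weights, so the model
    can tell which source words have already been summarized by
    previously generated phrases. Sizes and scoring are assumptions.
    """

    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.w_cov = nn.Linear(1, attn_dim, bias=False)  # coverage feature
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state, coverage):
        # enc_states: (batch, src_len, enc_dim)
        # dec_state:  (batch, dec_dim)
        # coverage:   (batch, src_len) -- running sum of past attention
        scores = self.v(torch.tanh(
            self.W_enc(enc_states)
            + self.W_dec(dec_state).unsqueeze(1)
            + self.w_cov(coverage.unsqueeze(-1))
        )).squeeze(-1)                                   # (batch, src_len)
        attn = torch.softmax(scores, dim=-1)
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)
        coverage = coverage + attn                       # accumulate history
        return context, attn, coverage
```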

Cited by 106 publications (119 citation statements) | References 19 publications
“…These models include catSeq, catSeqD (Yuan et al., 2018), catSeqCorr (Chen et al., 2018a), and catSeqTG (Chen et al., 2018b). For all baselines, we use the method in Yuan et al. (2018) …”
Section: Baseline and Deep Reinforced Models (mentioning)
confidence: 99%
“…With this setup, all baselines can determine the number of keyphrases to generate. The catSeqCorr and catSeqTG models are the CorrRNN (Chen et al., 2018a) and TG-Net (Chen et al., 2018b) models trained under this setup, respectively. For the reinforced models, we follow the method in Section 3.2 to concatenate keyphrases.…”
Section: Baseline and Deep Reinforced Models (mentioning)
confidence: 99%
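The catSeq-style setup referred to in this quote trains the model to emit all target keyphrases as a single sequence, so the decoder itself decides how many to generate by choosing when to stop. Below is a minimal sketch of that target-side concatenation, assuming whitespace tokenization and illustrative separator/end tokens; the exact delimiters used by Yuan et al. (2018) may differ.

```python
def concat_keyphrases(keyphrases, sep_token="<sep>", eos_token="<eos>"):
    """Join all target keyphrases into one decoder sequence.

    The model generates this single sequence and thereby determines the
    number of keyphrases on its own (it stops at the end token). The
    token names here are illustrative assumptions.
    """
    tokens = []
    for i, phrase in enumerate(keyphrases):
        if i > 0:
            tokens.append(sep_token)
        tokens.extend(phrase.split())
    tokens.append(eos_token)
    return tokens

# e.g. ["keyphrase generation", "sequence to sequence"] ->
# ["keyphrase", "generation", "<sep>", "sequence", "to", "sequence", "<eos>"]
```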
“…A reduced vocabulary size is important to obtain decent AKG results within a reasonable computation time. For this reason, the authors of many recent AKG studies, such as [12], [71], [68], [73], [75] and [79], replace all digit tokens with the symbol ⟨digit⟩. Stemming is also commonly used in studies like [12], [75], [74], [24], [71] and [73] so that the predicted and gold keywords can be properly compared during evaluation.…”
Section: A. Experimental Patterns (mentioning)
confidence: 99%
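The two preprocessing steps this quote describes, digit masking and stemming before evaluation, can be sketched as follows. The placeholder string and the choice of the Porter stemmer are assumptions; the surveyed papers vary in both.

```python
import re
from nltk.stem import PorterStemmer  # a common choice; the cited studies may differ

stemmer = PorterStemmer()

def mask_digits(text):
    """Replace digit tokens with a placeholder to shrink the vocabulary.

    The "<digit>" string is an assumption; papers use various symbols.
    """
    return " ".join("<digit>" if re.fullmatch(r"\d+", tok) else tok
                    for tok in text.lower().split())

def stems(phrase):
    """Stem each word so predicted and gold keyphrases compare fairly."""
    return " ".join(stemmer.stem(w) for w in phrase.split())

# Evaluation-time matching on stems, e.g.:
# stems("neural networks") == stems("neural network")  -> True
```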
“…Alternative types of context meaning can be captured by positional information incorporating the relative position of the phrase in a given text, that is, the position of the first occurrence of a phrase normalized by the length of the target publication (Caragea et al., ) (CeKE), (Hulth, ), (Zhang et al., ) (MIKE), and (Wang & Li, ) (PCU-ICL). Recently, McIlraith and Weinberger ( ) (SurfKE) as well as Meng et al. ( ), Chen et al. ( ), Wang et al. ( ), Alzaidy et al. ( ) and Ye and Wang ( ) go one step beyond the context-based features by learning features/embeddings using graph structures and neural networks, respectively.…”
Section: Supervised Methods (mentioning)
confidence: 99%
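The positional feature this quote describes, the first occurrence of a phrase normalized by document length, reduces to a few lines. Whitespace tokenization and the fallback value for unseen phrases are simplifying assumptions here.

```python
def relative_first_position(phrase, document):
    """Relative position of a phrase's first occurrence in a document,
    normalized by document length, as in the position-based features
    mentioned above. Whitespace tokenization is an assumption.
    """
    doc_tokens = document.lower().split()
    phrase_tokens = phrase.lower().split()
    n = len(phrase_tokens)
    for i in range(len(doc_tokens) - n + 1):
        if doc_tokens[i:i + n] == phrase_tokens:
            return i / len(doc_tokens)
    return 1.0  # assumption: a phrase never found gets the maximum value

# e.g. relative_first_position("keyphrase generation", abstract_text)
# -> 0.0 if the phrase opens the text, values near 1.0 if it appears late
```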