2021 IEEE 15th International Conference on Semantic Computing (ICSC)
DOI: 10.1109/icsc50631.2021.00009
Automatic Title Generation for Text with Pre-trained Transformer Language Model

Cited by 12 publications (6 citation statements). References 12 publications.
“…Mishra et al [10] proposed a paper title generation method using GPT-2, which is a transformer-based natural language processing model similar to BERT. Because GPT-2 has a probabilistic characteristic, its output varies each time.…”
Section: Title Generation Methods Using DNN
confidence: 99%
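
The run-to-run variation this excerpt notes comes from sampling-based decoding. Below is a minimal sketch of that behavior, assuming the Hugging Face transformers library and the public gpt2 checkpoint; the prompt format is illustrative, not the authors' exact setup:

```python
# Minimal sketch: sampling-based decoding makes GPT-2 output vary per run.
# Assumes the Hugging Face `transformers` library and the public `gpt2`
# checkpoint; the prompt format is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Abstract: We study automatic title generation for text. Title:"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True draws each token from the (top-k filtered) model
# distribution, so repeated calls can return different titles.
for _ in range(3):
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=12,
            do_sample=True,
            top_k=50,
            pad_token_id=tokenizer.eos_token_id,
        )
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

With do_sample=False (greedy or beam search), the same input would yield the same title on every run.
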
“…Title generation and evaluation Mishra et al (2021) perform A2T with pre-trained GPT-2 finetuned on arxiv papers and subsequent (rule-based) modules of title selection and refinement. We compare many more text generation models for the task, use better evaluation (including more comprehensive human and automatic evaluation), do not make use of rule-based selection and also consider humor in title generation.…”
Section: Related Work
confidence: 99%
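
The fine-tuning this excerpt refers to is standard causal language modeling on paired data. A minimal sketch of one training step, assuming abstracts and titles joined with a plain "Title:" separator (the separator and data format are assumptions for illustration, not the cited pipeline's exact preprocessing):

```python
# Sketch of one causal-LM fine-tuning step on an abstract→title pair.
# The "Title:" separator and the pairing format are assumptions for
# illustration, not the exact preprocessing of the cited pipeline.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

abstract = "We study automatic title generation for scientific text."
title = "Automatic Title Generation for Text"
text = f"{abstract} Title: {title}{tokenizer.eos_token}"

batch = tokenizer(text, return_tensors="pt")
# Passing labels == input_ids gives the standard next-token
# cross-entropy loss over the whole sequence.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice this step would run over many abstract–title pairs for several epochs before the model is used for generation.
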
“…We also leverage the relationship to summarization by considering pre-trained models fine-tuned on summarization datasets. In contrast to Putra and Khodra (2017) and Mishra et al (2021), we only consider end-to-end models that do not involve pipelines. While refinement steps could be further helpful (but also error-prone), they additionally require potentially undesirable human intervention (Belouadi and Eger, 2023).…”
Section: Related Work
confidence: 99%
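
For contrast, the end-to-end, pipeline-free setup this excerpt favors can be sketched with a single pre-trained summarization model whose output length is constrained to title scale. The facebook/bart-large-cnn checkpoint here is an assumed stand-in, not a model from the cited work:

```python
# Sketch: a single end-to-end seq2seq model fine-tuned on summarization,
# reused for title generation with no rule-based selection/refinement.
# The checkpoint choice (facebook/bart-large-cnn) is an assumption.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "We study automatic title generation for scientific text using "
    "pre-trained transformer language models fine-tuned on paired data."
)
# Tight length limits push the "summary" toward title length.
result = summarizer(abstract, max_length=15, min_length=4, do_sample=False)
print(result[0]["summary_text"])
```
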
“…For strictly extractive endeavors, decoders are typically substituted by a specific classifier determining which input tokens will appear in the final summary. Another strategy is to fine-tune a GPT-2 (Radford et al., 2019) style auto-regressive model for the summarization task; this approach was adopted by both Koppatz et al. (2022) for headline generation and Mishra et al. (2021) for title generation. Many contemporary title and headline generation methods have adopted metrics like BLEU or ROUGE to assess model performance (Matsumaru et al., 2020; Bukhtiyarov and Gusev, 2020; Tilk and Alumäe, 2017; Mishra et al., 2021); these are also standard for summarization evaluation. An exception is Koppatz et al. (2022), who also rely on manual structured review by domain experts to assess the quality of their automatically generated headlines.…”
Section: Related Work
confidence: 99%
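
A minimal sketch of the ROUGE-based evaluation this excerpt describes, assuming the rouge-score package; both titles below are invented for illustration:

```python
# Sketch: scoring a generated title against a reference title with ROUGE,
# as in the evaluation practice described above. Assumes the
# `rouge-score` package; the example titles are invented.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "Automatic Title Generation for Text with Pre-trained Transformer Language Model"
candidate = "Generating Paper Titles with a Pre-trained Transformer"

scores = scorer.score(reference, candidate)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.2f} R={s.recall:.2f} F1={s.fmeasure:.2f}")
```
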