Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL '00), 2000
DOI: 10.3115/1075218.1075259

Headline generation based on statistical translation

Abstract: Extractive summarization techniques cannot generate document summaries shorter than a single sentence, something that is often required. An ideal summarization system would understand each document and generate an appropriate summary directly from the results of that understanding. A more practical approach to this problem results in the use of an approximation: viewing summarization as a problem analogous to statistical machine translation. The issue then becomes one of generating a target document in a more …
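The abstract is truncated above, but the statistical-translation view it describes is usually presented as a noisy-channel style factorization of a candidate headline H = w_1 … w_n for a document D. The decomposition below is a sketch in our own notation (content-selection, length, and bigram-ordering terms), not a quotation of the paper's exact model:

\[
H^{*} \;=\; \arg\max_{w_1 \ldots w_n}\;
\prod_{i=1}^{n} P\bigl(w_i \in H \mid w_i \in D\bigr)\;\cdot\;
P\bigl(\mathrm{len}(H) = n\bigr)\;\cdot\;
\prod_{i=2}^{n} P\bigl(w_i \mid w_{i-1}\bigr)
\]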

Cited by 184 publications (151 citation statements); References 19 publications.
“…The first statistical framework for automatic title generation was proposed by Banko, Mittal and Witbrock [11]. In this paper, we refer to it as the 'BMW model'.…”
Section: BMW Model for Title Generation (mentioning, confidence: 99%)
“…It involves identifying words of interest in the text and combining them into the title [20]. Therefore, the feature set for this model should capture selection constraints at the word level and contextual constraints at the word-sequence level.…”
Section: Local Features (mentioning, confidence: 99%)
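To make the two feature families in the statement above concrete, the following is a minimal Python sketch, written for illustration rather than taken from either cited paper: it estimates word-level selection statistics (how often a document word also appears in the paired title) and word-sequence statistics (a bigram model over titles) from (document, title) pairs. The function names and the add-one smoothing are our own assumptions.

```python
# Illustrative sketch only: estimates word-level selection features and
# word-sequence (bigram) features from (document_tokens, title_tokens) pairs.
from collections import Counter, defaultdict


def train(pairs):
    """pairs: iterable of (document_tokens, title_tokens) lists."""
    doc_count = Counter()            # documents containing each word
    both_count = Counter()           # documents whose paired title also contains it
    bigram = defaultdict(Counter)    # title bigram counts, with <s>/</s> markers
    vocab = {"</s>"}
    for doc, title in pairs:
        title_set = set(title)
        vocab.update(title)
        for w in set(doc):
            doc_count[w] += 1
            if w in title_set:
                both_count[w] += 1
        for prev, nxt in zip(["<s>"] + title, title + ["</s>"]):
            bigram[prev][nxt] += 1
    return doc_count, both_count, bigram, len(vocab)


def selection_prob(w, doc_count, both_count):
    """Word-level selection feature: P(w in title | w in document), add-one smoothed."""
    return (both_count[w] + 1) / (doc_count[w] + 2)


def bigram_prob(prev, nxt, bigram, vocab_size):
    """Word-sequence (contextual) feature: add-one smoothed title bigram probability."""
    total = sum(bigram[prev].values())
    return (bigram[prev][nxt] + 1) / (total + vocab_size)


if __name__ == "__main__":
    pairs = [
        ("the trade talks collapsed after a week of disputes".split(),
         "trade talks collapse".split()),
        ("officials said the trade talks would resume next month".split(),
         "trade talks to resume".split()),
    ]
    doc_count, both_count, bigram, vocab_size = train(pairs)
    print(selection_prob("trade", doc_count, both_count))      # high: in both titles
    print(selection_prob("officials", doc_count, both_count))  # lower: never in a title
    print(bigram_prob("trade", "talks", bigram, vocab_size))
```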
“…Given the verbose text, the system's task is to reconstruct the original message. The problem can be modeled in terms of simple word-level features, as in (Banko et al. 2000), or in terms of parse tree structures, as in (Knight and Marcu 2000; Turner and Charniak 2005). One downside of these statistical approaches is the need for annotated training data to learn model parameters.…”
Section: Related Work (mentioning, confidence: 99%)