2002
DOI: 10.1162/089120102762671954
Efficiently Computed Lexical Chains as an Intermediate Representation for Automatic Text Summarization

Abstract: While automatic text summarization is an area that has received a great deal of attention in recent research, the problem of efficiency in this task has not been frequently addressed. When the size and quantity of documents available on the Internet and from other sources are considered, the need for a highly efficient tool that produces usable summaries is clear. We present a linear-time algorithm for lexical chain computation. The algorithm makes lexical chains a computationally feasible candidate as an inte…
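The chaining idea behind the abstract can be illustrated with a toy sketch. The following is not the authors' algorithm (their paper achieves linearity by indexing WordNet senses directly); it is a simplified greedy chainer over a hand-made word-to-sense table that stands in for WordNet:

```python
# Toy stand-in for WordNet: each word maps to a set of "sense" identifiers.
# In the real algorithm these would be WordNet synsets and their relations;
# this table is purely illustrative.
SENSES = {
    "car":    {"vehicle", "machine"},
    "truck":  {"vehicle"},
    "engine": {"machine"},
    "banana": {"fruit"},
    "apple":  {"fruit"},
}

def build_chains(words):
    """Greedy single-pass chaining: attach each word to the first chain
    whose accumulated senses overlap the word's senses; otherwise start
    a new chain."""
    chains = []  # each chain: (set of shared senses, list of member words)
    for w in words:
        senses = SENSES.get(w, set())
        for chain_senses, chain_words in chains:
            if senses & chain_senses:
                chain_senses |= senses
                chain_words.append(w)
                break
        else:
            chains.append((set(senses), [w]))
    return [members for _, members in chains]

print(build_chains(["car", "banana", "truck", "apple", "engine"]))
# → [['car', 'truck', 'engine'], ['banana', 'apple']]
```

Note that this sketch scans every existing chain per word and is therefore worst-case quadratic; the published algorithm avoids that scan by keeping a sense-indexed structure so each word insertion is constant-time, which is what yields the linear bound.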


Cited by 130 publications (91 citation statements)
References 6 publications
“…Lexical cohesion analysis has been used in such NLP applications as determining the structure of text (Morris and Hirst, 1991) and automatic text summarization (Barzilay and Elhadad, 1999). In recent lexical cohesion research in linguistics (Hasan, 1984; Halliday and Hasan, 1989; Martin, 1992) non-classical relations are largely ignored, and the same is true in implementations of lexical cohesion in computational linguistics (Barzilay and Elhadad, 1999; Silber and McCoy, 2002), as the lexical resource used is WordNet. It is notable, however, that the original view of lexical semantic relations in the lexical cohesion work of Halliday and Hasan (1976) was very broad and general; the only criterion was that there had to be a recognizable relation between two words.…”
Section: 1 (mentioning)
confidence: 99%
“…For practical purposes, text summarisation techniques can be divided into statistical [19,4] and linguistic [10,17] techniques. In this context CGUSD can be thought of as a statistical technique.…”
Section: Related Work (mentioning)
confidence: 99%
“…An example of an unsupervised technique is that proposed in [16], where a graph-based ranking algorithm is applied to extract important sentences taking into account the local context of a word and information recursively produced from the entire text. Another example of an unsupervised text summarisation technique can be found in [17], where an improved version of a linear time algorithm for lexical chain computation is proposed, together with a method for evaluating lexical chains as an intermediate step. An alternative approach, and that advocated by CGUSD, is to generate a text summarisation classifier using an alternative pre-labelled data set.…”
Section: Related Work (mentioning)
confidence: 99%
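The graph-based sentence ranking that this statement attributes to [16] can be sketched in a simplified, hypothetical form (not that paper's actual implementation): sentences are nodes, edges are weighted by word overlap, and a PageRank-style iteration scores each node.

```python
# Hypothetical simplified sketch of graph-based sentence ranking:
# nodes are sentences, edge weights are word-overlap counts, and
# scores are computed by a damped PageRank-style iteration.
def rank_sentences(sentences, damping=0.85, iters=50):
    words = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Edge weight between sentences i and j: number of shared words.
    sim = [[len(words[i] & words[j]) if i != j else 0 for j in range(n)]
           for i in range(n)]
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            incoming = 0.0
            for j in range(n):
                if sim[j][i]:
                    # Each neighbour j passes on a share of its score,
                    # proportional to the weight of the edge j -> i.
                    incoming += sim[j][i] / sum(sim[j]) * scores[j]
            new.append((1 - damping) + damping * incoming)
        scores = new
    # Sentence indices, highest-scoring first.
    return sorted(range(n), key=lambda i: -scores[i])
```

A sentence with no vocabulary overlap with the rest of the text receives only the baseline (1 − damping) score and sinks to the bottom of the ranking, which is the intuition behind extracting the top-ranked sentences as a summary.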
“…There also exist proposals for a chaining algorithm in linear time (Silber, McCoy, 2002). However, this approach cannot be applied to the Wikipedia as it misses the rich type system of WordNet utilized by Silber & McCoy.…”
Section: Lexical Content Analysis (mentioning)
confidence: 99%