Proceedings of the ACL-02 Workshop on Automatic Summarization - 2002
DOI: 10.3115/1118162.1118163

Using maximum entropy for sentence extraction

Cited by 97 publications (55 citation statements) · References 10 publications
Citation types: 1 supporting, 54 mentioning, 0 contrasting
Citing publications span 2007–2022

Citation statements, ordered by relevance:
“…We hypothesise that this is due to the very carefully constructed feature set optimised for naïve Bayes. Results from Osborne (2002), where maximum entropy was shown to perform much better than naïve Bayes when features are highly dependent, support this hypothesis. Our results also support this hypothesis.…”
Section: Results (mentioning)
Confidence: 77%
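The dependence argument in this statement is easy to see numerically. Below is a minimal Python sketch, with invented numbers that come from neither cited paper, showing how naïve Bayes double-counts evidence when two features are copies of each other; a log-linear model can instead learn a smaller weight for each copy so their combined effect matches a single feature.

```python
# Toy illustration of why dependent features hurt naive Bayes.
# All numbers here are invented for this sketch.

def nb_posterior(prior, likelihood_ratios):
    """Naive Bayes posterior via odds: posterior odds = prior odds
    times the product of per-feature likelihood ratios. The product
    form is exactly the feature-independence assumption."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# One informative feature with likelihood ratio 3:
print(nb_posterior(0.5, [3.0]))       # 0.75
# The same evidence seen through two perfectly dependent copies of
# that feature: naive Bayes multiplies the ratio in twice and overshoots.
print(nb_posterior(0.5, [3.0, 3.0]))  # 0.90
```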
“…We hypothesise that this is due to the very carefully constructed feature set optimised for naïve Bayes. Results from Osborne (2002), where maximum entropy was shown to perform much better than naïve Bayes when features are highly dependent, support this hypothesis. Our results also support this hypothesis.…”
Section: Resultsmentioning
confidence: 77%
“…Log-Linear Model [5]: Osborne used log-linear models, noting that existing approaches assumed feature independence, and showed that these models produce better extracts than a naïve Bayes model.…”
Section: Classification of Automatic Text Summarization (mentioning)
Confidence: 99%
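As a concrete illustration of the kind of model this statement refers to, here is a minimal log-linear (maximum-entropy) sentence-extraction classifier in Python. The feature names, optimiser, and data are illustrative assumptions only; Osborne (2002) uses a richer feature set, and this gradient-ascent loop is a stand-in, not the paper's training procedure.

```python
import math

def features(sentence_info):
    """Map a sentence to binary features. These feature names are
    invented for illustration; they are NOT Osborne's feature set."""
    f = {}
    if sentence_info["position"] == 0:
        f["is_first_sentence"] = 1.0
    if sentence_info["length"] > 20:
        f["is_long"] = 1.0
    if sentence_info["has_cue_phrase"]:
        f["has_cue_phrase"] = 1.0
    # Features may overlap or depend on each other; unlike naive Bayes,
    # a log-linear model does not assume they are independent.
    return f

def prob_extract(weights, f):
    """P(extract = 1 | sentence) = sigmoid(sum_i w_i * f_i)."""
    score = sum(weights.get(k, 0.0) * v for k, v in f.items())
    return 1.0 / (1.0 + math.exp(-score))

def train(data, epochs=100, lr=0.1):
    """Maximum-likelihood training by simple gradient ascent
    (a sketch; not the optimiser used in the paper)."""
    weights = {}
    for _ in range(epochs):
        for sent, label in data:
            f = features(sent)
            p = prob_extract(weights, f)
            for k, v in f.items():
                # Gradient of the log-likelihood: (label - p) * f_k
                weights[k] = weights.get(k, 0.0) + lr * (label - p) * v
    return weights

# Toy usage: one extract-worthy "sentence", one not.
data = [
    ({"position": 0, "length": 25, "has_cue_phrase": True}, 1),
    ({"position": 5, "length": 8, "has_cue_phrase": False}, 0),
]
w = train(data)
print(prob_extract(w, features(data[0][0])))  # close to 1
```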
“…Weight learning methods include a log-linear weight learning method (Osborne, 2002), a conjugate gradient descent search method (Fattah and Ren, 2009), a Mathematical Regression (MR) model (Binwahlan et al., 2009), Particle Swarm Optimization (PSO) (Dehkordi et al., 2009), and Genetic Algorithm (GA) models (Bossard and Rodrigues, 2011; Suanmali et al., 2011). Besides optimizing feature weights, the impact of combining different features has been investigated by Hariharan (2010) for multi-document summarization. In his study, the author showed that term frequency weight combined with position and node weight features yields significantly better results.…”
Section: Authors (mentioning)
Confidence: 99%
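The methods this statement lists all share one shape: score each sentence as a weighted combination of features, then search for good weights. The Python sketch below makes that shape explicit. Crude random search stands in for the cited PSO/GA/conjugate-gradient optimisers, the three features follow Hariharan (2010)'s combination (term frequency, position, node weight), and their exact definitions and the objective are assumptions made for illustration.

```python
import random

def sentence_score(feats, weights):
    """Linear combination of sentence features."""
    return sum(w * f for w, f in zip(weights, feats))

def summary_quality(weights, docs_with_gold):
    """Stand-in objective: fraction of documents whose gold sentence
    is ranked first. A real system would score against ROUGE or similar."""
    hits = 0
    for sentences, gold_idx in docs_with_gold:
        ranked = max(range(len(sentences)),
                     key=lambda i: sentence_score(sentences[i], weights))
        hits += (ranked == gold_idx)
    return hits / len(docs_with_gold)

def random_search(docs_with_gold, iters=1000, seed=0):
    """Crude global search over feature weights, standing in for the
    PSO/GA/gradient optimisers cited above."""
    rng = random.Random(seed)
    best_w, best_q = None, -1.0
    for _ in range(iters):
        w = [rng.random() for _ in range(3)]  # tf, position, node weight
        q = summary_quality(w, docs_with_gold)
        if q > best_q:
            best_w, best_q = w, q
    return best_w, best_q

# Toy data: each doc is ([(tf, position, node_weight), ...], gold index).
docs = [
    ([(0.9, 1.0, 0.3), (0.2, 0.5, 0.1)], 0),
    ([(0.1, 0.2, 0.0), (0.8, 0.9, 0.7)], 1),
]
print(random_search(docs))
```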