Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics - 1999
DOI: 10.3115/1034678.1034755
Automatic compensation for parser figure-of-merit flaws

Abstract: Best-first chart parsing utilises a figure of merit (FOM) to guide a parse efficiently by first attending to those edges judged better. In the past the FOM has usually been static; this paper shows that, with some extra information, a parser can compensate for FOM flaws that would otherwise slow it down. The resulting parser is faster than the prior best by a factor of 2.5, and the speedup is won with no significant decrease in parser accuracy.
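The abstract's core mechanism — an agenda of edges popped in order of a figure of merit rather than in discovery order — can be sketched as follows. This is a minimal toy illustration, not the paper's parser: the grammar, lexicon, and the choice of an edge's inside probability as its FOM are all assumptions made here for the example.

```python
import heapq

# Toy binary PCFG: (lhs, rhs1, rhs2) -> rule probability.
RULES = {
    ("S", "NP", "VP"): 0.9,
    ("NP", "Det", "N"): 0.8,
    ("VP", "V", "NP"): 0.7,
}
LEXICON = {"the": ("Det", 1.0), "dog": ("N", 0.5),
           "saw": ("V", 0.6), "cat": ("N", 0.5)}

def fom(edge):
    # Hypothetical FOM: just the edge's inside probability.  A real
    # FOM would also fold in an estimate of the outside context.
    return edge[3]

def best_first_parse(words):
    """Pop edges best-first until an S spans the whole input; return
    the number of pops as a rough proxy for parsing effort."""
    chart = {}                       # (start, end, label) -> prob
    agenda, counter = [], 0          # max-heap via negated FOM
    for i, w in enumerate(words):
        label, p = LEXICON[w]
        heapq.heappush(agenda, (-p, counter, (i, i + 1, label, p)))
        counter += 1
    popped = 0
    while agenda:
        _, _, edge = heapq.heappop(agenda)
        popped += 1
        start, end, label, prob = edge
        if (start, end, label) in chart:
            continue                 # a better edge already covers this
        chart[(start, end, label)] = prob
        if label == "S" and start == 0 and end == len(words):
            return popped
        # Combine the popped edge with adjacent edges in the chart.
        for (s2, e2, lab2), p2 in list(chart.items()):
            for (lhs, r1, r2), rp in RULES.items():
                if r1 == label and r2 == lab2 and end == s2:
                    new = (start, e2, lhs, prob * p2 * rp)
                    heapq.heappush(agenda, (-fom(new), counter, new))
                    counter += 1
                if r1 == lab2 and r2 == label and e2 == start:
                    new = (s2, end, lhs, p2 * prob * rp)
                    heapq.heappush(agenda, (-fom(new), counter, new))
                    counter += 1
    return popped

print(best_first_parse("the dog saw the cat".split()))  # → 9
```

With this FOM every edge built happens to be useful, so all nine edges are popped; the paper's point is that on real sentences a flawed FOM makes the agenda dwell on some spans while neglecting others, which the proposed compensation corrects.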

Cited by 6 publications (6 citation statements)
References 5 publications
“…There we showed a nice speedup of the parser versus the default, while maintaining accuracy levels. However, internal heuristics of the Charniak search, such as attention shifting (Blaheta and Charniak, 1999; Hall and Johnson, 2004), can make this accuracy/efficiency tradeoff somewhat difficult to interpret. Furthermore, one might ask whether O(N²) complexity is as good as can be achieved through the paradigm of using finite-state constraints to close chart cells.…”
Section: Introduction
confidence: 99%
“…Using simple retrieval and alignment operations, the model takes advantage of the statistics of word use. Unlike existing work (7,8,10), it need make no a priori commitment to particular grammars, heuristics, or sets of semantic roles, and it does not require an annotated corpus on which to train.…”
Section: Results
confidence: 99%
“…In recent years, there have been a number of attempts to build systems capable of extracting propositional information from sentences (7)(8)(9).…”
confidence: 99%
“…This parser is implemented by transforming the grammar into a binary one, in which every rule is unary or binary. Blaheta and Charniak (1999) achieve a further improvement in performance, with very little decrease in accuracy. This improvement is based on the observation that parsers based on FOMs tend to spend too much time in one part of the sentence, finding multiple parses for the same substring, while other parts of the sentence are often ignored in the meantime.…”
Section: Introduction
confidence: 89%
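The observation in the last excerpt — that an FOM-driven agenda can dwell on one substring while neglecting the rest of the sentence — suggests compensating the FOM with a record of where the parser has already worked. The sketch below is a hypothetical illustration of that idea, not the authors' actual scheme: the geometric `penalty` discount and the per-span pop counter are assumptions made here for the example.

```python
from collections import Counter

def compensated_fom(edge, base_fom, pops_by_span, penalty=0.9):
    # Discount an edge's FOM geometrically in the number of times its
    # span has already been popped from the agenda, steering work
    # toward neglected parts of the sentence.
    start, end, _label, _prob = edge
    return base_fom(edge) * penalty ** pops_by_span[(start, end)]

def base_fom(edge):
    return edge[3]               # toy raw FOM: the edge's probability

pops = Counter({(0, 2): 10})     # span (0, 2) popped 10 times already
busy = (0, 2, "NP", 0.5)         # high-probability edge in the busy span
idle = (2, 4, "VP", 0.3)         # lower-probability edge elsewhere

print(base_fom(busy) > base_fom(idle))         # raw FOM prefers the busy span
print(compensated_fom(busy, base_fom, pops) <
      compensated_fom(idle, base_fom, pops))   # compensated FOM prefers the idle one
```

Both comparisons print `True`: the raw FOM would keep reparsing the busy span, while the compensated FOM promotes the neglected edge instead.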