Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003
DOI: 10.1145/860435.860445
Quantitative evaluation of passage retrieval algorithms for question answering

Cited by 176 publications (87 citation statements)
References 11 publications
“…In this study, we employed five high-performance yet portable ranking models, namely, Okapi BM-25 [27], [42], [46], INQUERY [33], [34], language model (LM) [28], [29], and SiteQ's [35] approaches that were implemented in the Lemur toolkit [51] (with the exception of SiteQ's method for comparison.) Our previous method (suffix-tree-based [31]) was also included.…”
Section: B. Experimental Results
Mentioning confidence: 99%
“…Tellex et al [42] had compared seven passage retrieval algorithms such as BM-25 [27], and SiteQ [35] for the TREC-Q/A task; however they excluded two ad-hoc methods which needed human-generated patterns and ontology. They reported that the term weighting methods such as BM-25 [27] were slightly worse than the SiteQ's approach [35] for integration with external resources such as named entity recognizers, WordNet, and thesauruses.…”
Section: Finding Optimal String Patterns for Weighting
Mentioning confidence: 99%
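The excerpt above contrasts term-weighting methods such as BM-25 with SiteQ's approach. For reference, a minimal sketch of Okapi BM-25 document scoring follows; the function name, data layout, and the parameter defaults (k1=1.2, b=0.75 are common textbook values) are illustrative assumptions, not details taken from the paper.

```python
import math

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len,
               k1=1.2, b=0.75):
    """Score one document for a query with Okapi BM-25.

    doc_freqs maps each term to the number of documents containing it.
    k1 and b are the usual free parameters; defaults here are common
    choices, not values reported in the cited work.
    """
    score = 0.0
    doc_len = len(doc_terms)
    for term in set(query_terms):
        tf = doc_terms.count(term)  # term frequency in this document
        if tf == 0:
            continue
        df = doc_freqs.get(term, 0)
        # Smoothed idf; the +1 keeps the value positive for frequent terms.
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1)
        norm = tf + k1 * (1 - b + b * doc_len / avg_doc_len)
        score += idf * tf * (k1 + 1) / norm
    return score
```

Documents are then ranked by this score; a passage retriever built on BM-25 applies the same formula to passage-sized units instead of whole documents.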
“…For example, instead of returning documents in response to a query one can return passages that supposedly contain pertaining information (Denoyer et al 2001; Allan 2003; Jiang and Zhai 2004; Murdock and Croft 2005; Wade and Allan 2005). Retrieving passages is also common in work on question answering (Corrada-Emmanuel et al 2003; Lin et al 2003; Tellex et al 2003; Hussain 2004; Zhang and Lee 2004; Otterbacher et al 2005) wherein answers are being extracted (or compiled) from passages that are deemed relevant to the question at hand. While we demonstrate the merits in using our proposed passage language model for ad hoc document retrieval, it can also potentially be used in any task that calls for passage retrieval.…”
Section: Related Work
Mentioning confidence: 99%
“…Tellex et al [4] have made a thorough quantitative evaluation of passage retrieval algorithms. The MultiText [5] system uses a density-based algorithm that retrieves short passages containing query terms with high idf values.…”
Section: Related Work
Mentioning confidence: 99%
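The excerpt above describes a density-based algorithm that favors short passages packed with high-idf query terms. A simplified sketch of that idea follows; the window size, the score (summed idf of matched terms divided by window length), and all names are illustrative assumptions in the spirit of such algorithms, not the exact MultiText formula.

```python
def density_passage_score(window, query_idf):
    """Score a passage window by the density of high-idf query terms.

    query_idf maps each query term to its idf value. The score rewards
    windows where many high-idf query terms appear relative to the
    window's length, so shorter, denser passages score higher.
    """
    matched = [query_idf[t] for t in window if t in query_idf]
    if not matched:
        return 0.0
    return sum(matched) / len(window)

def best_passage(doc_terms, query_idf, window_size=20):
    """Slide a fixed-size window over the document; return (start, score)
    of the highest-scoring passage."""
    best_start, best_score = 0, 0.0
    for i in range(max(1, len(doc_terms) - window_size + 1)):
        score = density_passage_score(doc_terms[i:i + window_size], query_idf)
        if score > best_score:
            best_start, best_score = i, score
    return best_start, best_score
```

In practice such systems often use variable-length windows and more elaborate density weighting; the fixed window here only illustrates the core trade-off between term coverage and passage length.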