Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2004
DOI: 10.1145/1008992.1009025
Parsimonious language models for information retrieval

Cited by 159 publications (203 citation statements)
References 21 publications
“…We compare the performance of our model against the performance of a standard relevance-based language model (Rel LM). Our implementation of the Rel LM follows the description given in [7]. We also compare performance against a simple no feedback Lucene retrieval baseline [19].…”
Section: Pseudo-relevance Feedback (mentioning)
confidence: 99%
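The quoted baseline is a standard relevance-based language model used for pseudo-relevance feedback. As a rough illustration of that family of models (not the implementation described in [7] or in the citing paper), the sketch below estimates an RM1-style relevance model from the top-k documents of a toy corpus; the corpus, the Jelinek-Mercer smoothing weight, and k are illustrative assumptions.

```python
# A minimal sketch of an RM1-style relevance model estimated from
# pseudo-relevant feedback documents. The toy corpus, smoothing weight,
# and k are assumptions for illustration only.
from collections import Counter
import math

corpus = {
    "d1": "parsimonious language models for information retrieval".split(),
    "d2": "language models estimate term probabilities from documents".split(),
    "d3": "pseudo relevance feedback expands the original query".split(),
}
query = "language models".split()

# Collection statistics for Jelinek-Mercer smoothing (lambda is an assumption).
coll = Counter(w for doc in corpus.values() for w in doc)
coll_len = sum(coll.values())
LAM = 0.5

def p_w_d(w, doc):
    """Smoothed document language model P(w|D)."""
    tf = Counter(doc)
    return LAM * tf[w] / len(doc) + (1 - LAM) * coll[w] / coll_len

def query_likelihood(q, doc):
    """P(Q|D) under the unigram model (product of term probabilities)."""
    return math.exp(sum(math.log(p_w_d(w, doc)) for w in q))

# Rank documents and take the top-k as pseudo-relevant feedback.
k = 2
ranked = sorted(corpus.items(), key=lambda kv: query_likelihood(query, kv[1]), reverse=True)
feedback = ranked[:k]

# RM1: P(w|R) is proportional to the sum over feedback docs of P(w|D) * P(Q|D).
weights = {name: query_likelihood(query, doc) for name, doc in feedback}
norm = sum(weights.values())
vocab = {w for _, doc in feedback for w in doc}
rel_model = {w: sum(p_w_d(w, doc) * weights[name] for name, doc in feedback) / norm
             for w in vocab}

# Print the top expansion terms by relevance-model probability.
for w, p in sorted(rel_model.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{w}\t{p:.4f}")
```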
“…Relevance is a key concept in retrieval theory [7][14]. Among the formal relevance models that have been proposed, relevance-based language models are perhaps the most popular [9][10][11].…”
Section: Introduction (mentioning)
confidence: 99%
“…In order to provide a better than uniform prior probability of relevance, for a number of experiments the text from search topic descriptions serves as a query to match against the speech transcripts, using a language model [9]. In another version of the system the prior distribution is determined by the number of neighbours in the association matrix for each document, so that a document with many neighbours has a higher chance of being displayed.…”
Section: Interactive Experiments Setup (mentioning)
confidence: 99%
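The statement describes scoring speech transcripts against topic-description text with a language model and turning the scores into a non-uniform prior probability of relevance. A minimal sketch of that idea, assuming a Dirichlet-smoothed query-likelihood model and a toy set of transcripts (both assumptions, not details of the cited system), is shown below.

```python
# A minimal sketch: score documents with a Dirichlet-smoothed query-likelihood
# language model and normalize the scores into a prior over documents.
# The toy "transcripts", topic text, and mu are illustrative assumptions.
from collections import Counter
import math

transcripts = {
    "seg1": "interview about language technology and retrieval".split(),
    "seg2": "weather report for the coming week".split(),
    "seg3": "discussion of speech transcripts and retrieval models".split(),
}
topic = "speech retrieval language".split()

coll = Counter(w for doc in transcripts.values() for w in doc)
coll_len = sum(coll.values())
MU = 100.0  # Dirichlet prior mass (assumption)

def log_p_q_d(q, doc):
    """log P(Q|D) with Dirichlet smoothing."""
    tf = Counter(doc)
    return sum(math.log((tf[w] + MU * coll[w] / coll_len) / (len(doc) + MU)) for w in q)

# Convert query-likelihood scores into a prior distribution over documents.
scores = {name: math.exp(log_p_q_d(topic, doc)) for name, doc in transcripts.items()}
total = sum(scores.values())
prior = {name: s / total for name, s in scores.items()}

for name, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{name}\tprior={p:.3f}")
```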
“…Probabilistic approaches from text retrieval (e.g., [9,12]) gained less popularity among non-text CBIR researchers, with some notable exceptions [3,19,16]. One of the reasons lies in the difficulty of translating the lower-level features into probability values.…”
Section: Introduction (mentioning)
confidence: 99%
“…The language modeling framework was first introduced by Ponte and Croft [19] and has been followed by many research activities since then [1, 3, 4, 8, 10-12, 14-18, 20, 21, 23]. For example, query expansion techniques [3,11,12,17,18,21,23], pseudo-relevance feedback [4,11,12,17,18,21,23], parameter estimation methods [10], multi-word features [20], passage segmentation [16] and time constraints [14] have been proposed to improve the language modeling framework. Among them, query expansion with pseudo feedback can increase retrieval performance significantly [11,18,23].…”
Section: Introduction (mentioning)
confidence: 99%
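Since several of the techniques listed above revolve around query expansion with pseudo feedback, the sketch below shows the common interpolation step in which an expanded (feedback) term distribution is mixed with the original query model; the distributions and the weight alpha are illustrative assumptions rather than values from any cited paper.

```python
# A minimal sketch of query expansion with pseudo feedback: interpolate an
# expanded (feedback) query model with the original query model.
# The term distributions and ALPHA are illustrative assumptions.
original_query_model = {"language": 0.5, "models": 0.5}
feedback_model = {"language": 0.30, "models": 0.25, "retrieval": 0.25, "smoothing": 0.20}
ALPHA = 0.6  # weight on the original query model (assumption)

vocab = set(original_query_model) | set(feedback_model)
expanded = {w: ALPHA * original_query_model.get(w, 0.0)
               + (1 - ALPHA) * feedback_model.get(w, 0.0)
            for w in vocab}

# The expanded model remains a probability distribution because both inputs
# sum to one and the mixture weights sum to one.
for w, p in sorted(expanded.items(), key=lambda kv: -kv[1]):
    print(f"{w}\t{p:.3f}")
```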