Monographs in Computer Science
DOI: 10.1007/0-387-21821-1_35
Retrieval System Models: What’s New?

Abstract: In the postwar development of computing, most people thought of computers as machines for numerical applications. But some saw the potential for automatic text processing tasks, notably translation and document indexing and searching, even though words seemed much messier as data than numbers. For Roger, as one of these early researchers, building systems for language processing was both intellectually challenging and practically useful, and in the late 1950s he began to work on document retrieval (Needham 196…

Cited by 2 publications (2 citation statements)

References 8 publications
“…Here we consider explicitly adding new terms to the existing query Q, to form Q', not simply replacing one term set by another. Candidate expansion terms are defined by their offer weights OW, and the terms in Q' by their combined iterative weights CIW [4]. Table 3 gives the performance for BRF when adding 3 new terms drawn from the top 10 documents.…”
Section: Query Expansion
confidence: 99%
“…Terms need to be scored and ranked according to their association with the relevance of the documents. We used Robertson's Offer Weight method [2] to score the terms contained in title or abstract fields of documents assumed or judged relevant. The term score is given by the following formula:…”
Section: Query Expansion, Relevance and Pseudo-relevance Feedback
confidence: 99%
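The formula itself is truncated in the excerpt above. As a hedged sketch, assuming "Offer Weight" refers to Robertson's OW = r · RW, where RW is the Robertson/Spärck Jones relevance weight in its usual 0.5-smoothed form, the term scoring and blind-relevance-feedback expansion described in these excerpts (rank candidate terms from the pseudo-relevant set, add the top few new terms to Q to form Q') could look like the following; the helper names and statistics format here are illustrative, not from either cited paper:

```python
import math

def relevance_weight(r, n, R, N):
    """Robertson/Sparck Jones relevance weight with 0.5 smoothing.
    r = relevant docs containing the term, n = docs containing the term,
    R = number of (pseudo-)relevant docs, N = docs in the collection."""
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

def offer_weight(r, n, R, N):
    """Offer weight OW = r * RW, used to rank candidate expansion terms."""
    return r * relevance_weight(r, n, R, N)

def expand_query(query_terms, term_stats, R, N, k=3):
    """Blind relevance feedback: treat the top-R retrieved documents as
    relevant, score candidate terms not already in the query by OW, and
    append the k best to form the expanded query Q'.
    term_stats maps term -> (r, n)."""
    candidates = {t: offer_weight(r, n, R, N)
                  for t, (r, n) in term_stats.items()
                  if t not in query_terms}
    best = sorted(candidates, key=candidates.get, reverse=True)[:k]
    return list(query_terms) + best
```

With R = 10 and k = 3 this matches the setup in the first excerpt (3 new terms drawn from the top 10 documents); the combined iterative weights (CIW) used to weight the terms of Q' are a separate step not sketched here.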