1997
DOI: 10.1016/s0306-4573(96)00043-x

Performance standards and evaluations in IR test collections: Cluster-based retrieval models

Cited by 87 publications (41 citation statements)
References 35 publications
“…Most clustering research in IR is related to cluster search effectiveness [Griffiths et al., 1986; Willett, 1988; Burgin, 1995; Shaw et al., 1997; Schütze, Craig, 1997]. The research on efficiency aspects of cluster searches is limited.…”
Section: Previous and Related Work
confidence: 99%
“…As evaluation metric, we used the F1 score [39], i.e., the harmonic mean of precision and recall, using the following formula:…”
Section: Results
confidence: 99%
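The formula itself is elided in the excerpt above; for reference, the standard F1 definition (the harmonic mean of precision P and recall R, a textbook identity rather than text quoted from the citing paper) is:

F_1 = \frac{2 \cdot P \cdot R}{P + R} = \left(\frac{P^{-1} + R^{-1}}{2}\right)^{-1}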
“…The MED1033 database, composed of 1033 documents extracted from the National Library of Medicine's database, was used for other analyses. Because of the design of this database, it is easier for systems using MED1033 to retrieve documents labeled as relevant (Kwok, 1990; Shaw, Burgin, & Howell, 1997). The MED1033 Tagged database is part-of-speech tagged as with the CF POS Tagged database.…”
Section: Results
confidence: 99%