2000
DOI: 10.1007/3-540-40053-2_38

Evaluating the Performance of Content-Based Image Retrieval Systems

Cited by 9 publications (5 citation statements). References 7 publications.
“…As frequently pointed out by researchers in this field (e.g., see [21,61]), the image retrieval research community is using a wide variety of measures for evaluating image retrieval performance, in contrast to the text retrieval research community, which has long been using standard consolidated quality indicators, namely, the well known Precision and Recall measures (see, e.g., [19,25,26]). As pointed out for example by Smith in [61], there are many benefits in adopting Precision and Recall as the basis for image retrieval evaluation as well, not only because they are standard measures in information retrieval, but also because many of the existing measures for image retrieval can be computed by them.…”
Section: Experimental Evaluation (mentioning)
confidence: 99%
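As a point of reference for the quoted passage, below is a minimal sketch of how Precision and Recall are typically computed for a single query. The function name, variable names, and toy data are illustrative assumptions and are not taken from the cited paper.

# Minimal sketch: Precision and Recall for one query, as standardly
# defined in information retrieval. Names and toy data are assumptions.
def precision_recall(retrieved, relevant):
    """retrieved: ranked list of returned item ids; relevant: set of ground-truth ids."""
    hits = sum(1 for item in retrieved if item in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 10 images returned, 4 of the 5 relevant ones among them.
print(precision_recall([3, 17, 8, 42, 5, 11, 23, 9, 30, 2], {3, 8, 5, 9, 50}))  # (0.4, 0.8)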
“…While researchers in text retrieval have long been using a sophisticated set of tools for user-based evaluation, this does not yet apply to image retrieval [21]. In order to evaluate the effectiveness and the efficiency of VISTO, we use the two well-known measures Precision and Recall (see, e.g., [19,25,26,61]) and study the behavior of Precision at different values of Recall, that is, the so-called Precision-versus-Recall curves. The effectiveness of our system is demonstrated by the fact that these curves are always descending.…”
Section: Features of the System (mentioning)
confidence: 99%
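To illustrate the Precision-versus-Recall curve referred to in the quote, here is a small sketch that records precision at each recall level reached while walking down a ranked result list; the ranked list and relevance judgments are hypothetical, not data from VISTO.

# Sketch: the (recall, precision) points that make up a Precision-versus-Recall
# curve, collected at each rank where a relevant image is found.
def pr_curve(ranked, relevant):
    points, hits = [], 0
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / i))  # (recall, precision)
    return points

# Hypothetical ranking with 3 relevant items: precision falls as recall grows,
# which is the descending behavior the quoted passage describes.
for r, p in pr_curve(["a", "x", "b", "y", "c", "z"], {"a", "b", "c"}):
    print(f"recall={r:.2f}  precision={p:.2f}")
# recall=0.33 precision=1.00 / recall=0.67 precision=0.67 / recall=1.00 precision=0.60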
“…This measure is 0 for perfect performance, 0.5 for random performance, and approaches 1 as performance worsens. We use several measures based on precision and recall, despite criticism of precision and recall dating back to the 1960s [16] and, regarding their use in CBIR, in [3,7]. Precision and recall, especially in the form of the PR graph and of precision or recall at important cutoff points, are still the standard in text retrieval (TR) and are easy to interpret.…”
Section: Rank measure (mentioning)
confidence: 99%
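The behavior described in this statement (0 for perfect retrieval, roughly 0.5 for random retrieval, approaching 1 as results worsen) matches the normalized average rank commonly used in CBIR evaluation. The sketch below implements one commonly cited form of that measure as an assumption; it is not quoted from the paper under discussion.

# Sketch of a normalized average rank: subtract the best possible sum of ranks
# and normalize by collection size, so perfect retrieval scores 0, random
# retrieval scores about 0.5, and the worst case approaches 1.
def normalized_rank(ranks, n_collection):
    """ranks: 1-based ranks of the relevant images; n_collection: database size."""
    n_rel = len(ranks)
    ideal = n_rel * (n_rel + 1) / 2  # best possible sum of ranks: 1 + 2 + ... + n_rel
    return (sum(ranks) - ideal) / (n_collection * n_rel)

print(normalized_rank([1, 2, 3, 4], 1000))           # perfect retrieval -> 0.0
print(normalized_rank([997, 998, 999, 1000], 1000))  # worst case -> 0.996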
“…It is also important not to compare systems based on a single performance measure but on several measures, because different characteristics matter for different application areas. Koskela et al. [7] describe performance measures that quantify how close together clusters of images are in feature space based on their ranks. This can be used to compare different features and techniques.…”
Section: Introduction (mentioning)
confidence: 99%
“…There exists no single widely accepted performance assessment method that can be used for an objective and quantitative comparison between different CBIR systems based on different approaches [2]. Several single-valued measures have been proposed by researchers. Precision and recall are the most common evaluation measures used by researchers.…”
Section: Introduction (mentioning)
confidence: 99%