2000
DOI: 10.1117/12.411898

Benchmark for image retrieval using distributed systems over the Internet: BIRDS-I

Abstract: Comparing the performance of CBIR (Content-Based Image Retrieval) algorithms is difficult. Because researchers typically evaluate on private data sets, comparisons between CBIR algorithms developed by different groups remain contentious. Moreover, the performance of CBIR algorithms is usually measured on an isolated, well-tuned PC or workstation. In a real-world environment, however, the CBIR algorithm would be only a minor component among the many interacting components needed to build a useful CBIR application, e.g., a Web-based appli…

Cited by 24 publications (20 citation statements)
References 11 publications
“…The dataset used consists of 1875 images taken from the Benchathlon dataset [31]. The dataset includes typical consumer photographs showing a very different distribution of concepts with respect to the dataset used to train the classifiers.…”
Section: Methods (citation type: mentioning; confidence: 99%)
“…More automatic methods typically involve having sets of images tagged with high-level concepts (e.g., sky, grass), and retrieval is evaluated based on those labels [45,46,47], making performance evaluation similar to that in text retrieval [39]. The Benchathlon project proposes providing much more detailed and publicly available keywords of images using a controlled vocabulary [23,37,1]. A problem with both these approaches is that they are only indirectly connected to the task that they are trying to measure.…”
Section: Evaluation of Image Retrieval Algorithms (citation type: mentioning; confidence: 99%)
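
To make the label-based evaluation described in the quote above concrete, here is a minimal sketch of scoring a ranked retrieval list against concept tags. The data and function are illustrative assumptions, not taken from the cited papers:

```python
# Hypothetical sketch of label-based retrieval evaluation: each image
# carries concept tags, and a retrieved image counts as relevant when
# it is tagged with the query's target concept.

def precision_at_k(ranked_ids, labels, target_concept, k=10):
    """Fraction of the top-k retrieved images tagged with target_concept.

    ranked_ids     -- image ids ordered by retrieval score (best first)
    labels         -- dict mapping image id -> set of concept tags
    target_concept -- concept the query is supposed to retrieve, e.g. "sky"
    """
    top_k = ranked_ids[:k]
    relevant = sum(1 for img in top_k if target_concept in labels.get(img, set()))
    return relevant / k

# Toy usage with made-up data:
labels = {
    "img1": {"sky", "grass"},
    "img2": {"grass"},
    "img3": {"sky"},
    "img4": {"building"},
}
ranking = ["img1", "img3", "img2", "img4"]
print(precision_at_k(ranking, labels, "sky", k=4))  # 0.5
```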
“…Evaluation benchmarks should therefore be designed to use sufficiently large data sets. For example, in Gunther and Beretta (2000), it was suggested that even an initial benchmark database should contain at least 10 000 images. The scalability requirement has also been long recognized in the IR community and TREC, for example, has always concentrated on retrieval from large test collections.…”
Section: What to Evaluate? (citation type: mentioning; confidence: 99%)
“…The aim of the initiative is to set up a collaborative environment where standard CBIR evaluation protocols and frameworks can be developed. The leading principle in designing the benchmark has been to use a distributed client-server architecture, as described in Gunther and Beretta (2000). The purpose is to divide CBIR systems into separate client and server parts, in order to be able to measure CBIR performance over the Internet.…”
Section: MPEG (citation type: mentioning; confidence: 99%)
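
As a rough illustration of the client-server measurement idea described in the quote above, here is a minimal client-side sketch. The endpoint URL, payload format, and response handling are assumptions for illustration only, not part of the BIRDS-I specification:

```python
# Sketch of the client side of a distributed CBIR benchmark: the client
# submits a query image to a remote server and records end-to-end
# response time, so that network transfer and server load are included
# in the measurement along with the retrieval algorithm itself.

import time
import urllib.request

SERVER_URL = "http://cbir.example.org/query"  # hypothetical endpoint

def timed_query(image_path):
    """Send one query image and return (elapsed_seconds, response_bytes)."""
    with open(image_path, "rb") as f:
        payload = f.read()
    req = urllib.request.Request(SERVER_URL, data=payload,
                                 headers={"Content-Type": "image/jpeg"})
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = resp.read()  # ranked result list returned by the server
    return time.perf_counter() - start, body

# elapsed, results = timed_query("query.jpg")
```

Timing at the client rather than on the server is what lets the benchmark capture whole-system behavior over the Internet, which is the motivation described in the quoted passage.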