Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval 2005
DOI: 10.1145/1101826.1101862

Evaluation strategies for image understanding and retrieval

Abstract: We address evaluation of image understanding and retrieval of large-scale image data in the context of three evaluation projects. The first project is a comprehensive strategy for evaluating image retrieval algorithms and provides an open reference data set for doing so. The second project develops word prediction as a semantically relevant evaluation strategy, and applies it to the evaluation of image processing methods for semantic image analysis. The third project evaluates words for suitability of their vi…
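The second project scores annotation methods by how well their predicted words match human-supplied keywords. As a minimal sketch of such a word-prediction measure (the paper's exact scoring rule is not shown in this excerpt; per-image precision and recall here are an illustrative assumption):

    def word_prediction_score(predicted, ground_truth):
        """Per-image precision/recall of predicted keywords against
        human ground-truth annotations (illustrative assumption; the
        paper's exact measure is not given in this excerpt)."""
        predicted, ground_truth = set(predicted), set(ground_truth)
        if not predicted or not ground_truth:
            return 0.0, 0.0
        hits = len(predicted & ground_truth)
        return hits / len(predicted), hits / len(ground_truth)

    # Example: two of three predicted words match the ground truth.
    p, r = word_prediction_score({"sky", "water", "car"}, {"sky", "water", "boat"})
    # p == r == 2/3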

Cited by 5 publications (5 citation statements, 2006–2017). References 53 publications (47 reference statements).
“…Automatic image annotation refers to the task of assigning a few relevant keywords to an unannotated image to describe its visual content; the keywords are then indexed and used to retrieve images (Feng et al., 2004; Guillaumin et al., 2009; Jeon et al., 2003; Makadia et al., 2008; Yanai et al., 2005). These keywords are often derived from a well-annotated image collection, and the latter serves as training examples for automatic image annotation.…”
Section: Image Annotation and Retrieval (mentioning, confidence: 99%)
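To make the quoted definition concrete, here is a minimal sketch of the label-transfer idea behind nearest-neighbor annotators in the spirit of the Makadia et al. (2008) baseline cited above; the feature representation, distance, and function names are illustrative assumptions, not any cited paper's actual method or API:

    import numpy as np
    from collections import Counter

    def annotate(query_feat, train_feats, train_keywords, k=5, n_words=5):
        """Transfer the most frequent keywords from the k visually
        nearest training images to an unannotated query image
        (nearest-neighbor sketch; Euclidean distance over precomputed
        feature vectors is an assumption here)."""
        dists = np.linalg.norm(train_feats - query_feat, axis=1)
        neighbors = np.argsort(dists)[:k]
        votes = Counter(w for i in neighbors for w in train_keywords[i])
        return [w for w, _ in votes.most_common(n_words)]

The annotated training collection thus plays exactly the role the quote describes: a source of keywords to transfer to new images rather than a model to fit.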
“…Moreover, in image annotation research, the keywords are carefully selected and the number of keywords is often very small. For instance, the number of keywords selected in the commonly used image annotation datasets, such as Corel5K, IAPR TC-12, and ESP Game, ranges from 100 to 500 (Guillaumin et al., 2009; Jeon et al., 2003; Makadia et al., 2008; Yanai et al., 2005). The two larger datasets, Corel30K and PSU, containing 31K and 60K images, are annotated with 5,587 and 442 keywords, respectively (Carneiro et al., 2007).…”
Section: Image Annotation and Retrieval (mentioning, confidence: 99%)
“…They did use a larger set of image features and segmentation; however, we suspect that the differences can rather be attributed to corpus type. In fact, Yanai, Shirahatti, and Barnard (2005) noted that human evaluators rated images obtained via a keyword retrieval method higher than those obtained via image-based retrieval methods, which they relate to the importance of semantics for what humans regard as matching, and to the fact that pictorial semantics is hard to detect. Cai et al. (2004) use similar methods to rank visual search results.…”
Section: Comparison to Previous Work (mentioning, confidence: 99%)
“…The inefficiencies of CBIR systems were reported early in the literature. In [YSGB05] it is stated that "current image retrieval methods are well off the required mark". When compared to the performance of current text retrieval systems, CBIR systems have a much lower success rate at retrieving images relevant to a query.…”
Section: The Semantic and Sensory Gap (mentioning, confidence: 99%)