2006
DOI: 10.1145/1119766.1119769
Image retrieval and perceptual similarity

Abstract: Simple, low-level visual features are extensively used for content-based image retrieval. Our goal was to evaluate an image-indexing system based on some of the known properties of the early stages of human vision. We quantitatively measured the relationship between the similarity order induced by the indexes and perceived similarity. In contrast to previous evaluation approaches, we objectively measured similarity both for the few best-matching images and also for relatively distinct images. The results show t…

Cited by 48 publications (38 citation statements)
References 35 publications
“…However, even with the apparent limitations of concept-based image retrieval, such as the cost (Layne, 1994) and the variable precision of manual annotations, content-based image retrieval systems have failed thus far to bridge the semantic gap between indexer and end user (Datta, Li, & Wang, 2005; Enser, 2008; Lew, Sebe, Djeraba, & Jain, 2006; Neumann & Gegenfurtner, 2006; Tjondronegoro et al., 2009). Because words are the only means for describing the semantic content of images, at least for the foreseeable future, image retrieval systems will depend on text annotations, especially for image collections on social networking and photo-sharing services such as Flickr, where users add textual tags and descriptions to images.…”
Section: Introduction
confidence: 99%
“…Estimated visual similarity has been shown to correspond well to subjectively perceived similarity [1]. Thus, initially, we did not additionally validate the similarity by having human observers make judgements between images from different categories.…”
Section: Methods
confidence: 99%
“…For example, Neumann and Gegenfurtner [7] studied how perceptual similarity factored into image-retrieval algorithm and system design. Sanchez et al. [8] studied the modeling of subjectivity in the visual perception of orientation in image retrieval.…”
Section: Related Work
confidence: 99%