2011
DOI: 10.1016/j.ins.2011.06.025
Halfway through the semantic gap: Prosemantic features for image retrieval

Cited by 28 publications (7 citation statements, all classified as mentioning; citing statements published between 2012 and 2022).
References 41 publications.

“…One possibility, which we shall not analyze in this paper, is that of a prosemantic space, in which each dimension in the reduced dimensionality space is the output of a classifier, trained to recognize a specific category of images [8,9]. Here, we shall consider the individual feature spaces as given, and use the query to transform each one into a dimension of the semantic space.…”
Section: Semantic Partition (mentioning)
confidence: 99%
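
The prosemantic space described in the statement above can be made concrete with a small sketch: one classifier is trained per image category, and each classifier's output becomes one dimension of the reduced-dimensionality feature vector. The following Python sketch assumes scikit-learn style classifiers trained on precomputed low-level features; the category list, the choice of logistic regression, and all function names are illustrative assumptions, not details taken from the paper.

# A minimal sketch of a prosemantic feature extractor; categories and the
# classifier choice are hypothetical, not the ones used in the cited paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["beach", "forest", "city", "portrait"]  # illustrative categories

def train_category_classifiers(features, labels):
    """Train one binary classifier per category on low-level image features."""
    classifiers = {}
    for category in CATEGORIES:
        y = (labels == category).astype(int)  # one-vs-rest target
        classifiers[category] = LogisticRegression(max_iter=1000).fit(features, y)
    return classifiers

def prosemantic_vector(classifiers, x):
    """Each dimension of the output is one classifier's score for its category."""
    x = x.reshape(1, -1)
    return np.array([classifiers[c].predict_proba(x)[0, 1] for c in CATEGORIES])
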
“…These attribute detectors are then run on new images for high level recognition [46,47]. Researchers have explored creating a set (or bank) of detectors pretrained on objects such as Object Banks [27], an ontology of abstract concepts such as Classemes [45] or scene attributes [8,37].…”
Section: Related Work (mentioning)
confidence: 99%
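
The detector-bank approach mentioned above can be sketched along similar lines: a fixed set of pretrained detectors is run on a new image, and their scores form the representation fed to a high-level recognizer. This is only a sketch in the spirit of Object Bank and Classemes; the scikit-learn style detector interface and the linear SVM on top are assumptions, not the cited systems' actual implementations.

# A minimal sketch of high-level recognition on detector-bank features,
# assuming detectors expose a scikit-learn style decision_function.
import numpy as np
from sklearn.svm import LinearSVC

def detector_bank_features(image_features, detectors):
    """Run every pretrained detector on one image; the scores form the feature."""
    return np.array([d.decision_function(image_features.reshape(1, -1))[0]
                     for d in detectors])

def train_high_level_recognizer(bank_features, scene_labels):
    """Train a scene classifier on top of the detector-bank representation."""
    return LinearSVC().fit(bank_features, scene_labels)
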
“…Finally, the recognition scores are packed together as prosemantic features for indexing in an image retrieval system [4]. When a user wants to search for semantically similar images, low-level image features alone may not suffice. For example, a user may want to query "bear on a river".…”
Section: Introduction (mentioning)
confidence: 99%
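
Once prosemantic vectors have been computed for a collection (for instance with the extractor sketched earlier), retrieval reduces to nearest-neighbor search in the prosemantic space. The sketch below assumes query-by-example with cosine similarity; both choices are illustrative and not prescribed by the cited statement.

# A minimal retrieval sketch over precomputed prosemantic vectors; the
# cosine-similarity ranking is an assumption, not the paper's exact method.
import numpy as np

def build_index(vectors):
    """Stack per-image prosemantic vectors into a dense index matrix."""
    return np.vstack(vectors)

def search(index, query_vec, k=5):
    """Rank images by cosine similarity between prosemantic vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q
    return np.argsort(-scores)[:k]  # indices of the k best matches
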