2004
DOI: 10.1023/b:visi.0000004833.39906.33

Semantic-Friendly Indexing and Quering of Images Based on the Extraction of the Objective Semantic Cues

Cited by 57 publications (44 citation statements)
References 40 publications
“…We think that our image-search results compare well to theirs (e.g. figure 8 (Heidemann, 2005); figures 12-14 (Mojsilovic et al, 2004)), yet our results are obtained without the use of any color information. An important constraint of such an applied image search is that it has to occur fast.…”
Section: System Performance (supporting)
Confidence: 66%
“…(Heidemann, 2005; Wang et al, 2001; Mojsilovic et al, 2004; Vogel and Schiele, 2007). These studies use traditional techniques such as image segmentation, template matching and interest points.…”
Section: System Performance (mentioning)
Confidence: 99%
“…So, the relative capacity of BAM recalling pattern pairs is 96 × 64 × 0.1998 = 1227 pairs [20]. On the other hand, according to the statement in [1] that about 40 to 240 lexicons need to be defined to perfectly detect the concepts carried by images, the capacity of the constructed pixel-based BAM is quite sufficient for calculating the recalling values r_i^n, which are used to generate the associative values for given lexicons by (9), if the number of given lexicons for a domain is in the range of (40, 240).…”
Section: Calculating Associative Values (mentioning)
Confidence: 99%
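The capacity figure quoted in this statement is a simple product; a minimal sketch reproducing the arithmetic, with the 0.1998 relative-capacity factor and the 96 × 64 pattern size taken as given from the excerpt (not derived here), might look like this:

```python
# Rough arithmetic from the excerpt above: relative BAM capacity taken
# as 0.1998 of the pattern dimensionality (96 x 64 pixels). Both the
# factor and the layer size are quoted values, not derived here.
pattern_width, pattern_height = 96, 64
relative_capacity_factor = 0.1998

recallable_pairs = int(pattern_width * pattern_height * relative_capacity_factor)
print(recallable_pairs)  # -> 1227, matching the figure in the excerpt

# The excerpt's argument: 1227 recallable pairs comfortably covers the
# 40-240 lexicon entries said to be needed per domain.
assert recallable_pairs >= 240
```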
“…However, a user query such as "find images with 10-30% of sky" is not a natural way to present the semantics of the images. In Mojsilovic et al [9], a semantic-friendly query language for searching diverse collections of images was proposed. However, as with Vogel and Schiele [8], a query language with expressions such as (nature <10 and contrast >800) is not easy to use for modeling the categories.…”
Section: Introduction (mentioning)
Confidence: 99%
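For illustration only, the sketch below shows how threshold predicates of the form (nature <10 and contrast >800) could be evaluated against per-image semantic-cue scores. The cue names, score values, and the matches helper are hypothetical assumptions; this is not the query language actually proposed in [9].

```python
# Minimal sketch: evaluating simple threshold predicates against
# per-image semantic-cue scores. All names and values are hypothetical.
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}

def matches(cues, predicates):
    """Return True if an image's cue dict satisfies every (cue, op, threshold) predicate."""
    return all(OPS[op](cues[name], threshold) for name, op, threshold in predicates)

# Hypothetical cue scores for two images.
images = {
    "img_001": {"nature": 4.2, "contrast": 950.0},
    "img_002": {"nature": 35.0, "contrast": 420.0},
}

# Encodes the example query "(nature <10 and contrast >800)".
query = [("nature", "<", 10), ("contrast", ">", 800)]
hits = [name for name, cues in images.items() if matches(cues, query)]
print(hits)  # -> ['img_001']
```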
“…However, humans use semantic concepts to identify and describe colors [16]. Hence, we adopt both numeric features directly extracted from the image such as Region_Average_Color and semantic features like Region_Color_Name mapped from the visual features as described in [20].…”
Section: B. Feature Extraction and Analysis (mentioning)
Confidence: 99%
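As a rough illustration of mapping a numeric Region_Average_Color onto a semantic Region_Color_Name, a naive nearest-prototype lookup is sketched below. The prototype colors and the distance measure are arbitrary assumptions for the sketch and not the mapping described in [20].

```python
# Illustrative only: pick the color name whose RGB prototype is closest
# (squared Euclidean distance) to the region's average color.
COLOR_PROTOTYPES = {
    "red": (200, 30, 30),
    "green": (40, 160, 60),
    "blue": (40, 70, 200),
    "sky blue": (135, 206, 235),
    "white": (245, 245, 245),
    "black": (15, 15, 15),
}

def region_color_name(region_average_color):
    """Map a numeric average RGB color to the nearest semantic color name."""
    return min(
        COLOR_PROTOTYPES,
        key=lambda name: sum(
            (c - p) ** 2 for c, p in zip(region_average_color, COLOR_PROTOTYPES[name])
        ),
    )

print(region_color_name((130, 200, 240)))  # -> 'sky blue'
```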