2010 Data Compression Conference
DOI: 10.1109/dcc.2010.53
Inverted Index Compression for Scalable Image Matching


Cited by 43 publications (27 citation statements)
References 19 publications (30 reference statements)
“…In recent years, visual search systems have been developed for applications such as product recognition [1,2] and landmark recognition [3]. In these systems, local image features [4,5,6] are extracted from images taken with a camera-phone and are matched to a large database using visual word indexing techniques [7,8]. Although current visual search technologies have reached a certain level of maturity, they have largely ignored a class of informative features often observed in images: text.…”
Section: Introduction
Mentioning confidence: 99%
“…It largely outperforms directly sending the compact local descriptors (more than 5 KB in reported works). Their successive work in Chen et al (2010) further compressed the inverted indexing structure of visual vocabulary (Nister and Stewenius 2006) with arithmetic coding to reduce the memory and storage cost to maintain the scalable visual search system in server(s).…”
Section: State-of-the-art Mobile Landmark Search Framework
Mentioning confidence: 99%
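The statement above notes that Chen et al. (2010) compress the inverted index of the visual vocabulary to cut memory and storage costs on the server. A minimal Python sketch of the general idea: store each visual word's sorted posting list as first-order gaps and pack the gaps with variable-byte coding. This uses variable-byte coding as a simpler stand-in for the arithmetic coding the paper actually applies, and the image IDs below are illustrative, not taken from the paper.

```python
def delta_encode(postings):
    """Turn a sorted list of image IDs into gaps; small gaps compress well."""
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

def varint_encode(gaps):
    """Variable-byte coding: 7 payload bits per byte, high bit marks
    the final byte of each value."""
    out = bytearray()
    for g in gaps:
        while g >= 128:
            out.append(g & 0x7F)  # low 7 bits, continuation implied
            g >>= 7
        out.append(g | 0x80)      # terminating byte
    return bytes(out)

# Hypothetical posting list for one visual word:
postings = [3, 7, 21, 22, 160]
encoded = varint_encode(delta_encode(postings))
# 6 bytes here, versus 20 bytes as raw 4-byte integers.
```

Decoding reverses the two steps: unpack varints, then take a running sum of the gaps to recover the original IDs.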
“…8(b) further shows the SS-0.5Million Evaluation in the 0.5 million Flickr data set. Similarly, there is a large margin from the learning codebook [16] [baseline (3)] and our final approach to the VT [1] [baseline (1)], adaptive vocabulary [16] [baseline (3)], and our alternative approaches [baseline (7)]. Note that baseline (3) outperforms our alternative approach [baseline (7)] with only visual discriminability embedding.…”
Section: ND Image Retrieval in UKBench
Mentioning confidence: 99%
“…Such a high dimensionality usually introduces obvious computational cost in both processing time (to match visual descriptors online) and storage space (to maintain the search model in the memory). This is extremely crucial for the state-ofthe-art mobile visual search scenario [6], [7], where mobile devices directly extract the BoW histogram and transmit over the wireless link. On the contrary, since the number of local features extracted from each image is limited (typically hundreds), each image is represented as a sparse histogram of nonzero codewords.…”
Mentioning confidence: 99%
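The statement above contrasts a dense vocabulary-sized vector with a sparse histogram over only the nonzero codewords, since an image yields just hundreds of local features. A minimal Python sketch of that sparse bag-of-words representation; the visual-word IDs are made up for illustration:

```python
from collections import Counter

def sparse_bow(visual_word_ids):
    """Represent an image by counts of its nonzero visual words only,
    instead of a dense vector over the whole vocabulary."""
    return Counter(visual_word_ids)

# Hypothetical quantized local features for one image:
words = [12, 997, 12, 40312, 997, 12]
hist = sparse_bow(words)
# Only 3 entries are stored, however large the vocabulary is.
```

Storage then scales with the number of distinct words per image rather than with the vocabulary size, which is what makes million-word vocabularies practical on a server and BoW transmission feasible over a wireless link.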