2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2010.5539914
Large-scale image categorization with explicit data embedding

Cited by 121 publications
(91 citation statements)
References 12 publications
“…Actually, the square-rooted BoW histogram is the feature map of the Hellinger kernel k(x, y) = √(xy), which can improve retrieval performance [33,34]. To develop the CBIR system for contrast-enhanced liver CT images, the visual vocabularies of each phase and the distance metrics were learned offline for each single- or multi-phase CT image.…”
Section: Bag-of-Visual-Words Representation Of Lesions (mentioning)
confidence: 99%
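As a minimal sketch of the statement above (not code from the cited works; `hellinger_map` is a hypothetical helper name), square-rooting an L1-normalized BoW histogram gives an explicit embedding whose ordinary dot product equals the Hellinger kernel:

```python
import numpy as np

def hellinger_map(hist):
    # Hypothetical helper: L1-normalize a bag-of-visual-words histogram and
    # take the elementwise square root. The linear dot product of two mapped
    # histograms then equals the Hellinger kernel k(x, y) = sum_i sqrt(x_i * y_i).
    hist = np.asarray(hist, dtype=float)
    return np.sqrt(hist / hist.sum())

# Toy histograms (hypothetical visual-word counts)
a = np.array([4.0, 1.0, 0.0, 3.0])
b = np.array([2.0, 2.0, 2.0, 2.0])

k_explicit = hellinger_map(a) @ hellinger_map(b)           # linear kernel after the map
k_direct = np.sum(np.sqrt((a / a.sum()) * (b / b.sum())))  # Hellinger kernel directly
print(np.isclose(k_explicit, k_direct))  # True
```

Because the map is explicit, a plain linear classifier or linear distance metric on the mapped vectors reproduces the Hellinger kernel without any kernel computation at query time.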
“…For small-scale datasets, it is feasible to learn a non-linear SVM, albeit with training complexity somewhere between quadratic and cubic [20]. In general, however, non-linear SVMs do not scale well with training set size.…”
Section: Data Embedding (mentioning)
confidence: 99%
“…Since the semantic kernel of (4) is not additive, neither the embedding of [19] nor the embedding learning method of [20] is feasible. One alternative, which we explore, is to replace the arccos term by a first-order Taylor series around 0, i.e., arccos…”
Section: Data Embedding (mentioning)
confidence: 99%
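For reference, the first-order Taylor expansion of arccos around 0 that such a replacement would use is standard calculus (not quoted from the cited paper):

```latex
\arccos(z) \;=\; \frac{\pi}{2} \;-\; z \;+\; O(z^{3}),
\qquad \text{so, to first order around } 0, \quad
\arccos(z) \;\approx\; \frac{\pi}{2} - z .
```

Replacing the arccos term by this affine function in z restores a form amenable to the additive/linear embedding machinery discussed above.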
“…However, it was recently shown that for some particular classes of kernels such approximations do exist. The main lines of research towards efficient approximation of feature maps include (a) exploiting particular kernel properties to find the approximation (e.g., in [27,18] the authors exploited various properties to propose efficient and effective approximations of large families of additive kernels); (b) the application of random sampling of Fourier features [15,16,2] (e.g., in [22] methodologies have been proposed for encoding stationary kernels by randomly sampling their Fourier features); (c) the application of the so-called Nyström method, which is a data-dependent methodology that requires training [29,28,21]. Even though the above methods provide useful and general methodologies that are applicable to many kernels, their disadvantage is that they provide approximate solutions.…”
Section: Introduction (mentioning)
confidence: 99%
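As an illustration of line (b) above, here is a minimal random-Fourier-features sketch for the Gaussian RBF kernel, a standard Rahimi–Recht-style construction rather than code from the cited works (all names and parameter values are hypothetical):

```python
import numpy as np

def random_fourier_features(X, gamma=0.5, n_features=2000, seed=0):
    # Approximate the stationary RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    # by randomly sampling frequencies from the kernel's Fourier transform
    # (a Gaussian with variance 2 * gamma) plus uniform phase offsets.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(42)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X)    # explicit (approximate) embedding
K_approx = Z @ Z.T                # linear kernel in the embedded space
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # small; shrinks as n_features grows
```

This makes concrete the trade-off noted in the passage: the embedding is explicit and cheap to use with linear methods, but the resulting kernel values are only approximate, with error decreasing as the number of sampled features grows.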