2013
DOI: 10.1007/978-3-642-41822-8_17
Encoding Classes of Unaligned Objects Using Structural Similarity Cross-Covariance Tensors

Abstract: Encoding an object's essence in terms of self-similarities between its parts is becoming a popular strategy in Computer Vision. In this paper, a new similarity-based descriptor, dubbed Structural Similarity Cross-Covariance Tensor, is proposed, aimed at encoding relations among different regions of an image in terms of cross-covariance matrices. The latter are calculated between low-level feature vectors extracted from pairs of regions. The new descriptor retains the advantages of the widely used covariance matrix …
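As a concrete sketch of the cross-covariance computation the abstract describes, the following minimal Python snippet computes a d × d cross-covariance matrix between low-level feature vectors extracted from two regions. The function name and the assumption that pixels of the two regions are paired row-by-row are illustrative, not the authors' exact construction:

```python
import numpy as np

def cross_covariance(feats_a, feats_b):
    """Cross-covariance between paired feature sets of two regions.

    feats_a, feats_b: (n, d) arrays of low-level feature vectors
    (one row per pixel, pixels paired across the two regions).
    Returns the d x d sample cross-covariance matrix.
    """
    ca = feats_a - feats_a.mean(axis=0)   # center each feature dimension
    cb = feats_b - feats_b.mean(axis=0)
    return ca.T @ cb / (feats_a.shape[0] - 1)
```

When both arguments are the same region, this reduces to the ordinary covariance descriptor, which is the special case the paper generalizes.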

Cited by 3 publications (6 citation statements)
References 10 publications (18 reference statements)
“…The present work considerably extends the study presented in [12], by detailing how the SS-CCT can be quickly computed through integral images [5], and showing numerical experiments. We also revise the experimental protocol for scene recognition, obtaining results that are more generalizable.…”
Section: Introduction (mentioning)
confidence: 64%
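The integral-image trick this citation refers to can be illustrated with a generic summed-area-table sketch (names are hypothetical, and this is not the SS-CCT implementation itself): after one cumulative-sum pass, the sum of a per-pixel feature map over any rectangular region costs O(1):

```python
import numpy as np

def integral_image(feat):
    """Summed-area table of a per-pixel feature map of shape (H, W, d)."""
    return feat.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, c0, r1, c1):
    """Sum of features over rows r0..r1 and cols c0..c1 (inclusive), in O(1),
    via the standard four-corner inclusion-exclusion on the integral image."""
    s = ii[r1, c1].copy()
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```

The same idea extends to second-order statistics by also tabulating pairwise feature products, which is what makes fast covariance-style descriptors possible.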
“…Assuming the regions are sorted in a row-wise manner, i.e. the first N_h regions have ΔX_Ri = 0, it holds that ΔY_Rj ≥ ΔY_Ri for each j ≥ i and consequently ΔY ≥ 0, allowing the number of possible relative displacements to be reduced to (2N_h − 1)N_v, as previously defined in (12). Overall the computation of Eq.…”
Section: Efficient Implementation (mentioning)
confidence: 99%
“…The complexity of our trial-specific kernelized covariance is O(M²T²). Thus, differently from previous approaches [27], [33], [28], [29], the proposed framework is very efficient compared to the cubic complexity of methods like [33], which require eigen-decomposition. From a mathematical point of view, our kernelized covariance is a natural generalization of the classical covariance matrix, which can be retrieved as a particular case in our paradigm once the kernel function (9) is fixed to be a linear one.…”
Section: Methods (mentioning)
confidence: 97%
“…The usage of the covariance S to produce descriptors for classification tasks has been intensively studied [23], [24], [25], [26], [27], [28], [29], [17]. In particular, [23] proposed patch-specific covariance descriptors, efficiently computed with integral images.…”
Section: Introduction (mentioning)
confidence: 99%
“…This latter direction is grounded in the mathematical properties of positive definite matrices, exploiting Riemannian metrics on the manifold for image classification. Once moved from a finite- to an infinite-dimensional space, performance improves [40], [41], and only recently have deep learning approaches been shown to be superior. However, one of the main limitations of the covariance matrix is that it only captures linear inter-relationships [42].…”
Section: B. Data Fusion Methods (mentioning)
confidence: 99%
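The limitation quoted here, that covariance captures only linear inter-relationships, is easy to demonstrate numerically: for a standard normal x, the variable y = x² is fully determined by x, yet their covariance is zero in expectation, so a covariance-based descriptor would treat the pair as unrelated. A small self-contained check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = x ** 2                      # nonlinear, fully deterministic dependence on x

# Sample covariance between x and y: close to E[X^3] = 0 for standard normal X,
# even though y carries complete information about |x|.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
```

This is exactly the failure mode that kernelized or higher-order descriptors are designed to address.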