2014
DOI: 10.1007/978-3-319-11752-2_36

Encoding Spatial Arrangements of Visual Words for Rotation-Invariant Image Classification


Cited by 13 publications (17 citation statements)
References 12 publications
“…We previously proposed [23] to use all the distinct pairs of three descriptors from set Di to calculate angles between the spatial positions of the descriptors as shown in Figure 6. We call that method combinatorial triangulation, as the triangulation is done for all the three distinct pairs of descriptors belonging to a given visual word.…”
Section: [Fig6] Triangulation Methods (A) Delaunay Triangulation (B) …
confidence: 99%
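The combinatorial triangulation quoted above can be sketched in a few lines of Python. This is only an illustration of the idea as described in the citation statement, not the authors' implementation; the function names are my own. For every distinct triple of keypoint positions assigned to one visual word, the interior angles of the resulting triangle are computed, and those angles do not change when the image is rotated.

```python
from itertools import combinations
import math

def triangle_angles(p, q, r):
    """Interior angles (radians) of triangle p-q-r via the law of cosines."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    a, b, c = dist(q, r), dist(p, r), dist(p, q)  # sides opposite p, q, r
    def angle(opp, s1, s2):
        # Clamp to [-1, 1] to guard against floating-point drift in acos.
        return math.acos(max(-1.0, min(1.0, (s1**2 + s2**2 - opp**2) / (2 * s1 * s2))))
    return angle(a, b, c), angle(b, a, c), angle(c, a, b)

def combinatorial_triangulation(points):
    """Angles for every distinct triple of keypoint positions of one visual word."""
    angles = []
    for p, q, r in combinations(points, 3):
        angles.extend(triangle_angles(p, q, r))
    return angles
```

In a full pipeline these angles would then be histogrammed per visual word; rotating the image rotates all keypoint positions together, so the set of angles (and hence the histogram) is unchanged.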
“…Second, we apply the circular tiling scheme [20] over the segmented coin image to increase the discriminative power of the model while maintaining rotation invariance. Finally, the rotation-invariant geometric relationships of visual words [23], [24] in each subregion of the circular tiling scheme are modeled to obtain the final image representation. In the following, we will give a brief description of each step of the proposed strategy.…”
Section: Rotation-Invariant Spatial Extensions to the BoW Image Representation
confidence: 99%
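The circular tiling scheme mentioned in that statement can be illustrated with a small sketch. This is an assumption-laden toy version (function name and parameters are mine, not from [20]): each keypoint is binned into one of several concentric annuli around a center point. Because ring membership depends only on the distance to the center, the assignment is unchanged when the image is rotated about that center.

```python
import math

def circular_tiling(points, center, n_rings, max_radius):
    """Assign each keypoint to one of n_rings concentric annuli around `center`.

    Ring index depends only on radial distance, so it is invariant to
    rotation of the points about `center`.
    """
    bins = [[] for _ in range(n_rings)]
    for x, y in points:
        r = math.hypot(x - center[0], y - center[1])
        idx = min(int(r / max_radius * n_rings), n_rings - 1)  # clamp outermost ring
        bins[idx].append((x, y))
    return bins
```

The rotation-invariant word relationships (e.g. the triangulated angles) would then be modeled separately within each ring to add spatial discrimination without losing rotation invariance.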
“…This allows using the local features from the foreground to construct the visual vocabulary, thus making it more discriminating and accurate, as shown in [1].…”
Section: Methods
confidence: 99%
“…(2) We previously proposed [17] to use all the distinct pairs of three descriptors from set to calculate angles between the spatial positions of the descriptors as shown in Fig. 2.…”
Section: Scale- and Rotation-Invariant Histogram of Identical Visual Words
confidence: 99%
“…Rotation-invariant SIFT features are extracted and concatenated at predefined scales of . We previously showed that the use of segmentation masks at the stage of vocabulary construction enhances the discriminating nature of the vocabulary, thus resulting in a higher classification rate [17]. Here we also use segmentation masks to extract the foreground features for vocabulary construction.…”
Section: A Number of Scales for Local Features Extraction
confidence: 99%
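The foreground-only vocabulary construction described in the last two statements reduces to a simple filtering step before clustering. A minimal sketch, assuming keypoints as (x, y) pixel coordinates and a binary segmentation mask indexed as mask[y][x] (these conventions are my assumptions, not taken from [17]):

```python
def foreground_descriptors(keypoints, descriptors, mask):
    """Keep only descriptors whose keypoint lies on the foreground.

    `mask[y][x]` is truthy on the object and falsy on background, so the
    visual vocabulary is later clustered from object regions only.
    """
    return [d for (x, y), d in zip(keypoints, descriptors)
            if mask[int(y)][int(x)]]
```

The surviving descriptors would then be fed to a clustering step (e.g. k-means) to build the vocabulary; background clutter never contributes cluster centers, which is what makes the vocabulary more discriminating.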