2018
DOI: 10.1007/s11042-018-5841-8

A robust CBIR framework in between bags of visual words and phrases models for specific image datasets

Cited by 7 publications (6 citation statements)
References 26 publications
“…We consider the vector length of the proposed methods to be 64 when the descriptor used to build the signature is SURF or KAZE. For VLAD [17], N-BoVW [30], and BoVW [9], the vector length depends on the number K of visual words computed with the k-means algorithm.…”
Section: Methods
confidence: 99%
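The point of that statement — the BoVW signature length equals K, the number of k-means visual words, rather than the descriptor dimension — can be sketched in pure Python. This is a toy illustration only: the simplified k-means below stands in for a real vocabulary built from SURF or KAZE descriptors, and all names are illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: cluster local descriptors into k 'visual words' (centroids)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute each centroid as its cluster mean; keep it if the cluster emptied.
        centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

def bovw_signature(descriptors, centroids):
    """BoVW signature: histogram of nearest-word assignments; its length equals K."""
    hist = [0] * len(centroids)
    for d in descriptors:
        j = min(range(len(centroids)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(d, centroids[c])))
        hist[j] += 1
    return hist

# Example: 100 random 2-D "descriptors", vocabulary of K = 8 visual words.
rng = random.Random(1)
descs = [(rng.random(), rng.random()) for _ in range(100)]
words = kmeans(descs, k=8)
sig = bovw_signature(descs, words)
print(len(sig))  # 8   -> signature length is K, not the descriptor dimension
print(sum(sig))  # 100 -> every descriptor votes for exactly one visual word
```

Changing K changes the signature length directly, which is why the citing paper contrasts this with fixed-length (64-dimensional) signatures.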
“…The authors in [76] propose a descriptor that is robust and invariant to rotation and illumination. Another method inspired by BoVW is the bag of visual phrases (BoVP) [30,2,31]. BoVP describes the image as a matrix of visual-phrase occurrences instead of the vector used in BoVW.…”
Section: Fig 2 Bag of Visual Words Model
confidence: 99%
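One toy reading of the "matrix instead of vector" idea: treat a phrase as an ordered pair of neighbouring visual words, giving a K x K co-occurrence matrix rather than a K-dimensional histogram. Real BoVP definitions vary (spatial neighbourhoods, n-grams of words), so this sketch only illustrates the shape change.

```python
def bovp_matrix(word_ids, k):
    """Toy bag of visual phrases: a K x K matrix counting ordered pairs of
    adjacent visual words, instead of the K-dim BoVW histogram."""
    m = [[0] * k for _ in range(k)]
    for a, b in zip(word_ids, word_ids[1:]):
        m[a][b] += 1
    return m

# Visual-word indices of an image's descriptors (after vocabulary assignment).
ids = [0, 2, 2, 1, 0, 2]
m = bovp_matrix(ids, k=3)
print(m)  # [[0, 0, 2], [1, 0, 0], [0, 1, 1]]
```

The matrix keeps word-order information that the plain BoVW histogram discards, at the cost of K^2 rather than K entries.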
“…
Method        MSRC v1  MSRC v2  Linnaeus  Wang
BoVW [8]      0.48     0.30     0.26      0.48
n-BoVW [17]   0.58     0.39     0.31      0.60
VLAD [10]     0.78     0.41     -         0.74
N-Gram [18]   -        -        -         0.37
AlexNet [12]  0.81     0.58     0.47      0.68
VGGNet [23]   0.76     0.63     0.48      0.76
ResNet [25]   0.83     0.70     0.69      0.82
Ruigang [22]  -        -        0.70      -
Ours (best)   0.86     0.72     0.75      0.84

Table 6. Comparison of the accuracy of our approach with methods from the state of the art…”
Section: Methods
confidence: 99%
“…VLAD and Fisher vectors are similar, but VLAD does not store second-order information about the features and uses k-means instead of a GMM. Another approach inspired by BoVW is the bag of visual phrases (BoVP) [17][2][18]. BoVP describes the image as a matrix of visual-phrase occurrences instead of the vector used in BoVW.…”
Section: Introduction
confidence: 99%
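The VLAD property mentioned here — first-order residuals against k-means centroids only, with no second-order (variance) terms as in Fisher vectors — can be sketched as follows. A toy illustration, not the paper's implementation.

```python
def vlad(descriptors, centroids):
    """Toy VLAD: accumulate first-order residuals (x - nearest centroid) per
    visual word, then L2-normalise. No second-order (variance) terms are kept,
    unlike Fisher vectors, and the vocabulary comes from k-means, not a GMM."""
    k, d = len(centroids), len(centroids[0])
    acc = [[0.0] * d for _ in range(k)]
    for x in descriptors:
        j = min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
        for t in range(d):
            acc[j][t] += x[t] - centroids[j][t]
    flat = [v for row in acc for v in row]
    norm = sum(v * v for v in flat) ** 0.5 or 1.0
    return [v / norm for v in flat]

# K = 2 centroids in 2-D -> a fixed-length K*D = 4 signature.
cents = [(0.0, 0.0), (1.0, 1.0)]
sig = vlad([(0.1, 0.0), (0.9, 1.2)], cents)
print(len(sig))  # 4
```

The resulting signature has fixed length K*D regardless of how many descriptors the image has, which is what makes VLAD comparable across images.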
“…The visual phrase model was proposed to overcome the high dimensionality and quantization errors of the bag of visual words (BoVW) model. Building on the well-known BoVW model, Ouni et al. propose three methodologies [11] inspired by the effectiveness of the visual phrase model, together with a compression technique that preserves the retrieval effectiveness of the BoVW model. With the large-scale use of neural networks in image classification and recognition, different types of deep neural networks have been proposed for image retrieval.…”
Section: Introduction
confidence: 99%