2020
DOI: 10.1134/s1054661820040057
Local Tetra-Directional Pattern–A New Texture Descriptor for Content-Based Image Retrieval

Cited by 17 publications (8 citation statements)
References 27 publications
“…Recently, a local tetra-directional pattern was introduced by Bedi et al. [63], in which they compared a pixel with the four most adjacent pixels (0°, 45°, 90°, and 135°). Among all the above-described descriptors, the most appropriate were the local tri-directional or tetra-directional patterns.…”
Section: State of the Art (mentioning)
confidence: 99%
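To make the comparison described in this statement concrete, the following Python sketch checks a centre pixel against its neighbours in the 0°, 45°, 90°, and 135° directions. It is only an illustration of that step under assumed conventions (a 3×3 patch, row 0 at the top, a simple sign comparison); the full encoding in Bedi et al. [63] also defines how these comparisons are weighted and combined, which is not reproduced here.

```python
import numpy as np

def four_direction_signs(patch):
    """Compare the centre of a 3x3 patch with its neighbours in the
    0, 45, 90 and 135 degree directions (right, upper-right, top, upper-left).

    Illustrative sketch only: the sign comparison and orientation convention
    are assumptions, not the paper's exact encoding.
    """
    c = patch[1, 1]
    neighbours = {
        0:   patch[1, 2],   # 0 degrees   (right)
        45:  patch[0, 2],   # 45 degrees  (upper-right)
        90:  patch[0, 1],   # 90 degrees  (top)
        135: patch[0, 0],   # 135 degrees (upper-left)
    }
    # LBP-style sign of the difference against the centre pixel.
    return {angle: int(value >= c) for angle, value in neighbours.items()}

if __name__ == "__main__":
    patch = np.array([[4, 9, 8],
                      [3, 5, 2],
                      [7, 1, 6]])
    print(four_direction_signs(patch))  # {0: 0, 45: 1, 90: 1, 135: 0}
```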
“…Each of the eight pixels is as important as the three angles used by [51] or the four angles used by [63]. For example, in Figure 2, considering the tri-directional pattern for location I1, value 8 is compared only with the above, below, and central pixels, whereas higher-intensity pixels are available in the second radius that could contribute stronger feature vectors but are left unused. Likewise, in the tetra-directional pattern the second radius is not fully utilized: in I1, value 8 is compared only with the angles (0°, 45°, 90°, and 135°), which are (2, 8, 9, and 4), while high-intensity pixels at angles 270° and 315° are again left unused.…”
Section: State of the Art (mentioning)
confidence: 99%
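The point about unused second-radius pixels can be sketched as follows; the 5×5 patch values and the helper name below are hypothetical, and the snippet only shows which radius-2 positions each angle addresses, not the descriptor's actual encoding.

```python
import numpy as np

def radius2_neighbours(patch5):
    """Radius-2 neighbours of the centre of a 5x5 patch, keyed by angle
    (0-315 degrees in 45-degree steps, row 0 at the top).

    Hypothetical helper for illustration only.
    """
    assert patch5.shape == (5, 5)
    coords = {
        0: (2, 4), 45: (0, 4), 90: (0, 2), 135: (0, 0),
        180: (2, 0), 225: (4, 0), 270: (4, 2), 315: (4, 4),
    }
    return {angle: patch5[r, c] for angle, (r, c) in coords.items()}

if __name__ == "__main__":
    patch5 = np.arange(25).reshape(5, 5)                  # placeholder intensities
    ring = radius2_neighbours(patch5)
    used = {a: ring[a] for a in (0, 45, 90, 135)}         # angles compared in [63]
    unused = {a: ring[a] for a in (180, 225, 270, 315)}   # ring pixels left out
    print("used:", used)
    print("unused:", unused)
```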
“…Ideally, the similarity score between two images should be discriminative, robust, and efficient. Various methods based on hand-crafted descriptors [32], [33], [34], distance metric learning [35], [36], [37], deep learning models [38], [9], and unsupervised learning [11], [12], [13], [14] have been proposed for the image retrieval task. However, deep learning has emerged as a dominant alternative to hand-designed feature engineering, with features learned automatically from data.…”
Section: Image Retrieval (mentioning)
confidence: 99%
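As a minimal sketch of the ranking step this statement refers to, the snippet below scores a gallery against a query by cosine similarity of feature vectors. Cosine similarity over L2-normalised features is assumed here as a common choice; the cited works use their own descriptors, metrics, or learned models.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery images by cosine similarity to the query feature.

    query_feat:    (d,) feature vector of the query image
    gallery_feats: (n, d) feature vectors of the gallery images
    Returns (gallery indices, most similar first; similarity scores).
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    sims = g @ q                       # cosine similarity per gallery image
    return np.argsort(-sims), sims     # descending similarity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.normal(size=128)           # placeholder query feature
    gallery = rng.normal(size=(10, 128))   # placeholder gallery features
    order, scores = rank_gallery(query, gallery)
    print(order[:5], scores[order[:5]])
```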