2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506441

A Simple Supervised Hashing Algorithm Using Projected Gradient and Oppositional Weights

Abstract: Learning to hash generates similarity-preserving binary representations of images, which, among other applications, enables efficient and fast image retrieval. Two-step hashing has become a common approach because it simplifies learning by separating binary code inference from hash function training. However, binary code inference typically leads to an intractable optimization problem with binary constraints. Different relaxation methods, which are generally based on complicated optimization techniques, have…
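The truncated abstract names the two ingredients of the method: two-step hashing, where binary codes are inferred first and a hash function is trained on them afterward, and a projected-gradient treatment of the binary-constrained inference step. As a rough illustration of that general idea only, the sketch below infers codes B in {-1,+1}^(n x r) from a pairwise similarity matrix S by taking unconstrained gradient steps on a Frobenius-norm fitting loss and projecting back onto the binary set with sign(). The loss, step size, and all names here are illustrative assumptions, not the paper's algorithm, and the "oppositional weights" component is not modeled.

import numpy as np

def infer_codes_projected_gradient(S, r, n_iter=200, lr=0.01, seed=0):
    # Infer n x r binary codes B in {-1,+1} from a pairwise similarity
    # matrix S (n x n, entries in {-1,+1}) by approximately minimizing
    # ||B B^T - r S||_F^2 with projected gradient descent.
    # Generic sketch only: the objective, step size, and projection
    # schedule are assumptions, not the paper's exact algorithm.
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    B = np.sign(rng.standard_normal((n, r)))  # random binary initialization
    for _ in range(n_iter):
        R = B @ B.T - r * S         # residual of the code inner products
        grad = 4.0 * R @ B          # gradient of the relaxed (real-valued) loss
        B = np.sign(B - lr * grad)  # gradient step, then project onto {-1,+1}
        B[B == 0] = 1               # break rare ties deterministically
    return B

# Toy usage: two classes; similar pairs should end up with similar codes.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
S = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
print(infer_codes_projected_gradient(S, r=4))

In the second step of the two-step pipeline, a hash function (e.g., one classifier per bit) would then be fit to predict these inferred codes from image features; that step is not shown here.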

Cited by 2 publications (5 citation statements) · References 16 publications
“…The results of LUAD/LUSC classification are shown in Table 3. In this section, we use the features extracted from a DenseNet model, 27 as per Hemati et al. 9 We can observe that our suggested strategy outperformed earlier approaches for LUAD/LUSC classification by 2% (reaching 88%), which underlines the performance of attention pooling and contrastive learning. We have also employed the SS-CAMIL blocks in this task, and they improved the performance to 89%, but since we are not sure whether the model has seen the data in the search task (both datasets are from the TCGA repository), we did not report those numbers in the table.…”
Section: Experiments and Results
confidence: 87%
“…Table 1 and Table 2 show the horizontal and vertical search results, respectively. We compare our performance with Kalra et al. 3 and Hemati et al. 9 In both tables, CAMIL is the baseline attention-based MIL with CL and without self-supervision, and SS-CAMIL is the same as the CAMIL setup but uses the weights of self-supervision of primary sites (see Table 3).…”
Section: Experiments and Results
confidence: 99%
“… 23 , 60 Learning tissue representation is a major step toward closing this gap. 21 , 47 Additionally, employing multimodal domain data, such as pathology reports, can help deep learning attach “context” to its otherwise black-box behavior when it comes to hierarchical tissue representation in its connectionist topologies. 15 , 26 …”
Section: Why Image Search?
confidence: 99%