2020 · DOI: 10.1007/978-3-030-58558-7_21

Adaptive Margin Diversity Regularizer for Handling Data Imbalance in Zero-Shot SBIR

Cited by 21 publications (18 citation statements) · References 30 publications
“…IV. RESULTS AND DISCUSSIONS. We compare the performance of the proposed model with some of the existing state-of-the-art frameworks of [9]-[12], [14], [17], [18], [21], [34]. We also report the performance of some notable SBIR works that use the same datasets, to show how the proposed model, which solves the more challenging ZS-SBIR problem, achieves comparable performance.…”
Section: Sketchy-Ext and TU-Berlin-Ext (mentioning)
confidence: 99%
“…Doodle2search [10] uses a triplet architecture with gradient reversal layers to enforce the learning of domain-agnostic features from images and sketches. On the other hand, [17] pointed out that the drop in performance attributable to the imbalance in class-wise samples can be tackled with a novel adaptive margin regularizer. While the above-mentioned frameworks try to improve model accuracy by synthesizing a real-valued feature space, models such as [9], [16], [19] propose a hashed feature space, which makes retrieval a highly efficient process.…”
Section: Introduction (mentioning)
confidence: 99%
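The gradient reversal layer referenced in this statement is simple to sketch. Below is a minimal PyTorch version in the spirit of domain-adversarial training: identity in the forward pass, negated (and scaled) gradient in the backward pass. The class and function names are illustrative assumptions, not taken from the Doodle2search codebase.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign on the way back."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient trains the feature extractor to *fool*
        # the domain classifier, encouraging domain-agnostic features.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

In a two-branch setup, features would pass through `grad_reverse` before the domain classifier, while the retrieval branch sees them unchanged.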
“…So, during training, we need to take care of this fact, either by weighting samples according to the number of samples in each class distribution or by carefully selecting the batches while training the network. The commonly used strategies for handling data imbalance in the literature are based on re-balancing the dataset and cost-sensitive learning of the classifier [51]. Naive over- and under-sampling [52], selective decontamination [53], SMOTE [54], and GAN-based augmentation [55] are some of the commonly used re-balancing techniques.…”
Section: B. Training Data Imbalance (mentioning)
confidence: 99%
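As one concrete instance of the re-balancing strategy described here, the sketch below uses PyTorch's `WeightedRandomSampler` to draw minority-class samples more often. The `labels` tensor is a made-up toy example, not data from any of the cited works.

```python
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader

# One integer class id per training sample (hypothetical toy labels).
labels = torch.tensor([0, 0, 0, 0, 1, 1, 2])        # class 2 is rare

class_counts = torch.bincount(labels).float()        # samples per class
sample_weights = 1.0 / class_counts[labels]          # rare classes weighted up

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels),
                                replacement=True)

# Plugged into a DataLoader, each batch is then roughly class-balanced:
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```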
“…Naive over- and under-sampling [52], selective decontamination [53], SMOTE [54], and GAN-based augmentation [55] are some of the commonly used re-balancing techniques. The cost-sensitive learning approach involves adding a focal loss [56] and diversity regularizers [51].…”
Section: B. Training Data Imbalance (mentioning)
confidence: 99%
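For the cost-sensitive side, here is a minimal PyTorch sketch of the focal loss [56] for multi-class logits. It uses a single scalar `alpha`, whereas the original formulation allows a per-class weight; treat it as a simplified illustration rather than the cited papers' exact implementation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights easy examples by (1 - p_t)^gamma so
    training focuses on hard, typically minority-class, samples."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()                     # probability of the true class
    loss = -alpha * (1.0 - pt) ** gamma * log_pt
    return loss.mean()

# Usage: logits of shape (N, C), integer targets of shape (N,).
loss = focal_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)))
```

With `gamma = 0` and `alpha = 1` this reduces to the ordinary cross-entropy, which makes the down-weighting effect of `gamma` easy to verify.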
“…The use of an LSTM to forecast inflows has not, to the best of our knowledge, been attempted. Considerable success has been achieved with machine learning techniques, which are able to solve a wide range of problems in optimization and operations research [3,4] (Singh, 2019), [5,6] (Dutta, 2020). Recently, attempts have been made at precipitation forecasting, but naive deep-learning architectures are difficult to train to learn and exploit temporal correlations over arbitrary lengths.…”
Section: Introduction (mentioning)
confidence: 99%
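For context on the LSTM approach this statement alludes to, below is a minimal PyTorch sequence-to-one LSTM for one-step-ahead forecasting. The module name, layer sizes, and random input are illustrative assumptions, not details from the citing paper.

```python
import torch
import torch.nn as nn

class InflowLSTM(nn.Module):
    """Minimal sequence-to-one LSTM: reads a window of past values,
    predicts the next one."""

    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)              # hidden state at every time step
        return self.head(out[:, -1, :])    # forecast from the last step

model = InflowLSTM()
x = torch.randn(8, 30, 1)                  # 8 windows of 30 past inflow values
y_hat = model(x)                           # shape: (8, 1)
```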