Proceedings of the 2018 International Conference on Signal Processing and Machine Learning
DOI: 10.1145/3297067.3297092
An Improved Hashing Method for Image Retrieval Based on Deep Neural Networks

Cited by 61 publications (84 citation statements)
References 18 publications
“…in many physics-enforced neural network studies (Wang, Teng, & Perdikaris, 2021; Wight & Zhao, 2020), across multitask learning problems (Caruana, 1997; Chen et al., 2018), and evidenced by the emergence of strategies that target learning under competing objectives (Elhamod et al., 2020; Heydari et al., 2019; Wang, Yu, & Perdikaris, 2022). Objectives in event-based training for binary seismic event classification are only directly adversarial in the case of mislabeling, but the unknown effects of their relative importance on optimization stability and dynamics come with an additional burden on experiment design and hyperparameter search.…”
Section: Update F_θ (mentioning)
confidence: 99%
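The relative-importance problem this excerpt raises can be made concrete with a minimal sketch: two competing loss terms combined through a single fixed weight. All names here (`composite_loss`, `residual`, `lam`) are illustrative assumptions, not code from the cited papers.

```python
import torch

def composite_loss(pred: torch.Tensor,
                   target: torch.Tensor,
                   residual: torch.Tensor,
                   lam: float = 1.0) -> torch.Tensor:
    """Fixed-weight combination of two competing objectives."""
    data_loss = torch.mean((pred - target) ** 2)   # supervised fit term
    constraint_loss = torch.mean(residual ** 2)    # competing term (e.g., a physics residual)
    return data_loss + lam * constraint_loss      # lam encodes relative importance
```

The excerpt's point is that `lam` itself becomes another hyperparameter whose value affects optimization stability, which is what motivates the adaptive strategies cited above.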
“…and dynamic loss reweighting strategies [143][144][145]. Instead of training all tasks together, task grouping trains only similar tasks together.…”
Section: Multi-task Learning (mentioning)
confidence: 99%
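As a rough illustration of the task-grouping idea (train only similar tasks together), the sketch below greedily clusters tasks from a pairwise affinity matrix. The affinity scores are assumed given, e.g., produced by a method such as TAG; the threshold value and function names are hypothetical.

```python
import numpy as np

def group_tasks(affinity: np.ndarray, threshold: float = 0.5) -> list[list[int]]:
    """Greedily place each task in the first group where it is
    sufficiently affine to every member, else open a new group."""
    groups: list[list[int]] = []
    for task in range(affinity.shape[0]):
        for group in groups:
            if all(affinity[task, other] >= threshold for other in group):
                group.append(task)
                break
        else:
            groups.append([task])
    return groups

# Tasks 0 and 1 are mutually similar; task 2 is dissimilar.
aff = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
print(group_tasks(aff))  # -> [[0, 1], [2]]
```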
“…We compare the performance, in terms of test loss and energy consumption, among the following methods: 1) one-by-one training of activities (i.e., the vanilla multi-tenant FL); 2) all-in-one training of activities (i.e., using only activity consolidation); 3) all-in-one training with gradient normalization applied to tune the gradient magnitudes among activities (GradNorm [144]); 4) estimating higher-order activity groupings from pair-wise activity performance (HOA [147]); 5) grouping training activities with only the task affinity grouping method (TAG [4]); 6) MuFL with both activity consolidation and activity splitting. Carbontracker [216] is used to measure energy consumption and carbon footprint (provided in Appendix B).…”
Section: Performance Evaluation (mentioning)
confidence: 99%
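The gradient normalization that method 3) refers to (GradNorm, Chen et al., 2018) balances tasks by tuning loss weights so that per-task gradient norms track a common target. Below is a hedged, single-step sketch under simplifying assumptions: `shared_params` is one shared-layer tensor, `weights` are scalar leaf tensors with `requires_grad=True`, and the `alpha`/`lr` values are arbitrary examples, not the settings used in the cited work.

```python
import torch

def gradnorm_step(losses, initial_losses, weights, shared_params,
                  alpha: float = 1.5, lr: float = 0.025):
    # Gradient norm of each weighted task loss w.r.t. the shared parameters.
    norms = []
    for w, loss in zip(weights, losses):
        g, = torch.autograd.grad(w * loss, shared_params,
                                 retain_graph=True, create_graph=True)
        norms.append(g.norm())
    norms = torch.stack(norms)

    # Relative inverse training rates: slower-improving tasks get larger targets.
    ratios = torch.stack([l.detach() / l0 for l, l0 in zip(losses, initial_losses)])
    target = (norms.mean() * (ratios / ratios.mean()) ** alpha).detach()

    # Nudge the loss weights toward the target norms, then renormalize their sum.
    grad_loss = (norms - target).abs().sum()
    w_grads = torch.autograd.grad(grad_loss, weights)
    with torch.no_grad():
        for w, g in zip(weights, w_grads):
            w -= lr * g
        total = sum(w.item() for w in weights)
        for w in weights:
            w *= len(weights) / total
```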
“…The first approach is an approximation that allows using the standard formulation of a single optimization function. This is commonly done by a linear combination of the loss functions of all tasks (SERMANET et al., 2013; MISRA et al., 2016; KOKKINOS, 2017; TEICHMANN et al., 2018; CHENNUPATI et al., 2019; SANCHEZ et al., 2019; LI et al., 2021), or some variation with adaptive weights (CIPOLLA; GAL; KENDALL, 2018; CHEN et al., 2018; JOHNS; DAVISON, 2019; LI et al., 2016; GUO et al., 2018). Although this approach is simple and has shown promising results, there are two inherent problems, as Gunantara (2018) pointed out.…”
Section: Optimization for Multi-task Learning (mentioning)
confidence: 99%
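One adaptive-weight variant named in this excerpt (CIPOLLA; GAL; KENDALL, 2018) learns a per-task log-variance and weights each loss by the corresponding precision. The sketch below uses the common numerically stable parameterization exp(-s_i)·L_i + s_i; the class and attribute names are illustrative.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Learnable task weighting via homoscedastic uncertainty."""
    def __init__(self, num_tasks: int):
        super().__init__()
        # s_i = log(sigma_i^2), zero-initialized so all tasks start equal.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses) -> torch.Tensor:
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])              # 1 / sigma_i^2
            total = total + precision * loss + self.log_vars[i]   # + s_i stops sigma -> inf
        return total
```

Because the `log_vars` are ordinary parameters, they are trained jointly with the network by the same optimizer, removing the manual weight search that a fixed linear combination requires.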
“…Moreover, MTL can also be used to explore the idea of creating an inductive bias between tasks (CARUANA, 1997). Due to its potential to improve generalization performance, it has gained attention in several areas of the scientific and industrial communities, such as computer vision (SENER; KOLTUN, 2018; GAL; KENDALL, 2018; CHEN et al., 2018) and natural language processing (MAO et al., 2020; XUANJING, 2016).…”
Section: Introduction (mentioning)
confidence: 99%
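The inductive-bias idea attributed to Caruana (1997) is typically realized as hard parameter sharing: a trunk shared by all tasks plus small task-specific heads. A minimal sketch, with arbitrary layer sizes chosen for illustration:

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared trunk + per-task heads: each task regularizes the others."""
    def __init__(self, in_dim: int, hidden: int, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x: torch.Tensor):
        z = self.trunk(x)                         # shared representation
        return [head(z) for head in self.heads]   # one output per task
```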