2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01190
Student-Teacher Learning from Clean Inputs to Noisy Inputs

Cited by 6 publications (2 citation statements)
References 13 publications
“…It has also been shown to work for distilling knowledge of non-neural network machine-learning models [11]. In [8], [12] KD was used to transfer knowledge across different types of sensors for either image reconstruction or high-level tasks, and in [17] it was theoretically analyzed. In this work, we use KD to distill not just a deep model, but also the non-learned (manually engineered) algorithms in the ISP and the information gained in the physical process of acquiring a better signal (that has a better SNR).…”
Section: ISP for Vision
confidence: 99%
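The statement above describes knowledge distillation (KD): a student model is trained to match a teacher's soft predictions in addition to the ground-truth labels. A minimal NumPy sketch of the standard KD objective is below; it is not the cited paper's implementation, and the function names and hyperparameters (temperature `T`, mixing weight `alpha`) are illustrative. In the clean-to-noisy setting, the teacher's logits would come from the clean input and the student's from the noisy input.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis, numerically stabilized.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Distillation loss: convex mix of hard-label cross-entropy and
    KL divergence from the teacher's temperature-softened targets."""
    p_t = softmax(teacher_logits, T)   # teacher soft targets (clean input)
    p_s = softmax(student_logits, T)   # student predictions (noisy input)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    # T**2 rescales the soft-target gradient, as in Hinton et al.'s KD.
    return alpha * ce.mean() + (1 - alpha) * (T ** 2) * kl.mean()
```

When student and teacher logits coincide, the KL term vanishes and the loss reduces to the weighted cross-entropy alone.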
“…BiT [1] and BERT [2]); 2) neural architecture search (NAS), which designs or searches for an optimal architecture for various downstream tasks (e.g. EfficientNet [3] and NASNet [4]); 3) noise learning, which introduces noise into the original inputs to help neural networks learn a representation that generalizes to noisy inputs (e.g. Noisy Student-Teacher Learning [5], adversarial noise via FreeLB [6]). However, these methods either require additional data, as in transfer learning, or incur a huge computational cost in training, as in NAS and noise learning.…”
Section: Introduction
confidence: 99%
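The noise-learning approach mentioned above amounts to perturbing each training batch before it reaches the network, so successive epochs see different corrupted views of the same data. A minimal sketch with zero-mean Gaussian noise follows; the function name and the noise scale `sigma` are illustrative choices, not details taken from the cited works.

```python
import numpy as np

def noisy_batch(x, sigma=0.1, rng=None):
    """Return a copy of the inputs perturbed by zero-mean Gaussian noise.

    Each call draws fresh noise, so a training loop that wraps its
    batches with this function exposes the model to a different
    corruption every step, regularizing the learned representation.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)
```

A training loop would feed `noisy_batch(x)` to the student while the loss targets are computed from the clean `x` (or from a teacher that sees the clean `x`).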