2021
DOI: 10.48550/arxiv.2108.11845
Preprint

Consistent Relative Confidence and Label-Free Model Selection for Convolutional Neural Networks

Abstract: This paper is concerned with image classification based on deep convolutional neural networks (CNNs). It focuses on the following question: given a set of candidate CNN models, how does one select the one with the best generalization performance for the task at hand? Existing model selection methods require access to a batch of labeled data in order to define a performance metric, such as the cross-entropy loss, the classification error rate, or the negative log-likelihood. In many practical c…
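To make the contrast concrete, the sketch below illustrates the kind of label-dependent selection the abstract refers to: each candidate CNN is scored by its cross-entropy loss on a labeled validation batch, and the lowest-scoring model is kept. This is a minimal sketch of the conventional baseline, not the paper's label-free method; PyTorch, the `candidate_models` list, and the `val_loader` DataLoader are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def select_by_labeled_loss(candidate_models, val_loader, device="cpu"):
    """Score each candidate CNN by average cross-entropy on labeled data
    and return the index of the best model plus all scores."""
    losses = []
    for model in candidate_models:
        model.eval()
        total, count = 0.0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                logits = model(images.to(device))
                # Sum the per-sample cross-entropy so we can average over the set.
                total += F.cross_entropy(logits, labels.to(device), reduction="sum").item()
                count += labels.size(0)
        losses.append(total / count)
    best = losses.index(min(losses))
    return best, losses
```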

Cited by 1 publication (1 citation statement)
References 17 publications
“…Different metrics are selected based on the type of problem, i.e., classification or regression. Some of the widely used loss functions to capture the learning of DNN at EDGE while training are Mean Absolute Error [60,64,273], Mean Square Error Loss [269,289,316], Negative Log-Likelihood Loss [138,147,219], Cross-Entropy Loss [54,57,128,162], Kullback-Leibler divergence [47,70,237,294] etc.…”
Section: Training Loss (mentioning)
Confidence: 99%
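For readers who want to see these objectives side by side, the snippet below instantiates the losses named in the statement above. PyTorch is an assumed framework here; the cited survey does not tie the list to any particular library, and the tensor shapes are placeholders.

```python
import torch
import torch.nn as nn

mae = nn.L1Loss()                          # Mean Absolute Error (regression)
mse = nn.MSELoss()                         # Mean Squared Error (regression)
nll = nn.NLLLoss()                         # Negative Log-Likelihood (expects log-probabilities)
ce  = nn.CrossEntropyLoss()                # Cross-Entropy (expects raw logits)
kld = nn.KLDivLoss(reduction="batchmean")  # Kullback-Leibler divergence

logits  = torch.randn(8, 10)               # batch of 8, 10 classes
targets = torch.randint(0, 10, (8,))
print(ce(logits, targets))                                # classification loss on logits
print(nll(logits.log_softmax(dim=-1), targets))           # same value via log-probabilities
print(kld(logits.log_softmax(dim=-1), torch.full((8, 10), 0.1)))  # divergence from a uniform target

preds, y = torch.randn(8, 1), torch.randn(8, 1)
print(mae(preds, y), mse(preds, y))                       # regression losses
```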