DOI: 10.1007/978-3-540-74958-5_30

Classifier Loss Under Metric Uncertainty

Abstract: Classifiers that are deployed in the field can be used and evaluated in ways that were not anticipated when the model was trained. The final evaluation metric may not have been known at training time, additional performance criteria may have been added, the evaluation metric may have changed over time, or the real-world evaluation procedure may have been impossible to simulate. Unforeseen ways of measuring model utility can degrade performance. Our objective is to provide experimental support for mod…

Cited by 5 publications (8 citation statements) | References 8 publications
“…Several studies have found that optimizing a metric different from the metric being evaluated can bring better results than optimizing the same metric [15,27,29,33]. In contrast, a recent study claims that such findings stem from the use of a flawed statistical testing strategy [16].…”
Section: Related Work
Confidence: 99%
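The quoted finding, that a model selected under one metric can end up scoring better under a different goal metric, is easy to probe with a small experiment. The following is a minimal sketch, not taken from the paper: it assumes a scikit-learn setup in which candidate models are ranked on a validation set by a selection metric (AUC or log loss here) and then scored on a held-out test set by a different goal metric (accuracy). The dataset, candidate family, and metric pairing are illustrative assumptions, not the authors' protocol.

```python
# Sketch of cross-metric model selection: rank candidates on a validation
# set by one metric, then evaluate the chosen model on a test set by another.
# All concrete choices below (dataset, candidates, metrics) are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Candidate models: logistic regression at several regularization strengths.
candidates = {C: LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
              for C in (0.001, 0.01, 0.1, 1.0, 10.0)}

def select(metric, greater_is_better=True):
    """Pick the candidate whose validation score is best under `metric`."""
    scores = {C: metric(y_val, m.predict_proba(X_val)[:, 1])
              for C, m in candidates.items()}
    best = max(scores, key=scores.get) if greater_is_better else min(scores, key=scores.get)
    return candidates[best], best

# Select by AUC and by log loss; evaluate both under the goal metric (accuracy).
model_auc, _ = select(roc_auc_score)
model_ll, _ = select(log_loss, greater_is_better=False)

for name, model in (("selected by AUC", model_auc), ("selected by log loss", model_ll)):
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```

Repeating such a comparison over many resampled splits, with an appropriate statistical test, is exactly the kind of procedure the disagreement in the quoted statements turns on.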
“…The results are quite surprising and puzzling, and no convincing explanations were given. Nevertheless, several other papers [1,3,4] have since confirmed his finding. For example, Huang and Ling [3] studied model selection with many popular machine learning metrics and claimed that often an evaluation metric different from the goal metric can better select the correct models.…”
Section: Introduction
Confidence: 64%
“…For example, Huang and Ling [3] studied model selection with many popular machine learning metrics and claimed that often an evaluation metric different from the goal metric can better select the correct models. Skalak and Caruana [1] used absolute loss to compare model selection abilities of various metrics, and drew similar conclusions. It now seems to be a well-regarded conclusion in the machine learning community that a different metric can do a better job in model selection.…”
Section: Introduction
Confidence: 74%