2020
DOI: 10.1007/s40745-020-00253-5
A Comprehensive Survey of Loss Functions in Machine Learning

Cited by 291 publications (176 citation statements)
References 47 publications
“…In the future, we plan to compare R² with other regression rates such as the Huber metric H_δ (Huber, 1992), the LogCosh loss (Wang et al., 2020) and the quantile loss Q_γ (Yue & Rue, 2011). We will also study some variants of the coefficient of determination, such as the adjusted R-squared (Miles, 2014) and the coefficient of partial determination (Zhang, 2017).…”
Section: Discussion
confidence: 99%
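The three losses named in this statement have simple closed forms. Below is a minimal NumPy sketch of each; the function names, the default parameter values, and the choice of reducing by the mean are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Huber loss H_delta (Huber, 1992): quadratic for small residuals,
    linear in the tails, so it is less sensitive to outliers than MSE."""
    r = y_true - y_pred
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.mean(np.where(np.abs(r) <= delta, quad, lin))

def logcosh_loss(y_true, y_pred):
    """LogCosh loss (Wang et al., 2020): smooth everywhere; behaves like
    MSE near zero and like MAE for large residuals."""
    r = y_true - y_pred
    return np.mean(np.log(np.cosh(r)))

def quantile_loss(y_true, y_pred, gamma=0.5):
    """Quantile (pinball) loss Q_gamma (Yue & Rue, 2011): asymmetric
    penalty whose minimizer is the gamma-th conditional quantile."""
    r = y_true - y_pred
    return np.mean(np.maximum(gamma * r, (gamma - 1) * r))
```

With gamma = 0.5 the quantile loss reduces to half the mean absolute error, which is why it generalizes median regression.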
“…Loss functions. A survey of loss functions used in machine learning is provided by [10]. In total, 31 loss functions are analyzed with respect to their purpose, task, and application scenario.…”
Section: Related Work
confidence: 99%
“…DNNs are trained to minimize a loss function such as mean squared error (MSE) on the training data [67]. The loss minimization is typically performed using one of the popular gradient descent algorithms such as stochastic gradient descent (SGD) [68] or adaptive moment estimation (Adam) [69].…”
Section: Gradient Descent Optimization
confidence: 99%
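As a concrete illustration of the training loop this statement describes, here is a minimal PyTorch sketch that minimizes MSE with Adam; the toy linear model, the synthetic data, and the hyperparameter values are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

# Synthetic regression data (assumed purely for illustration).
X = torch.randn(128, 4)
y = X @ torch.randn(4, 1) + 0.1 * torch.randn(128, 1)

model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()  # mean squared error on the training data [67]

# Adam [69] adapts per-parameter step sizes from gradient moments;
# swapping in torch.optim.SGD(model.parameters(), lr=0.01) gives plain SGD [68].
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    optimizer.zero_grad()        # clear gradients accumulated last step
    loss = loss_fn(model(X), y)  # forward pass and loss evaluation
    loss.backward()              # backpropagate gradients of the loss
    optimizer.step()             # one gradient-descent update
```

The same loop structure applies regardless of which loss function is plugged in, which is one reason surveys of loss functions are broadly applicable across architectures.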