2022
DOI: 10.48550/arxiv.2202.12150
Preprint

Tighter Expected Generalization Error Bounds via Convexity of Information Measures

Abstract: Generalization error bounds are essential to understanding machine learning algorithms. This paper presents novel expected generalization error upper bounds based on the average joint distribution between the output hypothesis and each input training sample. Multiple generalization error upper bounds based on different information measures are provided, including Wasserstein distance, total variation distance, KL divergence, and Jensen-Shannon divergence. Due to the convexity of the information measures, the p…
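As a rough illustration of the convexity argument the abstract alludes to, the sketch below averages the joint distributions of the hypothesis W and each individual training sample Z_i and applies joint convexity of the information measure D (KL divergence, total variation, and Jensen-Shannon divergence are all jointly convex). The notation and the comparison with per-sample bounds are assumptions made for illustration and may differ from the paper's exact statements.

% Averaged joint distribution over the n training samples (notation assumed):
\[
  \overline{P}_{W,Z} \;=\; \frac{1}{n}\sum_{i=1}^{n} P_{W,Z_i}.
\]
% Joint convexity of D, with the fixed reference measure P_W \otimes P_Z, gives
\[
  D\!\left( \overline{P}_{W,Z} \,\middle\|\, P_W \otimes P_Z \right)
  \;=\;
  D\!\left( \frac{1}{n}\sum_{i=1}^{n} P_{W,Z_i} \,\middle\|\, \frac{1}{n}\sum_{i=1}^{n} P_W \otimes P_Z \right)
  \;\le\;
  \frac{1}{n}\sum_{i=1}^{n} D\!\left( P_{W,Z_i} \,\middle\|\, P_W \otimes P_Z \right).
\]

So any upper bound that is nondecreasing in D and is evaluated at the averaged joint distribution is at most the average of the corresponding individual-sample bounds, which appears to be the tightening mechanism the truncated abstract refers to.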

Cited by 1 publication (1 citation statement)
References 16 publications (28 reference statements)
“…7, Cor. 8 multiplies the dominant term of the bound by a nonuniform (data and algorithm-dependent) factor of D_TV(Q, P), which is guaranteed to tighten the bound. Note that the total-variation-based bounds of Aminian et al. (2022) and Rodríguez Gálvez et al. (2021) assume a Lipschitz loss function, while our TV bound allows non-continuous loss functions such as the zero-one loss. The TV-based bound of Wang et al. (2019) (Thm.…”
mentioning (confidence: 99%)
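To make the Lipschitz versus zero-one loss distinction in the statement above concrete, here is a hedged sketch of the two standard comparison inequalities that such bounds typically rest on; the notation (a loss \ell taking values in [a, b], distributions Q and P, Lipschitz constant L) is assumed for illustration and is not taken verbatim from any of the cited papers.

% Total variation only needs a bounded loss (so it covers the discontinuous zero-one loss):
\[
  \bigl| \mathbb{E}_{Q}[\ell] - \mathbb{E}_{P}[\ell] \bigr| \;\le\; (b - a)\, D_{\mathrm{TV}}(Q, P)
  \qquad \text{for } \ell \in [a, b],
\]
% whereas the Kantorovich--Rubinstein dual of the 1-Wasserstein distance needs Lipschitz continuity:
\[
  \bigl| \mathbb{E}_{Q}[\ell] - \mathbb{E}_{P}[\ell] \bigr| \;\le\; L\, W_1(Q, P)
  \qquad \text{for } L\text{-Lipschitz } \ell.
\]

The first inequality requires only boundedness, which is presumably why a TV-based bound can handle discontinuous losses such as the zero-one loss, while bounds derived through the Wasserstein route inherit a Lipschitz assumption on the loss.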