2020
DOI: 10.1016/j.physa.2019.122527

On Jensen–Rényi and Jeffreys–Rényi type f-divergences induced by convex functions

Cited by 9 publications (7 citation statements)
References 12 publications
“…Rényi used the simplest set of postulates that characterize Shannon's entropy and introduced his own entropy and divergence measures (parameterized by its order α) that generalize the Shannon entropy and the KL divergence, respectively (Rényi, 1961). Moreover, the original Jensen-Rényi divergence (He, Hamza, & Krim, 2003) as well as the identically named divergence (Kluza, 2019) used in this letter are non-f-divergence generalizations of the Jensen-Shannon divergence. Traditionally, Rényi's entropy and divergence have had applications in a wide range of problems, including lossless data compression (Campbell, 1965; Courtade & Verdú, 2014; Rached, Alajaji, & Campbell, 1999), hypothesis testing (Csiszár, 1995; Alajaji, Chen, & Rached, 2004), error probability (Ben-Bassat & Raviv, 2006), and guessing (Arikan, 1996; Verdú, 2015).…”
Section: Prior Work (mentioning)
confidence: 99%
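To make the generalization mentioned in this statement concrete, here is a minimal numerical sketch (the function names and example distributions are illustrative choices, not taken from the cited works) of the order-α Rényi divergence approaching the Kullback–Leibler divergence as α → 1:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence of order alpha (alpha > 0, alpha != 1) between
    discrete distributions p and q, in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# As alpha -> 1, the Rényi divergence approaches the KL divergence.
for alpha in (0.5, 0.9, 0.99, 0.999):
    print(f"alpha={alpha}: {renyi_divergence(p, q, alpha):.6f}")
print(f"KL:          {kl_divergence(p, q):.6f}")
```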
“…With these Rényi measures in place, we propose a new GAN generator loss function expressed in terms of the negative sum of two Rényi cross-entropy functionals. We show that minimizing this α-parameterized loss function under an optimal discriminator results in the minimization of the Jensen-Rényi divergence (Kluza, 2019), which is a natural extension of the Jensen-Shannon divergence as it uses the Rényi divergence instead of the Kullback-Leibler (KL) divergence in its expression. We also prove that our generator loss function of order α converges to the original GAN loss function in Goodfellow et al. (2014) when α → 1.…”
Section: Contributions (mentioning)
confidence: 99%
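As a rough illustration of the Jensen–Rényi construction described above, the sketch below applies the Jensen–Shannon recipe with the Rényi divergence in place of the KL divergence; the equal-weight midpoint form is an assumption, and the exact definition in Kluza (2019) as well as the weighting used in the GAN loss may differ:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence of order alpha (alpha > 0, alpha != 1), in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def jensen_renyi_divergence(p, q, alpha):
    """Jensen-Shannon-style construction with Rényi in place of KL:
    average Rényi divergence of p and q to their midpoint mixture.
    Illustrative symmetric form only (assumed, not the authors' exact definition)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * (renyi_divergence(p, m, alpha) + renyi_divergence(q, m, alpha))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(jensen_renyi_divergence(p, q, alpha=0.7))
```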
“…If, additionally, α tends to 1 then, based on the proof of Equation (11) from [16], the Sharma–Mittal h-divergence tends to the relative entropy (also called the Kullback–Leibler divergence).…”
Section: Sharma–Mittal Type Divergences (mentioning)
confidence: 99%
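For orientation, a commonly used two-parameter form of the Sharma–Mittal relative entropy and its limiting behaviour are sketched below; the exact h-divergence of [16] may be parameterized differently, so this should be read as an assumed illustrative form:

```latex
% Assumed two-parameter Sharma–Mittal relative entropy:
D_{\alpha,\beta}(P\|Q) \;=\; \frac{1}{\beta-1}\left[\left(\sum_i p_i^{\alpha} q_i^{1-\alpha}\right)^{\frac{1-\beta}{1-\alpha}} - 1\right],
\qquad \alpha>0,\ \alpha\neq 1,\ \beta\neq 1.
% Letting \beta \to 1 recovers the Rényi divergence of order \alpha:
\lim_{\beta\to 1} D_{\alpha,\beta}(P\|Q) \;=\; \frac{1}{\alpha-1}\log \sum_i p_i^{\alpha} q_i^{1-\alpha}.
% Letting additionally \alpha \to 1 recovers the relative entropy (KL divergence):
\lim_{\alpha\to 1}\,\lim_{\beta\to 1} D_{\alpha,\beta}(P\|Q) \;=\; \sum_i p_i \log \frac{p_i}{q_i}.
```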
“…When in (8) and (9) we substitute … for …, we obtain the generalized Jensen–Rényi and Jeffreys–Rényi divergences defined in [16], respectively: …”
Section: Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal Divergences (mentioning)
confidence: 99%
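The expressions substituted into (8) and (9) are not reproduced in this excerpt. Purely as a hedged illustration, unweighted Jensen-type and Jeffreys-type constructions built from the Rényi divergence D_α might be written as follows; the generalized versions in [16] may carry additional weights and parameters:

```latex
% Illustrative (assumed) unweighted forms built from the Rényi divergence D_\alpha:
\mathrm{JR}_{\alpha}(P,Q) \;=\; \tfrac{1}{2}\, D_{\alpha}\!\left(P \,\Big\|\, \tfrac{P+Q}{2}\right)
 \;+\; \tfrac{1}{2}\, D_{\alpha}\!\left(Q \,\Big\|\, \tfrac{P+Q}{2}\right)
 \qquad \text{(Jensen--Rényi type)},
% and the symmetrized (Jeffreys-type) combination:
\mathrm{JeR}_{\alpha}(P,Q) \;=\; D_{\alpha}(P\|Q) + D_{\alpha}(Q\|P)
 \qquad \text{(Jeffreys--Rényi type)}.
```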