2020
DOI: 10.1007/978-3-030-58517-4_19
How Does Lipschitz Regularization Influence GAN Training?

Abstract: Despite the success of Lipschitz regularization in stabilizing GAN training, the exact reason for its effectiveness remains poorly understood. The direct effect of K-Lipschitz regularization is to restrict the L2-norm of the neural network gradient to be smaller than a threshold K (e.g., K = 1) such that ||∇f|| ≤ K. In this work, we uncover an even more important effect of Lipschitz regularization by examining its impact on the loss function: It degenerates GAN loss functions to almost linear ones by restricting th…
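To make the constraint concrete, the following is a minimal sketch (in PyTorch, which the paper does not prescribe; the two-layer discriminator f and the hinge-style penalty are hypothetical illustrations) of measuring the per-sample input-gradient norm ||∇f(x)||_2 and softly penalizing values above K:

```python
import torch
import torch.nn as nn

K = 1.0  # the Lipschitz threshold from the abstract (e.g., K = 1)

# Hypothetical stand-in for a GAN discriminator f: R^2 -> R.
f = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(8, 2, requires_grad=True)                 # a batch of inputs
grads = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
grad_norms = grads.norm(2, dim=1)                         # ||∇f(x)||_2 per sample

# A soft penalty pushing the gradient norms below K; it would be added
# to the discriminator loss during training.
lipschitz_penalty = torch.relu(grad_norms - K).pow(2).mean()
```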


Cited by 16 publications (9 citation statements) | References 10 publications
“…Instead, it is trained to learn a K-Lipschitz continuous function, which constrains the norm of the neural network gradient to be smaller than a threshold value K, such that ||∇f|| ≤ K. The primary rationale for applying this condition is that the gradient behaves better, making generator optimization easier (Qin et al, 2018). As the loss function decreases in training, the Wasserstein distance gets smaller, and the generator model’s output grows closer to the actual data distribution.…”
Section: Methods
confidence: 99%
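As an illustration of the objective this statement refers to, here is a minimal sketch (assumed PyTorch; the function names are hypothetical) of the Wasserstein critic and generator losses. The critic f is assumed to be kept K-Lipschitz by a separate mechanism such as weight clipping or a gradient penalty; the gap E[f(real)] − E[f(fake)] is the Wasserstein estimate that shrinks as the generator output approaches the data distribution.

```python
import torch

def critic_loss(f, real, fake):
    # The critic maximizes E[f(real)] - E[f(fake)], the (scaled) Wasserstein
    # distance estimate, so its negation is returned for a minimizer.
    return -(f(real).mean() - f(fake).mean())

def generator_loss(f, fake):
    # The generator is trained to raise the critic's score on its samples.
    return -f(fake).mean()
```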
“…Conversely, adversarial defenses either enforce smoothness around the embedding space or on potentially perturbed inputs themselves (Das et al, 2018). Similarly, GANs can enforce 1-Lipschitzness to improve coverage of sample generation (Qin et al, 2020).…”
Section: Definition (Lipschitzness)
confidence: 99%
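One standard way to enforce (approximate) 1-Lipschitzness of a GAN discriminator, offered only as an illustrative sketch rather than as the method of the cited works, is spectral normalization of each layer (assumed PyTorch; the layer sizes are arbitrary):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Each weight matrix is rescaled to spectral norm 1, so every linear layer is
# 1-Lipschitz; ReLU is 1-Lipschitz too, hence the composition stays 1-Lipschitz.
discriminator = nn.Sequential(
    spectral_norm(nn.Linear(2, 64)),
    nn.ReLU(),
    spectral_norm(nn.Linear(64, 1)),
)
```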
“…Lipschitz regularization can be used to optimize a loss function to ensure that the learned function is Lipschitz continuous (smooth). It has been used to regularize the discriminator in GANs (Gulrajani et al 2017; Miyato et al 2018; Qin, Mitra, and Wonka 2020) and to defend against attacks in adversarial learning (Hein and Andriushchenko 2017). One way to constrain a function to be Lipschitz continuous is to penalize its gradient norm (Gulrajani et al 2017).…”
Section: Related Work
confidence: 99%
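A minimal sketch of that gradient-norm penalty in the spirit of Gulrajani et al. (2017), assuming PyTorch, flat (batch, features) inputs, and an illustrative penalty weight:

```python
import torch

def gradient_penalty(f, real, fake, weight=10.0):
    # Sample points on straight lines between real and fake samples.
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    # Penalize deviations of ||∇f(x_hat)||_2 from 1 (two-sided penalty).
    grads = torch.autograd.grad(f(x_hat).sum(), x_hat, create_graph=True)[0]
    return weight * (grads.norm(2, dim=1) - 1.0).pow(2).mean()
```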