LatinX in AI at Neural Information Processing Systems Conference 2021
DOI: 10.52591/202112071

Flexible Learning of Sparse Neural Networks via Constrained L0 Regularizations

Cited by 1 publication (2 citation statements)
References: 0 publications
“…The graphical criterion of Lachapelle et al [2022], which allows complete disentanglement, can be derived as a special case of our theory. We also propose to enforce sparsity via constrained optimization instead of regularization, following Gallego-Posada et al [2021]. We finally illustrate our theory in simulations.…”
Section: Contributions (mentioning)
confidence: 99%
“…Lagrangian multiplier α, which is forced to remain greater than or equal to zero via a simple projection step. As suggested by Gallego-Posada et al [2021], we perform dual restarts, which simply means that, as soon as the constraint is satisfied, the Lagrangian multiplier is reset to 0. We used the library Cooper [Gallego-Posada and Ramirez, 2022], which implements many constrained optimization procedures in Python, including the one described above.…”
Section: Author Contributions (mentioning)
confidence: 99%
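
To make the procedure quoted above concrete, the following is a minimal PyTorch sketch of the update it describes: gradient descent on the model parameters, projected gradient ascent on a non-negative Lagrange multiplier, and a dual restart that resets the multiplier to zero once the sparsity constraint is satisfied. This is not the Cooper API and not the authors' code; `expected_l0()` and `target_density` are hypothetical placeholders for the model's expected L0 norm and the constraint level.

```python
import torch

def constrained_l0_step(model, loss_fn, batch, lmbda, primal_opt,
                        target_density=0.3, dual_lr=1e-2):
    """One primal/dual update for: min loss(model) s.t. expected_l0(model) <= target_density."""
    inputs, targets = batch

    # Constraint defect: positive while the network is still too dense.
    defect = model.expected_l0() - target_density  # expected_l0() is a hypothetical helper

    # Lagrangian: task loss plus multiplier times constraint defect.
    lagrangian = loss_fn(model(inputs), targets) + lmbda.detach() * defect

    # Primal step: gradient descent on the model parameters.
    primal_opt.zero_grad()
    lagrangian.backward()
    primal_opt.step()

    with torch.no_grad():
        # Dual step: projected gradient ascent keeps the multiplier >= 0.
        lmbda.add_(dual_lr * defect.detach()).clamp_(min=0.0)
        # Dual restart: reset the multiplier as soon as the constraint is satisfied.
        if defect.item() <= 0:
            lmbda.zero_()

    return lagrangian.item()
```

In practice the citing work relies on Cooper's constrained-optimization formulations rather than a hand-rolled loop like this; the sketch only mirrors the multiplier projection and dual-restart logic mentioned in the excerpt, with the multiplier stored as a zero-dimensional tensor (e.g. `lmbda = torch.zeros(())`).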