2022
DOI: 10.1214/22-AOS2190
On the robustness of minimum norm interpolators and regularized empirical risk minimizers

Cited by 5 publications (5 citation statements). References 39 publications.
“…Note that we used the majorizing measure theorem [42] to get the equivalence between Talagrand's γ₂-functional and the Gaussian mean width, and that for the Gaussian measure µ, the Orlicz space L_{ψ₂}(µ) is equivalent to the Hilbert space L₂(µ). One can check that diam(F, L₂(µ)) = 1 and the result follows for t = N/(64C²), κ_RIP = 1/(8√C) and the definition of R_N(Γ^{1/2}B) in (14).…”
Section: Isomorphy and Restricted Isomorphy Properties (mentioning, confidence: 88%)
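For reference, the equivalence this excerpt invokes is Talagrand's majorizing measure theorem. A standalone restatement, with generic absolute constants c, C that are assumptions of this sketch rather than the constants used in the paper:

```latex
% Majorizing measure theorem (Talagrand): for the canonical Gaussian
% process (G_f)_{f \in F} indexed by F with the L_2(\mu) metric,
% the Gaussian mean width is equivalent to the \gamma_2-functional,
% up to absolute constants 0 < c \le C < \infty:
\[
  c\,\gamma_2\bigl(F, L_2(\mu)\bigr)
  \;\le\; \mathbb{E}\,\sup_{f \in F} G_f
  \;\le\; C\,\gamma_2\bigl(F, L_2(\mu)\bigr).
\]
```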
“…Theorem 6. There are absolute constants 0 < κ_RIP < 1 and c₀ such that for the fixed point R_N(Γ^{1/2}B) defined in (14) and X₁ = G_{(N×p)} Γ^{1/2}, the following holds. With probability at least…”
Section: Isomorphy and Restricted Isomorphy Properties (mentioning, confidence: 99%)
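The theorem asserts an isomorphy (RIP-type) property for a Gaussian design. Below is a purely illustrative numerical sketch of that behaviour, assuming Γ = I and probing only random sparse directions (not a worst-case certificate); all sizes and constants here are hypothetical, not those of the paper:

```python
import numpy as np

# Sketch: draw X = G / sqrt(N) with G having i.i.d. N(0,1) entries
# (i.e., Gamma = I) and check that ||X v||_2 stays close to ||v||_2
# over random s-sparse directions v.
rng = np.random.default_rng(1)
N, p, s, trials = 200, 1000, 10, 2000

X = rng.standard_normal((N, p)) / np.sqrt(N)
ratios = []
for _ in range(trials):
    v = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)  # random sparse support
    v[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(X @ v) / np.linalg.norm(v))

print(f"min ratio: {min(ratios):.3f}, max ratio: {max(ratios):.3f}")
# For N well above s*log(p/s), the ratios concentrate around 1 -- the
# isomorphy behaviour the theorem quantifies via the fixed point R_N.
```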
“…Because of its simplicity and well-developed theory in classical machine learning [58,15,16], sparse modeling is often used to provide theoretical understanding of modern large and over-parameterized models. This includes work on implicit regularization [59,60,61,62,63], nonconvex optimization [64,65], noise interpolators [66,67,68], etc. However, the aforementioned work uses sparsity as a testbed or toy model to gain insights, without implying the existence of sparsity in DNNs.…”
Section: Sparsity For Robustness (mentioning, confidence: 99%)