49th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2008)
DOI: 10.1109/focs.2008.64
Learning Geometric Concepts via Gaussian Surface Area

Abstract: We study the learnability of sets in ℝ^n under the Gaussian distribution, taking Gaussian surface area as the "complexity measure" of the sets being learned. Let C_S denote the class of all (measurable) sets with surface area at most S. We first show that the class C_S is learnable to any constant accuracy in time n^{O(S^2)}, even in the arbitrary-noise ("agnostic") model. Complementing this, we also show that any learning algorithm for C_S information-theoretically requires 2^{Ω(S^2)} examples for learning to …
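For context, the complexity measure in the abstract can be made concrete. The following definition is standard in this literature (it is not quoted from the abstract itself):

```latex
\Gamma(A) \;=\; \liminf_{\delta \to 0^{+}}
  \frac{\gamma_n(A_\delta \setminus A)}{\delta},
\qquad
A_\delta := \{\, x \in \mathbb{R}^n : \operatorname{dist}(x, A) \le \delta \,\},
```

where \(\gamma_n\) is the standard Gaussian measure on \(\mathbb{R}^n\); when \(\partial A\) is smooth this agrees with the boundary integral \(\int_{\partial A} \varphi_n(x)\, d\sigma(x)\) of the Gaussian density \(\varphi_n(x) = (2\pi)^{-n/2} e^{-\|x\|^2/2}\). For instance, a halfspace has Gaussian surface area at most \((2\pi)^{-1/2}\), a constant, so by the bound above halfspaces are agnostically learnable in time polynomial in n.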

Cited by 81 publications (125 citation statements, published 2009–2023); references 34 publications.
“…. , p, the assertion follows from Theorem 20 in [25], whose proof the authors credit to Nazarov [30].…”
Section: Lemma 51 (Key Lemma) Suppose That There Exists Some Constant
confidence: 89%
“…Our anti-concentration bounds do not require such positivity (or other) assumptions on covariances and hence are not implied by the results of [20]. Another method for deriving reverse isoperimetric inequalities is to use geometric results of [19], as shown by [13], which leads to dimension-dependent anti-concentration inequalities, which are essentially different from ours. Moreover, our density-bounding proof technique is substantially different from that of [20] based on Malliavin calculus or [19] based on geometric arguments.…”
Section: Introduction
confidence: 99%
“…Let M := max_{1≤j≤p} W_j. The absolute continuity of the distribution of M is deduced from the fact that P(M ∈ A) ≤ Σ_{j=1}^{p} P(W_j ∈ A) for every Borel measurable subset A of ℝ. Hence, to show that a version of the density of M is given by (13), it is enough to show that lim_{ε↓0} ε^{−1} P(x < M ≤ x + ε) equals the right side of (13) for a.e. x ∈ ℝ.…”
Section: Proof Of Theorem
confidence: 99%
“…The complexity of the algorithm is bounded by a fixed polynomial in n times a function of k and ε, where k is the dimension of the normal subspace (the span of normal vectors to supporting hyperplanes of the convex set), and the output is a hypothesis that correctly classifies at least a 1 − ε fraction of the unknown Gaussian distribution. For the important case when the convex set is the intersection of k halfspaces, the complexity is poly(n, …), improving substantially on the state of the art [Vem04], [KOS08] for Gaussian distributions. The key step of the algorithm is a Singular Value Decomposition after applying a normalization.…”
confidence: 99%
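The "SVD after normalization" idea quoted above can be illustrated with a small sketch. This is not the cited paper's actual algorithm, only a toy demonstration of why an SVD can recover the normal subspace of an intersection of halfspaces from Gaussian examples; all names and parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 10, 2, 200_000

# Toy convex set: intersection of k halfspaces {x : w_i . x <= 1}.
# Its normal subspace is span(w_1, ..., w_k) = span(e_1, e_2).
W = np.zeros((k, n))
W[0, 0] = 1.0
W[1, 1] = 1.0

X = rng.standard_normal((m, n))          # examples drawn from N(0, I_n)
pos = np.all(X @ W.T <= 1.0, axis=1)     # label: is the point inside the set?

# Second-moment matrix of the positive examples. Conditioning on the
# convex set shrinks second moments along normal directions (truncation
# reduces variance there), so the smallest singular directions of M
# approximate the normal subspace.
M = (X[pos].T @ X[pos]) / pos.sum()
U, s, _ = np.linalg.svd(M)
est = U[:, -k:]                          # bottom-k singular directions

# Alignment with the true normal subspace: singular values near 1 mean
# the two k-dimensional subspaces nearly coincide.
alignment = np.linalg.svd(est.T @ W.T, compute_uv=False)
```

The design point is that directions orthogonal to every normal vector are unaffected by the conditioning (their second moment stays 1), while normal directions have strictly smaller second moment, giving a spectral gap the SVD can exploit.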