2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs46700.2020.00022
List Decodable Mean Estimation in Nearly Linear Time

Cited by 8 publications (28 citation statements)
References 21 publications
“…The concurrent work [CMY20] gives an SDP-based algorithm whose runtime is Õ(nd)/poly(α), i.e., near-optimal as a function of the dimension d, but suboptimal (by a polynomial factor) as a function of 1/α. We note that the dependence on 1/α is equally significant in some of the key applications of list-decodable learning (e.g., in learning mixture models).…”
Section: Discussion (mentioning)
confidence: 99%
“…Concurrent and Independent Work. Contemporaneous work [CMY20], using different techniques, gave an algorithm for the same problem with asymptotic running time Õ(nd/α^c), for some (unspecified) constant c. At a high level, the algorithm of [CMY20] builds on the convex optimization frameworks of [DKK+16, CDG18], leveraging faster algorithms for solving structured SDPs.…”
Section: Our Contributions (mentioning)
confidence: 99%
“…For systematic discussions of the basic problems in robust statistics, we refer to Huber (2004) and Hampel et al (2011). Recently, there has been a resurgence of interest in developing polynomial-time robust estimators that achieve near-optimal recovery guarantees for many parametric estimation problems, including mean estimation, linear regression, and generalized linear models, where an analyst is given access to samples, a fraction of which may have been adversarially corrupted; see, for example, Cherapanamjeri et al (2020), Jambulapati et al (2021) and the references therein.…”
Section: Introduction (mentioning)
confidence: 99%
“…While this runtime dramatically improves over the runtime in [CSV17], the quadratic dependence on n is not ideal in very high-dimensional problem settings. Concurrently to [DKK20], the work [CMY20] proposes a different, descent-based algorithm based on (approximate) positive semidefinite programming, achieving optimal error (up to constant factors) in time O(nd·α^{-C}) for some constant C ≥ 6. When α = Θ(1), the [CMY20] runtime is nearly linear in the problem input size.…”
Section: Introduction (mentioning)
confidence: 99%
“…This runtime matches, up to logarithmic factors, the cost of performing a single k-PCA on the data, which is a natural bottleneck of known algorithms for (very) special cases of our problem, such as clustering well-separated mixtures. Prior to our work, the fastest list-decodable mean estimation algorithms had runtimes O(n^2 d k^2) [DKK20] and O(ndk^C) [CMY20] for an unspecified constant C ≥ 6.…”
mentioning
confidence: 99%
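The runtime comparisons quoted in these citation statements can be made concrete with a small sketch. The scalings below are schematic operation counts transcribed from the quoted bounds, with the list-size parameter k playing the role of 1/α; the function names, the sample values of n, d, and k, and the choice C = 6 (the bounds only guarantee some C ≥ 6) are illustrative assumptions, not part of any cited implementation.

```python
# Schematic operation counts for the list-decodable mean estimation runtimes
# quoted above; k plays the role of 1/alpha. Purely illustrative, not real code
# from any of the cited papers.

def runtime_dkk20(n: int, d: int, k: int) -> int:
    # [DKK20]: O(n^2 d k^2), quadratic in the number of samples n.
    return n**2 * d * k**2

def runtime_cmy20(n: int, d: int, k: int, C: int = 6) -> int:
    # [CMY20]: O(n d k^C) for an unspecified constant C >= 6 (C = 6 assumed here).
    return n * d * k**C

def runtime_kpca(n: int, d: int, k: int) -> int:
    # Cost of a single k-PCA pass over the data, up to log factors: O(n d k).
    # This is the natural bottleneck the last quotation refers to.
    return n * d * k

# For a large, high-dimensional instance, both earlier algorithms sit far above
# the k-PCA baseline, and the n^2 dependence of [DKK20] dominates for large n.
n, d, k = 10**6, 10**3, 4
print(runtime_kpca(n, d, k) < runtime_cmy20(n, d, k) < runtime_dkk20(n, d, k))  # prints True
```

For constant α (i.e., constant k) the [CMY20] count collapses to within a constant factor of the k-PCA baseline, which is exactly the "nearly linear in the input size" regime the quotations describe.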