2016
DOI: 10.48550/arxiv.1604.06968
Preprint

Agnostic Estimation of Mean and Covariance

Cited by 2 publications (4 citation statements)
References 0 publications
“…It is worth noting that, in contrast to prior work, the sample complexity has a logarithmic dependence on the ambient dimension d, allowing for high-dimensional scalings where d ≫ n, provided that the sparsity s² ≪ n. As in the work of Diakonikolas et al. [2016a], we obtain near-optimal contamination dependence, scaling, up to a logarithmic factor, as roughly ε². Importantly, as emphasized in prior work [Diakonikolas et al., 2016a, Lai et al., 2016] and in stark contrast to other tractable robust estimators, the contamination dependence achieved by our algorithm is completely independent of the dimension of the problem.…”
Section: Re-visiting Illustrative Examples
confidence: 88%
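The dimension-independent contamination dependence the quote refers to is typically obtained by inspecting the empirical covariance and pruning along its top eigenvector. The snippet below is a minimal sketch of that spectral-filtering idea on synthetic ε-contaminated Gaussian data; the data-generating setup, the thresholds, and the `spectral_filter` helper are illustrative assumptions, not the algorithm of either cited paper.

```python
# Minimal sketch of the spectral-filtering idea behind dimension-independent
# contamination dependence. All parameters (eps, the outlier location, the
# MAD cutoff) are illustrative assumptions, not values from the cited works.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 100, 5000, 0.1

# eps-contaminated sample: inliers from N(0, I_d), outliers clustered at (3, ..., 3).
inliers = rng.standard_normal((int((1 - eps) * n), d))
outliers = np.full((int(eps * n), d), 3.0)
X = np.vstack([inliers, outliers])

def spectral_filter(X, rounds=5, cutoff=2.5):
    """Prune along the top eigenvector of the empirical covariance until it
    no longer shows excess variance, then return the mean of what is left."""
    for _ in range(rounds):
        centered = X - X.mean(axis=0)
        cov = centered.T @ centered / len(X)
        vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
        if vals[-1] < 1.5:                     # covariance already looks like I: stop
            break
        proj = centered @ vecs[:, -1]          # scores along the worst direction
        med = np.median(proj)
        mad = 1.4826 * np.median(np.abs(proj - med))
        X = X[np.abs(proj - med) <= cutoff * mad]
    return X.mean(axis=0)

print("naive mean error   :", np.linalg.norm(X.mean(axis=0)))    # ~ 3 * eps * sqrt(d)
print("filtered mean error:", np.linalg.norm(spectral_filter(X)))
```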
“…, z_m} to denote the pruned sample. We generalize a similar pruning step from prior works [Diakonikolas et al., 2016a, Lai et al., 2016] to deal with the generalized linear model settings. This in turn ensures that the subsequent use of the ellipsoid algorithm terminates in polynomial time.…”
Section: Unified Algorithm and Technical Insights
confidence: 99%
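As a concrete, deliberately simplified picture of the kind of pruning the quote describes, the sketch below drops points that are far from a coarse robust center, so that the retained sample has bounded diameter before any convex-programming (e.g., ellipsoid) step. The `naive_prune` name and the c·√(d log n) threshold are assumptions for illustration; the cited works use their own, more careful criteria.

```python
# Illustrative coarse pruning step (assumed form; the exact rule in the cited
# works differs). Goal: bound the diameter of the retained sample so that a
# subsequent convex-programming step runs in polynomial time.
import numpy as np

def naive_prune(Z, c=10.0):
    n, d = Z.shape
    center = np.median(Z, axis=0)            # coordinate-wise median as coarse center
    radius = c * np.sqrt(d * np.log(n))      # generous radius that keeps inliers w.h.p.
    dist = np.linalg.norm(Z - center, axis=1)
    return Z[dist <= radius]                 # the pruned sample

# Example: Gaussian inliers plus a few wild outliers.
rng = np.random.default_rng(0)
Z = np.vstack([rng.standard_normal((1000, 50)),
               1e6 * rng.standard_normal((10, 50))])
print(Z.shape, "->", naive_prune(Z).shape)   # the wild points are removed
```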
“…Although these mean estimates are robust, the estimation error scales as ∼ √d (d being the dimension), which is prohibitive in high dimensions. There is a recent line of work on robust mean estimation that adapts nicely to high dimensions [37, 38]. In these results, the mean estimation error is either dimension-independent or only very weakly dependent on dimension.…”
Section: Stage II - Cluster the ERMs
confidence: 99%
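To make the ∼√d scaling in the quote concrete, here is a small simulation (an assumed setup, not an experiment from [37, 38]): an ε-fraction of outliers shifts each coordinate of a per-coordinate robust estimator such as the coordinate-wise median by O(ε), so the ℓ₂ error accumulates to roughly ε·√d as the dimension grows.

```python
# Quick numerical illustration of the ~sqrt(d) error of a per-coordinate
# robust estimator (coordinate-wise median) under eps-contamination.
# The setup is an assumption for illustration, not taken from [37, 38].
import numpy as np

rng = np.random.default_rng(1)
eps, n = 0.1, 5000

for d in (10, 100, 1000):
    inliers = rng.standard_normal((int((1 - eps) * n), d))   # true mean is 0
    outliers = np.full((int(eps * n), d), 10.0)              # adversarial cluster
    X = np.vstack([inliers, outliers])
    err = np.linalg.norm(np.median(X, axis=0))
    print(f"d={d:5d}  l2 error = {err:6.2f}   error / sqrt(d) = {err / np.sqrt(d):.3f}")
```

The per-dimension ratio stays roughly constant, i.e., the absolute error grows like √d, which is exactly the regime the quoted passage contrasts with the dimension-independent guarantees of the more recent estimators.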