Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2020)
DOI: 10.1137/1.9781611975994.10

List Decodable Learning via Sum of Squares

Abstract: In the list-decodable learning setup, an overwhelming majority (say a (1 − β)-fraction) of the input data consists of outliers, and the goal of an algorithm is to output a small list L of hypotheses such that one of them agrees with the inliers. We develop a framework for list-decodable learning via the Sum-of-Squares SDP hierarchy and demonstrate it on two basic statistical estimation problems. In linear regression, a β-fraction of the examples (x_i, y_i) are inliers whose labels y_i are well-approximated by a linear function ℓ. We devise an algorithm that outputs a list L …
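As a concrete illustration of the setup in the abstract, the following sketch (our own; make_instance and agrees_with_inliers are hypothetical names, not from the paper) generates a regression instance in which only a β-fraction of the labels follow a hidden linear function ℓ, and then checks whether some hypothesis in a candidate list agrees with the inliers:

import numpy as np

def make_instance(n=1000, d=10, beta=0.2, noise=0.01, seed=0):
    """Sample a list-decodable regression instance: a beta-fraction of
    inliers, a (1 - beta)-fraction of arbitrary outliers."""
    rng = np.random.default_rng(seed)
    ell = rng.normal(size=d)                    # hidden linear function
    x = rng.normal(size=(n, d))
    y = 10.0 * rng.normal(size=n)               # arbitrary outlier labels
    inliers = rng.random(n) < beta              # beta-fraction of inliers
    y[inliers] = x[inliers] @ ell + noise * rng.normal(size=int(inliers.sum()))
    return x, y, inliers, ell

def agrees_with_inliers(h, x, y, inliers, tol=0.1):
    """The list-decoding goal: some hypothesis h in the list fits the inliers."""
    residual = y[inliers] - x[inliers] @ h
    return float(np.sqrt(np.mean(residual ** 2))) <= tol

x, y, inliers, ell = make_instance()
candidate_list = [np.zeros(10), ell]            # a small list L of hypotheses
print(any(agrees_with_inliers(h, x, y, inliers) for h in candidate_list))  # True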

Cited by 41 publications (57 citation statements). References 10 publications.
“…A related problem to (indeed, a generalization of) the problem of learning MLRs is that of list-decodable regression [8,26,39]. Here, we assume that we are given a set of data points (x_1, y_1), …”
Section: List-decodable Regression
confidence: 99%
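To make the generalization concrete: data drawn from a mixture of k linear regressions (MLRs) is, from the viewpoint of any single component, a list-decodable regression instance with inlier fraction β = 1/k, so a list-decodable regression algorithm must place a hypothesis near every regressor in its output list. The sketch below is our own illustration under that reading, not code from [8,26,39]:

import numpy as np

rng = np.random.default_rng(2)
k, n, d = 3, 300, 5
regressors = rng.normal(size=(k, d))         # the k hidden linear functions
z = rng.integers(k, size=n)                  # latent component of each point
x = rng.normal(size=(n, d))
y = np.einsum("nd,nd->n", x, regressors[z])  # y_i = <ell_{z_i}, x_i>
# Component j labels roughly a 1/k fraction of the points, i.e. its points
# are the "inliers" of a list-decodable instance with beta = 1/k.
print((z == 0).mean())                       # close to 1/k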
“…Unfortunately, all known techniques, including the state of the art [26,39], are either too weak to apply in our setting or use the Sum-of-Squares SDP hierarchy and again interact with the data by estimating high-degree moments of the distribution. As a result, these latter algorithms still suffer from runtimes that are exponential in k.…”
Section: List-decodable Regression
confidence: 99%
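To see why moment-based approaches pay this price, note that the empirical degree-k moment tensor of points in R^d already has d^k entries, so merely writing it down costs time exponential in k. The sketch below is our own illustration of this counting argument, not code from [26,39]:

import numpy as np
from functools import reduce

def empirical_moment_tensor(x, k):
    """Average of the k-fold tensor powers of the sample points x_i."""
    n, d = x.shape
    total = np.zeros((d,) * k)
    for xi in x:
        total += reduce(np.multiply.outer, [xi] * k)
    return total / n

x = np.random.default_rng(1).normal(size=(200, 4))
for k in (2, 3, 4):
    print(k, empirical_moment_tensor(x, k).size)  # 16, 64, 256 entries: d**k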
“…For instance, it is used for amplifying hardness and for constructing extractors, pseudorandom generators, and other pseudorandom objects [20]. The idea of relaxing a problem by asking the solver to output a list (ideally as small as possible) of solutions guaranteed to contain the correct one, instead of insisting on a unique answer, is also adopted in many other areas of computer science [19,35,28]. In the context of high-dimensional geometry over finite fields, list decoding is equivalent to multiple packing, just as error-correcting codes are equivalent to sphere packing.…”
Section: List Decoding and the List-Decoding Plotkin Bound
confidence: 99%
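The relaxation described in this quote is easiest to see in the classical coding-theory setting: instead of decoding to a unique nearest codeword, a list decoder returns every codeword within Hamming distance r of the received word. The following is our own minimal sketch, not code from [19,35,28]:

def hamming(u, v):
    """Number of positions where two equal-length strings differ."""
    return sum(a != b for a, b in zip(u, v))

def list_decode(code, received, r):
    """Return all codewords within radius r: a list, not a unique answer."""
    return [c for c in code if hamming(c, received) <= r]

code = ["0000", "0111", "1011", "1101"]
print(list_decode(code, "0011", r=1))  # ['0111', '1011']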
“…We remark that the majority of this work focuses on estimation in the ℓ_2-norm or Frobenius norm, with two notable exceptions: [8] uses learning in a sparsity-inducing norm to improve the sample complexity for sparse mean estimation, and [72] gives an information-theoretic characterization of when mean estimation in general norms is possible, but they do not give efficient algorithms. Our techniques are most closely related to the sum-of-squares algorithms of [45,57], and this general technique has also found application in other robust learning problems such as robust regression [54] and list-decodable regression [52,68].…”
Section: Related Work
confidence: 99%