2022
DOI: 10.48550/arxiv.2204.02550
Preprint

Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

Abstract: We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem. Previously, this was known only under quantum worst-case hardness of lattice problems. More broadly, wi…
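For readers less familiar with the two problems in the abstract, here is a minimal sketch of how samples from discrete LWE and from CLWE are generated. All parameter values (n, m, q, sigma, gamma, beta) are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 16, 8, 257, 2.0   # illustrative parameters, not from the paper

# Discrete LWE: a is uniform in Z_q^n, b = <a, s> + e mod q with small noise e.
s = rng.integers(q, size=n)
a = rng.integers(q, size=(m, n))
e = np.rint(sigma * rng.standard_normal(m)).astype(int)
b = (a @ s + e) % q

# CLWE (continuous analog, schematically): for a hidden unit direction w and
# density parameter gamma, a sample is (y, z) with y ~ N(0, I_n) and
# z = gamma * <y, w> + noise mod 1.
gamma, beta = 2.0, 0.01            # also illustrative
w = rng.standard_normal(n)
w /= np.linalg.norm(w)
y = rng.standard_normal((m, n))
z = (gamma * (y @ w) + beta * rng.standard_normal(m)) % 1.0
```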

Cited by 3 publications (18 citation statements). References 21 publications.
“…Finally, we remark that the family of hard distributions we construct can be thought of as a close cousin of the "parallel pancakes" construction of [DKS17]. This and slight modifications thereof are mixtures of Gaussians which are known to be computationally hard to learn both in the SQ model [DKS17, BLPR19] and under cryptographic assumptions [BRST21, GVV22].…”
Section: Related Work
confidence: 99%
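For intuition about the "parallel pancakes" family mentioned above, here is a hedged sketch of sampling from such a mixture: components share one covariance that is thin along a hidden unit direction v and isotropic orthogonal to it, with means evenly spaced along v. The spacing, widths, and number of components below are illustrative assumptions, not the exact construction of [DKS17] or [BRST21].

```python
import numpy as np

def sample_parallel_pancakes(n_samples, dim, rng,
                             spacing=1.0, sigma_thin=0.01, n_components=5):
    """Sample from an illustrative "parallel pancakes" Gaussian mixture:
    isotropic N(0, 1) in every direction orthogonal to a hidden unit
    direction v, but a mixture of narrow Gaussians (width sigma_thin,
    means `spacing` apart) along v. Parameters are illustrative only."""
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)                            # hidden unit direction
    offsets = spacing * (np.arange(n_components) - (n_components - 1) / 2)
    ks = rng.integers(n_components, size=n_samples)   # uniform mixture weights
    x = rng.standard_normal((n_samples, dim))         # isotropic N(0, I) part
    # Replace each point's component along v with a thin Gaussian around the
    # chosen pancake's offset; the orthogonal directions stay standard normal.
    along_v = x @ v
    targets = offsets[ks] + sigma_thin * rng.standard_normal(n_samples)
    x += np.outer(targets - along_v, v)
    return x, v

rng = np.random.default_rng(0)
samples, v = sample_parallel_pancakes(1000, dim=32, rng=rng)
```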
“…In particular, we consider the problem where the x samples are taken uniformly from R^n/Z^n, and y is either taken to be an independent random element of R/Z or is taken to be ⟨x, s⟩ mod 1 plus a small amount of (continuous) Gaussian noise, where s is some unknown vector in {±1}^n. The reduction between these problems follows from existing techniques [Mic18a, GVV22].…”
Section: Brief Technical Overview
confidence: 99%
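As a concrete illustration of the two distributions this citation describes, the following sketch generates samples from either case. The noise width beta and the dimensions are illustrative assumptions; only the structure (uniform x on the torus, secret s in {±1}^n, y either planted or uniform) comes from the quoted text.

```python
import numpy as np

def clwe_torus_samples(m, n, beta, rng, planted=True):
    """Generate m samples of the torus variant described above.
    x is uniform on R^n/Z^n (represented as [0, 1)^n). In the planted
    case, y = <x, s> mod 1 plus N(0, beta^2) noise for a hidden secret
    s in {+1, -1}^n; otherwise y is an independent uniform element of
    R/Z. The noise width `beta` is an illustrative parameter."""
    x = rng.random((m, n))                       # uniform on the torus [0,1)^n
    if planted:
        s = rng.choice([-1.0, 1.0], size=n)      # hidden secret in {±1}^n
        y = (x @ s + beta * rng.standard_normal(m)) % 1.0
    else:
        s = None
        y = rng.random(m)                        # independent uniform on R/Z
    return x, y, s

rng = np.random.default_rng(0)
x, y, s = clwe_torus_samples(1000, n=32, beta=0.01, rng=rng)
```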
“…Specifically, [BRST21] defined a continuous version of LWE (whose hardness they established) and reduced it to the problem of learning GMMs. More recently, [GVV22] obtained a direct reduction from LWE to a (different) continuous version of LWE, and leveraged this connection to obtain quantitatively stronger hardness for learning GMMs. It is worth noting that for the purposes of our reduction, we require as a starting point a continuous version of LWE that differs from the one defined in [BRST21].…”
Section: Prior and Related Work
confidence: 99%