1993
DOI: 10.1145/174130.174138
Constant depth circuits, Fourier transform, and learnability

Cited by 438 publications (289 citation statements)
References 7 publications
“…Lower bounds for pseudorandom function generators were studied in [14,16,13]. The only papers we are aware of that give impossibility results or lower bounds for "plain" pseudorandom generators as defined above are the papers by Kharitonov et al. [12] and Yu and Yung [19].…”
Section: Definitions and Background
confidence: 99%
See 1 more Smart Citation
“…Lower bounds for pseudorandom function generators were studied in [14,16,13]. The only papers we are aware of that give impossibility results or lower bounds for "plain" pseudorandom generators as defined above, are the papers by Kharitonov et al [12] and Yu and Yung [19].…”
Section: Definitions and Backgroundmentioning
confidence: 99%
“…In particular, they showed how to construct a generator with a non-trivial stretch function (expanding n bits to n + Θ(log n) bits) in the complexity class AC^0. This suggests that even rudimentary computational resources are sufficient for producing pseudorandomness (it is worth noting that Linial, Mansour and Nisan [14] proved that there are no pseudorandom function generators in AC^0 with very good security parameters).…”
Section: Introduction
confidence: 99%
“…which implies the following lemma from [LMN93]. Roughly, the lemma says that in order to sample the random variable S ∩ A, one can first pick a random sample of ω A c , and then take a sample from the spectral sample of the function we get by plugging in these values for the bits in A c .…”
Section: The Spectral Sample in General
confidence: 99%
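The spectral sample referred to in the excerpt above is the standard distribution on subsets S in which S is drawn with probability \hat{f}(S)^2 (by Parseval, these squares sum to 1 for a Boolean-valued f). A minimal Python sketch of that object — function names are illustrative, not from [LMN93] — computes the Fourier coefficients of a small Boolean function by brute force and draws from the spectral sample:

```python
import itertools
import random

def fourier_coefficients(f, n):
    """All Fourier coefficients of f: {-1,1}^n -> {-1,1}, indexed by subsets S.
    hat_f(S) = E_x[f(x) * chi_S(x)], where chi_S(x) = prod_{i in S} x_i."""
    points = list(itertools.product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            total = 0
            for x in points:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(points)
    return coeffs

def spectral_sample(coeffs, rng=random):
    """Draw a subset S with probability hat_f(S)^2; by Parseval these
    squared coefficients sum to 1 when f is {-1,1}-valued."""
    subsets = list(coeffs)
    weights = [coeffs[S] ** 2 for S in subsets]
    return rng.choices(subsets, weights=weights, k=1)[0]

# Majority on 3 bits: all Fourier weight sits on the odd-size sets,
# with hat_f({i}) = 1/2 and hat_f({0,1,2}) = -1/2.
maj3 = lambda x: 1 if sum(x) > 0 else -1
coeffs = fourier_coefficients(maj3, 3)
```

The restriction step in the lemma (fixing the bits in A^c, then sampling from the spectral sample of the restricted function) would reuse `spectral_sample` on the coefficients of the restricted function.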
“…The goal of the learner is to construct a high-accuracy hypothesis function h, i.e., one which satisfies Pr[f(x) ≠ h(x)] ≤ ε, where the probability is with respect to the uniform distribution and ε is an error parameter given to the learning algorithm. Algorithms and hardness results in this framework have interesting connections with topics such as discrete Fourier analysis [Man94], circuit complexity [LMN93], noise sensitivity and influence of variables in Boolean functions [KKL88,BKS99,KOS04,OS07], coding theory [FGKP06], privacy [BLR08,KLN + 08], and cryptography [BFKL93,Kha95]. For these reasons, and because the model is natural and elegant in its own right, the uniform distribution learning model has been intensively studied for almost two decades.…”
Section: Background and Motivation
confidence: 99%
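The uniform-distribution framework in the excerpt above can be made concrete with a sketch of the low-degree (Fourier-based) learning approach associated with [LMN93]: estimate every Fourier coefficient of degree at most d from uniform random examples, then use the sign of the resulting low-degree polynomial as the hypothesis. Names such as `low_degree_learn` and the sample count are illustrative assumptions, not from the cited work:

```python
import itertools
import random

def _chi(S, x):
    """chi_S(x) = prod_{i in S} x_i."""
    chi = 1
    for i in S:
        chi *= x[i]
    return chi

def low_degree_learn(f, n, d, samples=20000, seed=0):
    """Sketch of the low-degree algorithm: empirically estimate each Fourier
    coefficient hat_f(S) with |S| <= d from uniform random examples, then
    return sign of the low-degree approximation as the hypothesis h."""
    rng = random.Random(seed)
    low_sets = [S for k in range(d + 1)
                for S in itertools.combinations(range(n), k)]
    data = [tuple(rng.choice((-1, 1)) for _ in range(n))
            for _ in range(samples)]
    est = {S: sum(f(x) * _chi(S, x) for x in data) / samples
           for S in low_sets}  # empirical E[f(x) * chi_S(x)]

    def h(x):
        val = sum(c * _chi(S, x) for S, c in est.items())
        return 1 if val >= 0 else -1
    return h

# Majority on 3 bits is sign-represented by its degree-1 Fourier part,
# so the learned hypothesis should agree with it on every input.
maj3 = lambda x: 1 if sum(x) > 0 else -1
h = low_degree_learn(maj3, 3, d=1)
errors = sum(h(x) != maj3(x)
             for x in itertools.product([-1, 1], repeat=3))
```

For constant-depth circuits, [LMN93] shows the Fourier spectrum concentrates on low-degree sets, which is what makes this style of algorithm achieve small error Pr[f(x) ≠ h(x)] under the uniform distribution.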