Proceedings of the Ninth Annual Conference on Computational Learning Theory (COLT '96), 1996
DOI: 10.1145/238061.238098

On restricted-focus-of-attention learnability of Boolean functions

Cited by 15 publications (23 citation statements)
References 22 publications

“…, x_n) used for learning (see Ben-David & Dichterman 1994 for details). As observed by Birkendorf et al. (1998) and Goldberg (2001), the class of linear threshold functions over {−1,1}^n is uniform-distribution information-theoretically learnable from poly(n) many examples in this framework if and only if any linear threshold function is information-theoretically specified to high accuracy from Chow parameter estimates which are accurate to an additive ±1/poly(n). With this motivation Birkendorf et al. gave the following result:…”
Section: Approximating an LTF from Noisy Versions of Its Low-Degree Fourier Coefficients
Mentioning confidence: 92%
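For readers unfamiliar with Chow parameters, the quantities in question are just the expectations E[f(x)] and E[f(x)·x_i] under the uniform distribution on {−1,1}^n. The following minimal sketch (not from any of the cited papers; the function `majority3` and the sample size are illustrative assumptions) estimates them empirically:

```python
import random

def chow_parameters(f, n, num_samples=100_000):
    """Empirically estimate the Chow parameters of f: {-1,1}^n -> {-1,1}
    under the uniform distribution: chow[0] = E[f(x)] and, for i >= 1,
    chow[i] = E[f(x) * x_{i-1}]."""
    sums = [0.0] * (n + 1)
    for _ in range(num_samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        y = f(x)
        sums[0] += y
        for i in range(n):
            sums[i + 1] += y * x[i]
    return [s / num_samples for s in sums]

# Illustrative LTF (an assumption, not from the paper): majority of the
# first three coordinates.
def majority3(x):
    return 1 if x[0] + x[1] + x[2] > 0 else -1

if __name__ == "__main__":
    print([round(c, 2) for c in chow_parameters(majority3, n=5)])
```

By a standard Hoeffding bound, m uniform examples pin each estimate down to additive error about 1/√m with high probability, which is how poly(n) examples yield the ±1/poly(n)-accurate Chow estimates the quote mentions.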
“…Theorem 6.1 gives a strong bound on the precision required in the Chow parameters if f has low weight, but a weak bound for arbitrary LTFs, since W may need to be 2^{Ω(n log n)}. Subsequently Goldberg (2001) gave an incomparable result, which can be rephrased as follows: In contrast, our bound in Theorem 1.2 has a worse dependence on ε but has a 1/n rather than 1/quasipoly(n) dependence on n. Theorem 1.2 yields an affirmative answer (at least for constant ε) to the open question of whether arbitrary linear threshold functions can be learned in the uniform-distribution 1-RFA model with polynomial sample complexity: Thus far we have followed the proof from Birkendorf et al. (1998) (which is itself closely based on Bruck 1990), and indeed it is not difficult to complete the proof of Theorem 6.1 from here. Instead we will use our ideas from Section 4.…”
Section: Approximating an LTF from Noisy Versions of Its Low-Degree Fourier Coefficients
Mentioning confidence: 93%
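The connection to the 1-RFA ("restricted focus of attention") model of Ben-David & Dichterman is that each labeled example reveals only a single attribute of the learner's choosing, yet that is already enough to estimate Chow parameters. A minimal sketch, with a hypothetical oracle and an illustrative LTF that are not taken from any of the papers:

```python
import random

def rfa_example(f, n, i):
    """Hypothetical 1-RFA oracle: x is drawn uniformly from {-1,1}^n,
    but only the focused attribute x_i is revealed with the label f(x)."""
    x = [random.choice((-1, 1)) for _ in range(n)]
    return x[i], f(x)

def estimate_chow_i(f, n, i, num_samples=100_000):
    """Estimate the i-th Chow parameter E[f(x) * x_i] using only 1-RFA
    examples focused on attribute i; this is why accurate Chow estimates
    are exactly what a uniform-distribution 1-RFA learner can gather."""
    total = 0.0
    for _ in range(num_samples):
        xi, y = rfa_example(f, n, i)
        total += y * xi
    return total / num_samples

# Illustrative LTF: sign(2*x_0 + x_1 + x_2), with sign(0) taken as 1.
def ltf(x):
    return 1 if 2 * x[0] + x[1] + x[2] >= 0 else -1

if __name__ == "__main__":
    print([round(estimate_chow_i(ltf, 3, i), 2) for i in range(3)])
```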
“…In the field of machine learning, there are also many algorithms introduced for learning Boolean functions [22][23][24][25][26][27][28][29][30]. If these algorithms are modified to learn BNs of bounded indegree k, i.e., n Boolean functions each with k inputs, their complexities are at least O(N · n^{k+1}).…”
Section: Introduction
Mentioning confidence: 99%
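As a concrete illustration of where the O(N · n^{k+1}) cost comes from: for each of the n nodes one can exhaustively try all O(n^k) candidate parent sets, checking each against the N transition samples. A minimal sketch of that exhaustive search, assuming a simple (state, next-bit) sample format that is not specified in the quote:

```python
from itertools import combinations

def fit_node(samples, n, k):
    """Find a parent set of size k and a truth table consistent with the
    observed transition samples for one node of a Boolean network.
    `samples` is a list of (x, y) pairs where x is a tuple of n bits and
    y is this node's next value. Trying all O(n^k) parent sets, each
    checked against N samples, gives the O(N * n^k) per-node cost, i.e.
    O(N * n^{k+1}) over all n nodes."""
    for parents in combinations(range(n), k):
        table = {}
        ok = True
        for x, y in samples:
            key = tuple(x[p] for p in parents)
            if table.setdefault(key, y) != y:
                ok = False  # two samples disagree under this parent set
                break
        if ok:
            return parents, table
    return None  # no consistent indegree-k function exists

if __name__ == "__main__":
    # Toy node whose next value happens to be AND of bits 0 and 2.
    data = [((0, 1, 0), 0), ((1, 0, 1), 1), ((1, 1, 0), 0),
            ((0, 0, 1), 0), ((1, 1, 1), 1)]
    print(fit_node(data, n=3, k=2))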