2007
DOI: 10.1016/j.tcs.2007.05.014

Learning juntas in the presence of noise

Abstract: The combination of two major challenges in machine learning is investigated: dealing with large amounts of irrelevant information and learning from noisy data. It is shown that large classes of Boolean concepts that depend on a small number of variables, so-called juntas, can be learned efficiently from random examples corrupted by random attribute and classification noise. To accomplish this goal, a two-phase algorithm is presented that copes with several problems arising from the presence of noise: firstly, a …

Cited by 8 publications (11 citation statements)
References 25 publications
“…Until now, there is still an open question about learning general BNs with a complexity better than n^((1−o(1))(k+1)) [31,32]. In this work, as an endeavor to meet the challenge, we prove that the complexity of the DFL algorithm is strictly O(k · (N + log n) · n^2) for learning the OR/AND BNs in the worst case, given enough noiseless random samples from the uniform distribution.…”
Section: Time Complexity
confidence: 99%
“…An efficient algorithm was also proposed by Mossel et al. [31] with a time complexity of O(n^((k+1)·ω/(ω+1))), which is about O(n^(0.7(k+1))), for learning arbitrary BNs, where ω < 2.376. Arpe and Reischuk [32] showed that monotonic BNs could be learned with a complexity of poly(n^2, 2^k, log(1/δ), γ…”
Section: Introduction
confidence: 99%
“…In fact, one can readily apply methods stemming from the area of PAC (probably approximately correct) learning theory [5], as the network identification problem can be reduced to the problem of learning Boolean juntas, i.e., Boolean functions that depend only on a small number of their arguments. This problem was studied by Arpe and Reischuk [6], extending earlier work of Mossel et al. [7,8].…”
Section: Introduction
confidence: 95%
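To make the notion of a junta concrete, here is a minimal sketch in Python; the specific function, coordinates, and dimension are illustrative assumptions, not taken from the cited papers:

```python
import random

def junta_example(x):
    """A 3-junta on n = 20 Boolean inputs: the value depends only on
    coordinates 4, 11, and 17; the other 17 coordinates are irrelevant."""
    return (x[4] and x[11]) or (not x[17])

# A random labeled example (x, f(x)), as a PAC learner would receive:
n = 20
x = [random.choice([False, True]) for _ in range(n)]
print(x, junta_example(x))
```

A learner that identifies the relevant coordinates reduces the problem from 2^n candidate functions to the far smaller class over k variables, which is what makes the reduction from network identification attractive.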
“…Hence, if |ĥ(U)| is larger than 2^(−k), the variables corresponding to U are classified as essential. The algorithm was given by [6], but they used 2^(−d−1) as threshold (see Line 8).…”
Section: Learning Essential Variables of Regulatory Functions
confidence: 99%
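For intuition, here is a minimal sketch of the thresholding idea quoted above, assuming uniform random examples over {−1, +1}^n and the standard Fourier expansion; the function names, the sample interface, and the restriction to small sets U are assumptions made for illustration, not the cited algorithm itself:

```python
import itertools

def fourier_estimate(samples, U):
    """Empirical estimate of the Fourier coefficient h^(U) from labeled
    examples (x, y), where x is in {-1, +1}^n and y = h(x) is in {-1, +1}.
    Under the uniform distribution, h^(U) = E[h(x) * prod_{i in U} x_i]."""
    total = 0.0
    for x, y in samples:
        chi = 1
        for i in U:
            chi *= x[i]  # character chi_U(x): product of the coordinates in U
        total += y * chi
    return total / len(samples)

def essential_sets(samples, n, k, max_size=2):
    """Flag every set U (up to max_size) whose estimated coefficient
    exceeds the 2^(-k) threshold described in the citation statement."""
    threshold = 2.0 ** (-k)
    flagged = []
    for size in range(1, max_size + 1):
        for U in itertools.combinations(range(n), size):
            if abs(fourier_estimate(samples, U)) > threshold:
                flagged.append(U)
    return flagged
```

Any variable appearing in a flagged set would then be classified as essential, mirroring the threshold test in the quoted statement; with enough samples, the empirical estimate concentrates around the true coefficient, so the test is reliable.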