Computational limitations on learning from examples
Pitt & Valiant, 1988
DOI: 10.1145/48014.63140

Abstract: The computational complexity of learning Boolean concepts from examples is investigated. It is shown for various classes of concept representations that these cannot be learned feasibly in a distribution-free sense unless R = NP. These classes include (a) disjunctions of two monomials, (b) Boolean threshold functions, and (c) Boolean formulas in which each variable occurs at most once. Relationships between learning of heuristics and finding approximate solutions to NP-hard optimization problems are given.
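
For context, the "distribution-free" sense in the abstract is the PAC model of Valiant (1984). In standard notation (ours, not quoted from the paper): a concept class C over n Boolean variables is learnable if some algorithm, for every target c ∈ C, every example distribution D, and every ε, δ ∈ (0, 1), draws m = poly(n, 1/ε, 1/δ) labeled samples S and outputs a hypothesis h with

    \Pr_{S \sim D^m}\bigl[\Pr_{x \sim D}[h(x) \neq c(x)] \le \varepsilon\bigr] \ge 1 - \delta.

The paper's hardness results concern the proper version of this model, in which h must itself come from the named representation class; R denotes the randomized class now usually written RP.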

Cited by 447 publications (236 citation statements) · References 15 publications

Citation statements (ordered by relevance):

“…Although computational learning theorists frequently examine CNF representations (e.g. k-CNF, k-clause CNF) (Valiant, 1984; Pitt & Valiant, 1988), there are no popular, practical algorithms for learning CNF. Actually, CNF is a quite natural representation for "nearly conjunctive" concepts.…”
Section: Introduction (mentioning)
Confidence: 99%

“…Although it seems natural to ask that the hypothesis chosen approach the best performance in the class H (corresponding to the case T = H), we will see that in some circumstances it is interesting and important to relax this restriction. By leaving the class T fixed and increasing the power of H, we may overcome certain representational hurdles presented by the choice T = H, in the same way that k-term DNF (disjunctive normal form) formulas are efficiently learnable in the standard PAC model provided we allow the more expressive k-CNF (conjunctive normal form) hypothesis representation (Kearns, Li, Pitt & Valiant, 1987; Pitt & Valiant, 1988).…”
Section: The Hypothesis Class H and the Touchstone Class T (mentioning)
Confidence: 99%
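
To make the representational trick above concrete: any k-term DNF T_1 ∨ … ∨ T_k is equivalent to the k-CNF obtained by conjoining, over every way of picking one literal from each term, the clause of the k chosen literals; and k-CNF is learnable by Valiant-style elimination over the polynomially many (for fixed k) candidate clauses. A minimal sketch in Python (ours; the function names are illustrative, not from either cited paper):

from itertools import combinations, product

def all_clauses(n, k):
    # Every disjunction of at most k literals over variables x_0..x_{n-1}.
    # A literal is a pair (index, polarity); a clause is a frozenset of them.
    lits = [(i, b) for i in range(n) for b in (False, True)]
    return [frozenset(c) for r in range(1, k + 1)
            for c in combinations(lits, r)]

def clause_true(clause, x):
    # A clause is satisfied when some literal agrees with the example.
    return any(x[i] == b for (i, b) in clause)

def learn_kcnf(examples, n, k):
    # Elimination: every clause of the target k-CNF is satisfied by every
    # positive example, so keeping exactly the clauses consistent with all
    # positives retains the target clauses; the hypothesis conjoins the
    # survivors and errs only by rejecting (one-sided error).
    survivors = [c for c in all_clauses(n, k)
                 if all(clause_true(c, x) for x, y in examples if y)]
    return lambda x: all(clause_true(c, x) for c in survivors)

# Example: the 2-CNF target (x0 or x1) and x2 is recovered exactly from
# all eight labeled points of {False, True}^3.
target = lambda x: (x[0] or x[1]) and x[2]
sample = [(x, target(x)) for x in product((False, True), repeat=3)]
h = learn_kcnf(sample, n=3, k=2)
assert all(h(x) == target(x) for x, _ in sample)

The asymmetry the citation relies on is exactly this: finding a consistent k-term DNF hypothesis is NP-hard for k ≥ 2 (Pitt & Valiant, 1988), yet the same concepts become easy once the richer k-CNF hypothesis space is allowed.
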
“…membership queries alone are not sufficient for efficient learning. The list of these concept classes includes DFA (Angluin, 1987a), one-counter languages (Berman & Roos, 1987), simple deterministic languages (Ishizaka, 1990; this algorithm uses extended equivalence queries), k-term DNF (Angluin, 1987a; Pitt & Valiant, 1988), read-once formulas (Angluin, Hellerstein & Karpinski, 1989), conjunctions of Horn clauses (Angluin, Frazier & Pitt, 1990) and intersections of halfspaces (Baum, 1990; Bultman & Maass, 1990). We note that in the first three examples the models considered also take into account the lengths of the counterexamples received, resp.…”
Section: Learning Models (mentioning)
Confidence: 99%
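
The query types this passage refers to come from Angluin's exact-learning protocol. A minimal sketch (ours; the class and function names are illustrative, and the finite, enumerable domain is an assumption made so equivalence can be checked by brute force, whereas in the theory the oracles are abstractions):

from typing import Callable, Iterable, List, Optional

class Oracle:
    def __init__(self, target: Callable[[object], bool], domain: Iterable):
        self.target = target
        self.domain = list(domain)

    def member(self, x) -> bool:
        # Membership query: does the target concept contain x?
        return self.target(x)

    def equivalent(self, h: Callable[[object], bool]) -> Optional[object]:
        # Equivalence query: None means h matches the target exactly;
        # otherwise some point on which they disagree is returned.
        for x in self.domain:
            if h(x) != self.target(x):
                return x
        return None

def exact_learn(oracle: Oracle, propose) -> Callable[[object], bool]:
    # Generic loop: `propose` maps the counterexamples seen so far (plus
    # membership-query access) to the next hypothesis; learning succeeds
    # when an equivalence query passes.
    counterexamples: List[object] = []
    while True:
        h = propose(counterexamples, oracle.member)
        cex = oracle.equivalent(h)
        if cex is None:
            return h
        counterexamples.append(cex)

The negative results quoted above state that, for the listed classes, no learner of this form can succeed in polynomial time using membership queries alone, without the counterexamples that equivalence queries supply.
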
“…Angluin (1987b) showed that k-term DNF formulas of n variables can be learned with polynomially many equivalence and membership queries for every fixed k. We are not aware of results showing that equivalence queries alone are not sufficient. The negative results of Angluin (1990) use DNF formulas with a growing number of terms, and the negative results of Pitt and Valiant (1988) are concerned with the computational complexity of finding a consistent hypothesis and therefore do not apply in our model. Let DFA_n be the concept class of all regular languages over the domain {0,1}* which are accepted by some DFA having at most n states.…”
Section: Some Open Problems (mentioning)
Confidence: 99%