[Proceedings 1988] 29th Annual Symposium on Foundations of Computer Science 1988
DOI: 10.1109/sfcs.1988.21928
Predicting (0, 1)-functions on randomly drawn points

Cited by 86 publications (135 citation statements). References 20 publications.
“…Every concept in the class is the union of an initial segment from each of the subintervals. It is easy to see that VCdim(BASIC_n) = n. Our argument for the upper bound on BASIC_n uses ideas from earlier arguments by and Haussler et al. (1990) giving lower bounds on the probability of a mistake when predicting a stationary target function. The intuition behind the argument is as follows.…”
Section: Upper Bounds on the Tolerable Amount of Drift
confidence: 99%
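To make the excerpt's setup concrete, here is a minimal sketch, not taken from the cited papers: it assumes a BASIC_n-style class over [0, 1) encoded by one threshold per subinterval (the encoding and all names are hypothetical), and checks that n points, one per subinterval, are shattered, which gives VCdim(BASIC_n) = n.

```python
# Hypothetical sketch of a BASIC_n-style concept class (assumed encoding):
# [0, 1) is split into n equal subintervals, and a concept is the union of
# one initial segment of each subinterval, given by a per-subinterval
# threshold measured from the subinterval's left endpoint.

def make_basic_concept(thresholds):
    """thresholds[i] in [0, 1/n]: length of the initial segment kept
    from subinterval i. Returns the concept as a 0/1 function."""
    n = len(thresholds)
    def concept(x):
        i = min(int(x * n), n - 1)        # index of x's subinterval
        return int(x - i / n < thresholds[i])
    return concept

# VCdim(BASIC_n) >= n: take one point per subinterval; any labeling is
# realized by placing each threshold just above or below its point.
n = 4
points = [(i + 0.5) / n for i in range(n)]   # midpoints of the subintervals
for labels in [(0, 1, 0, 1), (1, 1, 0, 0), (1, 0, 1, 1)]:
    c = make_basic_concept([(0.75 if b else 0.25) / n for b in labels])
    assert tuple(c(x) for x in points) == labels
```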
“…It is easy to see that the closure algorithm corresponds to one particular orientation of the 1-inclusion graph algorithm (Haussler, Littlestone and Warmuth, 1994). Thus, besides our bound of O((1/ε)(d log d + log(1/δ))) and the bound O((1/ε)(log(1/δ) + d log(1/ε))) of Blumer et al. (1989), the bound of O((d/ε) log(1/δ)) for the 1-inclusion graph algorithm of Haussler, Littlestone and Warmuth (1994) holds as well for learning intersection-closed classes with the closure algorithm.…”
Section: Proof
confidence: 99%
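As a hedged illustration of the closure algorithm the excerpt refers to, the sketch below instantiates it for axis-aligned rectangles, a standard intersection-closed class. Only the idea (the hypothesis is the closure of the positive examples, i.e. the smallest concept in the class containing them) is from the literature; the function names are made up here.

```python
# Sketch of the closure algorithm for an intersection-closed class.
# Example class: axis-aligned rectangles in R^d (intersection-closed).
# The hypothesis is the closure of the positively labeled points: the
# smallest axis-aligned box containing all of them.

def closure_hypothesis(positives):
    """positives: list of d-dimensional points labeled 1.
    Returns a 0/1 predicate = smallest box containing them."""
    if not positives:
        return lambda x: 0                   # empty closure: reject everything
    d = len(positives[0])
    lo = [min(p[i] for p in positives) for i in range(d)]
    hi = [max(p[i] for p in positives) for i in range(d)]
    return lambda x: int(all(lo[i] <= x[i] <= hi[i] for i in range(d)))

# Usage: negatives are ignored; if the target is in the class, the closure
# of the positives never covers a negatively labeled point.
sample = [((1, 2), 1), ((3, 1), 1), ((10, 10), 0)]
h = closure_hypothesis([x for x, y in sample if y == 1])
assert h((2, 1.5)) == 1 and h((10, 10)) == 0
```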
“…The learning algorithm A then PAC learns a concept class if, for all ε, δ > 0, there is an m = m(ε, δ) such that, with probability at least 1 − δ, the algorithm outputs a hypothesis with error smaller than ε when m random examples are given to A. Bounds on m often depend on the VC-dimension, a combinatorial parameter of the concept class. For finite d, the well-known bound of Blumer et al. (1989) states that for any consistent learning algorithm, O((1/ε)(log(1/δ) + d log(1/ε))) examples suffice for PAC learning concept classes of VC-dimension d. On the other hand, for the 1-inclusion graph algorithm a bound of O((d/ε) log(1/δ)) was established by Haussler, Littlestone and Warmuth (1994).…”
Section: Introduction
confidence: 99%
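A small numerical sketch of how the two quoted sample bounds scale, with all hidden constants set to 1 (so only the growth rates are meaningful, not the absolute numbers); it shows the extra log(1/ε) factor that the 1-inclusion bound avoids.

```python
# Back-of-the-envelope comparison of the two quoted PAC sample bounds,
# with all hidden constants set to 1 (purely illustrative).
import math

def m_consistent(eps, delta, d):
    # Blumer et al. (1989): O((1/eps)(log(1/delta) + d log(1/eps)))
    return (1 / eps) * (math.log(1 / delta) + d * math.log(1 / eps))

def m_one_inclusion(eps, delta, d):
    # Haussler, Littlestone and Warmuth (1994): O((d/eps) log(1/delta))
    return (d / eps) * math.log(1 / delta)

for eps in (0.1, 0.01, 0.001):
    print(f"eps={eps}: consistent ~{m_consistent(eps, 0.05, 10):.0f}, "
          f"1-inclusion ~{m_one_inclusion(eps, 0.05, 10):.0f}")
# The consistent-learner bound carries an extra d*log(1/eps) term, so it
# grows faster as eps shrinks; the 1-inclusion bound does not.
```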
“…We are now ready to define the absolute mistake-bound variant of the on-line learning model (Haussler, Littlestone, & Warmuth, 1988; Littlestone, 1988). An on-line algorithm (or incremental algorithm) for C is an algorithm that runs under the following scenario.…”
Section: Preliminary Definitions
confidence: 99%
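The mistake-bound protocol in this excerpt can be illustrated with the standard Halving algorithm over a finite concept class, which makes at most log2(|C|) mistakes on any instance sequence. This is a generic sketch of the protocol, not the cited papers' algorithm, and all names in it are hypothetical.

```python
# Sketch of the absolute mistake-bound on-line protocol, instantiated
# with the Halving algorithm: predict by majority vote of the current
# version space, then discard every concept inconsistent with the label.
import math

def halving_run(concepts, target, xs):
    """concepts: dict name -> 0/1 function; target: a key of concepts;
    xs: sequence of instances. Returns the number of mistakes made."""
    version_space = dict(concepts)
    mistakes = 0
    for x in xs:
        votes = sum(c(x) for c in version_space.values())
        prediction = int(votes * 2 > len(version_space))  # majority vote
        truth = concepts[target](x)
        if prediction != truth:
            mistakes += 1          # each mistake at least halves the space
        version_space = {k: c for k, c in version_space.items()
                         if c(x) == truth}
    return mistakes

# Example: threshold functions on {0,...,7}; the target is threshold 5.
C = {t: (lambda x, t=t: int(x >= t)) for t in range(8)}
m = halving_run(C, target=5, xs=[3, 6, 4, 5, 7, 0])
assert m <= math.log2(len(C))      # at most log2(8) = 3 mistakes
```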