2002
DOI: 10.1002/cplx.10042

FIR Volterra kernel neural models and PAC learning

Abstract: The probably approximately correct (PAC) learning theory creates a framework to assess the learning properties of static models for which the data are assumed to be independently and identically distributed (i.i.d.). The present article first extends the idea of PAC learning to cover the learning of modeling tasks with m-dependent sequences of data. The data are assumed to be marginally distributed according to a fixed arbitrary probability. The resulting framework is then applied to evaluate learning of Volterra…
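For orientation, the classical i.i.d. PAC guarantee that such m-dependent extensions build on can be sketched as follows. This uses standard learning-theory notation of my own choosing, not symbols taken from the paper: a hypothesis class F is PAC learnable if, for every accuracy ε and confidence δ, a finite sample size suffices to make the empirical risk uniformly close to the true risk.

```latex
% Classical i.i.d. PAC guarantee, in standard learning-theory notation
% (chosen here for orientation; not the paper's own symbols).
% R(f): true risk; \hat{R}_n(f): empirical risk on n i.i.d. samples;
% d: a capacity measure such as the VC or pseudo-dimension; c: a constant.
\[
  \Pr\!\Big[\, \sup_{f \in \mathcal{F}} \big|\hat{R}_n(f) - R(f)\big| > \epsilon \Big] \le \delta
  \qquad \text{whenever} \qquad
  n \;\ge\; \frac{c}{\epsilon^{2}} \Big( d \ln \tfrac{1}{\epsilon} + \ln \tfrac{1}{\delta} \Big).
\]
```

For m-dependent data, where samples more than m time steps apart are independent, the standard blocking argument partitions the sequence into interleaved blocks of mutually independent observations, which roughly replaces n by an effective sample size of order n/(m+1) in bounds of this form. This is offered as background on the general mechanism, not as a statement of the paper's actual result.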

Cited by 3 publications (1 citation statement)
References: 10 publications
“…The above results are for the input data that are marginally distributed according to a uniform distribution. The new results in the field ((K. Najarian, 2001a) and (K. Najarian, 2001a)) provide the learning properties of the same families of neural networks assuming an arbitrary distribution. These results are more general and remove the need for the assumption of uniform distribution.…”
Section: Extension of PAC Learning to m-Dependent Cases
Citation type: mentioning
Confidence: 99%
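As a concrete illustration of the m-dependence assumption discussed in the statement above, a finite moving average of i.i.d. noise is the textbook example of an m-dependent sequence: observations more than m steps apart share no underlying noise terms and are therefore independent. This is a minimal sketch of mine, not code from either cited work; all names and parameters are hypothetical.

```python
import numpy as np

# Illustrative sketch only: an m-dependent sequence built from i.i.d. noise.
# X_t averages the window Z_t, ..., Z_{t+m}; X_s and X_t share no Z terms
# once |s - t| > m, which is exactly the m-dependence assumption above.
rng = np.random.default_rng(0)
m, n = 3, 200_000
z = rng.standard_normal(n + m)                               # i.i.d. driving noise
x = np.convolve(z, np.ones(m + 1) / (m + 1), mode="valid")   # m-dependent, length n

# Empirical autocorrelation: clearly nonzero up to lag m, ~0 beyond it.
for lag in range(m + 3):
    c = np.corrcoef(x[: -lag or None], x[lag:])[0, 1]
    print(f"lag {lag}: corr = {c:+.4f}")
```

Running this prints correlations near (m+1-k)/(m+1) for lags k up to m (here 1.0, 0.75, 0.5, 0.25) and near zero beyond lag m, making the independence-beyond-m structure visible empirically. The marginal distribution of X here is Gaussian by construction; the cited results concern what can be guaranteed when that marginal is instead an arbitrary fixed distribution.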