Principles of Neural Model Identification, Selection and Adequacy (1999)
DOI: 10.1007/978-1-4471-0559-6

Cited by 62 publications (68 citation statements)
References 0 publications
“…Once the parameters of the neural model (16) are estimated, we have to deal with the presence of flat minima (potentially many combinations of the network parameters corresponding to the same level of the empirical loss), especially if the statistical properties of the model are of importance, as is the case in this complex financial application. In order to identify a locally unique solution, we have to remove all the irrelevant parameters, that is, the parameters that do not affect the level of the empirical loss.…”
Section: Removing the Irrelevant Connections
confidence: 99%
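The removal step quoted above — dropping parameters that do not affect the level of the empirical loss — can be sketched as a simple sensitivity test. This is an illustrative reading, not the book's exact procedure; the function name and the zero-one-weight-at-a-time scheme are our assumptions:

```python
import numpy as np

def prune_irrelevant(weights, loss_fn, tol=1e-6):
    """Zero out parameters whose removal leaves the empirical loss
    essentially unchanged (hypothetical helper, not the book's
    actual algorithm)."""
    base = loss_fn(weights)
    pruned = weights.copy()
    for i in range(len(weights)):
        trial = pruned.copy()
        trial[i] = 0.0
        # Parameter does not affect the level of the loss: drop it.
        if abs(loss_fn(trial) - base) < tol:
            pruned = trial
    return pruned

# Toy check: a quadratic loss that ignores the last parameter.
w = np.array([1.5, -2.0, 0.7])
loss = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 2.0) ** 2  # p[2] is irrelevant
print(prune_irrelevant(w, loss))  # third weight is zeroed
```

In a flat-minimum situation many such parameters exist; removing them leaves a locally unique solution at the same loss level, which is what the excerpt is after.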
“…There is no information on whether X2 should be removed from the model. In [24] a novel approach (parametric sampling) is presented to determine whether a variable should be removed from the model. In parametric sampling, new networks are created by bootstrapping the parameters of the initial network.…”
Section: Model Fitness and Sensitivity Criteria
confidence: 99%
“…In parametric sampling, new networks are created by bootstrapping the parameters of the initial network. To reduce training times, the authors of [24] use a local bootstrap. Wavelets are local functions, however, so local bootstrapping may not be applicable.…”
Section: Model Fitness and Sensitivity Criteria
confidence: 99%
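Parametric sampling as described in these excerpts — creating new networks by bootstrapping the parameters of the fitted one — might be sketched as follows. The Gaussian resampling assumption, the function names, and the toy relevance statistic are ours, not the cited paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_sample(w_hat, cov_hat, stat_fn, n_boot=500):
    """Draw bootstrap replicates of the fitted parameters and return
    the distribution of a statistic (illustrative sketch of parametric
    sampling under a Gaussian assumption on the estimates)."""
    draws = rng.multivariate_normal(w_hat, cov_hat, size=n_boot)
    return np.array([stat_fn(w) for w in draws])

# Toy example: take the "relevance" of input 2 to be its weight w[1];
# a bootstrap distribution straddling zero suggests removing X2.
w_hat = np.array([0.8, 0.05])
cov_hat = np.diag([0.01, 0.04])
stats = parametric_sample(w_hat, cov_hat, lambda w: w[1])
lo, hi = np.percentile(stats, [2.5, 97.5])
print(lo < 0.0 < hi)  # True: the interval covers zero
```

The appeal of this scheme is that the variable-removal decision reuses the already-fitted network: resampling parameters is far cheaper than retraining from scratch, which is also the motivation for the local bootstrap mentioned above.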
“…The application of VC theory to them is quite well-advanced [34,35], but there are many other approaches, including ones based on statistical mechanics [36]. It is notoriously hard to understand why they make the predictions they do.…”
Section: Choice of Architecture
confidence: 99%
“…However, it is often impractical to settle on a good parametric form beforehand. In these cases, one must turn to nonparametric models, as discussed in §1.2.2; neural networks are a particular favorite here [35]. The so-called kernel smoothing methods are also particularly well-developed for time series, and often perform almost as well as parametric models [66].…”
Section: Nonlinear and Nonparametric Models
confidence: 99%
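The kernel smoothing methods mentioned in the last excerpt can be illustrated with the standard Nadaraya-Watson estimator — a generic sketch of the technique, not the specific method of reference [66]:

```python
import numpy as np

def nw_smooth(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    the estimate at x is a locally weighted average of the y's."""
    x_query = np.atleast_1d(x_query)
    out = np.empty(len(x_query))
    for i, x in enumerate(x_query):
        w = np.exp(-0.5 * ((x - x_train) / bandwidth) ** 2)  # kernel weights
        out[i] = np.dot(w, y_train) / w.sum()                # weighted average
    return out

# Smooth a noisy sine and check the fit near its peak.
rng = np.random.default_rng(1)
xs = np.linspace(0, 2 * np.pi, 200)
ys = np.sin(xs) + 0.1 * rng.standard_normal(200)
estimate = nw_smooth(xs, ys, np.pi / 2, bandwidth=0.3)[0]
print(abs(estimate - 1.0) < 0.15)  # close to sin(pi/2) = 1
```

Such smoothers need no prior commitment to a functional form — only a bandwidth choice — which is exactly the nonparametric flexibility the excerpt contrasts with parametric models.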