Non-linear variable selection for artificial neural networks using partial mutual information
2008
DOI: 10.1016/j.envsoft.2008.03.007

Cited by 258 publications (167 citation statements)
References 43 publications
“…Moreover, it is able to capture all dependence between two variables and, as it is a model-free strategy, it is not necessary to define a model structure a priori. PMI is a measure of the partial or additional dependence that a new input can add to the existing prediction model [25][26]. Given a dependent discrete variable Y (the output of the model) and an input X (the independent discrete variable), for a set of pre-existing inputs Z, the discrete version of the PMI criterion is defined as

PMI = Σ_{x',y'} p(x', y') ln[ p(x', y') / (p(x') p(y')) ],

where x' = x − E[x|z] and y' = y − E[y|z] are the residuals of X and Y once the effect of the pre-existing inputs Z has been removed. A good estimate of the expectation function E[·|z] is necessary for calculating the PMI criterion.…”
Section: Partial Mutual Information
confidence: 99%
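The discrete PMI criterion quoted above can be illustrated numerically. The sketch below is an assumption-laden simplification, not the paper's estimator: the function name `discrete_pmi`, the quartile binning, and the per-bin averages standing in for the conditional expectation E[·|z] are all illustrative choices (May et al. use kernel-based regression and density estimates).

```python
import numpy as np

def discrete_pmi(x, y, z):
    """Partial mutual information between x and y given pre-existing input z.

    Residuals x' = x - E[x|z] and y' = y - E[y|z] are formed with a crude
    per-bin conditional mean (a stand-in for kernel regression), then the
    discrete MI of the binned residuals is computed from the empirical
    joint and marginal probabilities.
    """
    x, y, z = map(np.asarray, (x, y, z))
    # crude conditional-mean estimates E[x|z], E[y|z]: average within z-quartile bins
    z_bins = np.digitize(z, np.quantile(z, [0.25, 0.5, 0.75]))
    ex = np.zeros_like(x, dtype=float)
    ey = np.zeros_like(y, dtype=float)
    for b in np.unique(z_bins):
        m = z_bins == b
        ex[m] = x[m].mean()
        ey[m] = y[m].mean()
    u, v = x - ex, y - ey  # residuals x', y'
    # discretise the residuals into quartile bins and build the joint distribution
    ub = np.digitize(u, np.quantile(u, [0.25, 0.5, 0.75]))
    vb = np.digitize(v, np.quantile(v, [0.25, 0.5, 0.75]))
    joint = np.zeros((4, 4))
    for i, j in zip(ub, vb):
        joint[i, j] += 1
    joint /= joint.sum()
    pu = joint.sum(axis=1, keepdims=True)  # marginal p(x')
    pv = joint.sum(axis=0, keepdims=True)  # marginal p(y')
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pu @ pv)[nz])).sum())
```

Because the result is a Kullback-Leibler divergence between the empirical joint and the product of its marginals, it is always non-negative, and it grows when the candidate input carries information about the output beyond what the pre-existing inputs already explain.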
“…The use of sensitivity analysis, as was done by , is one way of achieving this. Alternatively, the use of techniques for the selection of inputs to data-driven models (Bowden et al., 2005a; Bowden et al., 2005b; Fernando et al., 2009; Galelli and Castelletti, 2013; Galelli et al., 2014; May et al., 2008a; May et al., 2008b) might be worthwhile. For example, these techniques are at the core of the selection-based dynamic emulation proposed by Castelletti et al. (2012b).…”
Section: Since inputs to data-driven SMs generally include the decisi…
confidence: 99%
“…This MI score is then applied in the next step of the input variable selection method. The principle of the Hampel distance is applied in this work using the following calculation (May, Maier, Dandy & Fernando, 2008). The Hampel distance is defined as

d_j = |I_j − I_(50)| / (1.4826 · MAD),   MAD = median_j |I_j − I_(50)|,

where I_j is the MI score of candidate input j and I_(50) is the median of the candidate scores.…”
Section: (3)
confidence: 99%
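The Hampel distance described above amounts to a robust Z-score of each candidate's MI value: deviation from the median, scaled by 1.4826 times the median absolute deviation. A minimal sketch, assuming the scores arrive as a plain array (the function name `hampel_distance` and the example values are hypothetical):

```python
import numpy as np

def hampel_distance(scores):
    """Robust Z-score of each candidate's MI score relative to the median.

    Deviation from the median, scaled by 1.4826 * MAD (median absolute
    deviation).  Candidates with a distance above roughly 3 stand out
    from the bulk of the scores and are treated as significantly
    informative inputs.
    """
    scores = np.asarray(scores, dtype=float)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med))
    return np.abs(scores - med) / (1.4826 * mad)
```

Using the median and MAD instead of the mean and standard deviation keeps the threshold itself from being dragged upward by the very outliers (strongly informative candidates) the test is trying to detect.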