2012
DOI: 10.1016/j.asoc.2012.07.022

Combining features of negative correlation learning with mixture of experts in proposed ensemble methods

Cited by 17 publications (11 citation statements); references 31 publications.
Citation statements: 0 supporting, 11 mentioning, 0 contrasting.
“…One approach to dealing with complex, real-world problems is to combine AI prediction models into an ensemble of predictors that exploits the different local behaviors of the base models to improve the overall prediction system's performance (Masoudnia et al., 2012). The main objective of ensemble learning methods is to simplify a difficult prediction task by dividing it into several relatively easy prediction subtasks and formulating a consensus prediction result for the original data (García-Pedrajas et al., 2012).…”
Section: Introduction and Literature Review (mentioning)
confidence: 99%
“…The key idea behind NCL is to introduce a correlation penalty term into the cost function of each individual NN component, so that each component minimizes its mean square error (MSE) together with the ensemble's error correlation (Masoudnia et al., 2012). Alhamdoosh and Wang (2014) incorporated random vector functional link (RVFL) networks as base components with the NCL strategy to build neural network ensembles.…”
Section: Introduction and Literature Review (mentioning)
confidence: 99%
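A minimal NumPy sketch of the penalty described in the snippet above, following the standard Liu–Yao NCL formulation; the variable names and the single-sample setting are my own simplification, not the cited paper's code:

```python
import numpy as np

def ncl_loss_and_grad(outputs, y, i, lam):
    """NCL cost for ensemble member i at one sample.

    e_i = 0.5 * (f_i - y)^2 + lam * p_i, with the correlation penalty
    p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar) = -(f_i - f_bar)^2,
    since the deviations from the ensemble mean sum to zero.
    """
    f_bar = outputs.mean()
    f_i = outputs[i]
    p_i = -(f_i - f_bar) ** 2
    loss = 0.5 * (f_i - y) ** 2 + lam * p_i
    # Standard NCL update rule (f_bar treated as constant w.r.t. f_i):
    # d e_i / d f_i = (f_i - y) - lam * (f_i - f_bar)
    grad = (f_i - y) - lam * (f_i - f_bar)
    return loss, grad

# Example: three members' outputs for one sample with target y = 1.0.
outputs = np.array([0.8, 1.1, 1.4])
for i in range(3):
    print(i, ncl_loss_and_grad(outputs, 1.0, i, lam=0.5))
```

With lam = 0 this reduces to plain MSE training; a larger lam pushes each member's output away from the ensemble mean, trading a little individual accuracy for diversity across the ensemble.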
“…In other words, MI provides a decision-making system that can be used to select the most effective inputs (variables which can represent the influence of the others) and reduce noise in the development of a model (Chelgani et al. 2018). In predictive modeling problems, various studies have indicated that combining intelligent predictor models into an ensemble of predictors (experts) can produce an accurate model for complicated problems (Masoudnia et al. 2012; Hadavandi et al. 2015, 2016). One of the popular ensemble methods is the neural network ensemble (NNE) (Hansen and Salamon 1990), and an efficient approach for creating an NNE model is Adaptive Boosting (AdaBoost), which can adaptively increase the probability of sampling hard cases so that the NNE's experts are trained accurately.…”
Section: Introduction (mentioning)
confidence: 99%
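As an illustration of the MI-based input selection the snippet describes, the sketch below ranks candidate inputs with scikit-learn's mutual_info_regression and keeps the top k; the synthetic data and k = 3 are assumptions for the example, not the cited papers' setup:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Hypothetical data: X holds candidate inputs, y is the target to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = X[:, 0] + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=500)

# Estimate mutual information between each candidate input and the target,
# then keep the k inputs that carry the most information about it.
mi = mutual_info_regression(X, y, random_state=0)
k = 3
selected = np.argsort(mi)[::-1][:k]
print("MI scores:", np.round(mi, 3))
print("Selected input indices:", selected)
```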
“…One of the popular ensemble methods is the neural network ensemble (NNE) (Hansen and Salamon 1990), and an efficient approach for creating an NNE model is Adaptive Boosting (AdaBoost), which can adaptively increase the probability of sampling hard cases so that the NNE's experts are trained accurately. This approach can develop a model using a wide distribution of inputs and reduce prediction errors by exploiting the information of previous experts (Hansen and Salamon 1990; Freund and Schapire 1996; Solomatine and Shrestha 2004; Masoudnia et al. 2012; Tian et al. 2012; Zhai et al. 2012). Although the last decade has witnessed increasing applications of MI and AdaBoost-NNE models, they have not yet been used for exploration and prediction in the earth sciences.…”
Section: Introduction (mentioning)
confidence: 99%
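A rough sketch of the resampling mechanism the snippet describes (adaptively increasing the sampling probability of hard cases), written as an AdaBoost.R2-style loop over small MLP experts; the network size, the linear loss, and the weighted-mean combiner are illustrative assumptions rather than the cited papers' exact design:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def adaboost_nne(X, y, n_rounds=10, seed=0):
    """AdaBoost.R2-style neural network ensemble via weighted resampling."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                  # case-sampling probabilities
    experts, betas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)     # hard cases are drawn more often
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000,
                           random_state=seed).fit(X[idx], y[idx])
        err = np.abs(net.predict(X) - y)
        L = err / (err.max() + 1e-12)        # linear loss scaled to [0, 1]
        eps = np.sum(w * L)                  # weighted average loss
        if eps >= 0.5:                       # expert too weak: stop (R2 rule)
            break
        beta = eps / (1.0 - eps)
        w = w * beta ** (1.0 - L)            # shrink weights of easy cases
        w /= w.sum()
        experts.append(net)
        betas.append(beta)
    return experts, betas

def nne_predict(experts, betas, X):
    # log(1/beta)-weighted mean of expert outputs (a simplification of
    # AdaBoost.R2's weighted median).
    wts = np.log(1.0 / np.array(betas))
    preds = np.array([e.predict(X) for e in experts])
    return wts @ preds / wts.sum()
```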
“…This advantage significantly reduces the training cost of producing the ensemble. However, this method does not guarantee enough diversity between the different snapshot networks, while diversity is key to the success of an ensemble (Brown et al., 2005; Masoudnia, Ebrahimpour, & Arani, 2012a). The authors claimed that this method could visit multiple good and diverse local minima, leading to increasingly accurate predictions over the course of training on several classification benchmarks.…”
(mentioning)
confidence: 99%
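For context, the snapshot method this snippet critiques (Snapshot Ensembles, Huang et al., 2017) collects its networks by cyclically annealing the learning rate and saving one model at the end of each cycle. Below is a minimal sketch of that cosine schedule; the function and parameter names are my own, not the original paper's code:

```python
import math

def snapshot_lr(t, total_iters, n_cycles, lr0):
    """Cyclic cosine-annealing rate: restarts n_cycles times over total_iters;
    a snapshot of the network is saved each time the rate anneals to ~0."""
    cycle_len = math.ceil(total_iters / n_cycles)
    return lr0 / 2.0 * (math.cos(math.pi * (t % cycle_len) / cycle_len) + 1.0)

# Example: 6 snapshots over 300 iterations, starting from lr0 = 0.1.
# The rate decays to ~0 at t = 49 and restarts to 0.1 at t = 50.
for t in (0, 25, 49, 50, 75, 99):
    print(t, round(snapshot_lr(t, 300, 6, 0.1), 4))
```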