A methodology for training set instance selection using mutual information in time series prediction
Published: 2014
DOI: 10.1016/j.neucom.2014.03.006

Cited by 47 publications (15 citation statements)
References 38 publications
“…Classical white-box methods for forecasting demand include linear regression [15] and ARIMA [16] models. As mentioned in [17] and [18], these classical methods provide poor estimates if the consumption behavior is nonlinear, nonstationary, and not known prior to model construction. As a result, black-box machine learning algorithms such as ANNs and support vector machines (SVMs) are of growing interest for forecasting the power demand of consumers [19].…”
Section: Background and Motivation
confidence: 99%
“…However, their usage is commonly limited to functions of one or two variables because the number of samples needed for PDF estimation increases exponentially with the number of variables [19]. In this study, the k-nearest-neighbor-based MI estimation method proposed by Kraskov et al. [20] is used.…”
Section: Estimation of the Mutual Information Using K-nearest Neighbors
confidence: 99%
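As a practical illustration of a kNN-based MI estimator of this kind (a minimal sketch, not the cited paper's own code), the snippet below uses scikit-learn's mutual_info_regression, which implements a nearest-neighbor estimator in the spirit of Kraskov et al.; the synthetic series, lag, and horizon are illustrative assumptions.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Illustrative synthetic series (an assumption, not data from the cited works).
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

# Build lagged inputs X and a one-step-ahead target y.
lag, horizon = 5, 1
X = np.column_stack([series[i : i + len(series) - lag - horizon + 1] for i in range(lag)])
y = series[lag + horizon - 1:]

# n_neighbors is the k of the kNN-based estimator; small k gives low bias but higher variance.
mi = mutual_info_regression(X, y, n_neighbors=3)
print("Estimated MI of each lagged input with the target:", mi)

Each entry of mi is a nonnegative MI estimate; lags carrying more information about the target receive larger values, which is the basis for MI-driven input or instance selection.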
“…The basic idea is to estimate I(X, Y) from the distances in spaces X, Y and Z from z_i to its k-nearest neighbors, averaged over all z_i [19]. Let us define…”
Section: Estimation of the Mutual Information Using K-nearest Neighbors
confidence: 99%
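The quoted statement is truncated at "Let us define"; for orientation only, the standard form of the Kraskov (KSG) estimator that such descriptions lead up to is sketched below in its textbook form, which is not necessarily the citing paper's exact notation:

$$ \hat{I}(X, Y) = \psi(k) - \big\langle \psi\!\left(n_x(i) + 1\right) + \psi\!\left(n_y(i) + 1\right) \big\rangle_i + \psi(N) $$

where $\psi$ is the digamma function, $N$ is the number of samples, $n_x(i)$ and $n_y(i)$ count the samples whose x-distance (respectively y-distance) from sample $i$ is smaller than the max-norm distance from $z_i = (x_i, y_i)$ to its $k$-th nearest neighbor in the joint space, and $\langle \cdot \rangle_i$ denotes the average over all samples.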
“…It is a key problem in the field of pattern recognition and is widely used for processing high-dimensional datasets. Compared to feature extraction, feature selection can maintain the physical properties of the original features and has better interpretability, and it has been widely used in time series analysis [12] and pattern classification [13]. For example, in order to improve prediction accuracy and reduce training time, selecting proper training set instances is an important preprocessing step for long-term time series prediction.…”
Section: Introduction
confidence: 99%