2011
DOI: 10.3233/his-2011-0134
Selecting variables with search algorithms and neural networks to improve the process of time series forecasting

Abstract: A time series is a sequence of observations of a random variable; hence, it is a stochastic process. Forecasting time series data is an important component of operations research because such data often provide the foundation for decision models. These models are used to predict data points before they are measured, based on known past events. Research on this subject has been carried out in many areas, such as economics, energy production, and ecology. To improve the process of time series forecasting, it is importan…

Cited by 4 publications (2 citation statements)
References 45 publications (50 reference statements)
“…Further, for each pair of features, DSF-STM discards one of them if their correlation exceeds a given threshold. In the context of ML, this approach is known as Correlation-based Feature Selection [44]. After discarding features in this way, DSF-STM replaces the current ANN with another that has been previously trained for the specific reduced set of features.…”
Section: SAC-STM (Self-Adjusting Concurrency STM) [15]
confidence: 99%
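The pairwise threshold-based discard described in the quoted passage can be sketched as follows. This is a minimal illustration using NumPy; the function name, the threshold value, and the toy data are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

def correlation_filter(X, threshold=0.9):
    """Drop one feature from each pair whose absolute Pearson
    correlation exceeds `threshold`; return the kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        # Keep column j only if it is not too correlated with any
        # already-kept column.
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

# Toy data: column 1 is a near-copy of column 0, column 2 is independent.
rng = np.random.default_rng(0)
a = rng.normal(size=100)
X = np.column_stack([a, a + 1e-6 * rng.normal(size=100),
                     rng.normal(size=100)])
print(correlation_filter(X))  # → [0, 2]: the redundant column 1 is discarded
```

A greedy scan like this keeps the first member of each correlated group; which member to keep is a design choice the quoted description leaves open.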
“…In particular, we can expect that (significant) variations of w_time, if any, do not depend on any feature exhibiting small variance along the current observation interval. On the other hand, in case of correlation across a (sub)set of different features, the impact of variations of these values on w_time is expected to be assessable by observing the variation of any individual feature in that (sub)set, an approach known as Correlation-based Feature Selection (CFS) [36] in the ML literature. If the above scenarios occur, we can build an estimating function for w_time which, compared to the f function in Equation 4, relies on a reduced number of input parameters.…”
Section: Reducing the Sampling Overhead: Dynamic Feature Selection
confidence: 99%
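The two-stage reduction this passage describes, dropping low-variance features and then keeping one representative per correlated subset, can be sketched as below. The function name, both thresholds, and the toy data are hypothetical choices for illustration; the cited work does not specify them:

```python
import numpy as np

def reduce_features(X, var_threshold=1e-3, corr_threshold=0.9):
    """Stage 1: drop features with near-zero variance over the
    observation window. Stage 2: from each highly correlated pair
    of survivors, keep a single representative."""
    variances = X.var(axis=0)
    candidates = [j for j in range(X.shape[1])
                  if variances[j] > var_threshold]
    corr = np.abs(np.corrcoef(X[:, candidates], rowvar=False))
    kept = []  # positions within `candidates`
    for i in range(len(candidates)):
        if all(corr[i, k] <= corr_threshold for k in kept):
            kept.append(i)
    return [candidates[i] for i in kept]

# Toy data: column 0 is constant, column 2 is a scaled copy of
# column 1, column 3 is independent.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([np.full(200, 5.0), a,
                     2 * a + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
print(reduce_features(X))  # → [1, 3]
```

The surviving columns would then be the reduced set of input parameters for the smaller estimating function mentioned in the passage.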