2014
DOI: 10.3103/s1060992x14020039
Training sample reduction based on association rules for neuro-fuzzy networks synthesis

Abstract: The paper addresses the problem of reducing training samples for the synthesis of diagnostic models. A method of training-sample dimensionality reduction based on association rules is proposed. It comprises stages of reducing instances, features, and superfluous terms, and uses information from the extracted association rules to evaluate feature informativeness. The proposed method allows creating a partition of the feature space with fewer instances compared to …
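The abstract's idea of scoring feature informativeness from extracted association rules can be illustrated with a minimal sketch. The rule format (antecedent/consequent tuples) and the frequency-based score are assumptions for illustration, not the paper's exact procedure:

```python
# Hypothetical sketch: rank features by how often they occur in the
# antecedents of extracted association rules, then keep the top-k.
from collections import Counter

def feature_informativeness(rules):
    """Count how often each feature appears in rule antecedents.

    `rules` is a list of (antecedent, consequent) tuples of feature names;
    consequents (class labels) are excluded from the score.
    """
    counts = Counter()
    for antecedent, _consequent in rules:
        counts.update(antecedent)
    return counts

def reduce_features(rules, k):
    """Keep the k features most frequently used by the rules."""
    counts = feature_informativeness(rules)
    return [feature for feature, _ in counts.most_common(k)]

# Toy example: "x1" appears in two rule antecedents, so it ranks first.
rules = [(("x1", "x2"), ("y",)), (("x1",), ("y",)), (("x3",), ("y",))]
print(reduce_features(rules, 2))
```

Features that never occur in any rule get a score of zero and are dropped first, which matches the abstract's goal of discarding superfluous terms.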

Cited by 19 publications (10 citation statements)
References 12 publications (13 reference statements)
“…where β 1 , β 2 are the hyperparameters indicating the exponential rate of decay at the time of evaluation; η is the initial level of training; ε is the small constant, introduced for numerical stability; m ω is the exponential movable mean of the gradient; v ω is the exponential mean of gradient square; ∇ ω L (t) is the gradient value over time t; ω is the vector of gradient descent parameters [23]. Typically, the architecture of a neural network model, its topology, and the values of macro parameters are chosen based on an expert evaluation or empirically.…”
Section: Development of a Method for Constructing Neural Network Models (mentioning)
confidence: 99%
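The quoted passage describes an Adam-style update: exponential moving averages of the gradient and its square, a learning rate η, and a small ε for numerical stability. A minimal sketch of such an update, as an illustration rather than the cited paper's implementation (bias correction is included as in standard Adam):

```python
import numpy as np

def adam_step(w, grad, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style gradient-descent step at iteration t (t >= 1)."""
    # m: exponential moving average of the gradient
    m = beta1 * m + (1 - beta1) * grad
    # v: exponential moving average of the squared gradient
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias correction for the decaying averages
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # parameter update; eps keeps the division numerically stable
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize f(w) = w**2 starting from w = 5.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = adam_step(w, 2 * w, m, v, t, eta=0.1)
print(w)
```

Here β₁ and β₂ play the role of the decay-rate hyperparameters from the quote, η the learning rate, and (m, v) the two moving averages m_ω and v_ω.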
“…In addition, we can use such methods for segmentation of images [20], improvement of blurred images [21], processing of x-rays [22] and low contrast [23,24]. Papers [25][26][27][28][29][30] present methods for modeling complex dependences based on computational intelligence [25], associative rules [26], negative selection [27], neural-fuzzy networks [28], agent technologies [29], stochastic search [30]. The methods proposed in [25][26][27][28][29][30] make it possible to process data presented in various formats efficiently: usual samples of multidimensional data [25,[28][29][30], transaction databases [26], samples containing missing values [26,27].…”
Section: Literature Review and Problem Statement (mentioning)
confidence: 99%
“…Papers [25][26][27][28][29][30] present methods for modeling complex dependences based on computational intelligence [25], associative rules [26], negative selection [27], neural-fuzzy networks [28], agent technologies [29], stochastic search [30]. The methods proposed in [25][26][27][28][29][30] make it possible to process data presented in various formats efficiently: usual samples of multidimensional data [25,[28][29][30], transaction databases [26], samples containing missing values [26,27]. However, the methods proposed in papers [25][26][27][28][29][30] do not allow solving the problems associated with processing data presented in the form of time series effectively.…”
Section: Literature Review and Problem Statement (mentioning)
confidence: 99%