2020
DOI: 10.1109/tfuzz.2019.2948586

Novel Incremental Algorithms for Attribute Reduction From Dynamic Decision Tables Using Hybrid Filter–Wrapper With Fuzzy Partition Distance

Cited by 37 publications (9 citation statements)
References 52 publications
“…To verify the performance of the proposed incremental CCB mining method, it is tested on 5 datasets. The experiment scheme is similar to the scheme in [32]. Firstly, the whole dataset is divided equally into two disjoint subsets, namely the original dataset and the incremental dataset.…”
Section: Experimental Settings
confidence: 99%
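The dataset split described in this statement is straightforward to reproduce. The sketch below is a minimal illustration, assuming a generic NumPy array of objects; the function name and signature are hypothetical and not taken from the cited paper.

```python
import numpy as np

def split_for_incremental_experiment(data: np.ndarray, seed: int = 0):
    """Divide a dataset equally into two disjoint halves: an original
    (base) set, used to compute the initial reduct, and an incremental
    set that is fed to the algorithm afterwards."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))  # shuffle so the halves are disjoint random samples
    half = len(data) // 2
    return data[idx[:half]], data[idx[half:]]
```

The incremental half can then be added object by object, or in batches, while the reduct is updated incrementally.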
“…These concepts form an essential foundation for the attribute reduction algorithm proposed in the third part of the paper. The basic concepts can be found in [3], [31], [35]-[38].…”
Section: Preliminaries
confidence: 99%
“…Then they proposed a filter algorithm (IFPR) to select attributes on the decision table. Experimental results show that the IFPR algorithm outperforms the algorithms of [31]-[34] that follow the FRS approach. Giang et al. [35] recently constructed a distance measure based on the intuitionistic fuzzy set (IFS) model and proposed the IFDBAR algorithm to find the reduct of a decision table.…”
Section: Introduction
confidence: 99%
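As a concrete illustration of an IFS-based distance, the sketch below computes the standard normalized Hamming distance between two intuitionistic fuzzy sets, using membership, non-membership, and hesitation degrees. This is a well-known textbook measure and not necessarily the exact distance constructed in [35].

```python
import numpy as np

def ifs_hamming_distance(mu_a, nu_a, mu_b, nu_b):
    """Normalized Hamming distance between two intuitionistic fuzzy sets
    A and B over the same universe. Inputs are membership (mu) and
    non-membership (nu) degrees with mu + nu <= 1; the hesitation degree
    is pi = 1 - mu - nu."""
    mu_a, nu_a = np.asarray(mu_a, float), np.asarray(nu_a, float)
    mu_b, nu_b = np.asarray(mu_b, float), np.asarray(nu_b, float)
    pi_a = 1.0 - mu_a - nu_a
    pi_b = 1.0 - mu_b - nu_b
    n = len(mu_a)
    return float((np.abs(mu_a - mu_b) + np.abs(nu_a - nu_b)
                  + np.abs(pi_a - pi_b)).sum() / (2.0 * n))
```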
“…There are two general feature selection strategies: wrappers [36] and filters [35]. While the wrapper strategy employs learning algorithms to evaluate the selected attribute subsets, the filter strategy selects attributes based on measures such as information gain [24,26,27,28,29,33,34,39,40], consistency [1,23,25,30,47], distance [8,9,35,42,43], and dependency [27,38,41,46]. These measures can be classified into distance and positive-region measures [30].…”
Section: Introduction
confidence: 99%
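The filter measures listed above can be made concrete with a small example. The sketch below scores a single categorical attribute by information gain (the drop in label entropy after partitioning by that attribute); the helper names are illustrative only.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels):
    """Filter-style score for one categorical attribute: how much the
    label entropy drops after partitioning the objects by its values."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond
```

For example, `information_gain(['a', 'a', 'b', 'b'], [0, 0, 1, 1])` returns 1.0, since the attribute separates the two classes perfectly.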
“…Performing attribute reduction directly on numerical decision tables is a popular line of work, with the fuzzy set (FS) and the IFS being the usual models. In the FS setting, many algorithms use different fuzzy measures, such as the fuzzy positive region [44,45,46,47], fuzzy entropy [26], and fuzzy distance [8]. Results show that these algorithms improve both the size of the reduct and the classification accuracy, but for noisy datasets the IFS is often preferred owing to certain constraints in the IFS approximation space [3, ?].…”
Section: Introduction
confidence: 99%
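To illustrate the flavor of a fuzzy distance measure, the sketch below compares two fuzzy equivalence relation matrices (e.g., induced by two different attribute subsets) by their mean absolute difference. This is a generic construction for illustration only; it is not claimed to be the fuzzy partition distance of the paper under discussion.

```python
import numpy as np

def fuzzy_relation_distance(r1: np.ndarray, r2: np.ndarray) -> float:
    """Mean absolute difference between two fuzzy equivalence relation
    matrices over the same n objects. Entries lie in [0, 1] and give the
    degree to which a pair of objects is indiscernible."""
    assert r1.shape == r2.shape and r1.shape[0] == r1.shape[1]
    n = r1.shape[0]
    return float(np.abs(r1 - r2).sum() / (n * n))
```

A distance of 0 means the two attribute subsets induce the same fuzzy partition, which is the usual stopping signal in distance-based reduct search.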