2020
DOI: 10.1016/j.neucom.2019.12.090

A comparative study of general fuzzy min-max neural networks for pattern classification problems

Abstract: The general fuzzy min-max (GFMM) neural network is a generalization of fuzzy neural networks formed by hyperbox fuzzy sets for classification and clustering problems. Two principal algorithms are deployed to train this type of neural network: incremental learning and agglomerative learning. This paper presents a comprehensive empirical study of performance-influencing factors, advantages, and drawbacks of the general fuzzy min-max neural network on pattern classification problems. The subjects of this study …

Cited by 23 publications (9 citation statements). References 26 publications (36 reference statements).
“…Therefore, we do not tune the maximum hyperbox size for different datasets but use the same maximum hyperbox size parameter for all experimental datasets. If we use a small value of the maximum hyperbox size, then the GFMM model is expected to have a high classification performance, but the complexity of the model is also high [24]. In contrast, with large values of the maximum hyperbox size, the complexity of the GFMM model is low, but the classification performance is often not high.…”
Section: Datasets, Parameter Settings, and Performance Metrics
Confidence: 99%
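The trade-off the citing authors describe follows from the standard GFMM expansion condition: a hyperbox may only absorb a new point if none of its edges would exceed the maximum size θ afterwards. A minimal sketch of that test, assuming min-max normalized inputs and the common per-dimension form of the condition (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def can_expand(v, w, x, theta):
    """Check whether hyperbox [v, w] may expand to cover point x.

    Standard GFMM expansion test: after expansion, every edge of the
    hyperbox must stay within the maximum size theta. A small theta
    yields many small hyperboxes (high accuracy, high complexity);
    a large theta yields fewer, larger hyperboxes.
    """
    new_v = np.minimum(v, x)  # candidate lower (min) corner
    new_w = np.maximum(w, x)  # candidate upper (max) corner
    return bool(np.all(new_w - new_v <= theta))

# With theta = 0.3, the box [0.1, 0.1]-[0.2, 0.2] can absorb (0.35, 0.15)
print(can_expand(np.array([0.1, 0.1]), np.array([0.2, 0.2]),
                 np.array([0.35, 0.15]), 0.3))  # True
# ...but not a point far outside, which would force a new hyperbox:
print(can_expand(np.array([0.1, 0.1]), np.array([0.2, 0.2]),
                 np.array([0.9, 0.15]), 0.3))   # False
```

Because points that fail the test spawn new hyperboxes, a smaller θ directly inflates model complexity, which is exactly the trade-off noted above.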
“…For several datasets, the encoded values in several dimensions contain only two values, 0 and 1, so θ = 1 will ensure that the hyperboxes can be expanded to cover both of these extreme values. For the agglomerative learning algorithm, we used the "longest distance" [24] as a similarity measure and set σ = 0 so that the performance of the learning algorithm depends only on the values of θ. The sensitivity parameter γ in the membership function impacts the decreasing speed of the membership degrees for the numerical features.…”
Section: Datasets, Parameter Settings, and Performance Metrics
Confidence: 99%
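The role of γ mentioned in this statement can be seen in the GFMM membership function, where a ramp threshold scaled by γ governs how fast membership decays for points outside a hyperbox. A sketch assuming the widely used Gabrys–Bargiela form of the membership function (names are illustrative):

```python
import numpy as np

def ramp(r, gamma):
    """Threshold ramp f(r, gamma): r*gamma clipped to [0, 1]."""
    return np.clip(r * gamma, 0.0, 1.0)

def membership(x, v, w, gamma):
    """GFMM membership degree of point x in hyperbox [v, w].

    Per dimension, membership is 1 inside the box and falls off
    linearly outside it; gamma controls the decreasing speed of the
    membership degrees, as the citing text notes. The overall degree
    is the minimum over dimensions.
    """
    per_dim = np.minimum(1.0 - ramp(x - w, gamma),   # above upper corner
                         1.0 - ramp(v - x, gamma))   # below lower corner
    return float(np.min(per_dim))

box_v, box_w = np.array([0.2, 0.2]), np.array([0.4, 0.4])
x = np.array([0.5, 0.3])  # 0.1 outside the box in dimension 0
print(membership(x, box_v, box_w, gamma=1.0))  # 0.9
print(membership(x, box_v, box_w, gamma=4.0))  # 0.6
```

With the same point, raising γ from 1 to 4 steepens the drop-off (0.9 vs. 0.6), which is why γ is a tuning concern for numerical features.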
“…Fuzzy min-max neural networks are used to make the task easier; these networks implement fuzzy sets to accomplish tasks. 27,28…”
Section: Introduction
Confidence: 99%
“…Numerous scholars have integrated fuzzy theory and neural networks, and have designed neuro-fuzzy networks such as ANFIS, (18) IT2FNN, (19) and FMM. (20) Unlike traditional neural networks, neuro-fuzzy networks combine fuzzy logic (similar to human reasoning) and the learning ability of the neural network, which can automatically construct rules and reduce the number of learnable parameters in the network. However, parameter design in the network architecture is also a key factor related to the overall performance.…”
Section: Introduction
confidence: 99%