2018
DOI: 10.1109/tfuzz.2017.2729507
Deep Takagi–Sugeno–Kang Fuzzy Classifier With Shared Linguistic Fuzzy Rules

Cited by 137 publications (56 citation statements)
References 48 publications
“…Heuristic processing is utilized in the mapping. A shared-fuzzy-rule-based TSK-type fuzzy classifier is developed in [29]. TSK fuzzy systems are well suited to attaining both interpretability and accuracy.…”
Section: Related Work
confidence: 99%
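The excerpt above refers to TSK-type fuzzy inference. As background, a minimal sketch of standard first-order TSK inference is shown below; this is an illustration of the general TSK mechanism, not the paper's shared-linguistic-rule classifier, and the membership functions and rule parameters here are invented for the example.

```python
import numpy as np

def gauss(x, c, s):
    # Gaussian membership degree of scalar x in a fuzzy set with center c, spread s
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def tsk_infer(x, rules):
    # First-order TSK inference: each rule is (centers, spreads, weights, bias).
    # Firing strength = product of per-dimension memberships (product t-norm);
    # output = firing-strength-weighted average of the rules' linear consequents.
    strengths, outputs = [], []
    for centers, spreads, w, b in rules:
        strength = np.prod([gauss(xi, c, s) for xi, c, s in zip(x, centers, spreads)])
        strengths.append(strength)
        outputs.append(np.dot(w, x) + b)
    strengths = np.asarray(strengths)
    return float(np.dot(strengths, outputs) / strengths.sum())

# Two illustrative rules over a 2-D input
rules = [
    (np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([1.0, 0.0]), 0.0),
    (np.array([1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0]), 1.0),
]
y = tsk_infer(np.array([0.5, 0.5]), rules)
```

At the midpoint [0.5, 0.5] both rules fire with equal strength, so the output is the plain average of the two consequents (0.5 and 1.5), i.e. 1.0.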
“…Moreover, our approach is equipped with online feature selection and hidden-layer merging scenarios, which dynamically compress the input space of each layer. This mechanism replaces the random feature selection of the original work in [48], [50]. Our approach also operates in a one-pass learning scenario, making it compatible with the online learning setting, whereas [48], [50] still apply an offline working scenario.…”
Section: B Deep Stacked Network
confidence: 99%
“…This approach adopts the stacked deep fuzzy neural network concept in [50], where the feature space grows from the bottom layer to the top one, incorporating the outputs of previous layers as extra input information. DEVFNN differs from [5], [48], [50] in that it features a fully flexible network structure targeted at the requirements of continual learning [22]. This property makes it capable of expanding the depth of the DNN whenever drift is identified, to adapt to rapidly changing environments.…”
Section: Introduction
confidence: 99%
“…It is evident that the notion of random shifts in each layer of DSSCN improves the linear separability of a given classification problem, because it paves a way to move apart the manifolds of the original problem in a stacked manner [48]. This structure can also be perceived as a kind of hierarchical structure, but it differs from the conventional hierarchical deep neural network, which generally loses the transparency of intermediate features because they can no longer be associated with the physical semantics of the original input features [47]. It is also worth mentioning that DSSCN is equipped with an online feature selection mechanism coupled into every building unit.…”
Section: Deep Stacked Stochastic Configuration Network
confidence: 99%
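The two excerpts above describe a stacked construction in which each layer sees the original features augmented with the previous layer's outputs. A minimal sketch of that idea follows; the random weights, tanh nonlinearity, and dimensions are invented stand-ins for illustration, not the actual DSSCN or DEVFNN building units.

```python
import numpy as np

rng = np.random.default_rng(0)

def stacked_features(x, n_layers=3, out_dim=2):
    # Illustrative stacked construction: each layer receives the original
    # input concatenated with the previous layer's output, so the feature
    # space grows from the bottom layer to the top one.
    prev = np.zeros(out_dim)
    for _ in range(n_layers):
        layer_in = np.concatenate([x, prev])           # original features + previous output
        W = rng.normal(size=(out_dim, layer_in.size))  # random layer weights (illustration only)
        prev = np.tanh(W @ layer_in)                   # simple nonlinear layer output
    return prev

z = stacked_features(np.array([0.3, -0.7, 1.2]))
```

In the cited works the per-layer mapping is a fuzzy or stochastic-configuration building unit rather than a random tanh layer; the sketch only shows how stacking augments each layer's input space.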