2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
DOI: 10.1109/fuzz-ieee.2017.8015727

Distributed on-line learning for random-weight fuzzy neural networks

Cited by 9 publications (6 citation statements, published 2017–2023). References 22 publications.
“…The authors in [8] proposed a DFNN model that randomly sets the parameters in the antecedents and updates only the parameters in the consequent layers. Later, they extended this work to an online DFNN model [9]. Their models assume that all clients share the information in the antecedent layers, which means the method is, strictly speaking, not fully distributed.…”
Section: Related Work, A. Distributed Fuzzy Neural Network
mentioning
confidence: 99%
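
The scheme described above is concrete enough to sketch. Below is a minimal, hypothetical NumPy illustration (not the authors' code) of a random-weight fuzzy neural network in the spirit of [8]: Gaussian antecedent parameters are drawn once at random and frozen, and only the consequent weights are trained, here by ridge regression. Function names, parameter ranges, and the zero-order TSK form are illustrative assumptions.

import numpy as np

def random_antecedents(n_rules, n_inputs, rng):
    # Draw Gaussian membership-function centers and widths at random; these
    # are never trained (assumed ranges, not the paper's exact choices).
    centers = rng.uniform(-1.0, 1.0, size=(n_rules, n_inputs))
    widths = rng.uniform(0.5, 1.5, size=(n_rules, n_inputs))
    return centers, widths

def firing_strengths(X, centers, widths):
    # Normalized rule firing strengths for a batch X of shape (N, d).
    diff = X[:, None, :] - centers[None, :, :]               # (N, R, d)
    w = np.exp(-np.sum((diff / widths[None]) ** 2, axis=2))  # (N, R)
    return w / (w.sum(axis=1, keepdims=True) + 1e-12)

def fit_consequents(X, y, centers, widths, reg=1e-3):
    # The only trained step: ridge regression for the consequent weights.
    H = firing_strengths(X, centers, widths)                 # (N, R) design matrix
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]                        # toy target
centers, widths = random_antecedents(n_rules=25, n_inputs=2, rng=rng)
beta = fit_consequents(X, y, centers, widths)
pred = firing_strengths(X, centers, widths) @ beta
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))

Because the design matrix H depends only on the frozen random antecedents, fitting reduces to a linear least-squares problem, which is what makes online and distributed variants straightforward.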
“…For example, Fierimonte et al [44] developed a decentralized FNN with random weights, where parameters in the fuzzy membership functions are chosen randomly as opposed to being trained. In subsequent work, they introduced an online implementation of the same FNN structure [45]. Notably, a random method of identifying parameters can result in very large deviations in accuracy during the learning process.…”
Section: A. Distributed Learning
mentioning
confidence: 99%
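
The online extension [45] suggests a natural streaming variant of the same idea: keep the random antecedents frozen and update only the consequent weights as samples arrive. The sketch below uses standard recursive least squares (RLS); the class name, forgetting factor, and initialization are assumptions for illustration, not the paper's exact algorithm.

import numpy as np

class OnlineConsequentRLS:
    # Streaming update of the consequent weights only; the random
    # antecedent layer from the batch sketch above stays fixed.
    def __init__(self, n_rules, delta=1e3, lam=0.999):
        self.beta = np.zeros(n_rules)     # consequent weights
        self.P = delta * np.eye(n_rules)  # inverse-covariance estimate
        self.lam = lam                    # forgetting factor

    def update(self, h, y):
        # h: normalized firing strengths for one sample; y: scalar target.
        Ph = self.P @ h
        k = Ph / (self.lam + h @ Ph)      # Kalman-style gain vector
        err = y - h @ self.beta           # a-priori prediction error
        self.beta += k * err
        self.P = (self.P - np.outer(k, Ph)) / self.lam
        return err

Each sample triggers one rank-one update costing O(R^2) in the number of rules R, so no batch re-solve is ever needed.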
“…Additionally, the algorithms are applied only in the consequent layers of the FNN [44], [45], which means that, strictly speaking, this decentralized FNN model is only partially distributed. A more recent proposal by Shi et al. [46] involves a distributed FNN with a consensus learning strategy.…”
Section: A. Distributed Learning
mentioning
confidence: 99%
“…Recently, several distributed algorithms for FNNs have been proposed [23], [24]. The authors in [23] proposed a decentralized algorithm for random-weight FNNs, where the parameters in the antecedent layer are randomly selected instead of being estimated.…”
mentioning
confidence: 99%
“…An online implementation of the same FNN structure in [23] was further proposed in [24]. There is no doubt that such a random method of parameter identification can result in very large deviations during the learning process.…”
mentioning
confidence: 99%
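
The "shared antecedents" criticism is also easy to make concrete. In the toy sketch below (reusing random_antecedents and fit_consequents from the first sketch), every client derives an identical random antecedent layer from a common seed, fits its consequent weights locally, and the network agrees by simple averaging. This is a hypothetical illustration of the setting, not the consensus protocol of the cited papers.

import numpy as np

def distributed_fit(client_data, n_rules, seed=0, reg=1e-3):
    # client_data: list of (X_i, y_i) pairs, one per client.
    rng = np.random.default_rng(seed)   # common seed gives a shared antecedent layer
    d = client_data[0][0].shape[1]
    centers, widths = random_antecedents(n_rules, d, rng)
    local = [fit_consequents(X, y, centers, widths, reg)
             for X, y in client_data]
    # One round of "consensus" by plain averaging of the local solutions.
    return centers, widths, np.mean(local, axis=0)

A single averaging round only approximates the centralized least-squares solution; iterated consensus steps drive the local estimates toward agreement, but the antecedent layer remains identical everywhere, which is exactly why the statements above call the scheme only partially distributed.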