Published: 2017
DOI: 10.1109/tnnls.2016.2536172

A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation

Abstract: Many existing results on fault-tolerant algorithms focus on the single fault source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we…
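The abstract describes regularizer-based training of RBF output weights so that the network tolerates concurrent weight failures (open faults plus multiplicative weight noise). The sketch below is a minimal illustration only, assuming a generic ridge-style penalty whose strength grows with an assumed fault rate and noise variance; the function names and the parameters `fault_rate` and `noise_var` are illustrative stand-ins, not the paper's actual regularizer.

```python
# Minimal sketch (not the paper's exact algorithm): training RBF output
# weights with a ridge-style regularizer as a generic stand-in for a
# fault-tolerant regularization term. The fault/noise parameters and the
# way they set the penalty strength are illustrative assumptions.
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activations: H[i, j] = exp(-||x_i - c_j||^2 / width^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def train_fault_aware_rbf(X, y, centers, width, fault_rate=0.05, noise_var=0.01):
    """Regularized least squares for the RBF output weights.

    The penalty strength grows with the assumed fault rate and
    multiplicative-noise variance, reflecting (informally) that heavier
    expected weight failure calls for stronger regularization.
    """
    H = rbf_design_matrix(X, centers, width)
    lam = fault_rate + noise_var                 # illustrative choice of strength
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

# Toy usage: fit a 1-D sine curve with 10 randomly chosen centers.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = rng.uniform(-3, 3, size=(10, 1))
w = train_fault_aware_rbf(X, y, centers, width=1.0)
```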

Cited by 37 publications (17 citation statements)
References 50 publications

Citation statements:
“…The reinforcement learning system also requires a strict policy so that the agent's behaviour can be regulated. It will also utilize a reward function that maps each state-action pair of the environment to a numerical value indicating the desirability of that state, or of an action taken in that state [17, 18].…”
Section: ANN Agents and Reinforcement Learning (mentioning)
confidence: 99%
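As a hedged illustration of the idea quoted above, the sketch below maps each (state, action) pair to a numerical reward; the grid-world states, the action set, and the reward values are invented for illustration and are not taken from the cited works [17, 18].

```python
# Hedged sketch: a reward function mapping each (state, action) pair to a
# numerical value expressing how desirable that action is in that state.
# The grid world, goal cell, and reward values are illustrative only.
from typing import Tuple

State = Tuple[int, int]   # a cell in a small grid world
Action = str              # "up", "down", "left", "right"

def reward(state: State, action: Action, goal: State = (3, 3)) -> float:
    """Return +1 for stepping onto the goal, a small step cost otherwise."""
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    next_state = (state[0] + dx, state[1] + dy)
    return 1.0 if next_state == goal else -0.01

# The policy can then be regulated by preferring actions with higher reward,
# e.g. reward((2, 3), "right") -> 1.0 while reward((0, 0), "up") -> -0.01.
```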
“…In (18), N_xy^drop is the total number of dropped bursts, while sent_xy is the total number of successfully transmitted bursts on the given link.…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
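The quoted passage defines per-link counters for dropped and successfully transmitted bursts. Below is a minimal sketch of one quantity such counters commonly feed, a burst-drop ratio; the ratio itself and the function name are assumptions, since equation (18) of the citing paper is not reproduced here.

```python
# Hedged sketch of a burst-drop ratio on a link (x, y), built from the two
# counters described in the quote. How these counters actually enter the
# citing paper's equation (18) is not reproduced here.
def burst_drop_ratio(n_drop_xy: int, sent_xy: int) -> float:
    """Fraction of bursts dropped on the link from node x to node y."""
    total = n_drop_xy + sent_xy
    return n_drop_xy / total if total > 0 else 0.0

# Example: 5 dropped out of 5 + 95 transmitted -> 0.05 drop ratio.
assert abs(burst_drop_ratio(5, 95) - 0.05) < 1e-12
```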
“…2) Convergence and Complexity: By substituting the optimal β_n into (22), the change of the objective value becomes … Increment n by 1.…”
Section: A. Node Fault Tolerant I-ELM, 1) Algorithm (mentioning)
confidence: 99%
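The quoted fragment refers to the node-by-node output weight β_n of an incremental ELM. Below is a hedged sketch of the plain, fault-free I-ELM loop only, assuming the standard least-squares choice of β_n for each new random hidden node; the fault-tolerant modification discussed in the citing paper is not reproduced.

```python
# Hedged sketch of the plain incremental ELM (I-ELM) loop: hidden nodes are
# added one at a time and each new output weight beta_n minimizes the
# remaining residual. This is the fault-free baseline only.
import numpy as np

def i_elm(X, y, n_nodes=50, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    residual = y.astype(float).copy()
    weights, biases, betas = [], [], []
    for _ in range(n_nodes):
        a = rng.standard_normal(d)           # random input weights of the new node
        b = rng.standard_normal()            # random bias of the new node
        h = np.tanh(X @ a + b)               # activations of the new node
        beta = (residual @ h) / (h @ h)      # least-squares output weight
        residual -= beta * h                 # shrink the residual error
        weights.append(a); biases.append(b); betas.append(beta)
    return np.array(weights), np.array(biases), np.array(betas)
```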
“…However, these two incremental algorithms were designed for fault-free situations only. We believed that fault and noise could greatly degrade the performance of the I-ELM and CI-ELM if special procedures were not considered [21], [22].…”
Section: Introduction (mentioning)
confidence: 99%