2019
DOI: 10.1016/j.neucom.2018.10.049
Batch-normalized deep neural networks for achieving fast intelligent fault diagnosis of machines

Cited by 214 publications (104 citation statements)
References 13 publications
“…To further mitigate the vanishing and exploding gradient problems, the batch normalization technique [45] is applied in the networks in this paper. The purpose of batch normalization is to linearly transform each layer's inputs so that they have zero mean and unit variance, de-correlating them and keeping them in the active region of the activation functions without corrupting the learned features.…”
Section: Variable Initialization and Batch Normalization (mentioning)
confidence: 99%
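
The excerpt above describes the batch-normalization transform precisely enough to sketch it. Below is a minimal NumPy version of the forward pass it refers to: normalize each feature over the mini-batch to zero mean and unit variance, then apply the learned linear scale and shift. The function name, shapes, and epsilon value are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Minimal batch-normalization forward pass (training mode).

    x:     (batch_size, num_features) mini-batch of layer inputs
    gamma: (num_features,) learned scale
    beta:  (num_features,) learned shift
    eps:   small constant (assumed value) for numerical stability
    """
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learned linear scale and shift

# Hypothetical usage: a badly scaled mini-batch comes out standardized.
x = np.random.randn(32, 64) * 5.0 + 3.0
out = batch_norm_forward(x, gamma=np.ones(64), beta=np.zeros(64))
print(out.mean(axis=0)[:3], out.std(axis=0)[:3])  # approximately 0 and 1
```

Because gamma and beta are learned per feature, the network can undo the normalization wherever the identity transform best preserves the representation, which is the "without corrupting the learned features" property the excerpt mentions.
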
“…Due to the time-space correlation of the vibration signal during the degradation of rolling bearings, a new prognosis model based on MCLSTM is proposed. Specifically, the MCLSTM is constructed by stacking multiple ConvLSTM units, with batch normalization [39] added between the ConvLSTM layers. After that, a dense layer is employed.…”
Section: Architecture of the Proposed Network (mentioning)
confidence: 99%
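
As a rough illustration of the stack this excerpt describes (stacked ConvLSTM units with batch normalization between each ConvLSTM layer, followed by a dense layer), here is a hedged Keras sketch. Every shape, filter count, and kernel size is an assumption made for demonstration; the excerpt does not give the cited model's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mclstm_sketch(time_steps=10, rows=32, cols=32, channels=1):
    """Assumed MCLSTM-style stack: ConvLSTM -> BN -> ConvLSTM -> BN -> Dense."""
    return models.Sequential([
        layers.Input(shape=(time_steps, rows, cols, channels)),
        layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                          return_sequences=True),
        layers.BatchNormalization(),   # BN between the ConvLSTM layers
        layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                          return_sequences=False),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(1),               # e.g. a degradation indicator (assumed)
    ])

model = build_mclstm_sketch()
model.summary()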
“…Its use is to normalize and scale the inputs from previous layers, which in turn results in faster training and lower validation errors [15]. Using BN layers before activation layers has been shown to mitigate the internal covariate shift problem in the task of predicting faults in bearings and gearboxes [16]. Keras provides various activation functions via its Activation layer: ReLU, LeakyReLU, ELU, SELU, as well as the classical sigmoid and tanh.…”
Section: Convolutional Neural Network (CNN) Model (mentioning)
confidence: 99%
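
The "BN before activation" ordering from this excerpt maps directly onto Keras layers. The sketch below shows that placement in a small 1-D convolutional block of the kind used on vibration signals; the input length, layer sizes, and class count are placeholders, not the cited architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(1024, 1)),           # e.g. a raw vibration segment (assumed)
    layers.Conv1D(16, kernel_size=9),
    layers.BatchNormalization(),             # normalize pre-activations first
    layers.Activation("relu"),               # BN sits *before* the activation
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # e.g. 10 fault classes (assumed)
])
model.summary()
```

Swapping the Activation string for "elu", "selu", "sigmoid", or "tanh", or inserting a layers.LeakyReLU() layer in its place, reproduces the alternatives the excerpt lists.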