2021
DOI: 10.1109/tie.2020.2972458
Deep Residual Networks With Adaptively Parametric Rectifier Linear Units for Fault Diagnosis

Cited by 143 publications (45 citation statements)
References 45 publications
“…This parameter can be channel-shared or channel-wise. The Adaptively Parametric Rectifier Linear Unit (APReLU) [32] integrates a subnetwork to adaptively estimate the parameters of PReLU. The Exponential Linear Unit (ELU) [33] uses an exponential function in the negative part to mitigate the drawbacks of ReLU.…”
Section: Activation Function (mentioning)
confidence: 99%
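The channel-shared and channel-wise PReLU variants and the ELU mentioned in this excerpt map directly onto standard PyTorch modules. The snippet below is a minimal sketch using those built-ins for orientation; it is not code from the cited papers [32], [33].

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32, 32)  # a batch of 16-channel feature maps

# PReLU with one learnable negative slope shared by all channels,
# and the channel-wise variant with one learnable slope per channel.
prelu_shared = nn.PReLU(num_parameters=1)
prelu_channelwise = nn.PReLU(num_parameters=16)

# ELU replaces the negative part with alpha * (exp(x) - 1).
elu = nn.ELU(alpha=1.0)

print(prelu_shared(x).shape, prelu_channelwise(x).shape, elu(x).shape)
```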
“…The following research shows that improving the network structure is conducive to extracting detailed fault features and achieving high diagnostic accuracy. Zhao et al. showed that a deep residual network (DRN) can efficiently extract the high-level fault features contained in wavelet packet coefficients (Zhao et al., 2017; Zhao et al., 2020b). Building on the DRN, a dynamic weight module was introduced to weight the fault features of different frequency bands in the time-frequency images, which improved the diagnostic accuracy (Zhao et al., 2017).…”
Section: Related Work (mentioning)
confidence: 99%
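The dynamic weight module from (Zhao et al., 2017; Zhao et al., 2020b) is not reproduced here; the sketch below only illustrates the general idea of a residual block whose small gating subnetwork re-weights per-channel (per-frequency-band) feature maps, with all layer sizes and the gating layout chosen for illustration.

```python
import torch
import torch.nn as nn

class DynamicWeightResidualBlock(nn.Module):
    """Residual block that re-weights its feature maps channel by channel.

    Illustrative only: each channel stands in for one frequency band, and a
    small gating subnetwork produces a weight in (0, 1) per band.
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Gating subnetwork: global average pooling -> FC -> FC -> sigmoid.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.body(x)
        w = self.gate(f).unsqueeze(-1).unsqueeze(-1)  # per-band weights
        return torch.relu(x + w * f)                  # weighted residual path

block = DynamicWeightResidualBlock(channels=16)
y = block(torch.randn(2, 16, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```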
“…In neural network learning, the activation function is considerably important and helps the network learn nonlinear relationships. A commonly used activation function is ReLU [32], also known as the rectified linear unit. ReLU converges faster than Sigmoid and Tanh.…”
Section: Adaptive Parameterized APReLU Activation Function (mentioning)
confidence: 99%
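As a rough illustration of the adaptively parametric activation this section refers to, the sketch below predicts a per-channel coefficient for the negative part from global statistics of the input feature map, in contrast to the fixed treatment of negative inputs in plain ReLU. The layout of the gating subnetwork is an assumption made for illustration, not the exact architecture from the indexed paper.

```python
import torch
import torch.nn as nn

class APReLUSketch(nn.Module):
    """Adaptively parametric ReLU, sketched for illustration.

    The negative-part slope is not a fixed parameter (as in PReLU) but is
    predicted per input by a gating subnetwork; the layout of that
    subnetwork here is assumed, not copied from the paper.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (N, C, H, W)
        pos = torch.clamp(x, min=0.0)
        neg = torch.clamp(x, max=0.0)
        # Global average statistics of the positive and negative parts.
        stats = torch.cat([pos.mean(dim=(2, 3)), neg.abs().mean(dim=(2, 3))], dim=1)
        alpha = self.gate(stats).unsqueeze(-1).unsqueeze(-1)  # slope in (0, 1) per channel
        return pos + alpha * neg

act = APReLUSketch(channels=16)
print(act(torch.randn(4, 16, 32, 32)).shape)  # torch.Size([4, 16, 32, 32])
```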