2020 | DOI: 10.3390/s20143837
A Fault Diagnosis Method of Rotating Machinery Based on One-Dimensional, Self-Normalizing Convolutional Neural Networks

Abstract: Aiming at the fault diagnosis problem of rotating machinery, a novel method based on deep learning theory is presented in this paper. By combining one-dimensional convolutional neural networks (1D-CNN) with self-normalizing neural networks (SNN), the proposed method achieves high fault-identification accuracy with a simple and compact architecture. By taking advantage of the self-normalizing properties of the SELU activation function, the stability and convergence of the fault diagnosis…
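
The abstract describes a 1D-CNN whose activations are SELU, the self-normalizing unit from SNNs. As a rough illustration only, the following PyTorch sketch shows what such an architecture can look like; the layer sizes, kernel widths, signal length, and class count here are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SelfNormalizing1DCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Wide first kernel: a common choice for raw vibration signals (assumed here).
            nn.Conv1d(1, 16, kernel_size=64, stride=8),
            nn.SELU(),           # self-normalizing activation in place of ReLU
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3),
            nn.SELU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(num_classes),  # infers the flattened size on first forward pass
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SelfNormalizing1DCNN()
x = torch.randn(4, 1, 2048)   # batch of 4 single-channel signals, length 2048 (assumed)
print(model(x).shape)         # torch.Size([4, 10])
```

Note that SELU's self-normalizing guarantees formally rely on LeCun-normal weight initialization (and, where dropout is used, alpha dropout).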

Cited by 20 publications (7 citation statements)
References: 39 publications
“…The 1D-CNN performs convolutional calculation on a 1D signal [19]. The 1D-CNN is a good model because 1D filters can detect different spatial shapes in a one-dimensional matrix [20]. The 1D-CNN utilizes several 1D convolutional layers followed by max-pooling layers, and dynamic fully connected layers with ReLU activation functions.…”
Section: Methods
confidence: 99%
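
To make the quoted description concrete, here is a tiny illustrative sketch (PyTorch is an assumption; the citing paper's code is not shown) of a 1D filter sliding over a one-dimensional signal and responding to a local shape:

```python
import torch
import torch.nn.functional as F

# A short 1D signal and a hand-set "rising edge" filter (both illustrative).
signal = torch.tensor([[[0., 0., 1., 2., 1., 0., 0., 0.]]])  # shape (batch, channels, length)
edge_filter = torch.tensor([[[-1., 0., 1.]]])                # responds where the signal rises

response = F.conv1d(signal, edge_filter)
print(response)  # tensor([[[ 1.,  2.,  0., -2., -1.,  0.]]]) -- peaks at the rising edge
```

Stacks of such learned filters, interleaved with max-pooling and followed by ReLU-activated fully connected layers, give the architecture the statement describes.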
“…In this layer, the input features first pass through multiple one-dimensional convolutional layers. Next, in order to differentiate the output of the convolutional layer and solve the problems of overfitting and poor robustness due to perturbations and noise interference, the scaled exponential linear unit [24] is selected as the activation function to perform a non-linear transformation on the output of the convolutional layer. Subsequently, max-pooling with several kernel sizes is adopted to perform dimensionality reduction operations.…”
Section: Multi-Feature Fusion Based on Weakly Supervised Learning
confidence: 99%
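
The scaled exponential linear unit is SELU(x) = λ·x for x > 0 and λ·α·(e^x − 1) otherwise, with fixed constants λ ≈ 1.0507 and α ≈ 1.6733. A minimal sketch of the layer pattern quoted above, with all shapes and kernel sizes assumed for illustration (PyTorch):

```python
import torch
import torch.nn as nn

conv_out = torch.randn(1, 32, 128)   # hypothetical convolutional-layer output
activated = nn.SELU()(conv_out)      # non-linear, self-normalizing transformation

# Max-pooling at several kernel sizes for multi-scale dimensionality reduction.
pooled = [nn.MaxPool1d(k)(activated) for k in (2, 4, 8)]
print([p.shape for p in pooled])     # pooled lengths 64, 32, and 16
```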
“…Dropout [37,38] regularizes the neural network to reduce complex co-adaptation between neurons and prevent overfitting. Dropout is expressed as P(P_i = 1) = p, where P_i is a Bernoulli random variable and p is the probability that neuron i outputs 1 (i.e., is retained); B_a is the number of sub-models formed over the layer's neurons.…”
Section: Dropout Regularization
confidence: 99%
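
The Bernoulli-mask rule quoted above is easy to state in code. Below is a minimal training-time dropout sketch (PyTorch; the function name and keep-probability p = 0.8 are illustrative, and the inverted 1/p scaling is the common variant that keeps expected activations unchanged):

```python
import torch

def dropout(y: torch.Tensor, p: float = 0.8) -> torch.Tensor:
    """Keep each activation with probability p (training-time), zeroing the rest."""
    mask = torch.bernoulli(torch.full_like(y, p))  # P(mask_i = 1) = p, per the quoted rule
    return mask * y / p                            # inverted scaling preserves E[output]

y = torch.randn(2, 5)
print(dropout(y))  # roughly 20% of entries zeroed; survivors scaled by 1/p
```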