2021
DOI: 10.1109/tim.2021.3067187
LEFE-Net: A Lightweight Efficient Feature Extraction Network With Strong Robustness for Bearing Fault Diagnosis

Cited by 48 publications (21 citation statements)
References 18 publications
“…The results of the experiment mainly comprise the total number of model parameters and the model sizes for SVM, 1D-CNN, BS-net, and MLS-net under the three datasets. The recently proposed bearing fault diagnosis models ANS-net [22] and LEFE-net [23] are also compared. We judge the merit of a model jointly by its parameter count and its accuracy rate.…”
Section: Model Lightweight Comparison (mentioning)
confidence: 99%
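For context, the parameter count and model size referred to in the statement above are typically computed directly from a network's trainable weights. A minimal PyTorch sketch follows; it is not taken from either cited paper, and the small 1D-CNN used as an example is purely hypothetical:

```python
# Minimal sketch: count trainable parameters and estimate model size,
# the two quantities the lightweight comparison above relies on.
import torch.nn as nn

def count_params_and_size(model: nn.Module) -> tuple[int, float]:
    """Return (trainable parameter count, approximate size in MB for float32 weights)."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    size_mb = n_params * 4 / (1024 ** 2)  # 4 bytes per float32 weight
    return n_params, size_mb

# Hypothetical stand-in for a small 1D-CNN fault-diagnosis model:
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 10),
)
print(count_params_and_size(model))
```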
“…Hence, controlling the number of model parameters is extremely important in practical applications. Fang, H. et al. proposed a lightweight fault diagnosis model that solves the problem of an excessive number of model parameters [23]. However, it cannot perform fault diagnosis when samples are insufficient.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although these methods achieve higher accuracy, they also bring higher model complexity and more computation. Fang et al. [32] extended an efficient CNN-based feature extraction method and used a lightweight network to complete high-precision fault diagnosis tasks. The spatial attention mechanism (SAM) is used to adjust the weight of the output feature map.…”
Section: Introduction (mentioning)
confidence: 99%
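The spatial attention mechanism referred to above re-weights each spatial position of a feature map. A minimal sketch of a generic SAM in PyTorch is shown below; the kernel size and channel-pooling choices are assumptions for illustration, not the exact LEFE-Net design:

```python
# Generic spatial attention: summarize channels at every spatial position,
# derive a per-position weight, and rescale the feature map with it.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)      # (N, 1, H, W) channel average
        max_map, _ = x.max(dim=1, keepdim=True)    # (N, 1, H, W) channel maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                            # re-weighted output feature map

# Usage: y = SpatialAttention()(torch.randn(8, 32, 28, 28))
```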
“…However, AAnNet operates at a single feature scale. Fang et al. [25] proposed a CNN with better anti-noise capability and domain adaptability and studied the splitting of feature maps to greatly reduce the number of parameters. However, the proposed model considers only spatial attention.…”
Section: Introduction (mentioning)
confidence: 99%
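The feature-map splitting mentioned above follows the general principle that convolving channel splits separately cuts the weight count. A minimal grouped-convolution sketch in PyTorch illustrates this principle only; it is an assumption for illustration, not the specific split scheme of [25]:

```python
# Splitting 64 channels into 4 groups makes each filter see only 16 input
# channels, so the convolution needs roughly a quarter of the weights.
import torch.nn as nn

full = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=1)   # ordinary convolution
split = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=4)  # 4 channel splits

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(full), count(split))  # grouped version has ~1/4 the parameters
```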