2023
DOI: 10.3390/s23041858

A Novel Study on a Generalized Model Based on Self-Supervised Learning and Sparse Filtering for Intelligent Bearing Fault Diagnosis

Abstract: Recently, deep learning has become increasingly widespread in the field of fault diagnosis. However, most deep learning methods rely on large amounts of labeled data to train the model, which leads to poor generalization ability when they are applied to different scenarios. To overcome this deficiency, this paper proposes a novel generalized model based on self-supervised learning and sparse filtering (GSLSF). The proposed method includes two stages. Firstly (1), considering the representation of samples on fa…
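The abstract does not spell out the sparse filtering objective, so the following is a minimal sketch of the standard sparse filtering loss (soft-absolute activation, feature-wise then sample-wise L2 normalization, and an L1 penalty), not the paper's exact GSLSF implementation; the weight matrix W, the batch x, and all dimensions are placeholder assumptions.

```python
import torch

def sparse_filtering_loss(features, eps=1e-8):
    """Standard sparse filtering objective: soft-absolute activation,
    column-wise then row-wise L2 normalization, then an L1 sparsity penalty."""
    # features: (num_samples, num_features) raw linear activations
    f = torch.sqrt(features ** 2 + eps)                # soft absolute value
    f = f / (f.norm(p=2, dim=0, keepdim=True) + eps)   # normalize each feature across samples
    f = f / (f.norm(p=2, dim=1, keepdim=True) + eps)   # normalize each sample across features
    return f.abs().sum()                               # L1 penalty encourages sparse codes

# Example: one unsupervised optimization step on a batch of signal features
# (W and x are hypothetical placeholders, not values from the paper)
W = torch.randn(256, 64, requires_grad=True)   # 256-dim input segment -> 64 learned features
x = torch.randn(32, 256)                       # batch of 32 vibration-signal segments
loss = sparse_filtering_loss(x @ W)
loss.backward()
```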

Cited by 7 publications
(3 citation statements)
References 30 publications
“…As shown in Table 14, we have selected some existing models from the past five years to compare with the model in this paper. When using the same number of samples as [26], except for slightly lower accuracy under the 0HP data, the accuracy under the other three single working conditions is 3.48%, 3.78%, and 0.8% higher than the reference, respectively. In addition, when achieving the same level of accuracy as other literature models, the amount of data used in this paper is significantly smaller.…”
Section: Model Comparison
confidence: 91%
“…The main self-supervised task of their architecture is to predict future noise based on past noise. Nie et al. [28] proposed a generalized model based on self-supervised learning and sparse filtering, which employs corresponding labels assigned to signals undergoing different feature transformations for self-supervised learning.…”
Section: Introduction
confidence: 99%
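As a rough illustration of the pretext task described above (assigning pseudo-labels to signals according to which feature transformation was applied), here is a minimal self-contained sketch; the specific transformations, segment length, and helper names are hypothetical assumptions and are not taken from Nie et al. [28].

```python
import numpy as np

# Hypothetical signal transformations for a self-supervised pretext task:
# each segment is transformed, and the pseudo-label is the index of the
# transformation that was applied.
def add_gaussian_noise(x, sigma=0.05):
    return x + sigma * np.random.randn(*x.shape)

def random_amplitude_scaling(x, low=0.7, high=1.3):
    return x * np.random.uniform(low, high)

def time_reversal(x):
    return x[::-1].copy()

TRANSFORMS = [lambda x: x.copy(), add_gaussian_noise, random_amplitude_scaling, time_reversal]

def make_pretext_dataset(segments):
    """Build (transformed_signal, pseudo_label) pairs; a classifier trained to
    recognize the applied transformation learns features without fault labels."""
    inputs, pseudo_labels = [], []
    for seg in segments:
        for label, transform in enumerate(TRANSFORMS):
            inputs.append(transform(seg))
            pseudo_labels.append(label)
    return np.stack(inputs), np.array(pseudo_labels)

# Usage: 100 unlabeled vibration segments of length 1024 (placeholder data)
segments = np.random.randn(100, 1024)
X, y = make_pretext_dataset(segments)   # X: (400, 1024), y in {0, 1, 2, 3}
```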
“…In addition, the model architectures employed exhibited several shortcomings, posing challenges to achieving high diagnostic accuracy in complex real-world settings for fault diagnosis purposes. Specifically, the methods of Wang and Nie [22,28] only identify data augmentation categories without instance-level representation learning, limiting their ability to extract robust fault features effectively. In contrast, among contrastive learning methods, Ding's method [23] employs a time-domain transformation for vibration signal data augmentation but fails to leverage the time- and frequency-domain characteristics simultaneously.…”
Section: Introduction
confidence: 99%