2019
DOI: 10.1109/access.2019.2916935
A Deep Transfer Model With Wasserstein Distance Guided Multi-Adversarial Networks for Bearing Fault Diagnosis Under Different Working Conditions

Abstract: In recent years, intelligent fault diagnosis technology based on deep learning algorithms has been widely used in the manufacturing industry to substitute for time-consuming human analysis and enhance the efficiency of fault diagnosis. The rolling bearing, as the connection between the rotor and its support, is a crucial component in rotating equipment. However, the working condition of the rolling bearing changes with complex operational demands, which significantly degrades the performance of the …


Cited by 101 publications (51 citation statements)
References 32 publications
“…In [15], Cheng et al. utilized the Wasserstein distance to minimize distribution discrepancy through adversarial training in fault diagnosis transfer learning scenarios. Instead of minimizing the Wasserstein distance at one single layer of the neural network, Zhang et al. [16] proposed to learn domain-invariant representations by minimizing the Wasserstein distance across multiple layers of the deep neural network, which achieves better accuracy on bearing fault diagnosis tasks.…”
Section: Wasserstein Distance
confidence: 99%
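The multi-layer scheme attributed to Zhang et al. [16] above can be sketched roughly as follows. This is a minimal illustration assuming WGAN-style critics, one per selected feature layer; the critic architecture, layer selection, and names are assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """One critic per selected feature layer; each estimates the W1
    distance between source and target activations via the
    Kantorovich-Rubinstein dual (WGAN-style)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def multilayer_w1_estimate(critics, src_feats, tgt_feats):
    """Sum of empirical W1 estimates over the selected layers.

    src_feats / tgt_feats: lists of (batch, dim) activations, one per layer.
    The critics maximize this quantity (under a Lipschitz constraint,
    e.g. a gradient penalty); the feature extractor minimizes it.
    """
    total = 0.0
    for critic, fs, ft in zip(critics, src_feats, tgt_feats):
        total = total + critic(fs).mean() - critic(ft).mean()
    return total
```

Summing per-layer estimates aligns the intermediate representations jointly rather than only the final embedding, which is the intuition behind the reported accuracy gain.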
“…The feature extractor is trained to confuse the domain discriminator while minimizing the classification loss. The Wasserstein distance has recently been introduced into domain adaptation for fault diagnosis and achieves competitive results [15,16]. In [15], Cheng et al. utilized the Wasserstein distance to minimize distribution discrepancy through adversarial training in fault diagnosis transfer learning scenarios.…”
Section: Introduction
confidence: 99%
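The min-max game this statement describes can be made concrete with a schematic training step. The optimizer split, critic schedule, and loss weighting below are assumptions for illustration, not the exact procedure of [15]; the Lipschitz constraint on the critic (e.g. a gradient penalty) is noted but omitted for brevity.

```python
import torch.nn.functional as F

def adaptation_step(feat, clf, critic, opt_fc, opt_d, xs, ys, xt, n_critic=5):
    """One schematic adversarial domain-adaptation step.

    feat: shared feature extractor; clf: label classifier;
    critic: domain critic estimating W1 between feature distributions.
    """
    # 1) Train the critic to maximize E[critic(fs)] - E[critic(ft)]
    #    (a Lipschitz constraint such as a gradient penalty is omitted here).
    for _ in range(n_critic):
        fs, ft = feat(xs).detach(), feat(xt).detach()
        d_loss = critic(ft).mean() - critic(fs).mean()
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the extractor and classifier: supervised loss on the source
    #    plus the W1 estimate, so the extractor "confuses" the critic.
    fs, ft = feat(xs), feat(xt)
    loss = F.cross_entropy(clf(fs), ys) + (critic(fs).mean() - critic(ft).mean())
    opt_fc.zero_grad(); loss.backward(); opt_fc.step()
```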
“…Based on this theory, several recent studies have focused on obtaining domain-invariant representations using maximum mean discrepancy (MMD) [15] or adversarial training [16]–[18]. In [22], using the Wasserstein distance in adversarial training was suggested to minimize the dissimilarity between the source and target domain distributions.…”
Section: B. Domain Adaptation
confidence: 99%
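For comparison with the adversarial approaches, the MMD criterion mentioned in [15] fits in a few lines. The Gaussian kernel and fixed bandwidth here are assumptions; practical implementations often use multiple kernels or a median-heuristic bandwidth.

```python
import torch

def mmd2_gaussian(x, y, sigma=1.0):
    """Biased empirical MMD^2 between samples x and y with an RBF kernel.

    A domain-invariant feature extractor is trained so that this quantity,
    computed on source and target features, stays small.
    """
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```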
“…The Wasserstein distance has recently gained popularity as an ingredient for loss functions in the field of artificial intelligence, due to its advantage over other discrepancy measures between probability distributions, such as total variation distance, Kullback-Leibler divergence, and Jensen-Shannon divergence [22], [28]–[30]. Since the Wasserstein distance takes into account the properties of the underlying geometry, unlike the other dissimilarity measures mentioned, it assigns a finite distance value even when two distributions do not share support [28].…”
Section: Notation
confidence: 99%
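The claim about disjoint supports is easy to check numerically. A toy comparison follows; the sample sizes, bin grid, and seed are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance, entropy

rng = np.random.default_rng(0)
u = rng.normal(0.0, 0.1, 1000)   # mass concentrated near 0
v = rng.normal(10.0, 0.1, 1000)  # mass concentrated near 10

# W1 reflects the geometry: approximately the distance between the modes.
print(wasserstein_distance(u, v))  # ~10.0

# A histogram-based KL estimate degenerates: wherever one histogram has
# mass the other is empty, so the divergence is infinite.
p, edges = np.histogram(u, bins=60, range=(-1, 11), density=True)
q, _ = np.histogram(v, bins=edges, density=True)
print(entropy(p, q))  # inf
```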
“…The Regularized Convolutional Neural Network (RCNN) [32] combines CNNs and multiple kernel learning to address the small-sample-size problem in bearing fault diagnosis under various working conditions. The Wasserstein Distance guided Multi-Adversarial Networks (WDMANs) [33] employ multiple domain critic networks to learn a shared feature representation between the source domain and the target domain. The Triplet Loss guided Adversarial Domain Adaptation (TLADA) [34] aligns the domain distributions using the Wasserstein distance and matches the class distributions by assigning pseudolabels to target samples.…”
Section: Related Work
confidence: 99%
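The pseudolabeling step attributed to TLADA [34] can be sketched generically. The confidence threshold and selection rule below are assumptions; TLADA's actual triplet-loss pairing and scheduling are not reproduced here.

```python
import torch
import torch.nn.functional as F

def assign_pseudolabels(classifier, xt, threshold=0.9):
    """Keep only the target samples on which the (source-trained)
    classifier is confident, and use its predictions as labels."""
    with torch.no_grad():
        probs = F.softmax(classifier(xt), dim=1)
        conf, labels = probs.max(dim=1)
        keep = conf >= threshold
    return xt[keep], labels[keep]
```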