2020
DOI: 10.1109/access.2020.3016314
An Intelligent Fault Diagnosis for Rolling Bearing Based on Adversarial Semi-Supervised Method

Abstract: Intelligent fault diagnosis of rolling bearings has been well addressed with the rapid growth of data scale. However, the performance of most diagnostic algorithms depends heavily on sufficient labeled samples. How to ensure fault diagnosis accuracy with scarce labeled samples remains a great challenge in real industrial scenarios. To alleviate this issue, a deep adversarial semi-supervised (DASS) method based on the prototype learning network is proposed for rolling bearing fault diagnosis in thi…
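The abstract names a prototype learning network as the backbone of the DASS method. As a rough, hedged illustration of prototype-based classification in general (not the paper's actual architecture: the embedding dimension, distance metric, and toy data below are assumptions), a minimal sketch in Python:

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    """One prototype per class: the mean of the embedded labeled samples of that class."""
    return np.stack([embeddings[labels == c].mean(axis=0) for c in range(num_classes)])

def prototype_predict(queries, prototypes):
    """Score each query by squared Euclidean distance to every prototype and
    turn negative distances into soft class probabilities via a softmax."""
    d = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)  # (n_queries, n_classes)
    logits = -d
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs

# Toy usage: random vectors stand in for learned embeddings of bearing signals.
rng = np.random.default_rng(0)
emb = rng.normal(size=(20, 8))            # 20 labeled samples, 8-dim embeddings
lab = np.repeat(np.arange(4), 5)          # 4 hypothetical fault classes, 5 samples each
protos = class_prototypes(emb, lab, num_classes=4)
pred, scores = prototype_predict(rng.normal(size=(5, 8)), protos)
```

How the adversarial and semi-supervised components interact with these prototypes is described in the paper itself and is not reproduced here.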

Cited by 22 publications (12 citation statements)
References 31 publications
“…The substitution of equation (14) and equation (15) into equation (12) leads to equation (17). If the current gradient is small enough, the iteration is stopped; otherwise the iteration is continued; c) compute the extraction vector w according to equation (19) and equation (10); d) compute the search direction of the next iteration according to equation (7) and equation (9).…”
Section: B. Derivation of Blind Signal Extraction Algorithm (mentioning)
confidence: 99%
“…Taking equation (10) into consideration, the closed-form solution for the step size η_t cannot be obtained from equation (19), because H_{t+1} also depends on the step size η_t. To overcome this and simplify the optimization with respect to the step size, recall that a fixed small step size for gradient learning still performs well.…”
Section: B. Derivation of Blind Signal Extraction Algorithm (mentioning)
confidence: 99%
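Taken together, the two excerpts above describe an iterative gradient scheme for the extraction vector w: update, stop once the gradient is small enough, and replace the unavailable closed-form step size η_t with a fixed small one. The cited equations (7), (9), (10), (12), (14), (15), (17), and (19) are not reproduced on this page, so the gradient function, the normalization step, and the tolerance in the sketch below are placeholders rather than the cited algorithm:

```python
import numpy as np

def extract_source(X, grad_fn, eta=1e-3, tol=1e-6, max_iter=10_000):
    """Structural sketch of a fixed-step gradient iteration for a blind-source-
    extraction vector w. `grad_fn(w, X)` stands in for the gradient defined by
    the cited (and here unavailable) equations."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[0])
    w /= np.linalg.norm(w)                  # keep the extraction vector normalized (assumption)
    for _ in range(max_iter):
        g = grad_fn(w, X)
        if np.linalg.norm(g) < tol:         # "if the current gradient is small enough, stop"
            break
        w = w - eta * g                     # fixed small step size instead of a closed-form eta_t
        w /= np.linalg.norm(w)
    return w                                # the estimated source would be w @ X
```

The excerpt only states that the iteration stops when the gradient is "small enough"; the concrete tolerance and the unit-norm constraint here are illustrative assumptions.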