2019 Chinese Control Conference (CCC)
DOI: 10.23919/chicc.2019.8866347

Transformer Fault Diagnosis based on Deep Brief Sparse Autoencoder

Cited by 10 publications (6 citation statements)
References 10 publications
“…The results revealed better performance in comparison with KNN, SVM, and BPNN. Similarly, the transformer fault is predicted using SAE and DBN algorithms to increase the reliability and stability of power systems [94]-[95]. The SAE algorithm is proposed in [96] for the same task.…”
Section: Precision = TP / (TP + FP) (mentioning)
confidence: 99%
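The section label above refers to the standard precision metric, Precision = TP / (TP + FP). Below is a minimal sketch of how precision (together with recall, for context) could be computed for binary fault/no-fault predictions; the function name and the toy label lists are illustrative, not taken from the cited paper.

```python
# Minimal sketch (assumed binary fault/no-fault labels, not from the cited paper):
# precision = TP / (TP + FP), recall = TP / (TP + FN).
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: TP=3, FP=1, FN=2 -> precision = 0.75, recall = 0.6
print(precision_recall([1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1, 0]))
```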
“…Lee et al [10] proposed a feature extraction method based on a deep unsupervised sparse autoencoder (SAE) for data classification, which improved classification performance and detection speed, but performance on sparse classes was worse than on the other classes. Xu et al [11] proposed a deep belief sparse autoencoder (DBSAE), which captured features of label-free dissolved gas analysis (DGA) raw data, and a supervised, trained back-propagation network was used to implement transformer fault diagnosis. Marir et al [12] proposed a new stacked denoising SAE method, implemented with a Spark-based iterative simplification paradigm to improve detection performance and algorithm efficiency.…”
Section: Related Work (mentioning)
confidence: 99%
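The DBSAE pipeline described in this statement (unsupervised sparse-autoencoder feature learning on unlabeled DGA data, followed by a supervised back-propagation classifier) could look roughly like the sketch below. This is an illustrative PyTorch sketch, not the authors' implementation; the layer sizes, the KL sparsity weight, and the placeholder DGA tensors are all assumptions.

```python
# Hypothetical sketch: sparse-AE pretraining on unlabeled DGA vectors,
# then a supervised back-propagation classifier on the learned features.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_in=5, n_hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-8):
    # KL divergence between target sparsity rho and mean hidden activation.
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

# Unsupervised pretraining on unlabeled DGA samples (placeholder data).
x_unlabeled = torch.rand(256, 5)          # e.g. 5 dissolved-gas features
ae = SparseAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):
    recon, h = ae(x_unlabeled)
    loss = nn.functional.mse_loss(recon, x_unlabeled) + 0.1 * kl_sparsity(h)
    opt.zero_grad(); loss.backward(); opt.step()

# Supervised fine-tuning: back-propagation classifier on the encoded features.
x_labeled = torch.rand(64, 5)
y_labeled = torch.randint(0, 4, (64,))    # placeholder: 4 fault classes
clf = nn.Linear(16, 4)
opt2 = torch.optim.Adam(list(ae.encoder.parameters()) + list(clf.parameters()), lr=1e-3)
for _ in range(200):
    logits = clf(ae.encoder(x_labeled))
    loss = nn.functional.cross_entropy(logits, y_labeled)
    opt2.zero_grad(); loss.backward(); opt2.step()
```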
“…A. DEVICE IDENTIFICATION USING DEEP LEARNING Autoencoders (AE) are usually utilized to learn efficient data coding in an unsupervised manner [7][8][9][10][11]. Their applications include both feature extraction and sample variation diagnosis.…”
Section: Device Identification and Model Optimization (mentioning)
confidence: 99%
“…Thus, the purpose of detecting damaged tools was achieved, with experimental results reaching an accuracy of 95%. In [7][8][9][10], AE was also used for fault diagnosis, where it was difficult to collect samples for every fault condition because fault conditions vary. Therefore, a training method that fits only the positive (normal) samples can still separate normal from abnormal results.…”
Section: Introduction (mentioning)
confidence: 99%
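The "fit only the positive (normal) samples" strategy mentioned above is commonly realized by training an autoencoder on normal data only and flagging samples whose reconstruction error exceeds a threshold. Below is a minimal sketch under that assumption; the network shape, training length, and percentile threshold are illustrative choices, not taken from the cited works.

```python
# Hypothetical sketch: train an autoencoder on normal (positive) samples only,
# then flag a sample as abnormal when its reconstruction error is large.
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(5, 3), nn.ReLU(), nn.Linear(3, 5))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x_normal = torch.rand(512, 5)            # placeholder: normal operating data only

for _ in range(300):                     # learn to reconstruct normal data
    loss = nn.functional.mse_loss(ae(x_normal), x_normal)
    opt.zero_grad(); loss.backward(); opt.step()

# Threshold chosen from training-set errors, e.g. the 99th percentile.
with torch.no_grad():
    errs = ((ae(x_normal) - x_normal) ** 2).mean(dim=1)
    thresh = torch.quantile(errs, 0.99)

def is_abnormal(x):
    # Samples the model reconstructs poorly are treated as faults.
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1) > thresh
```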