2019
DOI: 10.1016/j.chemolab.2019.103814
Discriminant autoencoder for feature extraction in fault diagnosis

Cited by 40 publications (17 citation statements)
References 16 publications
“…Autoencoders are neural networks trained by unsupervised learning to reconstruct data close to the original input (Luo et al., 2019). The autoencoder consists of two parts, namely the encoder and the decoder, and its principle can be described by Eqs.…”
Section: Methods (mentioning)
confidence: 99%
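The equations referenced in this excerpt are cut off. Purely as an illustration of the principle being described, a standard single-hidden-layer autoencoder formulation (not necessarily the exact equations of Luo et al., 2019) looks like this, with f and g denoting activation functions and W, b, W', b' the encoder and decoder parameters:

```latex
% Illustrative standard autoencoder formulation
% (assumption: not taken from the cited paper)
\begin{align}
  h            &= f(Wx + b)                      && \text{encoder: input } x \text{ to code } h\\
  \hat{x}      &= g(W'h + b')                    && \text{decoder: reconstruction of } x\\
  L(x,\hat{x}) &= \lVert x - \hat{x} \rVert^{2}  && \text{reconstruction loss minimized in training}
\end{align}
```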
“…As is known, the performance of a classifier depends largely on the quality of the features [30]. Encoding is frequently used in many different areas such as classification, regression, and information visualization [31]. In this study, the performance of three different encoding techniques (AE, VAE, and FD) on image generation was examined.…”
Section: Coding Techniques (unclassified)
“…The decoder, in turn, tries to minimize the difference between the input vector and the output vector. When the reconstruction loss is sufficiently small, the encoded vector h is considered to be the best representation of the input vector [31], [32]. The encoder module can be expressed as follows:…”
Section: Autoencoders (Auto Encoder, AE) (unclassified)
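The encoder expression itself is truncated in the excerpt. As a minimal sketch of the encoder/decoder idea these statements describe, assuming PyTorch and layer sizes chosen only for illustration (none of the names or dimensions below come from the cited works):

```python
# Minimal autoencoder sketch: the encoder maps input x to a code h, the
# decoder reconstructs x from h, and training minimizes the difference
# (reconstruction loss) between the input and the output.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 52, code_dim: int = 10):  # hypothetical sizes
        super().__init__()
        self.encoder = nn.Sequential(            # x -> h
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, code_dim),
        )
        self.decoder = nn.Sequential(            # h -> x_hat
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, in_dim),
        )

    def forward(self, x):
        h = self.encoder(x)                      # encoded vector h
        return self.decoder(h)                   # reconstruction x_hat

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                           # ||x - x_hat||^2

x = torch.randn(64, 52)                          # placeholder input batch
for _ in range(200):                             # unsupervised training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)                  # penalize reconstruction error
    loss.backward()
    optimizer.step()
```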
“…The loss function used in training this network penalizes the input reconstruction error. After convergence, the trained network can be used for input reconstruction with minimal noise [45]. One of the advantages of an AE is that it can learn a lower-dimensional representation of the input data with low reconstruction error even when the data span a non-linear manifold in feature space.…”
Section: Summary of Deep Learning Techniques That Are Applied in L… (mentioning)
confidence: 99%
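As a follow-up to the hypothetical sketch above (reusing its `model` and `x`), this is how the trained encoder would serve as the non-linear dimensionality reducer this excerpt mentions, keeping only the code h as the extracted feature vector:

```python
# Reuses `model` and `x` from the autoencoder sketch above (hypothetical).
# After training converges, only the encoder is applied; its output h is
# the lower-dimensional feature representation, e.g. input to a classifier.
with torch.no_grad():
    h = model.encoder(x)     # shape: (64, 10) with the sizes assumed above
```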