2020
DOI: 10.1109/tii.2019.2951011

A Deep Nonnegative Matrix Factorization Approach via Autoencoder for Nonlinear Fault Detection

Cited by 44 publications (19 citation statements)
References 28 publications
“…Others have examined NMF extensions on the basis of sparseness and other constraints for graphical analysis [56] and deeply enhanced weighted NMF [57]. Even more recent work has leveraged NMF in the context of deep learning [58][59][60]. These newer techniques have not been used as extensively and have not been included here.…”
Section: Discussion
confidence: 99%
“…(25), we can also prove convergence under the updating rule of U in Eq. (26), so the objective function (25) is nonincreasing under the updating rules (26) and (27). Therefore, Algorithm 1 can be proved convergent under its iterative formulas in a similar way.…”
Section: Convergence Analysis
confidence: 95%
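The excerpt above argues that the objective is nonincreasing under each multiplicative update. The specific Eqs. (25)-(27) of the cited paper are not reproduced on this page, but the same monotonicity argument can be illustrated with the classical Lee-Seung multiplicative updates for the Frobenius NMF objective ||X - WH||_F^2 (a minimal sketch, not the cited algorithm):

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Factor nonnegative X (m x n) as W (m x rank) @ H (rank x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    losses = []
    for _ in range(n_iter):
        # H <- H * (W^T X) / (W^T W H): keeps H >= 0 elementwise
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        # W <- W * (X H^T) / (W H H^T): keeps W >= 0 elementwise
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        losses.append(np.linalg.norm(X - W @ H, "fro") ** 2)
    return W, H, losses

X = np.abs(np.random.default_rng(1).random((20, 15)))
W, H, losses = nmf_multiplicative(X, rank=4)
# The loss sequence is (numerically) nonincreasing, mirroring the
# convergence argument in the excerpt.
```

The `eps` in each denominator is a numerical guard; the exact Lee-Seung updates (without `eps`) are provably monotone.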
“…Recently, Guo et al. [26] proposed a Sparse Deep Nonnegative Matrix Factorization (SDNMF) algorithm for more accurate classification and better feature interpretation, which can learn localized features or generate more discriminative representations for samples in distinct classes by imposing an L1-norm penalty on the columns of certain factors. Although these multi-layer structure methods [27], [28] can extract more latent hierarchical features than one-layer structure methods, they still have two shortcomings:…”
Section: Introduction
confidence: 99%
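The multi-layer structure the excerpt refers to factorizes the data layer by layer: X ≈ W1·H1, then H1 ≈ W2·H2, and so on, so that X ≈ W1·W2·…·Hk and deeper factors form a hierarchy of nonnegative representations. The greedy layer-wise scheme below is an illustrative sketch of that idea, not the SDNMF algorithm itself (function names and ranks are assumptions):

```python
import numpy as np

def nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    """One-layer NMF via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_nmf(X, ranks):
    """Greedy layer-wise factorization, e.g. ranks=[8, 4]:
    X ~= Ws[0] @ Ws[1] @ ... @ H, each deeper H more abstract."""
    Ws, H = [], X
    for r in ranks:
        W, H = nmf(H, r)
        Ws.append(W)
    return Ws, H

X = np.abs(np.random.default_rng(2).random((30, 20)))
Ws, H = deep_nmf(X, ranks=[8, 4])
approx = Ws[0] @ Ws[1] @ H  # hierarchical reconstruction of X
```

In practice such greedy pretraining is usually followed by a joint fine-tuning pass over all layers; that step is omitted here for brevity.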
“…Bando et al. (2018) proposed a deep variational NMF by interpreting the spectrogram of input data as the sum of a speech spectrogram and a non-negative noisy spectrogram, which is modeled as the variational prior distribution. Ren et al. (2019) introduced an end-to-end deep NMF architecture with non-negative constraints and a factorization loss conducted on middle layers. These deep NMFs can further learn non-linear correlations hidden in data.…”
Section: Preliminaries and Related Work
confidence: 99%
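The autoencoder view of NMF underlying this line of work can be sketched as follows: an encoder maps X to a nonnegative code H, and a nonnegative decoder W reconstructs X ≈ W·H, so the decoder plays the role of the NMF basis while the encoder captures the non-linear mapping. The ReLU projection, plain projected gradient descent, and all names below are illustrative assumptions, not the cited architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.random((20, 50)))                    # m features x n samples
rank, lr = 5, 1e-3

E = rng.standard_normal((rank, 20)) * 0.1           # encoder weights (free sign)
W = np.abs(rng.standard_normal((20, rank))) * 0.1   # decoder basis, kept >= 0

def loss(E, W):
    H = np.maximum(E @ X, 0.0)
    return 0.5 * np.linalg.norm(W @ H - X, "fro") ** 2

loss0 = loss(E, W)
for _ in range(2000):
    H = np.maximum(E @ X, 0.0)        # ReLU keeps the code nonnegative
    R = W @ H - X                     # reconstruction residual
    gW = R @ H.T                      # grad of 0.5||WH - X||^2 wrt W
    gE = ((W.T @ R) * (H > 0)) @ X.T  # backprop through the ReLU to E
    W = np.maximum(W - lr * gW, 0.0)  # projected step keeps W >= 0
    E -= lr * gE
loss1 = loss(E, W)
H = np.maximum(E @ X, 0.0)
```

For fault detection, the residual X - W·H (or a statistic on the code H) would then be monitored for deviations from normal operating data; that monitoring stage is outside the scope of this sketch.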