2015 34th Chinese Control Conference (CCC)
DOI: 10.1109/chicc.2015.7260634
Bearing fault diagnosis method based on stacked autoencoder and softmax regression

Cited by 89 publications (41 citation statements) | References 12 publications
“…The softmax function is used as the cost function to transform the output of the previous layer into the probability of each state when solving the state classification problem. The softmax function is as follows [20]:…”
Section: Convolutional Neural Network
confidence: 99%
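The citing passage describes softmax turning a layer's outputs into per-class probabilities. A minimal sketch of that computation (the function name and the example logits are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability; softmax is
    # invariant to adding a constant to every logit.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical outputs of the previous layer for a 3-state problem
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)  # non-negative values that sum to 1
```

The largest logit receives the largest probability, which is why the argmax of the softmax output is taken as the predicted state.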
“…A review of the literature on the diagnosis of Alzheimer's disease concluded that brain data and images with a low sample size and high dimensionality are among the most important challenges, and new research can be carried out in this area [12][13][14][15]. Most recently used methods are deep learning methods, including deep sparse multi-task learning [16], stacked auto-encoders [17], and sparse regression models [18], each attempting to overcome the aforementioned challenges. These methods are mostly used to select features.…”
Section: Related Work
confidence: 99%
“…Among the features of this problem there are both informative and uninformative features, and selecting the informative ones can improve the classification accuracy of the disease. In [17], a deep architecture for removing uninformative features was proposed by recursively implementing sparse multi-task learning in a hierarchy. The optimal regression coefficients were assumed to reflect the relative importance of the features in representing the target response variables.…”
Section: Related Work
confidence: 99%
“…That is, for each sample in the training data set, the true class must be known before training. The trained SM then estimates the appropriate class for an input sample through a probabilistic computation [37]. SM structures are often encountered in deep models because they can be easily integrated into AEs [38].…”
Section: Softmax Classifier
confidence: 99%
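The passage above describes a supervised softmax classifier sitting on top of features, as in the paper's stacked autoencoder + softmax regression setup. A sketch of softmax (multinomial logistic) regression trained on encoded features — all names, shapes, and the synthetic data here are illustrative assumptions, not the paper's actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features, standing in for the hidden-layer codes a
# trained autoencoder would produce, with known fault-class labels.
X = rng.normal(size=(100, 8))   # 100 samples, 8 encoded features
y = X[:, :3].argmax(axis=1)     # 3 synthetic, linearly separable classes

# Softmax regression: one weight column per class, trained by
# batch gradient descent on the cross-entropy loss.
W = np.zeros((8, 3))
for _ in range(200):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)            # per-sample class probabilities
    G = np.zeros_like(P)
    G[np.arange(len(y)), y] = 1.0                # one-hot targets
    W -= 0.1 * X.T @ (P - G) / len(y)            # cross-entropy gradient step

pred = (X @ W).argmax(axis=1)
train_acc = (pred == y).mean()
```

This is the "probabilistic computation" the excerpt refers to: each row of `P` is a distribution over classes, and the predicted class is its argmax. In a stacked-autoencoder pipeline, `X` would come from the encoder and the whole network would typically be fine-tuned end to end.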