2017
DOI: 10.3390/s17102279
Self-Taught Learning Based on Sparse Autoencoder for E-Nose in Wound Infection Detection

Abstract: For an electronic nose (E-nose) used to distinguish wound infections, traditional learning methods have always required large quantities of labeled wound infection samples, which are both limited and expensive; thus, we introduce self-taught learning, combined with a sparse autoencoder and a radial basis function (RBF) network, into the field. Self-taught learning is a kind of transfer learning that can transfer knowledge from other fields to target fields, and can address the problem that labeled data (target fields) and unlabeled …

Cited by 21 publications (14 citation statements)
References 28 publications
“…Compared to dimension reduction with PCA, an autoencoder can preserve more non-linear relationships in the resulting feature space. A recent study [61] also showed that an autoencoder network built from unlabeled data can generate highly discriminative features for another labeled dataset. Zhao et al. [62] proposed a stacked sparse autoencoder model (SSAE), which was combined with a backpropagation neural network (BPNN) to perform feature extraction for Chinese liquor classification (Figure 5). After the model was trained, an extra prediction layer was appended to the encoder of the autoencoder for prediction.…”
Section: Feature Extraction Through Learning
confidence: 99%
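The pattern described in this excerpt — train an autoencoder on unlabeled data, then append a prediction layer to the frozen encoder — can be sketched as follows. This is a minimal illustration with synthetic toy data and NumPy, not the cited SSAE+BPNN model; the layer sizes, learning rates, and sparsity penalty are all arbitrary assumptions for the sketch.

```python
# Minimal sketch (not the cited SSAE): a one-layer sparse autoencoder
# trained on unlabeled data, whose encoder is then frozen and reused
# under a small logistic prediction layer. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- 1. Unsupervised phase: sparse autoencoder on unlabeled data ---
X_unlab = rng.normal(size=(200, 8))          # 200 unlabeled samples, 8 features
n_hidden, lr, l1 = 4, 0.05, 1e-3             # assumed hyperparameters
W_enc = rng.normal(scale=0.1, size=(8, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 8))

for _ in range(500):
    H = sigmoid(X_unlab @ W_enc)             # hidden code
    X_hat = H @ W_dec                        # reconstruction
    err = X_hat - X_unlab
    # Gradients of mean-squared reconstruction error + L1 sparsity on H
    dW_dec = H.T @ err / len(X_unlab)
    dH = err @ W_dec.T + l1 * np.sign(H)
    dW_enc = X_unlab.T @ (dH * H * (1 - H)) / len(X_unlab)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

# --- 2. Supervised phase: freeze encoder, train a prediction layer ---
X_lab = rng.normal(size=(60, 8))
y = (X_lab[:, 0] > 0).astype(float)          # toy binary labels
w, b = np.zeros(n_hidden), 0.0
for _ in range(2000):
    F = sigmoid(X_lab @ W_enc)               # frozen encoder features
    p = sigmoid(F @ w + b)                   # logistic prediction layer
    g = p - y
    w -= 0.1 * F.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((p > 0.5) == y).mean()
print(f"training accuracy with frozen encoder: {acc:.2f}")
```

The key design point mirrored from the excerpt is that only the prediction layer sees labels; the encoder's weights come entirely from the unlabeled phase.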
“…Such an assumption has been shown to yield good representations of the input data from the unlabeled data and to aid the classification task in a supervised setting [14]. Self-taught learning has been shown to be effective in various fields such as audio classification [15], [16], E-Nose in Wound Infection Detection [17], facial beauty prediction [18], and image classification [14]. In one study, the authors created codebooks of features, called basis vectors, corresponding to each activity using sparse coding [19] on the unlabeled data.…”
Section: Related Work
confidence: 99%
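The codebook idea in this excerpt — representing a signal as a sparse combination of learned basis vectors — can be illustrated with a greedy sparse-coding step. This is a toy matching-pursuit sketch over a random dictionary, not the cited method; the dictionary size and sparsity level are assumptions.

```python
# Toy sparse coding sketch: encode a signal as a sparse combination of
# unit-norm basis vectors (dictionary atoms) via greedy matching pursuit.
import numpy as np

rng = np.random.default_rng(1)

D = rng.normal(size=(16, 8))                 # dictionary: 8 basis vectors
D /= np.linalg.norm(D, axis=0)               # normalize each atom

x = 1.5 * D[:, 2] - 0.8 * D[:, 5]            # signal built from two atoms

def matching_pursuit(x, D, k=2):
    """Greedy sparse code: pick the best-correlated atom k times."""
    r, code = x.copy(), np.zeros(D.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ r))       # atom most correlated with residual
        c = D[:, j] @ r                      # its coefficient
        code[j] += c
        r -= c * D[:, j]                     # shrink the residual
    return code

code = matching_pursuit(x, D, k=2)
print("nonzero atoms:", np.flatnonzero(code))
```

Each signal's sparse code plays the role of the per-activity codebook entry described above: most coefficients are zero, and the few nonzero ones identify which basis vectors explain the signal.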
“…However, the majority of these studies use supervised learning, which often requires large amounts of labeled defective samples for model training [23]. The autoencoder (AE) network is a typical unsupervised method that has been widely used in shape retrieval [24], scene description [25], target recognition [26,27] and object detection [28]. It can be trained without any labeled ground truth or human intervention.…”
Section: Related Work and Foundations
confidence: 99%