2019
DOI: 10.1016/j.asoc.2018.10.035
Captured multi-label relations via joint deep supervised autoencoder

Cited by 12 publications (3 citation statements)
References 43 publications
“…According to the structure of the data, machine learning is divided into supervised, unsupervised, semi-supervised, and reinforcement learning (Zhang et al 2022d; Tran and Ha 2022). The characteristic of supervised learning is that the training data has corresponding labels (Lian et al 2019; Zhang et al 2018). According to the learning strategy, supervised learning is divided into generative and discriminative methods.…”
Section: General Concepts of Machine Learning — mentioning
confidence: 99%
“…Joint binary cross-entropy (JBCE) loss [21] is proposed to train the joint binary neural network (JBNN) to capture label relations. To reduce the computational complexity, partial label dependence can also contribute to this task, as demonstrated in [22]. A semi-supervised multi-label method is proposed in [23], where label correlations are incorporated by modifying the loss function.…”
Section: Related Work — mentioning
confidence: 99%
“…Several authors [46,20,38] have proposed multi-label classification solutions inspired by deep learning techniques. Some of these solutions [4,17] rely on autoencoders [12,37], which allow for unsupervised learning of features.…”
Section: Introduction — mentioning
confidence: 99%
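The unsupervised feature learning that the last statement attributes to autoencoders comes from reconstructing the input through a narrow code: no labels are needed, only the reconstruction error. A tiny tied-weight linear sketch on synthetic data (an illustrative assumption, not the architecture of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # unlabeled data, 8 features

W = rng.normal(scale=0.1, size=(8, 3))  # encoder weights, 3-dim code
lr = 0.01
for _ in range(500):
    H = X @ W                 # encode: project to the narrow code
    X_hat = H @ W.T           # decode with tied weights
    err = X_hat - X           # reconstruction error
    # Gradient of the squared reconstruction loss w.r.t. tied W.
    grad = X.T @ (err @ W) + err.T @ (X @ W)
    W -= lr * grad / len(X)

final_loss = np.mean((X @ W @ W.T - X) ** 2)
```

A linear autoencoder like this converges toward the principal subspace of the data; the nonlinear, deep variants cited above learn richer codes, but the label-free objective is the same.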