2022
DOI: 10.1016/j.comnet.2021.108742
Handling partially labeled network data: A semi-supervised approach using stacked sparse autoencoder

Cited by 7 publications (2 citation statements)
References 33 publications
“…In this respect, there are two main reasons why we believe that RL is a good candidate for the controller synchronization problem. The first reason is that, in wireless networks where several applications with human involvement exist, it is almost impossible to find labelled data for training the ML algorithms, whereas unlabelled data are often abundant and easily available [62]. Secondly, optimizing the synchronization process for heterogeneous networks is indeed a long-term decision that is affected by multiple conditions.…”
Section: Proposed Solution
confidence: 99%
“…To tackle this problem, some studies in recent years have combined the SAE with semi-supervised learning. For the classification of partially labeled network traffic samples, Aouedi et al. [23] proposed the semi-supervised stacked autoencoder (Semi-SAE) to realize semi-supervised learning with the SAE. This method performs unsupervised feature extraction on all samples in the pre-training stage and fine-tunes the network parameters based on the classification loss of the labeled samples.…”
Section: Introduction
confidence: 99%
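The two-stage scheme quoted above (unsupervised pre-training on all samples, then supervised fine-tuning on the labeled subset) can be sketched in a few lines of numpy. This is only an illustrative toy, not the authors' actual Semi-SAE: it uses a single autoencoder layer rather than a stack, omits the sparsity penalty, and all data, sizes, and learning rates are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Stage 1: unsupervised pre-training on ALL samples (labeled + unlabeled) ---
X_all = rng.normal(size=(200, 10))             # toy data: 200 samples, 10 features
n_hidden, lr = 4, 0.05
W_enc = rng.normal(scale=0.1, size=(10, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 10))

def recon_loss():
    return float(np.mean((sigmoid(X_all @ W_enc) @ W_dec - X_all) ** 2))

loss_before = recon_loss()
for _ in range(300):
    H = sigmoid(X_all @ W_enc)                 # encode
    err = H @ W_dec - X_all                    # linear decode, reconstruction error
    grad_H = err @ W_dec.T * H * (1 - H)       # backprop through the sigmoid
    W_dec -= lr * H.T @ err / len(X_all)
    W_enc -= lr * X_all.T @ grad_H / len(X_all)
loss_after = recon_loss()                      # reconstruction error has dropped

# --- Stage 2: supervised fine-tuning on the labeled subset only ---
X_lab = X_all[:40]                             # pretend only 40 samples have labels
y_lab = (X_lab[:, 0] > 0).astype(float)        # synthetic binary labels
W_clf = rng.normal(scale=0.1, size=(n_hidden,))
for _ in range(300):
    H = sigmoid(X_lab @ W_enc)
    p = sigmoid(H @ W_clf)                     # classifier probability
    err = p - y_lab                            # cross-entropy gradient w.r.t. logits
    W_enc -= lr * X_lab.T @ (np.outer(err, W_clf) * H * (1 - H)) / len(X_lab)
    W_clf -= lr * H.T @ err / len(X_lab)

acc = float(np.mean((sigmoid(sigmoid(X_lab @ W_enc) @ W_clf) > 0.5) == y_lab))
```

Note how the encoder weights `W_enc` are shared between the stages: the labels only enter in stage 2, which is exactly what lets the abundant unlabeled data shape the features.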