2016
DOI: 10.1016/j.knosys.2016.05.050
Human error tolerant anomaly detection based on time-periodic packet sampling

Cited by 4 publications (3 citation statements: 0 supporting, 3 mentioning, 0 contrasting)
References 26 publications
“…Few-shot learning is an emerging type of transfer learning technique. By reusing the transferable knowledge of existing models, a classifier can be built that identifies a novel category using only a few labeled training samples [13]. With the growing popularity of deep learning, few-shot learning models are drawing increasing attention in modern industrial applications.…”
Section: B. Few-Shot Learning in Industrial Applications
Citation type: mentioning
confidence: 99%
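The excerpt above describes the core few-shot mechanism: freeze the transferable representation of an existing model and classify a novel category from only a few labeled samples. The sketch below is a minimal, hypothetical illustration of that idea (it is not the method of [13]): a fixed random projection stands in for a pre-trained feature extractor, and queries are assigned to the nearest class prototype computed from a 5-shot support set.

```python
# Minimal, hypothetical sketch of few-shot classification by nearest class
# prototype in a frozen embedding space. The random projection below stands in
# for a real pre-trained feature extractor.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # frozen "pre-trained" embedding (stand-in)

def embed(x):
    """Map raw features into the frozen embedding space."""
    return x @ W

def few_shot_classify(support_x, support_y, query_x):
    """Label queries by the nearest class prototype (mean support embedding)."""
    z_s, z_q = embed(support_x), embed(query_x)
    classes = np.unique(support_y)
    protos = np.stack([z_s[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(z_q[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Two novel classes with five labeled samples each (a "5-shot" support set).
support_x = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(3, 1, (5, 8))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.vstack([rng.normal(0, 1, (3, 8)), rng.normal(3, 1, (3, 8))])
print(few_shot_classify(support_x, support_y, query_x))  # typically [0 0 0 1 1 1]
```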
“…In big data, datasets are classified into structured and unstructured data. For such datasets, feature extraction and feature reduction are the two important operations [11]. Sampling, normalization, denoising, and transformation are applied to the dataset to achieve noise removal, feature extraction, and a single input [12]. For large datasets, the sampling process is done by stratified sampling and random sampling [13].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
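The excerpt distinguishes two ways of drawing a manageable sample from a large dataset. As a hedged illustration (the names and sizes are my own, not from the cited work), the NumPy sketch below contrasts simple random sampling with stratified sampling on an imbalanced label; stratification keeps the rare class represented at its true rate.

```python
# Illustrative comparison of random vs. stratified sampling (NumPy only).
import numpy as np

rng = np.random.default_rng(42)

# Toy "large" dataset: 10,000 rows with an imbalanced binary label (~5% positives).
n = 10_000
labels = (rng.random(n) < 0.05).astype(int)

def random_sample(n_rows, sample_size):
    """Simple random sampling: every row is equally likely to be drawn."""
    return rng.choice(n_rows, size=sample_size, replace=False)

def stratified_sample(labels, sample_size):
    """Stratified sampling: draw from each class in proportion to its size."""
    picks = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        k = max(1, round(sample_size * len(idx) / len(labels)))
        picks.append(rng.choice(idx, size=k, replace=False))
    return np.concatenate(picks)

srs = random_sample(n, 500)
strat = stratified_sample(labels, 500)
print("random sample positive rate:    ", labels[srs].mean())
print("stratified sample positive rate:", labels[strat].mean())  # ~0.05 by construction
```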
“…This optimization is done rapidly and effectively by estimating the gradients from a few examples rather than from the whole training set. The visible neurons are first clamped to the training data to produce v_i, and the hidden units h_j are then sampled based on the probabilities shown in (12). This process is repeated one more time, so that the visible neurons are updated and the hidden neurons provide one-step "reconstructed" states v_i and h_j. Then, to raise the joint likelihood of the data, the update rule for the visible-to-hidden weights w_ij…”
Citation type: mentioning
confidence: 99%
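The excerpt (truncated at "w_ij…") describes one-step contrastive divergence (CD-1) for a restricted Boltzmann machine. Below is a hedged NumPy sketch of that procedure under standard assumptions: equation (12) of the citing paper is not reproduced here, so the code uses the usual RBM conditional P(h_j = 1 | v) = sigmoid(b_j + Σ_i v_i w_ij), and all names (W, b_h, b_v, lr) are illustrative.

```python
# Hedged sketch of one CD-1 update for a binary RBM: sample h from the data,
# reconstruct v and h once, then move w_ij by the difference between the
# data-driven and reconstruction-driven pairwise statistics <v_i h_j>.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_h, b_v, lr=0.1):
    """Apply one CD-1 weight update for a batch of binary visible vectors v0."""
    # Positive phase: hidden probabilities and samples given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # One-step reconstruction: resample the visibles, then hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Gradient estimate: <v_i h_j>_data - <v_i h_j>_reconstruction.
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / v0.shape[0]
    return W

# Tiny demo: 6 visible units, 4 hidden units, a batch of 8 binary vectors.
W = 0.01 * rng.standard_normal((6, 4))
b_h, b_v = np.zeros(4), np.zeros(6)
batch = (rng.random((8, 6)) < 0.5).astype(float)
W = cd1_update(batch, W, b_h, b_v)
```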