Published: 2022
DOI: 10.1007/s10994-022-06169-w

Adaptive infinite dropout for noisy and sparse data streams

Abstract: The ability to analyze data streams, which arrive sequentially and possibly infinitely, is increasingly vital in various online applications. However, data streams pose various challenges, including sparse and noisy data as well as concept drifts, which easily mislead a learning method. This paper proposes a simple yet robust framework, called Adaptive Infinite Dropout (aiDropout), to effectively tackle these problems. Our framework uses a dropout technique in a recursive Bayesian approach in order to create a…
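The abstract is truncated here, so the mechanism is only named, not specified. As a rough illustration of the general idea (a dropout mask applied inside a recursive Bayesian update, where each posterior becomes the prior for the next mini-batch), the following is a minimal NumPy sketch. It is not the paper's aiDropout algorithm: the StreamingBayesLinear class and parameters such as keep_prob are illustrative assumptions.

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # Inverted dropout: zero entries with probability 1 - keep_prob,
    # rescale survivors so the expected input is unchanged.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

class StreamingBayesLinear:
    # Recursive Bayesian linear regression (hypothetical sketch, not
    # the paper's model): the posterior after each mini-batch becomes
    # the prior for the next, and dropout perturbs incoming features
    # so no single noisy batch dominates the update.
    def __init__(self, dim, prior_var=1.0, noise_var=0.1, keep_prob=0.8):
        self.mean = np.zeros(dim)
        self.prec = np.eye(dim) / prior_var  # posterior precision matrix
        self.noise_var = noise_var
        self.keep_prob = keep_prob

    def update(self, X, y, rng):
        X = dropout(X, self.keep_prob, rng)
        # Conjugate Gaussian update in information (precision) form.
        new_prec = self.prec + X.T @ X / self.noise_var
        rhs = self.prec @ self.mean + X.T @ y / self.noise_var
        self.mean = np.linalg.solve(new_prec, rhs)
        self.prec = new_prec

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
model = StreamingBayesLinear(dim=3)
for t in range(200):
    # Sparse, noisy mini-batch arriving from the stream.
    X = rng.normal(size=(8, 3)) * (rng.random((8, 3)) < 0.5)
    y = X @ true_w + 0.1 * rng.normal(size=8)
    model.update(X, y, rng)
print(model.mean)  # approaches true_w, up to dropout's ridge-like shrinkage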

Cited by 3 publications (2 citation statements). References 40 publications.

Citation statements:
“…The Adaptive Infinite Dropout (AIDropout) [23] and Approximate Linear Dependence (ALD) [24] methods increase processing time, suffer from volatility, and are complex to train; noisy data misleads the samples and causes sudden variation in the input data stream. Moreover, these state-of-the-art methods do not apply to large datasets, so a complicated data stream can cause errors [25]. Data augmentation is not efficient for time series data and increases time complexity.…”
Section: Review of Related Work (mentioning, confidence: 99%)
“…which do not share any classes with CIFAR-10. LeNet [25] is used as the feature extractor, and batch normalization is added. The data set has a time series length of 1639.…”
Section: Parameter Settings (mentioning, confidence: 99%)
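The statement above only says that LeNet [25] serves as the feature extractor with batch normalization added. A common way to realize that combination is to insert a BatchNorm2d layer after each convolution of a LeNet-5-style network; the PyTorch sketch below is a generic illustration of that pattern, not the citing paper's exact configuration.

```python
import torch
import torch.nn as nn

class LeNetBN(nn.Module):
    # LeNet-5-style feature extractor with batch normalization after
    # each convolution (generic sketch; the citing paper's variant
    # and hyperparameters may differ).
    def __init__(self, in_channels=3, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5), nn.BatchNorm2d(6),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# CIFAR-sized 32x32 inputs yield the 16*5*5 flattened feature map above.
model = LeNetBN()
out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```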