Noise Is Useful: Exploiting Data Diversity for Edge Intelligence
2021
DOI: 10.1109/lwc.2021.3051688

Cited by 11 publications (3 citation statements)
References 10 publications
“…Though we used noisy data collected by the real CPSs in the experiments, systematically investigating the impact of noise was not in the scope of our work. Nevertheless, as many studies have already considered the noise issue in machine learning [14,57], they could better guide how to address noisy FOT logs in ENVI.…”
Section: Discussion
confidence: 99%
“…In our case study, though we used noisy data collected by the real CPS, systematically investigating the impact of noise was not in the scope of our work. Nevertheless, as many studies have already considered the noise issue in machine learning [40][41][42], they could better guide how to address noisy data.…”
Section: Discussion
confidence: 99%
“…Farooq et al [77] proposed a federated learning model using Long Short-term Memory (LSTM) neural networks to predict flood that outperforms traditional LSTM models. Furthermore, numerous studies and experiments show that adding noise is an effective mechanism for improving the generalization and robustness of deep neural networks [78][79][80]. This is especially beneficial in imbalanced learning, where new data instances are synthetically generated, and having a training algorithm that can guarantee robustness is crucial.…”
Section: Deep Imbalanced Learning
confidence: 99%
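
The last statement above treats noise injection as a regularizer for deep networks. As a minimal, hypothetical sketch of that general technique (not the specific method of this letter or of the cited works), the following PyTorch snippet adds zero-mean Gaussian noise to each training batch; the model, data, and noise_std value are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Hypothetical illustration: Gaussian input-noise injection during training.
    # noise_std is an assumed hyperparameter, not a value from the cited papers.
    def train_step(model, optimizer, loss_fn, x, y, noise_std=0.1):
        model.train()
        # Perturb inputs with zero-mean Gaussian noise; this acts as a simple
        # regularizer intended to improve generalization and robustness.
        x_noisy = x + noise_std * torch.randn_like(x)
        optimizer.zero_grad()
        loss = loss_fn(model(x_noisy), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage on random data (placeholders, not data from the works above):
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(32, 20)          # batch of 32 clean feature vectors
    y = torch.randint(0, 2, (32,))   # binary class labels
    print(train_step(model, optimizer, loss_fn, x, y))

Common variants of this idea scale noise_std to the feature scale of the inputs or anneal it over the course of training.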