2020 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata50022.2020.9378186
Privacy Preserving Time-Series Forecasting of User Health Data Streams

Cited by 10 publications
(5 citation statements)
References 20 publications
“…However, even ML models trained with raw data can also indirectly reveal sensitive information [17,50,16,49], in particular, RNNs [58]. To protect ML models against such threats, under the state-of-the-art DP guarantee [22,23], there exist some privacy-preserving ML alternatives adopted in the literature, e.g., input [19,31,24,29,10,9], gradient [4,33,51,60,48], and objective perturbation [18].…”
Section: Discussion and Related Work (mentioning)
confidence: 99%
“…Besides, the work in [30] surveys non-private deep learning applications to mobility datasets in general. Concerning differentially private deep learning, one can find the application of gradient perturbation-based DL models for load forecasting [51], an evaluation of differentially private DL models in federated learning for health stream forecasting [29], the proposal of locally differentially private DL architectures [31], practical libraries for differentially private DL [33,60], and theoretical research works [4,48,19].…”
Section: Discussion and Related Work (mentioning)
confidence: 99%
“…DP provides the required privacy guarantees at the cost of reducing the prediction accuracy of the machine learning model. An end-to-end pipeline consisting of FL and DP is proposed for health data streams using a clustering mechanism to reduce model training time with high accuracy [31]. The DP mechanism is used by the system to provide privacy guarantees, and results showed that prediction accuracy decreases by only 2% for the trained model due to this change.…”
Section: Differential Privacy (mentioning)
confidence: 99%
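The FL-plus-DP pipeline described above protects the model updates each client shares, not the raw data. A minimal sketch of one common way to do this, under assumptions not taken from the source (Laplace noise on clipped parameters; the function and parameter names are hypothetical):

```python
import numpy as np

def privatize_update(weights, sensitivity=1.0, epsilon=0.5, rng=None):
    """Add Laplace(sensitivity / epsilon) noise to a client's model
    update before sending it to the FL server (Laplace mechanism)."""
    rng = rng if rng is not None else np.random.default_rng(42)
    scale = sensitivity / epsilon
    return weights + rng.laplace(0.0, scale, size=weights.shape)

local_update = np.array([0.1, -0.2, 0.05, 0.3])  # toy client update
shared_update = privatize_update(local_update)   # what actually leaves the device
```

Smaller epsilon means more noise and stronger privacy, which is the accuracy/privacy trade-off the quote refers to; the reported ~2% accuracy drop corresponds to one particular setting of that trade-off in [31].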
“…The differential privacy approach provides an effective mechanism to safeguard the exchange of ML model parameters from clients to the FL or edge server [27][28][29]31]. DP also possesses the characteristics needed to mitigate the risks of information leakage during model parameter exchange [26,30,32].…”
Section: Differential Privacy (mentioning)
confidence: 99%
“…While social media might not represent everyone completely, it provides a sample space of people's opinions. Also, some people might not want to reveal their views due to privacy issues, so even if they are on social media, they might not express their true opinion [18]. Nevertheless, despite all these limitations, social media sentiment analysis provides the nearest approximation of public sentiment.…”
Section: Introduction (mentioning)
confidence: 99%