2020
DOI: 10.1109/tnse.2019.2933639

Privacy on the Edge: Customizable Privacy-Preserving Context Sharing in Hierarchical Edge Computing

Cited by 39 publications (9 citation statements) | References 25 publications
“…It is due to the constant quality of service, and it controls the payoff discount in case of a fixed strategy. In the case of the MDP approach (Gu B (2019)), it is the same as the myopic strategy.…”
Section: Analysis of Performance Metrics for the Proposed Model
confidence: 99%
“…Future work under progress is that we lay more focus on lightweight and dynamic searchable encryption schemes. In addition, we also plan to integrate federated learning [54], edge computing [55,56], and IoT [57] in this scenario to better enhance privacy protection.…”
Section: Summary and Future Work
confidence: 99%
“…For instance, edge computing strengthens from the intelligent decisions made by AI mechanisms, while AI models will be further improved by availing the distributed nature of edge computing. Yet, the edge server requires to collect all data generated by users for training and inference purposes, which might violate the user's privacy [13].…”
Section: Related Work
confidence: 99%
“…In doing so, each node independently selects a number (k) of desirable sub-datasets to be created, randomly picks k samples from the original data, and then assigns each one to each sub-dataset (lines 1-7). Afterward, and for each sample v in the remaining data, if v has not been assigned to any sub-dataset, the scheme calculates the similarity index of the said sample versus each sub-dataset (lines 13-16). At the end of the process, the sample is assigned to the sub-dataset with the minimum similarity score.…”
Section: A Data-Aware Splitting Scheme
confidence: 99%
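The splitting scheme quoted above can be sketched in Python. This is a minimal illustrative sketch, not the cited paper's implementation: the cited work does not specify its similarity index, so a mean cosine similarity against the subset's members is assumed here, and the function names (`similarity_index`, `data_aware_split`) are hypothetical.

```python
import random


def cosine_similarity(a, b):
    # Cosine similarity between two numeric vectors (0.0 for zero vectors).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def similarity_index(sample, subset):
    # Assumed metric: mean similarity of the sample vs. every subset member.
    return sum(cosine_similarity(sample, s) for s in subset) / len(subset)


def data_aware_split(data, k, seed=0):
    """Split data into k sub-datasets as the quoted scheme describes:
    seed each sub-dataset with one random sample, then assign every
    remaining sample to the sub-dataset with the minimum similarity score."""
    rng = random.Random(seed)
    data = list(data)
    seed_idx = rng.sample(range(len(data)), k)  # k distinct starting samples
    subsets = [[data[i]] for i in seed_idx]
    remaining = [v for i, v in enumerate(data) if i not in set(seed_idx)]
    for v in remaining:
        scores = [similarity_index(v, s) for s in subsets]
        subsets[scores.index(min(scores))].append(v)
    return subsets
```

Assigning each sample to its least-similar sub-dataset, as the quote describes, tends to make each sub-dataset internally diverse rather than clustering similar samples together.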