2021
DOI: 10.1016/j.neucom.2020.10.014
An efficient approach for privacy preserving decentralized deep learning models based on secure multi-party computation

Cited by 58 publications (21 citation statements)
References 38 publications
“…Finally, the user gives feedback and saves it through the key. Different mathematical tools can be employed to construct a threshold secret sharing scheme, which mainly consists of two phases: share distribution and secret reconstruction (Tran et al, 2021). First, the secret distributor generates sub-secrets and sends them to the participant set.…”
Section: Related Technologies
confidence: 99%
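The two phases described above — share distribution by a dealer and secret reconstruction by a quorum of participants — can be sketched with a Shamir-style (t, n)-threshold scheme. This is a minimal illustration, not the construction used in the cited paper; the field modulus and function names are assumptions.

```python
# Sketch of a (t, n)-threshold secret sharing scheme (Shamir-style):
# phase 1 distributes shares, phase 2 reconstructs the secret.
import random

PRIME = 2**61 - 1  # illustrative prime field modulus (assumption)

def distribute(secret, n, t):
    """Phase 1: the dealer embeds the secret as the constant term of a
    random degree-(t-1) polynomial and gives share (i, f(i)) to each of
    the n participants."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Phase 2: any t participants pool their shares and recover the
    secret by Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = distribute(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, which is what makes the scheme useful for protecting gradients in multi-party training.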
“…Tran et al [119] propose a framework, called Secure Decentralized Training Framework (SDTF), to protect the privacy of clients participating in training a decentralized FL model. The clients train the model without a server; instead, at each epoch they elect a master node (one of their number) that calculates the global gradient and sends it to all nodes, repeating until the algorithm converges.…”
Section: Main Contribution
confidence: 99%
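The serverless loop described in this statement — elect a master node each epoch, aggregate local gradients into a global gradient, broadcast, repeat — can be sketched as follows. This is a simplified illustration under assumed names (`elect_master`, a one-parameter toy model), not the SDTF protocol itself, which additionally protects the gradients cryptographically.

```python
# Sketch of serverless decentralized training with a per-epoch master node.
def elect_master(client_ids, epoch):
    # Illustrative deterministic election: rotate the master role by epoch
    # so every node can verify the choice without a central server.
    return client_ids[epoch % len(client_ids)]

def local_gradient(w, data):
    # Toy local computation (assumption): gradient of mean squared error
    # for a one-parameter model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def train(clients, w=0.0, epochs=50, lr=0.05):
    ids = sorted(clients)
    for epoch in range(epochs):
        master = elect_master(ids, epoch)
        grads = {cid: local_gradient(w, clients[cid]) for cid in ids}
        # The elected master averages local gradients into the global
        # gradient and sends the updated weights back to all nodes.
        global_grad = sum(grads.values()) / len(grads)
        w -= lr * global_grad
    return w

# Four nodes each hold points from the line y = 3x.
clients = {f"node{i}": [(x, 3.0 * x) for x in range(1, 4)] for i in range(4)}
w = train(clients)  # converges toward w = 3.0
```

Rotating the master role removes the single point of failure of a fixed aggregation server, at the cost of trusting (or, as in the cited work, cryptographically constraining) whichever node is elected each epoch.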
“…Tran et al proposed an efficient framework for privacy-preserving deep neural networks. This framework is not only capable of training deep learning models at high speed but is also highly resistant to collusion attacks [21].…”
Section: Secure Computation Of Sensitive Data
confidence: 99%