Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security 2019
DOI: 10.1145/3338501.3357371

HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning

Abstract: Federated learning has emerged as a promising approach for collaborative and privacy-preserving learning. Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which they might want to keep private. However, parameter interaction and the resulting model still might disclose information about the training data used. To address these privacy concerns, several approaches have been proposed based on differential privacy and secu…
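
For readers new to the setting, the sketch below illustrates a plain FedAvg-style round in which clients exchange only model parameters, never raw data. It is a minimal toy in Python; the function names, the least-squares loss, and all hyperparameters are illustrative assumptions, not details from the HybridAlpha paper.

```python
# Minimal FedAvg-style sketch: clients share parameter updates, never raw data.
# Illustrative only; names and the toy loss are not from the HybridAlpha paper.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One step of (toy) local training: gradient step on a least-squares loss."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, client_datasets):
    """Server averages the parameter vectors returned by the clients."""
    updates = [local_update(global_weights, d) for d in client_datasets]
    return np.mean(updates, axis=0)  # only parameters cross the network

# Toy usage: three clients, a 5-dimensional linear model.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```

As the abstract notes, even this parameter exchange can leak information about the underlying data, which is what the differential-privacy and cryptographic layers discussed below are meant to mitigate.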

Cited by 190 publications (29 citation statements)
References 39 publications
“…However, methods such as HE result in high communication costs and long convergence times. HybridAlpha then emerged as an approach that combines functional encryption with an SMC protocol to achieve a high-performance model without sacrificing privacy (R. Xu et al., 2019).…”
Section: Privacy of FL
confidence: 99%
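
To make the cited combination concrete, the toy below simulates the inner-product functionality that functional-encryption-based aggregation exposes: given a key bound to the averaging weights y, the aggregator recovers only the weighted sum of client updates, never an individual update. This is a one-time-pad analogy of that functionality, not HybridAlpha's actual multi-input functional encryption scheme; every name and parameter here is illustrative.

```python
# One-time-pad analogy of an inner-product functionality: the evaluator with a
# key for weights y learns only sum(y_i * x_i), not the individual x_i.
# NOT the real MIFE construction used by HybridAlpha; conceptual sketch only.
import numpy as np

rng = np.random.default_rng(1)

def setup(num_clients, dim):
    """Trusted party samples per-client masks (stand-in for a master secret key)."""
    return [rng.normal(size=dim) for _ in range(num_clients)]

def encrypt(x, mask):
    return x + mask                                        # client hides its update x

def keygen(masks, y):
    return sum(y_i * m for y_i, m in zip(y, masks))        # key tied to weights y

def decrypt(ciphertexts, y, sk_y):
    return sum(y_i * c for y_i, c in zip(y, ciphertexts)) - sk_y  # = sum y_i * x_i

masks = setup(3, 4)
xs = [rng.normal(size=4) for _ in range(3)]                # three client model updates
y = [1 / 3, 1 / 3, 1 / 3]                                  # equal-weight averaging
cts = [encrypt(x, m) for x, m in zip(xs, masks)]
agg = decrypt(cts, y, keygen(masks, y))
assert np.allclose(agg, sum(w * x for w, x in zip(y, xs)))
```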
“…Aledhari et al [102] mainly focus on architecture options for FL-based models - Horizontal FL [89], Vertical FL [89], MMVFL [103], FTL [71], FEDF [104], PerFit [105], FedHealth [106], FADL [107], Blockchain-FL [108], whereas the primary focus of [109] is on aggregation techniques - FedAvg [1], SMC-avg [19], FedProx [4], FedMA [110], Scaffold: Stochastic Controlled Averaging for FL [58], Tensor Factorization [111], FedBCD [31], Federated Distillation (FD) and Federated Augmentation (FAug) [18], Co-Op, LoAdaBoost [17], HybridFL [34], FedCS [112], PrivFL [113], VerifyNet [114].…”
Section: Threat Models and Attack Types
confidence: 99%
“…In view of these attacks, PPML protocols have been developed. Existing techniques for designing PPML protocols can be broadly classified into four categories: 1) secure multi-party computation techniques, e.g., [3,11,13,28,29,33], 2) homomorphic encryption, e.g., [24,26,35], 3) differential privacy and homomorphic encryption or secure aggregation, e.g., [9,40,42], and 4) leveraging trusted execution environments (e.g., Intel-SGX), e.g., [18,30]. In the private training using secure multi-party computation, the training data is shared using a secret-sharing protocol among a small set of servers (e.g., 2, 3 or 4-server) (e.g., [11,28,29]), and then the training is conducted and the model is secret-shared among participating servers.…”
Section: Introduction
confidence: 99%
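
The multi-server category described in the statement above rests on additive secret sharing. A hedged sketch of that primitive follows; the modulus and names are illustrative, not taken from any cited protocol.

```python
# Hedged sketch of additive secret sharing, the primitive behind the
# multi-server PPML category mentioned above; parameters are illustrative.
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a public prime

def share(value, n_servers):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each server sees one share, which alone reveals nothing about the secret.
shares = share(42, n_servers=3)
assert reconstruct(shares) == 42
# Linear operations can be done share-wise, without ever reconstructing:
a, b = share(10, 3), share(32, 3)
assert reconstruct([(x + y) % PRIME for x, y in zip(a, b)]) == 42
```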
“…First, we propose a secure, verifiable and robust model aggregation protocol with a single round in the presence of dynamic user participation or dropouts. Compared to multi-round secure aggregation solutions (e.g., [9,26]) and solutions that are aided by a fully trusted third party (TTP) (e.g., [42]), PROV-FL achieves single-round ML model aggregation, without a fully trusted third party, that is robust against dropouts. Second, we construct the PROV-FL training protocols by combining our new model aggregation protocol in the federated training along with the differential privacy mechanism to provide a good balance between efficiency, privacy and accuracy in training an ML model.…”
Section: Introduction
confidence: 99%
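
The cited combination of secure aggregation with a differential-privacy mechanism typically amounts to clipping each client update and adding calibrated noise to the aggregate. A hedged sketch follows; the clipping bound and noise scale are illustrative placeholders, not PROV-FL's actual parameters.

```python
# Hedged sketch: bound each update's sensitivity by clipping, average, then add
# Gaussian noise. Values of clip_norm and noise_std are placeholders only.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip each update, average the clipped updates, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(scale=noise_std, size=avg.shape)

# Toy usage: five 8-dimensional client updates.
updates = [np.random.default_rng(i).normal(size=8) for i in range(5)]
private_avg = dp_aggregate(updates)
```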