2019 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata47090.2019.9006280
Secure and Efficient Federated Transfer Learning

Abstract: Machine Learning models require a vast amount of data for accurate training. In reality, most data is scattered across different organizations and cannot be easily integrated under many legal and practical constraints. Federated Transfer Learning (FTL) was introduced in [1] to improve statistical models under a data federation that allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred in the network. As a result, a target-domain party can build mor…

Cited by 84 publications (39 citation statements) | References 15 publications
“…The authors complement the federated transfer learning approach with a secure transfer cross-validation model which employs additively homomorphic encryption (HE) to protect the performance of the federated transfer learning approach from the security perspective. In [66], the authors extend and improve upon [65] in two ways: (1) reducing the overhead of the security model by an order of magnitude by employing secret sharing instead of homomorphic encryption, and (2) extending the semi-honest secure multi-party computation model to consider dishonest, malicious parties that might arbitrarily deviate from the federated training process. The SPDZ secure multi-party computation protocol is investigated in this context.…”
Section: Distribution Imbalancementioning
confidence: 99%
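The overhead reduction described in the statement above comes from replacing ciphertext operations of additively homomorphic encryption with cheap modular arithmetic on secret shares. The following toy Python sketch illustrates additive secret sharing for aggregating per-party values in general; it is not the protocol of [65] or [66], and the modulus, function names, and sample values are arbitrary assumptions:

import secrets

P = 2**61 - 1  # toy prime modulus for the share field (assumed, not from the cited work)

def share(value, n_parties):
    # Split `value` into n_parties additive shares that sum to `value` mod P.
    parts = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    # Recombine additive shares.
    return sum(parts) % P

# Each party secret-shares its local value; shares are summed component-wise,
# so only the aggregate is ever reconstructed -- modular additions replace the
# far more expensive homomorphic ciphertext operations.
values = [5, 11, 2]  # toy per-party inputs
shared = [share(v, len(values)) for v in values]
aggregate_shares = [sum(column) % P for column in zip(*shared)]
assert reconstruct(aggregate_shares) == sum(values) % P

In a real deployment each column of shares would be held by a different party; the sketch keeps everything in one process purely to show the arithmetic.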
“…Compared with some secure deep learning methods that suffer from accuracy loss, FTL can achieve the same accuracy as non-privacy-preserving methods and higher accuracy than non-federated self-learning methods. Sharma et al. [157] proposed a secure and efficient FTL framework based on multi-party computation. This allows clients to train a transfer learning model while keeping their datasets private against adversaries.…”
Section: Federated Transfermentioning
confidence: 99%
“…Secret sharing [156] is a cryptographic technique guaranteeing that a secret split into n shares can be reconstructed only when a sufficient number of shares are combined. Secret sharing has been used in many FL frameworks to achieve privacy preservation [38,59,113,126,157,169,198,224]. For example, Bonawitz et al. [13] proposed a practical and secure framework for FL based on secret sharing.…”
Section: Encryption-based Ppflmentioning
confidence: 99%
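The "sufficient number of shares" property quoted above is threshold (t-of-n) secret sharing. The sketch below is a minimal Shamir-style construction from the textbook literature, not the specific scheme of [156] or of any FL framework cited here; the field size, function names, and parameters are assumptions:

import secrets

P = 2**127 - 1  # Mersenne prime field modulus (toy choice)

def shamir_share(secret, n, t):
    # Split `secret` into n shares; any t of them reconstruct it.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = shamir_share(secret=42, n=5, t=3)
assert shamir_reconstruct(shares[:3]) == 42   # any 3 shares suffice
assert shamir_reconstruct(shares[1:4]) == 42

Any t shares determine the degree-(t-1) polynomial and hence the secret at x = 0, while fewer than t shares reveal nothing about it.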
“…An authentication mechanism is also incorporated to verify the clients. Sharma et al. [30] enhance the security of existing federated transfer learning models under a malicious setting, where some players could arbitrarily deviate from the predefined protocol. They use a variant of Multi-Party Computation (MPC) to improve usability in the presence of malicious clients.…”
Section: B Privacy and Security Of Flmentioning
confidence: 99%
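Malicious (as opposed to semi-honest) security of the kind described above is commonly obtained by attaching information-theoretic MACs to shared values, as in SPDZ-style protocols: a party that opens a value inconsistent with the protocol is detected with overwhelming probability. The sketch below is a simplified, single-process illustration of that MAC check only; it is not the protocol of [30], it omits commitments and preprocessing, and all parameters and names are assumptions:

import secrets

P = 2**61 - 1  # toy prime field (assumed parameter)
n = 3          # number of parties in this illustration

def additive_share(v, n):
    # Split v into n additive shares modulo P.
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

# For illustration a trusted dealer shares the global MAC key alpha and, for a
# value x, both x and its MAC alpha*x. Real protocols generate these shares
# without any dealer.
alpha = secrets.randbelow(P)
alpha_shares = additive_share(alpha, n)

x = 123456
x_shares = additive_share(x, n)
mac_shares = additive_share(alpha * x % P, n)

def open_and_check(x_shares, mac_shares, alpha_shares, tamper=0):
    # Open x (a cheater may shift it by `tamper`), then run the MAC check:
    # each party i publishes sigma_i = mac_i - alpha_i * x_open, and the
    # opened value is accepted only if the sigmas sum to zero mod P.
    x_open = (sum(x_shares) + tamper) % P
    sigmas = [(mac_shares[i] - alpha_shares[i] * x_open) % P for i in range(n)]
    return sum(sigmas) % P == 0

assert open_and_check(x_shares, mac_shares, alpha_shares)         # honest opening accepted
assert not open_and_check(x_shares, mac_shares, alpha_shares, 1)  # deviation detected

A cheating party that shifts the opened value passes the check only by guessing the MAC key, which happens with probability about 1/P.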