2022
DOI: 10.14778/3565816.3565823
OpBoost

Abstract: Vertical Federated Learning (FL) is a new paradigm that enables users with non-overlapping attributes of the same data samples to jointly train a model without directly sharing the raw data. Nevertheless, recent works show that it is still not sufficient to prevent privacy leakage from the training process or the trained model. This paper focuses on studying the privacy-preserving tree boosting algorithms under the vertical FL. The existing solutions based on cryptography involve heavy computation and communica…

Cited by 13 publications (1 citation statement)
References 37 publications
“…Li et al. [29] address the matrix factorization problem in VFL for training recommendation models. Local Differential Privacy (LDP) is studied and applied to VFL tree boosting models in [28]. Though not designed for VFL, He et al. [20] propose TransNet to encrypt vertically partitioned data before sending it to the neural network, whereas the bottom model in VFL offers similar functionality.…”
Section: Related Work 5.1 Vertical Federated Learning
Confidence: 99%
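The citing statement's core technical point is applying Local Differential Privacy to values shared during VFL tree boosting. As a minimal illustration only (this is the generic Laplace mechanism on a bounded value, not the specific order-preserving desensitization algorithm the cited paper proposes; function names here are hypothetical):

```python
import random
import math

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def ldp_perturb(value, lo, hi, epsilon):
    """Perturb a feature value bounded in [lo, hi] so the released value
    satisfies epsilon-LDP under the Laplace mechanism, then clamp it
    back into the valid domain before sharing with the other party."""
    sensitivity = hi - lo  # worst-case change over the bounded domain
    noisy = value + laplace_noise(sensitivity / epsilon)
    return min(max(noisy, lo), hi)
```

A smaller epsilon adds more noise (stronger privacy, lower utility); clamping keeps the perturbed value usable as a split candidate on the receiving side.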