2022
DOI: 10.48550/arxiv.2210.01318
Preprint
OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization

Abstract: Vertical Federated Learning (FL) is a new paradigm that enables users holding non-overlapping attributes of the same data samples to jointly train a model without directly sharing the raw data. Nevertheless, recent works show that this alone is still not sufficient to prevent privacy leakage from the training process or the trained model. This paper focuses on privacy-preserving tree boosting algorithms under vertical FL. Existing cryptography-based solutions involve heavy computation and communica…

Year Published

2024
Cited by 1 publication (1 citation statement)
References 38 publications
“…FedXGBoost [54] uses LDP to add noise to perturb the first-order approximation, and calculates the split score through the perturbed results so as to accelerate the training process on the premise of small accuracy loss. OpBoost [58] desensitizes the training data using distance-based LDP (dLDP), and combines an effective sampling distribution to find the trade-off between desensitization values and privacy, thus improving the accuracy and efficiency of the original LDP model.…”
Section: Different Privacy
confidence: 99%
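The citation statement above contrasts plain LDP noise with OpBoost's distance-based LDP (dLDP), where the probability of reporting a perturbed value decays with its distance from the true value, so the ordering of inputs is approximately preserved. Below is a minimal illustrative sketch of that idea using an exponential-mechanism-style sampler over a discretized integer domain; the function name, the domain discretization, and the decay form are assumptions for illustration, not the paper's exact algorithm or sampling distribution.

```python
import math
import random

def dldp_perturb(value, domain_min, domain_max, eps):
    """Report a perturbed value from an ordered integer domain.

    Probability mass decays exponentially with distance from the true
    value (exp(-eps * |v - value| / 2)), so nearby outputs are most
    likely and the relative order of different inputs is approximately
    preserved -- the intuition behind distance-based LDP (dLDP).
    This is an illustrative sketch, not OpBoost's actual mechanism.
    """
    domain = list(range(domain_min, domain_max + 1))
    weights = [math.exp(-eps * abs(v - value) / 2.0) for v in domain]
    return random.choices(domain, weights=weights, k=1)[0]

# Example: desensitize an attribute column (e.g., ages in [0, 100])
# before sharing it for split finding; smaller eps means more noise.
noisy_ages = [dldp_perturb(age, 0, 100, eps=1.0) for age in (20, 40, 60, 80)]
```

With a large eps the output concentrates on the true value; with a small eps it spreads over the domain, trading accuracy for privacy, which is the trade-off the citing paper attributes to OpBoost's sampling distribution.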