2020
DOI: 10.1016/j.neucom.2019.09.020
Differential privacy for sparse classification learning

Abstract: In this paper, we present a differentially private version of a convex and nonconvex sparse classification approach. Based on the alternating direction method of multipliers (ADMM) algorithm, we transform the solution of the sparse problem into a multistep iterative process. We then add exponential noise to the stable steps to achieve privacy protection. By the post-processing property of differential privacy, the proposed approach satisfies ε-differential privacy even when the original problem is unstable. …
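To make the abstract's recipe concrete, here is a minimal, hypothetical sketch, not the paper's algorithm: a lasso-type ADMM split whose stable z-update (soft-thresholding) is perturbed with Laplace noise as a stand-in for the paper's exponential noise. The noise scale is an illustrative placeholder, not a calibrated privacy guarantee.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the L1 norm: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def private_sparse_admm(X, y, lam=0.1, rho=1.0, noise_scale=0.01,
                        n_iter=100, rng=None):
    """Sketch: ADMM for min_w 0.5*||Xw - y||^2 + lam*||z||_1 s.t. w = z,
    with noise injected at the stable z-update (placeholder calibration)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    z = np.zeros(d)
    u = np.zeros(d)
    # The w-update solves a ridge system; cache its matrix once.
    A = X.T @ X + rho * np.eye(d)
    Xty = X.T @ y
    for _ in range(n_iter):
        w = np.linalg.solve(A, Xty + rho * (z - u))   # w-update (ridge step)
        z = soft_threshold(w + u, lam / rho)          # stable z-update
        z += rng.laplace(scale=noise_scale, size=d)   # illustrative noise
        u += w - z                                    # dual update
    return z
```

Because every later iterate is a deterministic function of the noised z, the post-processing property the abstract invokes lets the guarantee of the noised step carry over to the final output.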

Cited by 17 publications (10 citation statements)
References 33 publications (42 reference statements)
“…In addition to its original version, many variations have been presented, such as [23,24] and [12]. Several ADMM-based differentially private algorithms have been presented: for example, [25] applied the objective perturbation technique to the original ADMM problem, [26] and [27] applied output and objective perturbation techniques, and [28] applied the gradient perturbation technique to ADMM-based algorithms in distributed settings.…”
Section: Related Work
confidence: 99%
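For orientation, the perturbation styles named above differ only in where noise enters the pipeline. A minimal sketch of output perturbation (the general style, not any cited paper's exact mechanism): train non-privately, then add a noise vector whose norm is Gamma-distributed, as in classic ε-DP ERM. The `sensitivity` argument is a caller-supplied assumption that must be derived for the specific loss and regularizer.

```python
import numpy as np

def l2_ball_noise(d, scale, rng):
    """Sample b in R^d with density proportional to exp(-||b|| / scale):
    a Gamma-distributed norm in a uniformly random direction."""
    norm = rng.gamma(shape=d, scale=scale)
    direction = rng.normal(size=d)
    return norm * direction / np.linalg.norm(direction)

def output_perturbation(w_hat, eps, sensitivity, rng=None):
    """Add calibrated noise to an already-trained weight vector w_hat.
    `sensitivity` is the L2 sensitivity of training, assumed known."""
    rng = np.random.default_rng() if rng is None else rng
    return w_hat + l2_ball_noise(w_hat.size, sensitivity / eps, rng)
```

Objective perturbation instead adds a random linear term to the training objective before solving, and gradient perturbation noises each (clipped) gradient step during optimization.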
“…[35] and [36] have shown that L1-regularized classification performs well in feature selection. Owing to their assumptions on the loss function, many differentially private ERM algorithms cannot be applied directly to L1-regularized classification, with a few exceptions such as [4], [25], and [28].…”
Section: Related Work
confidence: 99%
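As a non-private illustration of the feature-selection behavior this excerpt attributes to [35] and [36] (the scikit-learn setup below is an assumption, not the cited papers' experiments), an L1 penalty drives most coefficients exactly to zero:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data where only 5 of the 50 features are informative.
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)
# The L1 penalty zeroes out most coefficients, selecting features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(f"{selected.size} of 50 features kept:", selected)
```

The non-smoothness of the L1 term is precisely what violates the differentiability assumptions many DP ERM analyses place on the objective, which is why the excerpt singles out [4], [25], and [28] as exceptions.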