2019
DOI: 10.3390/make1010029
Differentially Private Image Classification Using Support Vector Machine and Differential Privacy

Abstract: The ubiquity of data, including multimedia data such as images, enables easy mining and analysis of such data. However, such analysis might involve sensitive data such as medical records (including radiological images) and financial records. Privacy-preserving machine learning aims to analyze such data without compromising privacy. There are various privacy-preserving data analysis approaches, such as k-anonymity, l-diversity, t-closeness and Diffe…

Cited by 30 publications (25 citation statements)
References 22 publications (23 reference statements)
“…After a long time, Zhang et al (2019) proposed a differentially private SVM based on dual variable perturbation. Recently, Senekane (2019) performed differentially private image classification using SVM. Among these differentially private SVM classification algorithms, Rubinstein et al (2009) and Zhang et al (2019) utilize the output perturbation technique from differential privacy, while Senekane (2019) employs the input perturbation technique.…”
Section: Differentially Private Classification
confidence: 99%
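The output perturbation technique mentioned above (Rubinstein et al, 2009; Zhang et al, 2019) trains a model on the raw data and then adds calibrated noise to the learned parameters. A minimal sketch of the idea follows; the subgradient trainer, the per-coordinate Laplace noise, and the `2/(n*lam*epsilon)` sensitivity bound are illustrative simplifications of the Chaudhuri-style calibration, not the exact mechanisms of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_svm(X, y, lam=0.1, lr=0.1, epochs=200):
    # Plain subgradient descent on the L2-regularized hinge loss.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1  # points violating the margin
        grad = lam * w - (X[active] * y[active, None]).sum(axis=0) / n
        w -= lr * grad
    return w

def output_perturbation(w, epsilon, lam, n):
    # Per-coordinate Laplace noise at scale 2/(n*lam*epsilon): a
    # simplified stand-in for the exact calibration in the literature.
    scale = 2.0 / (n * lam * epsilon)
    return w + rng.laplace(0.0, scale, size=w.shape)

# Toy linearly separable data with labels in {-1, +1}.
X = rng.normal(size=(200, 2)) + np.where(rng.random(200) < 0.5, 2.0, -2.0)[:, None]
y = np.sign(X[:, 0] + X[:, 1])
y[y == 0] = 1
w = train_linear_svm(X, y)
w_priv = output_perturbation(w, epsilon=1.0, lam=0.1, n=len(X))
```

Only the perturbed weights `w_priv` would be released; larger datasets or larger ɛ shrink the noise scale and preserve more accuracy.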
“…Therefore, in this study, we focused on using input perturbation to perform a differentially private classification task. To reach this goal, we adopted the input perturbation technique of differential privacy, as used in the studies of Mivule et al (2012), Sánchez et al (2016), Sarwate and Chaudhuri (2013), and Senekane (2019), to perform privacy-preserving classification. We experimentally analyzed the performance of the well-known classification algorithms C4.5, Naïve Bayes, Bayesian Networks, IBk, K*, One Rule, PART, Random Tree, and Ripper on differentially private data, obtained by applying input perturbation to 8 widely used UCI datasets at various privacy levels, varying ɛ from 1 to 5 and also using small ɛ values (ɛ < 1).…”
Section: Differentially Private Classification
confidence: 99%
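Input perturbation, as described in the statement above, adds noise to the data itself before any classifier sees it, so downstream algorithms need no modification. A minimal sketch under stated assumptions: features are clipped to a known range so the per-feature sensitivity is bounded, and the budget ɛ is treated per feature; the function name and calibration are illustrative, not the exact mechanism of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def input_perturbation(X, epsilon, lower=0.0, upper=1.0):
    # Clip each feature to [lower, upper] so the per-feature sensitivity
    # is bounded by (upper - lower), then add Laplace noise at scale
    # (upper - lower) / epsilon. Treating epsilon per feature is a
    # simplifying assumption; a full analysis would split the budget.
    Xc = np.clip(X, lower, upper)
    scale = (upper - lower) / epsilon
    return Xc + rng.laplace(0.0, scale, size=Xc.shape)

# Example: privatize a small feature matrix before handing it to any
# off-the-shelf classifier (C4.5, Naive Bayes, IBk, ...).
X = rng.random((5, 3))          # features already in [0, 1]
X_priv = input_perturbation(X, epsilon=1.0)
```

Because the noise is injected once at the data layer, the same privatized matrix can be fed to all nine classifiers in the study without touching their internals.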
“…It was expected to protect the small-sample data without interfering with the model's classification performance on the whole dataset. Makhamisa Senekane [17] reported a scheme for privacy-preserving image classification using Support Vector Machine and DP. SVM was chosen as the classification algorithm because, unlike variants of artificial neural networks, it converges to a global optimum.…”
Section: Related Work
confidence: 99%
“…It can be seen from the analysis that the SVM classification algorithms based on differential privacy protection proposed in [10]-[17] have three types of problems: (1) When the training set was particularly large, the prediction time of the support vector machine became particularly large, the noise increased, and the accuracy decreased. (2) The restriction on the objective function was overly strong, requiring it to remain convex and differentiable, so the approach lacked generality.…”
Section: Related Work
confidence: 99%