2018
DOI: 10.1109/jstsp.2018.2877842
Distributed Differentially Private Algorithms for Matrix and Tensor Factorization

Abstract: In many signal processing and machine learning applications, datasets containing private information are held at different locations, requiring the development of distributed privacy-preserving algorithms. Tensor and matrix factorizations are key components of many processing pipelines. In the distributed setting, differentially private algorithms suffer because they introduce noise to guarantee privacy. This paper designs new and improved distributed and differentially private algorithms for two popular matri…

Cited by 29 publications (30 citation statements)
References 37 publications
“…Apart from the above-mentioned works, there are other research works in closely related areas, such as tensor/matrix factorizations and functional optimization schemes. In more detail, the authors of [59,60] discussed differentially private algorithms for tensor decomposition, in both centralized and distributed settings [60]. The authors of [40] applied a DP framework to the matrix factorization process with four different possible perturbations: input perturbation, private stochastic gradient perturbation, alternating least squares (ALS) with output perturbation, and output perturbation.…”
Section: Obfuscation/Perturbation (Differentially Private Learning) [mentioning]
confidence: 99%
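To make the four perturbation options concrete, the sketch below illustrates the first of them, input perturbation, for matrix factorization: Gaussian noise calibrated to the sensitivity of a single clipped entry is added to the data matrix once, and an ordinary (non-private) ALS then runs on the noisy matrix, which remains private by post-processing. The function name, clipping bound, and ALS details are illustrative assumptions, not the exact construction of [40].

```python
import numpy as np

def input_perturbed_mf(R, rank, epsilon, delta, clip=1.0, iters=50, seed=0):
    """Hypothetical sketch: input perturbation for (epsilon, delta)-DP
    matrix factorization. Privacy comes entirely from the one-shot
    Gaussian noise; the ALS loop is plain post-processing."""
    rng = np.random.default_rng(seed)
    R = np.clip(R, -clip, clip)
    # Gaussian mechanism: changing one entry moves R by at most 2*clip in L2.
    sigma = 2 * clip * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    R_noisy = R + rng.normal(0.0, sigma, size=R.shape)

    m, n = R.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    lam = 0.1  # ridge term keeping the least-squares solves well-posed
    for _ in range(iters):
        U = R_noisy @ V @ np.linalg.inv(V.T @ V + lam * np.eye(rank))
        V = R_noisy.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(rank))
    return U, V
```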
“…To effectively parallelize or distribute the calculation of MTTKRP, the majority of researchers rely on parallel and distributed setups [12], [14], [15], [16], [17], [26]. For instance, the parallel and distributed stochastic gradient-based FlexiFact algorithm was proposed to reduce the computational cost [14].…”
Section: Related Work [mentioning]
confidence: 99%
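For context, MTTKRP (matricized tensor times Khatri-Rao product) is the bottleneck kernel that the works above parallelize. Below is a minimal single-node NumPy sketch, assuming a dense third-order tensor; the einsum contraction computes X_(1)(C ⊙ B) without ever materializing the Khatri-Rao product.

```python
import numpy as np

def mttkrp_mode1(X, B, C):
    """Mode-1 MTTKRP for a tensor X of shape (I, J, K) with factor
    matrices B (J x R) and C (K x R). Returns the I x R result of
    X_(1) times the Khatri-Rao product of C and B, the dominant
    cost of each CP-ALS iteration."""
    return np.einsum('ijk,jr,kr->ir', X, B, C)
```

Distributed implementations such as those cited split this contraction across workers, since each row of the result depends only on one slice of X.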
“…Differentially private algorithms perform worse in the distributed environment since they must introduce a larger volume of noise. However, a distributed differentially private algorithm for PCA is proposed in [47], which employs a correlated noise design scheme to alleviate the effects of noise and achieves the same noise level as the centralized scenario. This method defines a noise generator to generate the D × D matrix E_s i.i.d.…”
Section: CI [mentioning]
confidence: 99%
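The sketch below illustrates one way such a correlated noise design can look: each of the S sites draws a D × D matrix E_s whose sum across sites is zero, so the correlated parts mask each site's local share yet cancel at the aggregator, leaving only the smaller independent noise G_s. The function name, the zero-sum construction, and the noise split are assumptions for illustration, not the exact generator of [47].

```python
import numpy as np

def correlated_site_noise(num_sites, D, sigma_corr, sigma_local, seed=0):
    """Hypothetical sketch of a correlated noise design for distributed
    DP PCA: the E_s sum to zero over sites (cancel on aggregation),
    while G_s is small independent per-site noise that survives
    averaging."""
    rng = np.random.default_rng(seed)
    E = rng.normal(0.0, sigma_corr, size=(num_sites, D, D))
    E -= E.mean(axis=0, keepdims=True)   # enforce sum_s E_s = 0
    G = rng.normal(0.0, sigma_local, size=(num_sites, D, D))
    return E, G

# Site s would publish its local second-moment matrix A_s + E[s] + G[s];
# averaging over sites leaves mean(A_s) + mean(G_s), i.e. noise at roughly
# the centralized level rather than S times larger.
```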
“…Some techniques are designed to reduce time complexity, and there remains room for further improvement. In addition, a distributed differentially private algorithm [47] for PCA has been developed.…”
Section: CI [mentioning]
confidence: 99%