2022
DOI: 10.1109/tkde.2020.3009221

MAS-Encryption and its Applications in Privacy-Preserving Classifiers

Abstract: Homomorphic encryption (HE) schemes, such as fully homomorphic encryption (FHE), support a number of useful computations on ciphertext in a broad range of applications, such as e-voting, private information retrieval, cloud security, and privacy protection. While FHE schemes do not require any interaction during computation, the key limitations are large ciphertext expansion and inefficiency. Thus, to overcome these limitations, we develop a novel cryptographic tool, MAS-Encryption (MASE), to support real-valu…
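The abstract's core idea, computing on ciphertext without decrypting, can be illustrated with an additively homomorphic scheme. The sketch below is a toy Paillier-style example (not the paper's MASE construction, and deliberately insecure: the primes are tiny and chosen only for readability) showing that multiplying two ciphertexts decrypts to the sum of the plaintexts.

```python
# Toy Paillier cryptosystem: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2.
# Illustration only -- real deployments use large primes and a vetted library.
import random
from math import gcd

p, q = 13, 17            # tiny primes, insecure by design
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(7), enc(5)
assert dec((c1 * c2) % n2) == 7 + 5  # homomorphic addition on ciphertexts
```

Additive homomorphism of this kind is what lets a classifier evaluate linear operations (e.g., inner products) on encrypted inputs; the ciphertext-expansion and efficiency costs the abstract mentions come from working modulo n² with large n.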

Cited by 28 publications (25 citation statements)
References 59 publications
“…However, MemGuard does not defend against the simplest of threshold-based attacks. 21,22,37 DP 38,39 is another classical method used to protect the privacy of ML. Most defense methods based on DP are achieved by adding noise to the objective function for training.…”
Section: Defenses Against MIAs
confidence: 99%
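The statement above notes that most DP-based defenses work by adding noise during training. A minimal DP-SGD-style sketch of that idea, assuming per-example gradient clipping; the function name `dp_noisy_gradient` and the parameters `clip_norm` and `noise_std` are hypothetical, not from the cited works:

```python
import random

def dp_noisy_gradient(grad, clip_norm=1.0, noise_std=0.5):
    """Clip a gradient to bound each example's influence, then add Gaussian noise."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    # Noise is scaled to the clipping bound, as in DP-SGD-style training.
    return [g + random.gauss(0.0, noise_std * clip_norm) for g in clipped]
```

Clipping caps what any single training example can contribute to an update, and the added noise masks the remainder, which is what blunts threshold-based membership inference.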
“…DP 38,39 is another classical method used to protect the privacy of ML. Most defense methods based on DP are achieved by adding noise to the objective function for training.…”
Section: Related Work
confidence: 99%
“…However, since existing subjective quality evaluation systems constrain quality evaluation to a specific range, such as [0, 1], 17 the current IQA database scale is still small compared with the vast UGC data. There is a lack of a labeled IQA data set that reflects all distortion types as well as distortion levels, and building such a data set is an expensive and labor‐intensive process, as the NR‐IQA scoring of distorted images is uncertain 18 . In general, learning‐based NR‐IQA methods assume that the types of distortion in distorted images follow some known type distributions that are generally not applicable to UGC image data 19 .…”
Section: Introduction
confidence: 99%
“…Under such a situation, ML services are also under threat of data security and privacy, as the data used for model training may contain sensitive information 12–16 . This raises public awareness of ensuring data security and privacy while enjoying ML services 17–22 . As an essential aspect, for example, users hope the ML service providers no longer keep their data if their authorize‐to‐use expires 23–26 …”
Section: Introduction
confidence: 99%