2019
DOI: 10.1109/tit.2019.2935768

Tunable Measures for Information Leakage and Applications to Privacy-Utility Tradeoffs

Abstract: We introduce a tunable measure for information leakage called maximal α-leakage. This measure quantifies the maximal gain of an adversary in inferring any (potentially random) function of a dataset from a release of the data. The inferential capability of the adversary is, in turn, quantified by a class of adversarial loss functions that we introduce as α-loss, α ∈ [1, ∞) ∪ {∞}. The choice of α determines the specific adversarial action and ranges from refining a belief (about any function of the data) for α = …
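The α-loss described in the abstract interpolates between two familiar losses: it recovers the logarithmic loss as α → 1 and a soft probability-of-error loss as α → ∞. A minimal sketch of this behavior, assuming the standard form ℓ_α(p) = (α/(α−1))·(1 − p^(1−1/α)) applied to the probability p that a predictor assigns to the true outcome (the function name is illustrative, not from the paper):

```python
import math

def alpha_loss(p_correct, alpha):
    """alpha-loss of the probability assigned to the true outcome.

    Reduces to logarithmic loss as alpha -> 1 and to 1 - p_correct
    (a soft probability of error) as alpha -> infinity.
    """
    if alpha == 1:
        return -math.log(p_correct)          # logarithmic loss
    if math.isinf(alpha):
        return 1.0 - p_correct               # soft 0-1 loss
    return (alpha / (alpha - 1.0)) * (1.0 - p_correct ** (1.0 - 1.0 / alpha))
```

For a fixed prediction, the loss decreases monotonically in α: e.g. for p_correct = 0.8, log-loss (α = 1) ≈ 0.223, α = 2 gives ≈ 0.211, and α = ∞ gives 0.2.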

Cited by 70 publications (79 citation statements) · References 58 publications
“…More in-depth analysis and properties of α-loss can be found in [ 103 ]. It is shown in [ 71 ] (Lemma 1) that Arimoto's conditional entropy quantifies the minimum loss in recovering U given V, where the loss is measured in terms of the so-called α-loss. This loss function reduces to the logarithmic loss ( 27 ) and the probability of error for α = 1 and α = ∞, respectively.…”
Section: Family of Bottleneck Problems
confidence: 99%
“…As suggested in [ 55 , 62 ], one way to address this issue is to replace mutual information with other statistical measures. In the privacy literature, several measures with strong privacy guarantees have been proposed, including Rényi maximal correlation [ 21 , 63 , 64 ], probability of correctly guessing [ 65 , 66 ], minimum mean-squared estimation error (MMSE) [ 67 , 68 ], χ²-information [ 69 ] (a special case of f-information, to be described in Section 3 ), Arimoto's and Sibson's mutual information [ 61 , 70 ] (to be discussed in Section 3 ), maximal leakage [ 71 ], and local differential privacy [ 72 ]. All these measures ensure interpretable privacy guarantees.…”
Section: Introduction
confidence: 99%
“…For instance, [24] proposes to use the probability of correctly guessing the secret as a privacy metric. In [25], a class of tunable loss functions is introduced to capture a range of adversarial objectives, e.g., refining a belief about the secret or guessing its most likely value. Other methods pose the privacy problem as a hypothesis test, e.g., in [26].…”
Section: B. Other Related Work
confidence: 99%
“…This measure was defined following an axiomatic framework for measuring information leakage that requires minimal assumptions and interpretability, and that satisfies the data-processing, independence, and additivity properties. Maximal leakage was recently generalized to a family of leakage measures that can be fine-tuned to specific applications [19], [20]. In this pursuit, α-leakage (when the adversary's target is known) and maximal α-leakage (when the adversary's target is not known) were introduced.…”
Section: Relationship With α-Leakage
confidence: 99%
“…We show that these measures of information leakage are related to each other when α is selected appropriately. We compare these measures of information leakage with α-leakage from the privacy literature [19], [20]. This comparison allows us to establish quasi-convexity of the α-information MIA leakage.…”
Section: Introduction
confidence: 99%