2019 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2019.8849769

Robustness of Maximal α-Leakage to Side Information

Abstract: Maximal α-leakage is a tunable measure of information leakage based on the accuracy of guessing an arbitrary function of private data from public data. The parameter α determines the loss function used to measure the accuracy of a belief, ranging from log-loss at α = 1 to the probability of error at α = ∞. To study the effect of side information on this measure, we introduce and define conditional maximal α-leakage. We show that, for a chosen mapping (channel) from the actual (viewed as private) data to the…
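The abstract's interpolation between log-loss (α = 1) and probability of error (α = ∞) can be checked numerically. The sketch below assumes the standard α-loss ℓ_α(p) = (α/(α−1))·(1 − p^((α−1)/α)) from the α-leakage literature; it is an illustration, not code from the paper:

```python
import math

def alpha_loss(p, alpha):
    """alpha-loss of a belief that assigns probability p to the realized
    outcome (assumed definition from the alpha-leakage literature)."""
    if alpha == 1:
        return -math.log(p)          # log-loss limit
    if math.isinf(alpha):
        return 1.0 - p               # probability-of-error limit
    return (alpha / (alpha - 1)) * (1.0 - p ** ((alpha - 1) / alpha))

p = 0.8
# Near alpha = 1 the general expression approaches log-loss ...
assert abs(alpha_loss(p, 1.0001) - (-math.log(p))) < 1e-3
# ... and for large alpha it approaches the probability of error 1 - p.
assert abs(alpha_loss(p, 1e6) - (1.0 - p)) < 1e-3
```

The two limiting cases recover exactly the loss functions named in the abstract.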

Cited by 13 publications
(5 citation statements)
References 13 publications
“…There are estimation challenges for information-theoretic measures (see [65,66] and the references therein). Designing estimators building upon techniques proposed in [25,27,66,67] is an interesting direction of research. In [68], a method for estimating the unique information for continuous distributions is proposed.…”
Section: Estimation of PID Measures
confidence: 99%
“…Classical information-theoretic measures, such as mutual information, are widely used [9, 25–28] in fairness to quantify the disparity (dependence) with respect to the sensitive attribute (Z) in the output of a model (Ŷ). In several situations, however, only identifying disparity in the final output of a model (e.g., quantifying I(Z; Ŷ)) is not enough.…”
Section: Introduction
confidence: 99%
“…The idea of Algorithms 1 and 2 in this paper is analogous to the agglomerative pairwise merge algorithms in [33], [35]. Similar to L_0(S → X), privacy in information theory is measured in terms of the logarithm of the ratio between the prior and posterior statistical uncertainty about S. For example, the average inference loss, defined as H(S) − H(S|X), or the mutual information I(S; X), is extended to the worst case H(S) − max_x H(S|x) [7] and to the α-leakage [9], [36], where the latter is a tunable generalization of mutual information.…”
Section: Notes on Stochastic Information-Theoretic Privacy and Differ...
confidence: 99%
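The contrast drawn in the quote above, between the average inference loss H(S) − H(S|X) = I(S; X) and its worst case over individual observations, can be made concrete on a toy example. The joint distribution below is illustrative and not taken from any of the cited papers:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Illustrative joint distribution P(S = s, X = x): a binary secret S
# observed through an asymmetric noisy channel producing X.
joint = {(0, 0): 0.45, (0, 1): 0.05,
         (1, 0): 0.25, (1, 1): 0.25}

p_s = [sum(v for (s, x), v in joint.items() if s == si) for si in (0, 1)]
p_x = [sum(v for (s, x), v in joint.items() if x == xi) for xi in (0, 1)]
h_s = entropy(p_s)

# Posterior uncertainty H(S | X = x) for each observation x.
h_s_given = []
for xi in (0, 1):
    posterior = [joint[(si, xi)] / p_x[xi] for si in (0, 1)]
    h_s_given.append(entropy(posterior))

# Average inference loss H(S) - H(S|X), which equals I(S; X).
avg_loss = h_s - sum(p_x[xi] * h_s_given[xi] for xi in (0, 1))
# Largest loss over individual observations.
worst_loss = h_s - min(h_s_given)

print(round(avg_loss, 4), round(worst_loss, 4))
```

For this channel the worst observation reveals noticeably more than the average one, which is why worst-case measures such as maximal leakage give stronger guarantees than mutual information alone.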
“…Intuitively, in this setup, observations only convey the unique information contributed by the unknown data entry since all other entries are already known to the adversary. To quantify this entrywise information leakage, we propose a conditional form of maximal leakage, namely the pointwise conditional maximal leakage, which is also a special case of the event-conditional Sibson mutual information introduced in [13]. Then, by allowing the unknown entry to be any of the entries in the dataset, we can derive upper bounds on the entrywise information leakage, and provide meaningful worst-case privacy guarantees.…”
Section: Introduction
confidence: 99%