
Data Sanitisation Protocols for the Privacy Funnel with Differential Privacy Guarantees

Milan Lopuhaä-Zwakenberg, Haochen Tong, and Boris Škorić Department of Mathematics and Computer Science
Eindhoven University of Technology
Eindhoven, the Netherlands
email: {m.a.lopuhaa,b.skoric}@tue.nl, [email protected]
Abstract

In the Open Data approach, governments and other public organisations want to share their datasets with the public, for accountability and to support participation. Data must be opened in such a way that individual privacy is safeguarded. The Privacy Funnel is a mathematical approach that produces a sanitised database that does not leak private data beyond a chosen threshold. The downsides to this approach are that it does not give worst-case privacy guarantees, and that finding optimal sanitisation protocols can be computationally prohibitive. We tackle these problems by using differential privacy metrics, and by considering local protocols, which operate on one entry at a time. We show that under both the Local Differential Privacy and Local Information Privacy leakage metrics, one can efficiently obtain optimal protocols. Furthermore, Local Information Privacy is both more closely aligned with the privacy requirements of the Privacy Funnel scenario and more efficiently computable. We also consider the scenario where each user has multiple attributes, for which we define Side-channel Resistant Local Information Privacy, and we give efficient methods to find protocols satisfying this criterion while still offering good utility. Finally, we introduce Conditional Reporting, an explicit LIP protocol that can be used when the optimal protocol is infeasible to compute. Experiments on real-world and synthetic data confirm the validity of all these methods.

Keywords—Privacy funnel; local differential privacy; information privacy; database sanitisation; complexity.

I Introduction

This paper is an extended version of [18]. Under the Open Data paradigm, governments and other public organisations want to share their collected data with the general public. This increases a government’s transparency, and it gives citizens and businesses the means to participate in decision-making, as well as to use the data for their own purposes. However, while the released data should be as faithful to the raw data as possible, individual citizens’ private data should not be compromised by such data publication.

Let 𝒳\mathcal{X} be a finite set. Consider a database X=(X1,,Xn)𝒳n\vec{X}=(X_{1},\ldots,X_{n})\in\mathcal{X}^{n} owned by a data aggregator, containing a data item Xi𝒳X_{i}\in\mathcal{X} for each user ii (For typical database settings, each user’s data is a vector of attributes Xi=(Xi1,,Xim)X_{i}=(X_{i}^{1},\ldots,X_{i}^{m}); we will consider this in more detail in Section V). This data may not be considered sensitive by itself, but it might be correlated to a secret SiS_{i}. For instance, XiX_{i} might contain the age, sex, weight, skin colour, and average blood pressure of person ii, while SiS_{i} is the presence of some medical condition. To publish the data without leaking the SiS_{i}, the aggregator releases a privatised database Y=(Y1,,Yn)\vec{Y}=(Y_{1},\ldots,Y_{n}), obtained from applying a sanitisation mechanism \mathcal{R} to X\vec{X}. One way to formulate this is by considering the Privacy Funnel:

Problem 1.

(Privacy Funnel, [4]) Suppose the joint probability distribution of S\vec{S} and X\vec{X} is known to the aggregator, and let M0M\in\mathbb{R}_{\geq 0}. Then, find the sanitisation mechanism \mathcal{R} such that I(X;Y)\operatorname{I}(\vec{X};\vec{Y}) is maximised while I(S;Y)M\operatorname{I}(\vec{S};\vec{Y})\leq M.
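Both the objective and the constraint in Problem 1 are mutual informations. For a channel applied entry-wise, as in the local protocols considered below, I(X;Y) can be computed directly from the protocol matrix; a minimal sketch (the helper and the convention Q[y, x] = P(Y=y|X=x) are ours):

```python
import numpy as np

def mutual_information(Q, p_x):
    """I(X;Y) in nats for a channel Q applied to X ~ p_x,
    where Q[y, x] = P(Y = y | X = x)."""
    p_xy = Q * p_x                  # joint: p(y, x) = Q[y, x] * p_x[x]
    p_y = p_xy.sum(axis=1)
    indep = np.outer(p_y, p_x)      # product of the two marginals
    mask = p_xy > 0                 # skip zero-probability terms
    return float((p_xy[mask] * np.log(p_xy[mask] / indep[mask])).sum())
```

The identity channel attains I(X;Y) = H(X), while a constant channel gives I(X;Y) = 0.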

There are two difficulties with this approach:

  1. Finding and implementing good privatisation mechanisms that operate on all of \vec{X} can be computationally prohibitive for large n, as the complexity is exponential in n [6][22].

  2. Taking mutual information as a leakage measure has the disadvantage that it only bounds leakage in the average case. If n is large, this still leaves room for the sanitisation protocol to leak undesirably much information about a few unlucky users.

[Figure 1: the sensitive data S_{1},\ldots,S_{n} and the database entries X_{1},\ldots,X_{n} are hidden from the public; the local protocol \mathcal{Q} maps each X_{i} to an entry Y_{i} of the published sanitised database.]
Figure 1: Model of the Privacy Funnel with local protocols.

To deal with these two difficulties, we make two changes to the general approach. First, we look at local data sanitisation: we consider sanitisation protocols \mathcal{Q}\colon\mathcal{X}\rightarrow\mathcal{Y}, for some finite set \mathcal{Y}, and we apply \mathcal{Q} to each X_{i} individually; this situation is depicted in Figure 1. Local sanitisation can be implemented efficiently, and this approach is often taken in the Privacy Funnel setting [20][6]. Second, to ensure strong privacy guarantees even in worst-case scenarios, we adopt stricter notions of privacy, based on Local Differential Privacy (LDP) [15]. For these metrics, we develop methods to find optimal protocols. Furthermore, for situations where the optimal protocol is computationally infeasible to find, we introduce a new protocol, Conditional Reporting (CR), that takes advantage of the fact that only S_{i} needs to be protected. Calibrating CR only requires finding the root of a one-dimensional increasing function, which can be done quickly with standard numerical methods.

I-A New contributions

In this paper, we adapt two Differential Privacy-like privacy metrics to the Privacy Funnel situation, namely Local Differential Privacy (LDP) [15] and Local Information Privacy (LIP) [13][24]. We modify these metrics so that they measure leakage about the underlying SS rather than XX itself (for notational convenience, we write S,X,YS,X,Y rather than Si,Xi,YiS_{i},X_{i},Y_{i} throughout the rest of this paper). For a given level of leakage, we are interested in the privacy protocol that maximises the mutual information between input XiX_{i} and output YiY_{i}. Adapting methods from [14] on LDP and [23] on perfect privacy, we prove the following Theorem:

Theorem 1 (Theorems 2 and 3 paraphrased).

Suppose a=\#\mathcal{X}, c=\#\mathcal{S}, and \operatorname{p}_{X,S}, as well as a privacy level \varepsilon\geq 0, are given.

  1. The optimal \varepsilon-LDP protocol can be found by enumerating the vertices of a polytope in a^{2}-a dimensions defined by a(c^{2}-c) inequalities.

  2. The optimal \varepsilon-LIP protocol can be found by enumerating the vertices of a polytope in a-1 dimensions defined by a+2c inequalities.

The descriptions of these polytopes, and how they relate to the optimisation problem, are discussed in Sections III and IV, respectively. Since the complexity of the polytope vertex enumeration depends significantly on both its dimension and the number of defining inequalities [2], finding optimal LIP protocols can be done significantly faster than finding optimal LDP protocols. Furthermore, we will argue that LIP is a privacy metric that more accurately captures information leakage than LDP in the Privacy Funnel scenario. For these two reasons we only consider LIP in the remainder of the paper, although many results can also be formulated for LDP.

A common scenario is that a user’s data X consists of multiple attributes, i.e., X=(X^{1},\ldots,X^{m}). Here one can consider an attacker model where the attacker has access to some of the X^{j}. In this situation \varepsilon-LIP does not accurately reflect a user’s privacy. Because of this, we introduce a new privacy metric called Side-channel Resistant LIP (SRLIP) that takes such side channels into account. We extend the vertex enumeration methods outlined above to find optimal SRLIP protocols in Section V.

Finding the optimal protocols can become computationally infeasible for large a and c. In such a situation, one needs to resort to explicitly given protocols. In the literature there is a wealth of protocols that satisfy \varepsilon-LDP w.r.t. X. These certainly work in our situation, but they may not be ideal, because they are designed to obfuscate all information about X, rather than just the part that relates to S. For this reason, we introduce Conditional Reporting (CR), a privacy protocol that focuses on hiding S rather than X, in Section VI. Finding the appropriate CR protocol for a given probability distribution and privacy level can be done quickly by numerical methods.

In Section VII, we test the methods and protocols discussed above on both synthetic and real data. Compared to [18], the new content in this extended paper comprises Section VI, the experiments on real data, and the extended literature review.

I-B Related work

The Privacy Funnel (PF) setting was introduced in [20], to provide a framework for obfuscating data in such a way that the obfuscated data remains as faithful as possible to the original, while ensuring that the information leakage about a latent variable is limited. The Privacy Funnel is related to the Information Bottleneck (IB) [25], a problem from machine learning that seeks to compress data as much as possible, while retaining a minimal threshold of information about a latent variable. In PF as well as IB, both utility and leakage are measured via mutual information. Many approaches to finding the optimal protocols in PF also work for IB and vice versa [17][6]. A wider range of privacy metrics for the Privacy Funnel, and their relation to Differential Privacy, is discussed in [24].

Local Differential Privacy (LDP) was introduced in [15]. It is an adaptation of Differential Privacy [9] to a setting where there is no trusted central party to obfuscate the data. As a privacy metric, it has the advantage that it offers a privacy guarantee in any case, not just the average case, and that it does not depend on the data distribution. On the downside, it can be difficult to fulfill such a stringent definition of privacy, and many relaxations of (Local) Differential Privacy have been proposed [5][10][8][21]. We are particularly interested in Local Information Privacy (LIP) [13][24], also called Removal Local Differential Privacy [11]. LIP retains the worst-case guarantees of LDP, but is less restrictive, and can take advantage of a known distribution. In the context where only part of the data is considered secret, many privacy metrics fall under the umbrella of Pufferfish Privacy [16].

In [14], a method was introduced for finding optimal LDP-protocols for a wide variety of utility metrics, including mutual information. The method relies on finding the vertices of a polytope, but since this is the well-studied Differential Privacy polytope, its vertices can be described explicitly [12]. Similarly, [23] uses a vertex enumeration method to find the optimal protocol in the perfect privacy situation, i.e. when the released data is independent of the secret data. The complexity of vertex enumeration is discussed in [1][2].

II Mathematical Setting

The database X=(X1,,Xn)\vec{X}=(X_{1},\ldots,X_{n}) consists of a data item XiX_{i} for each user ii, each an element of a given finite set 𝒳\mathcal{X}. Furthermore, each user has sensitive data Si𝒮S_{i}\in\mathcal{S}, which is correlated with XiX_{i}; again we assume 𝒮\mathcal{S} to be finite (see Figure 1). We assume that each (Si,Xi)(S_{i},X_{i}) is drawn independently from the same distribution pS,X\operatorname{p}_{S,X} on 𝒮×𝒳\mathcal{S}\times\mathcal{X} which is known to the aggregator through observing (S,X)(\vec{S},\vec{X}) (if one allows for non-independent XiX_{i}, then differential privacy is no longer an adequate privacy metric [5][24]). The aggregator, who has access to X\vec{X}, sanitises the database by applying a sanitisation protocol (i.e., a random function) 𝒬:𝒳𝒴\mathcal{Q}\colon\mathcal{X}\rightarrow\mathcal{Y} to each XiX_{i}, outputting Y=(Y1,,Yn)=(𝒬(X1),,𝒬(Xn))\vec{Y}=(Y_{1},\ldots,Y_{n})=(\mathcal{Q}(X_{1}),\ldots,\mathcal{Q}(X_{n})). The aggregator’s goal is to find a 𝒬\mathcal{Q} that maximises the information about XiX_{i} preserved in YiY_{i} (measured as I(Xi;Yi)\operatorname{I}(X_{i};Y_{i})) while leaking only minimal information about SiS_{i}.

Without loss of generality we write 𝒳={1,,a}\mathcal{X}=\{1,\ldots,a\}, 𝒴={1,,b}\mathcal{Y}=\{1,\ldots,b\} and 𝒮={1,,c}\mathcal{S}=\{1,\ldots,c\} for integers a,b,ca,b,c. We omit the subscript ii from XiX_{i}, YiY_{i}, SiS_{i} as no probabilities depend on it, and we write such probabilities as px\operatorname{p}_{x}, ps\operatorname{p}_{s}, px|s\operatorname{p}_{x|s}, etc., which form vectors pX\operatorname{p}_{X}, pS|x\operatorname{p}_{S|x}, etc., and matrices pX|S\operatorname{p}_{X|S}, etc.

As noted before, instead of looking at the mutual information I(S;Y)\operatorname{I}(S;Y), we consider two different, related measures of sensitive information leakage known from the literature. The first one is an adaptation of LDP, the de facto standard in information privacy [15]:

Definition 1.

(ε\varepsilon-LDP) Let ε0\varepsilon\in\mathbb{R}_{\geq 0}. We say that 𝒬\mathcal{Q} satisfies ε\varepsilon-LDP w.r.t. SS if

\forall_{y\in\mathcal{Y}}\,\forall_{s,s^{\prime}\in\mathcal{S}}\quad\frac{\mathbb{P}(Y=y|S=s)}{\mathbb{P}(Y=y|S=s^{\prime})}\leq e^{\varepsilon}. (1)

Most literature on LDP considers LDP w.r.t. XX, i.e. (Y=y|X=x)(Y=y|X=x)eε\frac{\mathbb{P}(Y=y|X=x)}{\mathbb{P}(Y=y|X=x^{\prime})}\leq\textrm{e}^{\varepsilon} for all x,x,yx,x^{\prime},y. Throughout this paper, by ε\varepsilon-LDP we always mean ε\varepsilon-LDP w.r.t. SS, unless otherwise specified.
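Definition 1 can be verified numerically for a candidate protocol; a sketch (the function and the matrix conventions Q[y, x] and p_x_given_s[x, s] are ours):

```python
import numpy as np

def satisfies_ldp(Q, p_x_given_s, eps, tol=1e-9):
    """Definition 1: P(Y=y|S=s) / P(Y=y|S=s') <= e^eps for all y, s, s'.
    Q[y, x] is the protocol; p_x_given_s[x, s] holds the conditional
    distributions p_{X|s} as columns."""
    P = Q @ p_x_given_s                     # P[y, s] = P(Y = y | S = s)
    ratios = P[:, :, None] / P[:, None, :]  # all ordered pairs (s, s')
    return bool((ratios <= np.exp(eps) + tol).all())
```

For a binary randomised-response channel and a weakly correlated secret, the worst-case log-ratio is strictly smaller than the channel's own LDP parameter w.r.t. X, illustrating why LDP w.r.t. S is the weaker requirement.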

The LDP metric reflects the fact that we are only interested in hiding sensitive data, rather than all data; it is a specific case of what has been named ‘pufferfish privacy’ [16]. The advantage of LDP compared to mutual information is that it gives privacy guarantees for the worst case, not just the average case. This is desirable in the database setting, as a worst-case metric guarantees the security of the private data of all users, while average-case metrics are only concerned with the average user. Another useful privacy metric is Local Information Privacy (LIP) [13][24], also called Removal Local Differential Privacy [11]:

Definition 2.

(ε\varepsilon-LIP) Let ε0\varepsilon\in\mathbb{R}_{\geq 0}. We say that 𝒬\mathcal{Q} satisfies ε\varepsilon-LIP w.r.t. SS if

\forall_{y\in\mathcal{Y},s\in\mathcal{S}}\quad e^{-\varepsilon}\leq\frac{\mathbb{P}(Y=y|S=s)}{\mathbb{P}(Y=y)}\leq e^{\varepsilon}. (2)

Compared to LDP, the disadvantage of LIP is that it depends on the distribution of S; this is not a problem in our scenario, as the aggregator, who chooses \mathcal{Q}, has access to the distribution of S. The advantage of LIP is that it is more closely related to an attacker’s capabilities: since \frac{\mathbb{P}(Y=y|S=s)}{\mathbb{P}(Y=y)}=\frac{\mathbb{P}(S=s|Y=y)}{\mathbb{P}(S=s)}, satisfying \varepsilon-LIP means that an attacker’s posterior distribution of S given Y=y does not deviate from their prior distribution by more than a factor e^{\varepsilon}. The following lemma outlines the relations between LDP, LIP and mutual information (see Figure 2).

Lemma 1.

(See [24]) Let 𝒬\mathcal{Q} be a sanitisation protocol, and let ε0\varepsilon\in\mathbb{R}_{\geq 0}.

  1. If \mathcal{Q} satisfies \varepsilon-LDP, then it satisfies \varepsilon-LIP.

  2. If \mathcal{Q} satisfies \varepsilon-LIP, then it satisfies 2\varepsilon-LDP, and \operatorname{I}(S;Y)\leq\varepsilon.

[Figure 2: \varepsilon-LDP implies \varepsilon-LIP, which in turn implies both 2\varepsilon-LDP and \operatorname{I}(S;Y)\leq\varepsilon; in the multiple-attribute setting, \varepsilon-SRLIP implies \varepsilon-LIP.]
Figure 2: Relations between privacy notions. The multiple attributes setting is discussed in Section V.
Remark 1.

One gets robust equivalents of LDP and LIP by demanding that 𝒬\mathcal{Q} satisfy ε\varepsilon-LIP (ε\varepsilon-LDP) for a set of distributions pS,X\operatorname{p}_{S,X}, instead of only a single distribution [16]. Letting pS,X\operatorname{p}_{S,X} range over all possible distributions on 𝒮×𝒳\mathcal{S}\times\mathcal{X} yields LIP (LDP) w.r.t. XX.
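Definition 2, together with the posterior/prior identity noted above, can also be checked numerically; a sketch assuming p_sx[s, x] is the joint distribution and Q[y, x] the protocol (both conventions are ours):

```python
import numpy as np

def satisfies_lip(Q, p_sx, eps, tol=1e-9):
    """Definition 2: e^-eps <= P(Y=y|S=s) / P(Y=y) <= e^eps for all y, s."""
    p_x = p_sx.sum(axis=0)
    p_s = p_sx.sum(axis=1)
    p_y = Q @ p_x
    p_y_given_s = Q @ (p_sx / p_s[:, None]).T     # column s: P(Y | S = s)
    r = p_y_given_s / p_y[:, None]
    # the same ratio from the attacker's viewpoint: posterior over prior on S
    p_s_given_y = (Q @ p_sx.T) / p_y[:, None]     # rows y, columns s
    assert np.allclose(r, p_s_given_y / p_s)      # Bayes identity
    return bool((r <= np.exp(eps) + tol).all()
                and (r >= np.exp(-eps) - tol).all())
```

The internal assertion confirms that the ratio in Definition 2 equals the factor by which the attacker's posterior on S moves away from the prior.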

In this notation, instead of Problem 1 we consider the following problem:

Problem 2.

Suppose pS,X\operatorname{p}_{S,X} is known to the aggregator, and let ε0\varepsilon\in\mathbb{R}_{\geq 0}. Then, find the sanitisation protocol 𝒬\mathcal{Q} such that I(X;Y)\operatorname{I}(X;Y) is maximised while 𝒬\mathcal{Q} satisfies ε\varepsilon-LDP (ε\varepsilon-LIP, respectively) with respect to SS.

Note that this problem does not depend on the number of users nn, and as such this approach will find solutions that are scalable w.r.t. nn.

III Optimizing 𝒬\mathcal{Q} for ε\varepsilon-LDP

Our goal is now to find the optimal 𝒬\mathcal{Q}, i.e., the protocol that maximises I(X;Y)\operatorname{I}(X;Y) while satisfying ε\varepsilon-LDP, for a given ε\varepsilon. We can represent any sanitisation protocol as a matrix Qb×aQ\in\mathbb{R}^{b\times a}, where Qy|x=(Y=y|X=x)Q_{y|x}=\mathbb{P}(Y=y|X=x). Then, ε\varepsilon-LDP is satisfied if and only if

\forall x\colon\quad\sum_{y}Q_{y|x}=1, (3)
\forall x,y\colon\quad 0\leq Q_{y|x}, (4)
\forall s,s^{\prime},y\colon\quad(Q\operatorname{p}_{X|s})_{y}\leq e^{\varepsilon}(Q\operatorname{p}_{X|s^{\prime}})_{y}. (5)

As such, for a given 𝒴\mathcal{Y}, the set of ε\varepsilon-LDP-satisfying sanitisation protocols can be considered a closed, bounded, convex polytope Γ\Gamma in b×a\mathbb{R}^{b\times a}. This fact allows us to efficiently find optimal protocols.
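For concreteness, constraints (3)-(5) can be assembled into a standard halfspace representation of \Gamma over the flattened protocol matrix, which is the input format expected by off-the-shelf vertex enumerators (e.g. pycddlib); a sketch with our own encoding conventions, taking b=a as Theorem 2 below permits:

```python
import numpy as np

def gamma_halfspaces(p_x_given_s, eps):
    """Halfspace form of the epsilon-LDP polytope Gamma over the
    flattened matrix Q (row-major: Q[y, x] -> index y*a + x), with
    b = a outputs.  Returns equalities (3) and inequalities (4)-(5)
    as (A_eq, b_eq, A_ub, b_ub) with A_eq q = b_eq, A_ub q <= b_ub."""
    a, c = p_x_given_s.shape            # p_x_given_s[x, s] = p(X=x | S=s)
    dim = a * a
    A_eq = np.zeros((a, dim))           # (3): sum_y Q[y, x] = 1
    for x in range(a):
        A_eq[x, x::a] = 1.0
    b_eq = np.ones(a)
    rows, rhs = [], []
    for i in range(dim):                # (4): -Q[y, x] <= 0
        r = np.zeros(dim)
        r[i] = -1.0
        rows.append(r)
        rhs.append(0.0)
    for s in range(c):                  # (5): (Q p_{X|s})_y <= e^eps (Q p_{X|s'})_y
        for t in range(c):
            if s == t:
                continue
            for y in range(a):
                r = np.zeros(dim)
                r[y*a:(y+1)*a] = p_x_given_s[:, s] - np.exp(eps) * p_x_given_s[:, t]
                rows.append(r)
                rhs.append(0.0)
    return A_eq, b_eq, np.array(rows), np.array(rhs)
```

A protocol matrix lies in \Gamma exactly when its flattened vector satisfies both systems.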

Theorem 2.

Let ε0\varepsilon\in\mathbb{R}_{\geq 0}. Let 𝒬:𝒳𝒴\mathcal{Q}\colon\mathcal{X}\rightarrow\mathcal{Y} be a ε\varepsilon-LDP protocol that maximises I(X;Y)\operatorname{I}(X;Y), i.e., the protocol that solves Problem 2 w.r.t. LDP.

  1. One can take b=a.

  2. Let \Gamma be the polytope described above, for b=a. Then the optimal \mathcal{Q} corresponds to one of the vertices of \Gamma.

Proof.

The first result is obtained by generalising the results of [14]: there this is proven for regular ε\varepsilon-LDP (i.e., w.r.t. XX), but the arguments given in that proof hold just as well in our situation; the only difference is that their polytope is defined by the ε\varepsilon-LDP conditions w.r.t. XX, but this has no impact on the proof. The second statement follows from the fact that I(X;Y)\operatorname{I}(X;Y) is a convex function in 𝒬\mathcal{Q}; therefore its maximum on a bounded polytope is attained in one of the vertices. ∎

This theorem reduces the search for the optimal LDP protocol to enumerating the set of vertices of \Gamma, an a(a-1)-dimensional convex polytope.

One might argue that, since the optimal \mathcal{Q} depends on \operatorname{p}_{S,X}, the publication of \mathcal{Q} might provide an attacker with information about the distribution of S. However, information on the distribution (as opposed to information about individual users’ data) is not considered sensitive [19]. In fact, the reason why the aggregator sanitises the data is that an attacker is assumed to have knowledge of this correlation, and revealing too much information about X would allow the attacker to infer information about S.

IV Optimizing 𝒬\mathcal{Q} for ε\varepsilon-LIP

If one uses ε\varepsilon-LIP as a privacy metric, one can find the optimal sanitisation protocol in a similar fashion. To do this, we again describe 𝒬\mathcal{Q} as a matrix, but this time a different one. Let qbq\in\mathbb{R}^{b} be the probability mass function of YY, and let Ra×bR\in\mathbb{R}^{a\times b} be given by Rx|y=(X=x|Y=y)R_{x|y}=\mathbb{P}(X=x|Y=y); we denote its yy-th row by RX|yaR_{X|y}\in\mathbb{R}^{a}. Then, a pair (R,q)(R,q) defines a sanitisation protocol 𝒬\mathcal{Q} satisfying ε\varepsilon-LIP if and only if

\forall y\colon\quad 0\leq q_{y}, (6)
Rq=\operatorname{p}_{X}, (7)
\forall y\colon\quad\sum_{x}R_{x|y}=1, (8)
\forall x,y\colon\quad 0\leq R_{x|y}, (9)
\forall y,s\colon\quad e^{-\varepsilon}\operatorname{p}_{s}\leq\operatorname{p}_{s|X}R_{X|y}\leq e^{\varepsilon}\operatorname{p}_{s}. (10)

Note that (10) defines the \varepsilon-LIP condition, since for a given s,y we have \frac{\operatorname{p}_{s|X}R_{X|y}}{\operatorname{p}_{s}}=\frac{\mathbb{P}(S=s|Y=y)}{\mathbb{P}(S=s)}=\frac{\mathbb{P}(Y=y|S=s)}{\mathbb{P}(Y=y)}. (In)equalities (8)–(10) can be expressed as saying that for every y\in\mathcal{Y} one has R_{X|y}\in\Delta, where \Delta is the convex, closed, bounded polytope in \mathbb{R}^{\mathcal{X}} given by

\Delta=\left\{v\in\mathbb{R}^{\mathcal{X}}\,:\,\sum_{x}v_{x}=1;\ \forall x\colon 0\leq v_{x};\ \forall s\colon e^{-\varepsilon}\operatorname{p}_{s}\leq\operatorname{p}_{s|X}v\leq e^{\varepsilon}\operatorname{p}_{s}\right\}. (11)

As in Theorem 2, we can use this polytope to find optimal protocols:

Theorem 3.

Let ε0\varepsilon\in\mathbb{R}_{\geq 0}, and let Δ\Delta be the polytope above. Let 𝒱={v1,,vM}\mathcal{V}=\{v_{1},\ldots,v_{M}\} be its set of vertices. For vi𝒱v_{i}\in\mathcal{V}, let H(vi)\operatorname{H}(v_{i}) be its entropy, i.e.

\operatorname{H}(v_{i})=-\sum_{x\in\mathcal{X}}v_{i,x}\ln(v_{i,x}). (12)

Let α^\hat{\alpha} be the solution to the optimisation problem

\text{minimise}_{\alpha\in\mathbb{R}^{M}}\quad\sum_{i=1}^{M}\operatorname{H}(v_{i})\alpha_{i} (13)
\text{subject to}\quad\forall i\colon\alpha_{i}\geq 0, (14)
\sum_{i=1}^{M}\alpha_{i}v_{i}=\operatorname{p}_{X}. (15)

Then the ε\varepsilon-LIP protocol 𝒬:𝒳𝒴\mathcal{Q}\colon\mathcal{X}\rightarrow\mathcal{Y} that maximises I(X;Y)\operatorname{I}(X;Y) is given by

\mathcal{Y}=\{i\leq M:\hat{\alpha}_{i}>0\}, (16)
q_{i}=\hat{\alpha}_{i}, (17)
R_{x|i}=v_{i,x}, (18)

for all i𝒴{1,,M}i\in\mathcal{Y}\subseteq\{1,\ldots,M\} and all x𝒳x\in\mathcal{X}. One has bab\leq a.

Proof.

This was proven for ε=0\varepsilon=0 (i.e., when SS and YY are independent) in [23], but the proof works similarly for ε>0\varepsilon>0; the main difference is that the equality constraints of their (10) will be replaced by the inequality constraints of our (10), but this has no impact on the proof presented there. ∎

Since linear optimisation problems can be solved fast, the optimisation problem again reduces to finding the vertices of a polytope. The advantage of using LIP instead of LDP is that \Delta is an (a-1)-dimensional polytope, while the \Gamma of Section III is a(a-1)-dimensional. The time complexity of vertex enumeration is linear in the number of vertices [1], while the number of vertices can grow exponentially in the dimension of the polyhedron [2]. Together, this means that the dimension plays a huge role in the time complexity; hence we expect finding the optimum under LIP to be significantly faster than under LDP.
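The recipe of Theorem 3 can be prototyped for very small alphabets by brute-force vertex enumeration (each vertex of \Delta is the simplex equality plus a-1 active inequality constraints) followed by the linear programme (13)-(15). A sketch using scipy, assuming our conventions p_sx[s, x] for the joint distribution; this is an illustration, not an optimised implementation:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lip_polytope_vertices(p_sx, eps):
    """Brute-force vertex enumeration for the polytope Delta of eq. (11):
    solve the simplex equality together with each choice of a-1 active
    inequalities, keep the feasible solutions.  Only viable for tiny a."""
    c, a = p_sx.shape
    p_x = p_sx.sum(axis=0)
    p_s = p_sx.sum(axis=1)
    p_s_given_x = p_sx / p_x                      # rows s, columns x
    # Inequalities G v <= h: nonnegativity and the two-sided LIP bounds.
    G = np.vstack([-np.eye(a), p_s_given_x, -p_s_given_x])
    h = np.concatenate([np.zeros(a), np.exp(eps) * p_s, -np.exp(-eps) * p_s])
    verts = []
    for idx in itertools.combinations(range(len(h)), a - 1):
        A = np.vstack([np.ones(a), G[list(idx)]])
        b = np.concatenate([[1.0], h[list(idx)]])
        try:
            v = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue
        if np.all(G @ v <= h + 1e-9):             # feasibility check
            if not any(np.allclose(v, w) for w in verts):
                verts.append(v)
    return verts

def optimal_lip_protocol(p_sx, eps):
    """Theorem 3: minimise sum_i H(v_i) alpha_i subject to alpha >= 0
    and sum_i alpha_i v_i = p_X, then read off (q, R)."""
    verts = lip_polytope_vertices(p_sx, eps)
    p_x = p_sx.sum(axis=0)
    H = [-(v * np.log(np.clip(v, 1e-12, None))).sum() for v in verts]
    res = linprog(c=H, A_eq=np.column_stack(verts), b_eq=p_x, bounds=(0, None))
    assert res.success
    q = res.x[res.x > 1e-9]                       # eq. (17)
    R = np.column_stack([v for v, al in zip(verts, res.x) if al > 1e-9])
    return q, R                                    # columns of R: eq. (18)
```

For a very loose budget the vertices of \Delta are essentially the corners of the simplex, so the optimum releases X unchanged and I(X;Y) = H(X).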

V Multiple Attributes

An often-occurring scenario is that a user’s data consists of multiple attributes, i.e., X=(X^{1},\ldots,X^{m})\in\mathcal{X}=\mathcal{X}^{1}\times\cdots\times\mathcal{X}^{m}. This can be problematic for our approach for two reasons:

  1. Such a large \mathcal{X} can be problematic, since the computing time for optimisation under both LDP and LIP depends heavily on a.

  2. In practice, an attacker might sometimes utilise side channels to access a subset of the attributes X_{i}^{j} for some users. For these users, a sanitisation protocol can leak more information (w.r.t. the attacker’s updated prior information) than its LDP/LIP parameter would suggest.

To see how the second problem might arise in practice, suppose that Xi1X^{1}_{i} is the height of individual ii, Xi2X^{2}_{i} is their weight, and SiS_{i} is whether ii is obese or not. Since height is only lightly correlated with obesity, taking Yi=Xi1Y_{i}=X_{i}^{1} would satisfy ε\varepsilon-LIP for some reasonably small ε\varepsilon. However, suppose that an attacker has access to Xi2X^{2}_{i} via a side channel. While knowing ii’s weight gives the attacker some, but not perfect knowledge about ii’s obesity, the combination of the weight from the side channel, and the height from the YiY_{i}, allows the attacker to calculate ii’s BMI, giving much more information about ii’s obesity. Therefore, the given protocol gives much less privacy in the presence of this side channel.

To solve the second problem, we introduce a more stringent privacy notion called Side-channel Resistant LIP (SRLIP), which ensures that no matter which attributes an attacker has access to, the protocol still satisfies ε\varepsilon-LIP with respect to the attacker’s new prior distribution. One could similarly introduce SRLDP, and many results will still hold for this privacy measure; nevertheless, since we concluded that LIP is preferable to LDP, we focus on SRLIP. For any subset J{1,,m}J\subseteq\{1,\ldots,m\}, we write 𝒳J=jJ𝒳j\mathcal{X}^{J}=\prod_{j\in J}\mathcal{X}^{j} and its elements as xJx^{J}.

Definition 3.

(ε\varepsilon-SRLIP). Let ε>0\varepsilon>0, and let 𝒳=j=1m𝒳j\mathcal{X}=\prod_{j=1}^{m}\mathcal{X}^{j}. We say that 𝒬\mathcal{Q} satisfies ε\varepsilon-SRLIP if for every y𝒴y\in\mathcal{Y}, for every s𝒮s\in\mathcal{S}, for every J{1,,m}J\subseteq\{1,\ldots,m\}, and for every xJ𝒳Jx^{J}\in\mathcal{X}^{J} one has

e^{-\varepsilon}\leq\frac{\mathbb{P}(Y=y|S=s,X^{J}=x^{J})}{\mathbb{P}(Y=y|X^{J}=x^{J})}\leq e^{\varepsilon}. (19)

In terms of Remark 1, 𝒬\mathcal{Q} satisfies ε\varepsilon-SRLIP if and only if it satisfies ε\varepsilon-LIP w.r.t. pS,X|xJ\operatorname{p}_{S,X|x^{J}} for all JJ and xJx^{J}. Taking J=J=\varnothing gives us the regular definition of ε\varepsilon-LIP, proving the following Lemma:

Lemma 2.

Let ε>0\varepsilon>0. If 𝒬\mathcal{Q} satisfies ε\varepsilon-SRLIP, then 𝒬\mathcal{Q} satisfies ε\varepsilon-LIP.
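For small m, Definition 3 can be checked exhaustively, conditioning the prior on every possible side-channel observation. A sketch for m = 2 (the tensor conventions p[s, x1, x2] and Q[y, x1, x2] are ours):

```python
import itertools
import numpy as np

def srlip_ok(Q, p, eps, tol=1e-9):
    """Definition 3 by brute force for m = 2 attributes: for every
    side-channel set J and value x^J, check eps-LIP w.r.t. the
    conditional prior p_{S,X|x^J}."""
    c, a1, a2 = p.shape
    for J in [(), (0,), (1,), (0, 1)]:
        vals = [range(a1) if 0 in J else [None],
                range(a2) if 1 in J else [None]]
        for x1, x2 in itertools.product(*vals):
            cond = p.copy()
            if x1 is not None:                    # condition on X^1 = x1
                keep = np.zeros(a1)
                keep[x1] = 1.0
                cond = cond * keep[None, :, None]
            if x2 is not None:                    # condition on X^2 = x2
                keep = np.zeros(a2)
                keep[x2] = 1.0
                cond = cond * keep[None, None, :]
            if cond.sum() < tol:
                continue                          # impossible side-channel value
            cond = cond / cond.sum()
            p_s = cond.sum(axis=(1, 2))
            p_y = np.einsum('yjk,sjk->y', Q, cond)
            p_ys = np.einsum('yjk,sjk->ys', Q, cond)
            for y in range(Q.shape[0]):
                for s in range(c):
                    if p_y[y] < tol or p_s[s] < tol:
                        continue
                    ratio = p_ys[y, s] / (p_y[y] * p_s[s])
                    if not (np.exp(-eps) - tol <= ratio <= np.exp(eps) + tol):
                        return False
    return True
```

A constant protocol trivially satisfies any budget, while a protocol that reports an attribute perfectly correlated with S fails for any \varepsilon below the induced log-ratio.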

While SRLIP is stricter than LIP itself, it has the advantage that even when an attacker has access to some data of a user, the sanitisation protocol still does not leak an unwanted amount of information beyond the knowledge the attacker has gained via the side channel. Another advantage is that, contrary to LIP itself, SRLIP satisfies an analogue of the concept of privacy budget [9]:

Theorem 4.

Let 𝒳=j=1m𝒳j\mathcal{X}=\prod_{j=1}^{m}\mathcal{X}^{j}, and for every jj, let 𝒬j:𝒳j𝒴j\mathcal{Q}^{j}\colon\mathcal{X}^{j}\rightarrow\mathcal{Y}^{j} be a sanitisation protocol. Let εj0\varepsilon^{j}\in\mathbb{R}_{\geq 0} for every jj. Suppose that for every jmj\leq m, for every J{1,,j1,j+1,,m}J\subseteq\{1,\ldots,j-1,j+1,\ldots,m\}, and every xJ𝒳Jx^{J}\in\mathcal{X}^{J}, 𝒬j\mathcal{Q}^{j} satisfies εj\varepsilon^{j}-LIP w.r.t. pS,X|xJ\operatorname{p}_{S,X|x^{J}}. Then j𝒬j:𝒳j𝒴j\prod_{j}\mathcal{Q}^{j}\colon\mathcal{X}\rightarrow\prod_{j}\mathcal{Y}^{j} satisfies jεj\sum_{j}\varepsilon^{j}-SRLIP.

The proof is presented in Appendix A. This theorem tells us that to find an \varepsilon-SRLIP protocol for \mathcal{X}, it suffices to find a sanitisation protocol for each \mathcal{X}^{j} that is \frac{\varepsilon}{m}-LIP w.r.t. a number of prior distributions. Unfortunately, the method of Theorem 3 for finding an optimal \varepsilon-LIP protocol w.r.t. one prior \operatorname{p}_{S,X} does not transfer to the multiple-prior setting. This is because that method only finds one (R,q), while by (7) we need a different (R,q) for each prior distribution. Therefore, we are forced to adopt an approach similar to the one in Theorem 2. The matrix Q^{j} (given by Q^{j}_{y^{j}|x^{j}}=\mathbb{P}(\mathcal{Q}^{j}(x^{j})=y^{j})) corresponding to \mathcal{Q}^{j}\colon\mathcal{X}^{j}\rightarrow\mathcal{Y}^{j} satisfies the criteria of Theorem 4 if and only if the following criteria are satisfied:

\forall x^{j}\colon\quad\sum_{y^{j}}Q^{j}_{y^{j}|x^{j}}=1, (20)
\forall x^{j},y^{j}\colon\quad 0\leq Q^{j}_{y^{j}|x^{j}}, (21)
\forall J,x^{J},s,y^{j}\colon\quad e^{-\varepsilon/m}(Q^{j}\operatorname{p}_{X^{j}|x^{J}})_{y^{j}}\leq(Q^{j}\operatorname{p}_{X^{j}|s,x^{J}})_{y^{j}}, (22)
\forall J,x^{J},s,y^{j}\colon\quad(Q^{j}\operatorname{p}_{X^{j}|s,x^{J}})_{y^{j}}\leq e^{\varepsilon/m}(Q^{j}\operatorname{p}_{X^{j}|x^{J}})_{y^{j}}. (23)

Similar to Theorem 2, we can find the optimal \mathcal{Q}^{j} satisfying these conditions by finding the vertices of the polytope defined by (20)–(23). In terms of time complexity, the comparison between finding the optimal \varepsilon-LIP protocol via Theorem 3 and finding an \varepsilon-SRLIP protocol via Theorem 4 is not straightforward. The complexity of enumerating the vertices of a polytope is \mathcal{O}(ndv), where n is the number of inequalities, d is the dimension, and v is the number of vertices [1]. For the \Delta of Theorem 3 we have d=a-1 and n=a+2c. In contrast, the polytope defined by (20)–(23) satisfies d=a^{j}(a^{j}-1) and n=(a^{j})^{2}+2c\prod_{j^{\prime}\neq j}(a^{j^{\prime}}+1). Finding v for both these polytopes is difficult, but in general v\leq\binom{n}{d}. Since this grows exponentially in d, we expect Theorem 4 to be faster when the a^{j} are small compared to a, i.e., when m is large. We investigate this experimentally in Section VII.

VI Explicit protocols

The methods of Sections III and IV allow us to find the optimal LDP and LIP protocols. The complexity depends heavily on aa and cc, and can become computationally infeasible for large aa and cc. For such datasets, one has to rely on predetermined privacy algorithms. We consider two approaches: as a benchmark, we discuss how ‘standard’ LDP protocols can be applied to the Privacy Funnel situation, and we introduce a new method, Conditional Reporting, that is meant to address the shortcomings of standard LDP protocols. As in the previous section, we focus on LIP, but much of the discussion carries over to LDP as well.

VI-A Standard LDP protocols

In the literature, there are many examples of protocols \mathcal{Q}\colon\mathcal{X}\rightarrow\mathcal{Y}, depending on a privacy parameter \alpha, whose output satisfies \alpha-LDP with respect to X; for an overview see [28]. Such a protocol automatically satisfies \alpha-LDP, hence certainly \alpha-LIP, with respect to S. However, because S is only indirectly correlated with Y (through X), such a protocol’s actual LIP value may be lower. We can find the privacy of such a protocol \mathcal{Q} by

\operatorname{LIP}(\mathcal{Q})=\max_{y\in\mathcal{Y},s\in\mathcal{S}}\left|\ln\frac{\sum_{x}Q_{y|x}\operatorname{p}_{x|s}}{\sum_{x}Q_{y|x}\operatorname{p}_{x}}\right|; (24)

then 𝒬\mathcal{Q} satisfies ε\varepsilon-LIP if and only if LIP(𝒬)ε\operatorname{LIP}(\mathcal{Q})\leq\varepsilon.

For this paper we are mainly interested in two protocols. The first one is Generalised Randomised Response (GRR) [27]. We are interested in GRR because for large enough α\alpha it maximises I(X;Y)\operatorname{I}(X;Y) [14]. Given α\alpha, GRR is a privacy protocol GRRα:𝒳𝒳\operatorname{GRR}^{\alpha}\colon\mathcal{X}\rightarrow\mathcal{X} given by

GRRy|xα={eαeα+a1, if x=y,1eα+a1, if xy.\operatorname{GRR}^{\alpha}_{y|x}=\left\{\begin{array}[]{ll}\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+a-1},&\textrm{ if $x=y$},\\ \frac{1}{\textrm{e}^{\alpha}+a-1},&\textrm{ if $x\neq y$}.\end{array}\right. (25)

A direct calculation then shows that

LIP(GRRα)=maxx,s|ln1+(eα1)px|s1+(eα1)px|.\operatorname{LIP}(\operatorname{GRR}^{\alpha})=\max_{x,s}\left|\ln\frac{1+(\textrm{e}^{\alpha}-1)\operatorname{p}_{x|s}}{1+(\textrm{e}^{\alpha}-1)\operatorname{p}_{x}}\right|. (26)

If we want GRR to satisfy ε\varepsilon-LIP, we then need to solve LIP(GRRα)=ε\operatorname{LIP}(\operatorname{GRR}^{\alpha})=\varepsilon for α\alpha. Since LIP(GRRα)\operatorname{LIP}(\operatorname{GRR}^{\alpha}) is increasing in α\alpha, this can be done efficiently, for instance by bisection.
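This calibration can be sketched in Python, combining (25), (26), and a bisection on α\alpha (function names, the list representation of pS,X\operatorname{p}_{S,X}, and the search interval are our own choices):

```python
import math

def grr_matrix(alpha, a):
    """GRR^alpha of eq. (25): keep x with prob e^alpha/(e^alpha+a-1),
    otherwise report a uniformly random other symbol."""
    keep = math.exp(alpha) / (math.exp(alpha) + a - 1)
    flip = 1.0 / (math.exp(alpha) + a - 1)
    return [[keep if x == y else flip for y in range(a)] for x in range(a)]

def lip_grr(alpha, p_sx):
    """LIP(GRR^alpha) via the closed form (26)."""
    n_s, n_x = len(p_sx), len(p_sx[0])
    p_s = [sum(row) for row in p_sx]
    p_x = [sum(p_sx[s][x] for s in range(n_s)) for x in range(n_x)]
    e = math.exp(alpha) - 1
    return max(
        abs(math.log((1 + e * p_sx[s][x] / p_s[s]) / (1 + e * p_x[x])))
        for s in range(n_s) for x in range(n_x)
    )

def alpha_for_lip(eps, p_sx, hi=50.0, tol=1e-9):
    """Solve LIP(GRR^alpha) = eps by bisection, using that LIP(GRR^alpha)
    is increasing in alpha."""
    lo_a, hi_a = 0.0, hi
    if lip_grr(hi_a, p_sx) < eps:   # even alpha = hi leaks less than eps
        return hi_a
    while hi_a - lo_a > tol:
        mid = (lo_a + hi_a) / 2
        if lip_grr(mid, p_sx) < eps:
            lo_a = mid
        else:
            hi_a = mid
    return (lo_a + hi_a) / 2
```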

The second protocol that is relevant to this paper is Optimised Unary Encoding (OUE) [26]. This protocol is notable for achieving one of the lowest known variances in frequency estimation [26]. For a choice of α\alpha as privacy parameter, and an input xx, the output of OUEα:𝒳2𝒳\operatorname{OUE}^{\alpha}\colon\mathcal{X}\rightarrow 2^{\mathcal{X}} is a vector of independent Bernoulli variables ExE_{x^{\prime}} for x𝒳x^{\prime}\in\mathcal{X}, satisfying

(Ex=1)={12, if x=x,1eα+1, if xx.\mathbb{P}(E_{x^{\prime}}=1)=\left\{\begin{array}[]{ll}\frac{1}{2},&\textrm{ if $x^{\prime}=x$},\\ \frac{1}{\textrm{e}^{\alpha}+1},&\textrm{ if $x^{\prime}\neq x$}.\end{array}\right. (27)

In other words, if we identify a y2𝒳y\in 2^{\mathcal{X}} with a subset of 𝒳\mathcal{X} (so #y\#y denotes its cardinality), we get

OUEy|xα={e(a#y)α2(eα+1)a1, if xy,e(a#y1)α2(eα+1)a1, if xy.\operatorname{OUE}^{\alpha}_{y|x}=\left\{\begin{array}[]{ll}\frac{\textrm{e}^{(a-\#y)\alpha}}{2(\textrm{e}^{\alpha}+1)^{a-1}},&\textrm{ if $x\in y$},\\ \frac{\textrm{e}^{(a-\#y-1)\alpha}}{2(\textrm{e}^{\alpha}+1)^{a-1}},&\textrm{ if $x\notin y$}.\end{array}\right. (28)

It follows that

LIP(OUEα)=maxy,s|ln1+(eα1)xypx|s1+(eα1)xypx|.\operatorname{LIP}(\operatorname{OUE}^{\alpha})=\max_{y,s}\left|\ln\frac{1+(\textrm{e}^{\alpha}-1)\sum_{x\in y}\operatorname{p}_{x|s}}{1+(\textrm{e}^{\alpha}-1)\sum_{x\in y}\operatorname{p}_{x}}\right|. (29)
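Evaluating (29) requires a maximum over all 2a2^{a} output sets yy, which is only feasible for small aa. A brute-force Python sketch (naming and representation are our own; probabilities assumed strictly positive):

```python
import itertools
import math

def lip_oue(alpha, p_sx):
    """LIP(OUE^alpha) via eq. (29), enumerating all 2^a subsets y.
    Only feasible for small alphabet size a."""
    n_s = len(p_sx)
    a = len(p_sx[0])
    p_s = [sum(row) for row in p_sx]
    p_x = [sum(p_sx[s][x] for s in range(n_s)) for x in range(a)]
    e = math.exp(alpha) - 1
    worst = 0.0
    for r in range(a + 1):
        for y in itertools.combinations(range(a), r):
            mass = sum(p_x[x] for x in y)                      # sum_{x in y} p_x
            for s in range(n_s):
                mass_s = sum(p_sx[s][x] / p_s[s] for x in y)   # sum_{x in y} p_{x|s}
                worst = max(worst,
                            abs(math.log((1 + e * mass_s) / (1 + e * mass))))
    return worst
```

The maximising yy is never the empty or full set (both give ratio 1), so the maximum is attained at a proper subset.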

VI-B Conditional Reporting

In general, a generic LDP protocol will not be ideal for our situation, since these are designed to obscure all information about XX, rather than just the part that holds information about SS. To address this shortcoming, we introduce Conditional Reporting (CR) in Algorithm 1. This mechanism needs both SS and XX as input; hence it differs from the other protocols discussed in this paper, which only take XX as input. The value of SS is masked by Randomised Response. If the output s~\tilde{s} equals SS, we return the true value of XX. If not, we output a random value drawn from the distribution pX|s~\operatorname{p}_{X|\tilde{s}}.

Input : Privacy parameter α\alpha; Probability distribution pS,X\operatorname{p}_{S,X}; input (s,x)𝒮×𝒳(s,x)\in\mathcal{S}\times\mathcal{X}
Output :  y𝒳y\in\mathcal{X}
Sample S~𝒮\tilde{S}\in\mathcal{S} with (S~=s)={eαeα+#𝒮1, if s=s,1eα+#𝒮1, otherwise\mathbb{P}(\tilde{S}=s^{\prime})=\left\{\begin{array}[]{ll}\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+\#\mathcal{S}-1},&\textrm{ if $s^{\prime}=s$},\\ \frac{1}{\textrm{e}^{\alpha}+\#\mathcal{S}-1},&\textrm{ otherwise};\end{array}\right.
if s~=s\tilde{s}=s then
      yxy\leftarrow x;
else
      Sample x~𝒳\tilde{x}\in\mathcal{X} with (x~=x)=px|s~\mathbb{P}(\tilde{x}=x^{\prime})=\operatorname{p}_{x^{\prime}|\tilde{s}};
       yx~y\leftarrow\tilde{x};
end if
Algorithm 1 Conditional Reporting (CRα\operatorname{CR}^{\alpha})
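A direct Python implementation of Algorithm 1 might look as follows (the representation of pS,X\operatorname{p}_{S,X} as nested lists indexed by (s,x)(s,x) and the function name are our own choices):

```python
import math
import random

def conditional_reporting(s, x, alpha, p_sx, rng=random):
    """One run of CR^alpha (Algorithm 1): randomised response on s; report
    the true x only if the noisy s_tilde equals s, otherwise sample a fresh
    value from p_{X | s_tilde}."""
    n_s = len(p_sx)
    n_x = len(p_sx[0])
    # Randomised response on s: keep s with prob e^alpha / (e^alpha + #S - 1).
    keep = math.exp(alpha) / (math.exp(alpha) + n_s - 1)
    if rng.random() < keep:
        s_tilde = s
    else:
        s_tilde = rng.choice([t for t in range(n_s) if t != s])
    if s_tilde == s:
        return x
    # Inverse-CDF sampling from p_{X | s_tilde}.
    p_s_tilde = sum(p_sx[s_tilde])
    u = rng.random() * p_s_tilde
    acc = 0.0
    for x_prime in range(n_x):
        acc += p_sx[s_tilde][x_prime]
        if u <= acc:
            return x_prime
    return n_x - 1  # guard against floating-point rounding
```

For large α\alpha the randomised response almost always returns s~=s\tilde{s}=s, so the true xx is reported; for small α\alpha the output is close to a fresh sample from pX|s~\operatorname{p}_{X|\tilde{s}}.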

CRα\operatorname{CR}^{\alpha} certainly satisfies α\alpha-LDP, hence α\alpha-LIP, w.r.t. SS. However, if SS and XX are not perfectly correlated, we can get better privacy, as outlined by the proposition below.

Proposition 1.

Given a probability distribution pX,S\operatorname{p}_{X,S} and a α0\alpha\geq 0, define

L(α)=maxx,s|ln(eα1)px|s+spx|s(eα1)px+spx|s|.L(\alpha)=\max_{x,s}\left|\ln\frac{(\textrm{e}^{\alpha}-1)\operatorname{p}_{x|s}+\sum_{s^{\prime}}\operatorname{p}_{x|s^{\prime}}}{(\textrm{e}^{\alpha}-1)\operatorname{p}_{x}+\sum_{s^{\prime}}\operatorname{p}_{x|s^{\prime}}}\right|. (30)

Then CRα\operatorname{CR}^{\alpha} satisfies ε\varepsilon-LIP if and only if εL(α)\varepsilon\geq L(\alpha).

The proof is presented in Appendix A. One can use this proposition to find the α\alpha needed to have CRα\operatorname{CR}^{\alpha} satisfy ε\varepsilon-LIP, by solving L(α)=εL(\alpha)=\varepsilon. At the very least one has the following upper bound:

Proposition 2.

The protocol CRα\operatorname{CR}^{\alpha} satisfies α\alpha-LDP. In particular, it satisfies α\alpha-LIP, and L(α)αL(\alpha)\leq\alpha.

Proof.

For all y𝒳y\in\mathcal{X} and s𝒮s\in\mathcal{S} we have, following equation (46) in Appendix A, that

(CRα(X,S)=y|S=s)=1eα+c1(eαpy|s+sspy|s).\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y|S=s)=\tfrac{1}{\textrm{e}^{\alpha}+c-1}\Big{(}\textrm{e}^{\alpha}\operatorname{p}_{y|s}+\sum_{s^{\prime}\neq s}\operatorname{p}_{y|s^{\prime}}\Big{)}. (31)

It follows that

(CRα(X,S)=y|S=s)(CRα(X,S)=y|S=s)\displaystyle\frac{\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y|S=s)}{\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y|S=s^{\prime})}
=eαpy|s+py|s+s′′s,spy|s′′py|s+eαpy|s+s′′s,spy|s′′\displaystyle=\frac{\textrm{e}^{\alpha}\operatorname{p}_{y|s}+p_{y|s^{\prime}}+\sum_{s^{\prime\prime}\neq s,s^{\prime}}\operatorname{p}_{y|s^{\prime\prime}}}{\operatorname{p}_{y|s}+\textrm{e}^{\alpha}p_{y|s^{\prime}}+\sum_{s^{\prime\prime}\neq s,s^{\prime}}\operatorname{p}_{y|s^{\prime\prime}}} (32)
max{1,eαpy|s+py|spy|s+eαpy|s}\displaystyle\leq\max\left\{1,\frac{\textrm{e}^{\alpha}\operatorname{p}_{y|s}+p_{y|s^{\prime}}}{\operatorname{p}_{y|s}+\textrm{e}^{\alpha}p_{y|s^{\prime}}}\right\} (33)
eα.\displaystyle\leq\textrm{e}^{\alpha}.\qed (34)
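The quantity L(α)L(\alpha) from Proposition 1 can be evaluated directly, and the bound L(α)αL(\alpha)\leq\alpha of Proposition 2 provides a sanity check. A Python sketch (naming and representation are our own; probabilities assumed strictly positive):

```python
import math

def cr_leakage(alpha, p_sx):
    """L(alpha) of eq. (30); CR^alpha satisfies eps-LIP iff eps >= L(alpha)."""
    n_s, n_x = len(p_sx), len(p_sx[0])
    p_s = [sum(row) for row in p_sx]
    p_x = [sum(p_sx[s][x] for s in range(n_s)) for x in range(n_x)]
    e = math.exp(alpha) - 1
    worst = 0.0
    for x in range(n_x):
        col = sum(p_sx[s][x] / p_s[s] for s in range(n_s))  # sum_{s'} p_{x|s'}
        for s in range(n_s):
            num = e * p_sx[s][x] / p_s[s] + col
            den = e * p_x[x] + col
            worst = max(worst, abs(math.log(num / den)))
    return worst
```

As with GRR, one can then calibrate α\alpha to a target ε\varepsilon by solving L(α)=εL(\alpha)=\varepsilon numerically, e.g., by bisection.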

VII Experiments

We test the feasibility of the different methods by performing small-scale experiments on synthetic data and real-world data. All experiments are implemented in Matlab and conducted on a PC with an Intel Core i7-7700HQ at 2.8 GHz and 32 GB of memory.

VII-A Synthetic data: LDP vs LIP

We compare the computing time for finding optimal ε\varepsilon-LDP and ε\varepsilon-LIP protocols for c=2c=2 and a=5a=5 for 10 random distributions pS,X\operatorname{p}_{S,X}, obtained by generating each ps,x\operatorname{p}_{s,x} uniformly from [0,1][0,1] and then normalising. We take ε{0.5,1,1.5,2}\varepsilon\in\{0.5,1,1.5,2\}; the results are in Figure 3. As one can see, Theorem 3 gives significantly faster results than Theorem 2; the average computing time for Theorem 2 for ε=0.5\varepsilon=0.5 is 133s, while for Theorem 3 it is 0.0206s. With regard to the utility I(X;Y)\operatorname{I}(X;Y), since ε\varepsilon-LDP implies ε\varepsilon-LIP, the optimal ε\varepsilon-LIP protocol has at least as high utility as the optimal ε\varepsilon-LDP protocol. However, as can be seen from the figure, the difference in utility is relatively small.

Note that for larger ε\varepsilon, both the difference in computing time and the difference in I(X;Y)\operatorname{I}(X;Y) between LDP and LIP become smaller. This is because, owing to the probabilistic relation between SS and XX, for ε\varepsilon large enough any sanitisation protocol satisfies both ε\varepsilon-LIP and ε\varepsilon-LDP. This means that as ε\varepsilon grows, the resulting polytopes have fewer defining inequalities, hence fewer vertices. This results in lower computation times, which affects LDP more than LIP. At the same time, the fact that every protocol is both ε\varepsilon-LIP and ε\varepsilon-LDP results in the same optimal utility.

In Figure 4, we compare optimal ε\varepsilon-LDP protocols to optimal ε2\frac{\varepsilon}{2}-LIP protocols. Again, LIP is significantly faster than LDP. Since ε2\frac{\varepsilon}{2}-LIP implies ε\varepsilon-LDP, the optimal ε\varepsilon-LDP protocol has higher utility; again the difference is small.

Figure 3: Comparison of computation time and I(X;Y)\operatorname{I}(X;Y) for ε\varepsilon-LDP protocols found via Theorem 2 and ε\varepsilon-LIP protocols found via Theorem 3, for random pS,X\operatorname{p}_{S,X} with c=2c=2, a=5a=5, and ε{0.5,1,1.5,2}\varepsilon\in\{0.5,1,1.5,2\}.
Figure 4: Comparison of computation time and I(X;Y)\operatorname{I}(X;Y) for ε\varepsilon-LDP protocols found via Theorem 2 and ε2\frac{\varepsilon}{2}-LIP protocols found via Theorem 3, for random pS,X\operatorname{p}_{S,X} with c=2c=2, a=5a=5, and ε{0.5,1,1.5,2}\varepsilon\in\{0.5,1,1.5,2\}.

VII-B Synthetic data: LIP vs SRLIP

We also perform similar comparisons for multiple attributes, for c=2c=2, a1=a2=3a_{1}=a_{2}=3 and a3=4a_{3}=4, comparing the methods of Theorems 3 and 4. The results are presented in Figure 5. As one can see, Theorem 4 is significantly slower, with Theorem 3 being on average 476476 times as fast. There is a sizable difference in utility, caused on the one hand by the fact that ε\varepsilon-SRLIP is a stricter privacy requirement than ε\varepsilon-LIP, and on the other by the fact that Theorem 4 does not give us the optimal ε\varepsilon-SRLIP protocol.

Figure 5: Comparison of computation time and I(X;Y)\operatorname{I}(X;Y) for ε\varepsilon-(SR)LIP-protocols found via Theorems 3 and 4, for random pS,X\operatorname{p}_{S,X} with c=2c=2, a1=a2=3a_{1}=a_{2}=3, a3=4a_{3}=4, and ε{0.5,1,1.5,2}\varepsilon\in\{0.5,1,1.5,2\}.

VII-C Adult dataset

(a) S=S= marital status, X=X= education
(b) S=S= occupation, X=X= education
(c) S=S= marital status, X=X= relationship
(d) S=S= occupation, X=X= relationship
(e) S=S= marital status, X=X= race
(f) S=S= occupation, X=X= race
Figure 6: Experiments on the Adult dataset.

We also test the utility of Conditional Reporting (CR), both on real-world data and synthetic data. We consider the well-known Adult dataset [7], which contains demographic data from the 1994 US census. For our tests, we take S{marital status, occupation}S\in\{\text{marital status, occupation}\} (with c=7c=7 and c=15c=15, respectively) and X{education,relationship,sex}X\in\{\text{education},\text{relationship},\text{sex}\} (with a=16,6,2a=16,6,2). Based on our findings in the previous sections, we take LIP as a privacy measure, and I(X;Y)\operatorname{I}(X;Y) as a utility measure. We compare CR on the one hand with the optimal method (Opt-LIP) found in Section IV, and on the other hand with the established LDP protocols GRR and OUE. The results are shown in Figure 6. For X=X= education, the mutual information for OUE was infeasible to compute. Similarly, for S=S= occupation, some cases of Opt-LIP failed to compute within a reasonable timeframe. Nevertheless, we can conclude that GRR and CR both perform somewhere between Opt-LIP and OUE. As the LIP value ε\varepsilon grows larger, GRR and CR grow close to Opt-LIP. At the same time, OUE falls off for large ε\varepsilon, having 12H(X)\tfrac{1}{2}\operatorname{H}(X) as its limit. This is because OUE only has probability 12\tfrac{1}{2} of transmitting the true XX (as an element of the set YY). The difference between GRR and CR is less clear; which of the two protocols gives the better utility appears to depend on the joint distribution pX,S\operatorname{p}_{X,S}.

VII-D Synthetic data: GRR vs CR

To investigate the difference between GRR and CR, we apply both methods to synthetic data. We disregard OUE as it performs worse than the other two protocols, especially in the low privacy regime. For a fixed choice of aa and cc, we draw a number of probability distributions from the Jeffreys prior on 𝒮×𝒳\mathcal{S}\times\mathcal{X}, i.e., the symmetric Dirichlet distribution with parameter 12\tfrac{1}{2}. We fix a set of LIP values ε\varepsilon, and for each of these and each probability distribution, we solve equations (26) and (30), setting the left-hand sides equal to ε\varepsilon and solving for αGRR\alpha_{\operatorname{GRR}} and αCR\alpha_{\operatorname{CR}}. We then calculate the mutual information I(X;Y)\operatorname{I}(X;Y), which we normalise by dividing by H(X)\operatorname{H}(X). The resulting averages and standard deviations are displayed in Figure 7. On the whole, we see that the larger aa is compared to cc, the more utility CR provides compared to GRR. However, this does not tell the whole story, as the difference between datasets has more impact on the utility than the difference between methods.
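The sampling and utility computations of this experiment can be sketched in pure Python, drawing symmetric Dirichlet(12\tfrac{1}{2}) variates as normalised Gamma(12\tfrac{1}{2}) samples (function names and representations are our own choices):

```python
import math
import random

def jeffreys_prior_sample(n_s, n_x, rng=random):
    """Draw p_{S,X} from the Jeffreys prior on S x X, i.e., the symmetric
    Dirichlet(1/2), via normalised Gamma(1/2) variates."""
    g = [[rng.gammavariate(0.5, 1.0) for _ in range(n_x)] for _ in range(n_s)]
    tot = sum(sum(row) for row in g)
    return [[v / tot for v in row] for row in g]

def normalised_mi(Q, p_x):
    """I(X;Y) / H(X) for channel Q[x][y] and a non-degenerate input
    distribution p_x."""
    n_x, n_y = len(Q), len(Q[0])
    p_y = [sum(Q[x][y] * p_x[x] for x in range(n_x)) for y in range(n_y)]
    mi = sum(
        Q[x][y] * p_x[x] * math.log(Q[x][y] / p_y[y])
        for x in range(n_x) for y in range(n_y)
        if Q[x][y] > 0 and p_x[x] > 0
    )
    h_x = -sum(p * math.log(p) for p in p_x if p > 0)
    return mi / h_x
```

A noiseless channel gives normalised utility 1, a constant channel gives 0, matching the range displayed in Figure 7.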

(a) a=5a=5, c=2c=2
(b) a=2a=2, c=5c=5
(c) a=5a=5, c=5c=5
(d) a=3a=3, c=5c=5
(e) a=5a=5, c=7c=7
(f) a=7a=7, c=5c=5
Figure 7: Experiments on synthetic data. For each value of aa and cc, the average utility is taken over 100 randomly generated probability distributions. Bar size denotes standard deviation.

VII-E GRR and CR parameter α\alpha

To investigate what property of the probability distribution pX,S\operatorname{p}_{X,S} causes CR to outperform GRR, we consider the parameters αCR\alpha_{\operatorname{CR}} and αGRR\alpha_{\operatorname{GRR}} that govern the privacy protocols CR and GRR. Both have the property that the higher their value, the less ‘random’ the protocol is, resulting in better utility. Since these α\alpha are found from ε\varepsilon through different equations, the difference in utility of GRR and CR for different probability distributions may be explained by a difference in α\alpha. We test this assertion for 100 randomly generated distributions in Figure 8. As can be seen, the difference in mutual information can largely be explained by a difference in α\alpha (ρ=0.9815\rho=0.9815, ρ=0.9889\rho=0.9889, and ρ=0.9731\rho=0.9731, respectively). In Figure 9, we plot the relation between α\alpha and the LIP value ε\varepsilon for the experiments in 6(b) and 6(d). The fact that αGRR>αCR\alpha_{\operatorname{GRR}}>\alpha_{\operatorname{CR}} in 9(a) corresponds to the fact that GRR outperforms CR in 6(b), and the opposite relation holds between 9(b) and 6(d).

(a) ε=1\varepsilon=1
(b) ε=1.5\varepsilon=1.5
(c) ε=2\varepsilon=2
Figure 8: Difference in α\alpha versus difference in utility for 100100 randomly generated probability distributions, for a=c=5a=c=5.
(a) S=S= occupation, X=X= education
(b) S=S= occupation, X=X= relationship
Figure 9: Value of GRR and CR parameter α\alpha for different values of ε\varepsilon for the Adult dataset.

Unfortunately, we were not able to relate the difference in parameter α\alpha to other properties of the distribution. Without presenting details we mention that the properties I(X;S),maxx,spx,s,maxxpx\operatorname{I}(X;S),\max_{x,s}\operatorname{p}_{x,s},\max_{x}\operatorname{p}_{x} and maxsps\max_{s}\operatorname{p}_{s} do not appear to have an impact on the difference in utility between GRR and CR.

VIII Conclusions and future work

Local data sanitisation protocols have the advantage of being scalable for large numbers of users. Furthermore, the advantage of using differential privacy-like privacy metrics is that they provide worst-case guarantees, ensuring that the privacy of every user is sufficiently protected. For both ε\varepsilon-LDP and ε\varepsilon-LIP we have derived methods to find optimal sanitisation protocols. Within this setting, we have observed that ε\varepsilon-LIP has two main advantages over ε\varepsilon-LDP. First, it fits better within the privacy funnel setting, where the distribution pS,X\operatorname{p}_{S,X} is (at least approximately) known to the estimator. Second, finding the optimal protocol is significantly faster than under LDP, especially for small ε\varepsilon. If one nevertheless prefers ε\varepsilon-LDP as a privacy metric, then it is still worthwhile to find the optimal ε2\frac{\varepsilon}{2}-LIP protocol, as this can be found significantly faster, at a low utility penalty.

In the multiple attributes setting, we have shown that ε\varepsilon-SRLIP provides additional privacy guarantees compared to ε\varepsilon-LIP, since without this requirement a protocol can lose all its privacy protection in the presence of side channels. Unfortunately, however, experiments show that we pay for this both in computation time and in utility.

With regard to the specific protocols, we have found that the newly introduced protocol, CR, generally outperforms OUE, especially for high values of the LIP parameter ε\varepsilon. It behaves more or less similarly to GRR, and which of these two protocols performs best depends on properties of the joint distribution pX,S\operatorname{p}_{X,S}. In particular, it largely depends on which of the two protocols has the higher value of its governing parameter α\alpha. Also, we have seen that CR performs better on average if aa is large compared to cc.

For further research, a number of important avenues remain to be explored. First, the aggregator’s knowledge about pS,X\operatorname{p}_{S,X} may not be perfect, because they may learn about pS,X\operatorname{p}_{S,X} through observing (S,X)(\vec{S},\vec{X}). Incorporating this uncertainty leads to robust optimisation [3], which would give stronger privacy guarantees.

Second, it might be possible to improve the method of obtaining ε\varepsilon-SRLIP protocols via Theorem 4. Examining its proof shows that lower values of εj\varepsilon^{j} may suffice to still ensure ε\varepsilon-SRLIP. Furthermore, the optimal choice of (εj)jm(\varepsilon^{j})_{j\leq m} such that jεj=ε\sum_{j}\varepsilon^{j}=\varepsilon might not be εj=εm\varepsilon^{j}=\frac{\varepsilon}{m}. However, it is computationally prohibitive to perform the vertex enumeration for many different choices of (εj)jm(\varepsilon^{j})_{j\leq m}, and as such a new theoretical approach is needed to determine the optimal (εj)jm(\varepsilon^{j})_{j\leq m} from ε\varepsilon and pS,X\operatorname{p}_{S,X}.

Third, it would be interesting to see if there are other ways to close the gap between the theoretically optimal protocol, which may be hard to compute in practice, and general LDP protocols, which do not see the difference between sensitive and non-sensitive information. This is relevant because CR needs both SS and XX as input, and there may be situations where access to SS is not available.

Although CR outperforms GRR and OUE for some datasets, it does not do so consistently. More research into the properties of distributions for which CR fails to provide a significant advantage might lead to improved privacy protocols.

Acknowledgements

This work was supported by NWO grant 628.001.026 (Dutch Research Council, the Hague, the Netherlands).

References

  • [1] David Avis and Komei Fukuda “A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra” In Discrete & Computational Geometry 8.3 Springer, 1992, pp. 295–313
  • [2] Imre Bárány and Attila Pór “On 0-1 polytopes with many facets” In Advances in Mathematics 161.2 Academic Press, 2001, pp. 209–228
  • [3] Dimitris Bertsimas, Vishal Gupta and Nathan Kallus “Data-driven robust optimization” In Mathematical Programming 167.2 Springer, 2018, pp. 235–292
  • [4] Flavio du Pin Calmon et al. “Principal inertia components and applications” In IEEE Transactions on Information Theory 63.8 IEEE, 2017, pp. 5011–5038
  • [5] Paul Cuff and Lanqing Yu “Differential privacy as a mutual information constraint” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 43–54
  • [6] Ni Ding and Parastoo Sadeghi “A Submodularity-based Agglomerative Clustering Algorithm for the Privacy Funnel” Preprint In arXiv:1901.06629, 2019
  • [7] Dheeru Dua and Casey Graff “UCI Machine Learning Repository”, 2017 URL: http://archive.ics.uci.edu/ml/datasets/Adult
  • [8] Cynthia Dwork and Guy N Rothblum “Concentrated differential privacy” Preprint In arXiv:1603.01887, 2016
  • [9] Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam Smith “Calibrating noise to sensitivity in private data analysis” In Theory of cryptography conference, 2006, pp. 265–284 Springer
  • [10] Cynthia Dwork et al. “Our data, ourselves: Privacy via distributed noise generation” In Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2006, pp. 486–503 Springer
  • [11] Úlfar Erlingsson et al. “Encode, shuffle, analyze privacy revisited: formalizations and empirical evaluation” In arXiv:2001.03618, 2020
  • [12] Naoise Holohan, Douglas J Leith and Oliver Mason “Extreme points of the local differential privacy polytope” In Linear Algebra and its Applications 534 Elsevier, 2017, pp. 78–96
  • [13] Bo Jiang, Ming Li and Ravi Tandon “Local Information Privacy with Bounded Prior” In ICC 2019-2019 IEEE International Conference on Communications (ICC), 2019, pp. 1–7 IEEE
  • [14] Peter Kairouz, Sewoong Oh and Pramod Viswanath “Extremal mechanisms for local differential privacy” In Advances in neural information processing systems, 2014, pp. 2879–2887
  • [15] Shiva Prasad Kasiviswanathan et al. “What can we learn privately?” In SIAM Journal on Computing 40.3 SIAM, 2011, pp. 793–826
  • [16] Daniel Kifer and Ashwin Machanavajjhala “Pufferfish: A framework for mathematical privacy definitions” In ACM Transactions on Database Systems (TODS) 39.1 ACM New York, NY, USA, 2014, pp. 1–36
  • [17] SY Kung “A compressive privacy approach to generalized information bottleneck and privacy funnel problems” In Journal of the Franklin Institute 355.4 Elsevier, 2018, pp. 1846–1872
  • [18] Milan Lopuhaä-Zwakenberg “The Privacy Funnel from the Viewpoint of Local Differential Privacy” In Fourteenth International Conference on the Digital Society, 2020, pp. 19–24
  • [19] Milan Lopuhaä-Zwakenberg, Boris Škorić and Ninghui Li “Information-theoretic metrics for Local Differential Privacy protocols” Preprint In arXiv:1910.07826, 2019
  • [20] Ali Makhdoumi, Salman Salamatian, Nadia Fawaz and Muriel Médard “From the information bottleneck to the privacy funnel” In 2014 IEEE Information Theory Workshop (ITW 2014), 2014, pp. 501–505 IEEE
  • [21] Ilya Mironov “Rényi differential privacy” In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), 2017, pp. 263–275 IEEE
  • [22] Fabian Prasser, Florian Kohlmayer, Ronald Lautenschlaeger and Klaus A Kuhn “Arx-a comprehensive tool for anonymizing biomedical data” In AMIA Annual Symposium Proceedings 2014, 2014, pp. 984 American Medical Informatics Association
  • [23] Borzoo Rassouli and Deniz Gunduz “On perfect privacy” In 2018 IEEE International Symposium on Information Theory (ISIT), 2018, pp. 2551–2555 IEEE
  • [24] Salman Salamatian et al. “Privacy-Utility Tradeoff and Privacy Funnel” unpublished preprint, 2020 URL: http://www.mit.edu/~salmansa/files/privacy_TIFS.pdf
  • [25] Naftali Tishby, Fernando C Pereira and William Bialek “The information bottleneck method” Preprint In arXiv:physics/0004057, 2000
  • [26] Tianhao Wang, Jeremiah Blocki, Ninghui Li and Somesh Jha “Locally differentially private protocols for frequency estimation” In 26th USENIX Security Symposium (USENIX Security 17), 2017, pp. 729–745
  • [27] Stanley L Warner “Randomized response: A survey technique for eliminating evasive answer bias” In Journal of the American Statistical Association 60.309 Taylor & Francis, 1965, pp. 63–69
  • [28] Mengmeng Yang et al. “Local Differential Privacy and Its Applications: A Comprehensive Survey” Preprint In arXiv:2008.03686, 2020

Appendix A Proofs

Proof of Theorem 4.

For J{1,,m}J\subseteq\{1,\ldots,m\} and j{1,,m}j\in\{1,\ldots,m\}, we write J[j]:=J{1,,j1}J[j]:=J\cap\{1,\ldots,j-1\}. Furthermore, we write 𝒳\J=jJ𝒳j\mathcal{X}^{\backslash J}=\prod_{j\notin J}\mathcal{X}^{j}, and its elements as x\Jx^{\backslash J}. We write ε:=jεj\varepsilon:=\sum_{j}\varepsilon^{j}. We then have

py|s,xJ\displaystyle\operatorname{p}_{y|s,x^{J}} =x\Jpy|xpx\J|s,xJ\displaystyle=\sum_{x^{\backslash J}}\operatorname{p}_{y|x}\operatorname{p}_{x^{\backslash J}|s,x^{J}} (35)
=pyJ|xJx\J(jJpyj|xj)px\J|s,xJ\displaystyle=\operatorname{p}_{y^{J}|x^{J}}\sum_{x^{\backslash J}}\left(\prod_{j\notin J}\operatorname{p}_{y^{j}|x^{j}}\right)\operatorname{p}_{x^{\backslash J}|s,x^{J}} (36)
=pyJ|xJx\JjJpyj|xjpxj|s,xJ[j]\displaystyle=\operatorname{p}_{y^{J}|x^{J}}\sum_{x^{\backslash J}}\prod_{j\notin J}\operatorname{p}_{y^{j}|x^{j}}\operatorname{p}_{x^{j}|s,x^{J[j]}} (37)
=pyJ|xJjJxjpyj|xjpxj|s,xJ[j]\displaystyle=\operatorname{p}_{y^{J}|x^{J}}\prod_{j\notin J}\sum_{x^{j}}\operatorname{p}_{y^{j}|x^{j}}\operatorname{p}_{x^{j}|s,x^{J[j]}} (38)
=pyJ|xJjJpyj|s,xJ[j]\displaystyle=\operatorname{p}_{y^{J}|x^{J}}\prod_{j\notin J}\operatorname{p}_{y^{j}|s,x^{J[j]}} (39)
pyJ|xJjJeεjpyj|xJ[j]\displaystyle\leq\operatorname{p}_{y^{J}|x^{J}}\prod_{j\notin J}\textrm{e}^{\varepsilon^{j}}\operatorname{p}_{y^{j}|x^{J[j]}} (40)
eεpyJ|xJjJpyj|xJ[j]\displaystyle\leq\textrm{e}^{\varepsilon}\operatorname{p}_{y^{J}|x^{J}}\prod_{j\notin J}\operatorname{p}_{y^{j}|x^{J[j]}} (41)
=eεpy|xJ.\displaystyle=\textrm{e}^{\varepsilon}\operatorname{p}_{y|x^{J}}. (42)

The fact that eεpy|xJpy|s,xJ\textrm{e}^{-\varepsilon}\operatorname{p}_{y|x^{J}}\leq\operatorname{p}_{y|s,x^{J}} is proven analogously. ∎

Proof of Proposition 1.

Write Qy|x,s=(CRα(x,s)=y)Q_{y|x,s}=\mathbb{P}(\operatorname{CR}^{\alpha}(x,s)=y). Then

Qy|x,s\displaystyle Q_{y|x,s} =s(CRα(x,s)=y|s~=s)(s~=s|S=s)\displaystyle=\sum_{s^{\prime}}\mathbb{P}(\operatorname{CR}^{\alpha}(x,s)=y|\tilde{s}=s^{\prime})\mathbb{P}(\tilde{s}=s^{\prime}|S=s) (43)
=eαeα+c1δx=y+1eα+c1sspy|s,\displaystyle=\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+c-1}\delta_{x=y}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s^{\prime}\neq s}\operatorname{p}_{y|s^{\prime}}, (44)

where δx=y\delta_{x=y} is the Kronecker delta. It follows that

(CRα(X,S)=y|S=s)\displaystyle\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y|S=s)
=xQy|x,spx|s\displaystyle=\sum_{x}Q_{y|x,s}\operatorname{p}_{x|s} (45)
=eαeα+c1py|s+1eα+c1sspy|s\displaystyle=\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+c-1}\operatorname{p}_{y|s}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s^{\prime}\neq s}\operatorname{p}_{y|s^{\prime}} (46)
=eα1eα+c1py|s+1eα+c1spy|s,\displaystyle=\frac{\textrm{e}^{\alpha}-1}{\textrm{e}^{\alpha}+c-1}\operatorname{p}_{y|s}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s^{\prime}}\operatorname{p}_{y|s^{\prime}}, (47)
(CRα(X,S)=y)\displaystyle\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y)
=s(CRα(X,S)=y|S=s)ps\displaystyle=\sum_{s}\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y|S=s)\operatorname{p}_{s} (48)
=eαeα+c1py+1eα+c1ssspy|sps\displaystyle=\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+c-1}\operatorname{p}_{y}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s}\sum_{s^{\prime}\neq s}\operatorname{p}_{y|s^{\prime}}\operatorname{p}_{s} (49)
=eαeα+c1py+1eα+c1spy|sssps\displaystyle=\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+c-1}\operatorname{p}_{y}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s^{\prime}}\operatorname{p}_{y|s^{\prime}}\sum_{s\neq s^{\prime}}\operatorname{p}_{s} (50)
=eαeα+c1py+1eα+c1s(py|spy,s)\displaystyle=\frac{\textrm{e}^{\alpha}}{\textrm{e}^{\alpha}+c-1}\operatorname{p}_{y}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s^{\prime}}(\operatorname{p}_{y|s^{\prime}}-\operatorname{p}_{y,s^{\prime}}) (51)
=eα1eα+c1py+1eα+c1spy|s.\displaystyle=\frac{\textrm{e}^{\alpha}-1}{\textrm{e}^{\alpha}+c-1}\operatorname{p}_{y}+\frac{1}{\textrm{e}^{\alpha}+c-1}\sum_{s^{\prime}}\operatorname{p}_{y|s^{\prime}}. (52)

We find that

L(α)=maxy,s|ln(CRα(X,S)=y|S=s)(CRα(X,S)=y)|,L(\alpha)=\max_{y,s}\left|\ln\frac{\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y|S=s)}{\mathbb{P}(\operatorname{CR}^{\alpha}(X,S)=y)}\right|, (53)

hence CRα\operatorname{CR}^{\alpha} satisfies ε\varepsilon-LIP if and only if εL(α)\varepsilon\geq L(\alpha). ∎