Data Sanitisation Protocols for the Privacy Funnel with Differential Privacy Guarantees
Abstract
In the Open Data approach, governments and other public organisations want to share their datasets with the public, for accountability and to support participation. Data must be opened in such a way that individual privacy is safeguarded. The Privacy Funnel is a mathematical approach that produces a sanitised database that does not leak private data beyond a chosen threshold. The downsides to this approach are that it does not give worst-case privacy guarantees, and that finding optimal sanitisation protocols can be computationally prohibitive. We tackle these problems by using differential privacy metrics, and by considering local protocols which operate on one entry at a time. We show that under both the Local Differential Privacy and Local Information Privacy leakage metrics, one can efficiently obtain optimal protocols. Furthermore, Local Information Privacy is both more closely aligned to the privacy requirements of the Privacy Funnel scenario, and more efficiently computable. We also consider the scenario where each user has multiple attributes, for which we define Side-channel Resistant Local Information Privacy, and we give efficient methods to find protocols satisfying this criterion while still offering good utility. Finally, we introduce Conditional Reporting, an explicit LIP protocol that can be used when the optimal protocol is infeasible to compute. Experiments on real-world and synthetic data confirm the validity of these methods.
Keywords—Privacy funnel; local differential privacy; information privacy; database sanitisation; complexity.
I Introduction
This paper is an extended version of [18]. Under the Open Data paradigm, governments and other public organisations want to share their collected data with the general public. This increases a government’s transparency, and it also gives citizens and businesses the means to participate in decision-making, as well as to use the data for their own purposes. However, while the released data should be as faithful to the raw data as possible, individual citizens’ private data should not be compromised by such data publication.
Let $\mathcal{X}$ be a finite set. Consider a database $\vec{X} = (X_1,\dots,X_n) \in \mathcal{X}^n$ owned by a data aggregator, containing a data item $X_i$ for each user $i$ (for typical database settings, each user’s data is a vector of attributes $X_i = (X_i^1,\dots,X_i^m)$; we will consider this in more detail in Section V). This data may not be considered sensitive by itself, but it might be correlated to a secret $S_i$. For instance, $X_i$ might contain the age, sex, weight, skin colour, and average blood pressure of person $i$, while $S_i$ is the presence of some medical condition. To publish the data without leaking the $S_i$, the aggregator releases a privatised database $\vec{Y} = (Y_1,\dots,Y_n)$, obtained from applying a sanitisation mechanism to $\vec{X}$. One way to formulate this is by considering the Privacy Funnel:
Problem 1.
(Privacy Funnel, [4]) Suppose the joint probability distribution of $\vec{X}$ and $\vec{S}$ is known to the aggregator, and let $t \geq 0$ be a leakage threshold. Then, find the sanitisation mechanism producing $\vec{Y}$ such that $I(\vec{X};\vec{Y})$ is maximised while $I(\vec{S};\vec{Y}) \leq t$.
There are two difficulties with this approach:
1. Finding the optimal sanitisation mechanism can be computationally prohibitive, since the optimisation ranges over mechanisms that operate on the database as a whole.
2. Taking mutual information as a leakage measure has as a disadvantage that it gives guarantees about the leakage only in the average case. If the number of users $n$ is large, this still leaves room for the sanitisation protocol to leak undesirably much information about a few unlucky users.
To deal with these two difficulties, we make two changes to the general approach. First, we look at local data sanitisation, i.e., we consider sanitisation protocols $\mathcal{Q}\colon\mathcal{X}\to\mathcal{Y}$, for some finite set $\mathcal{Y}$, and we apply $\mathcal{Q}$ to each $X_i$ individually; this situation is depicted in Figure 1. Local sanitisation can be implemented efficiently. In fact, this approach is often taken in the Privacy Funnel setting [20][6]. Second, to ensure strong privacy guarantees even in worst-case scenarios, we take stricter notions of privacy, based on Local Differential Privacy (LDP) [15]. For these metrics, we develop methods to find optimal protocols. Furthermore, for situations where the optimal protocol is computationally infeasible to find, we introduce a new protocol, Conditional Reporting (CR), that takes advantage of the fact that only $S$ needs to be protected. Determining CR only requires finding the root of a one-dimensional increasing function, which can be done fast numerically.
I-A New contributions
In this paper, we adapt two Differential Privacy-like privacy metrics to the Privacy Funnel situation, namely Local Differential Privacy (LDP) [15] and Local Information Privacy (LIP) [13][24]. We modify these metrics so that they measure leakage about the underlying $S$ rather than $X$ itself (for notational convenience, we write $X$, $S$, $Y$ rather than $X_i$, $S_i$, $Y_i$ throughout the rest of this paper). For a given level of leakage, we are interested in the privacy protocol that maximises the mutual information $I(X;Y)$ between input $X$ and output $Y$. Adapting methods from [14] on LDP and [23] on perfect privacy, we prove the following Theorem:
Theorem 1 (Theorems 2 and 3 paraphrased).
Suppose $\mathcal{X}$, $\mathcal{S}$, and $P_{X,S}$, as well as a privacy level $\varepsilon \geq 0$, are given.

1. The optimal $\varepsilon$-LDP protocol can be found by enumerating the vertices of a polytope in $|\mathcal{X}|\,|\mathcal{Y}|$ dimensions defined by $O(|\mathcal{S}|^{2}|\mathcal{Y}| + |\mathcal{X}|\,|\mathcal{Y}|)$ inequalities.
2. The optimal $\varepsilon$-LIP protocol can be found by enumerating the vertices of a polytope in $|\mathcal{X}|$ dimensions defined by $2|\mathcal{S}| + |\mathcal{X}|$ inequalities.
The descriptions of these polytopes, and how they relate to the optimisation problem, are discussed in Sections III and IV, respectively. Since the complexity of the polytope vertex enumeration depends significantly on both its dimension and the number of defining inequalities [2], finding optimal LIP protocols can be done significantly faster than finding optimal LDP protocols. Furthermore, we will argue that LIP is a privacy metric that more accurately captures information leakage than LDP in the Privacy Funnel scenario. For these two reasons we only consider LIP in the remainder of the paper, although many results can also be formulated for LDP.
A common scenario is that a user’s data consists of multiple attributes, i.e. $X = (X^1,\dots,X^m)$. Here one can consider an attacker model where the attacker has access to some of the $X^j$ via a side channel. In this situation $\varepsilon$-LIP does not accurately reflect a user’s privacy. Because of this, we introduce a new privacy metric called Side-channel Resistant LIP (SRLIP) that takes such side channels into account. We expand the vertex enumeration methods outlined above to find optimal SRLIP protocols in Section V.
Finding the optimal protocols can become computationally infeasible for large $\mathcal{X}$ and $\mathcal{S}$. In such a situation, one needs to resort to explicitly given protocols. In the literature there is a wealth of protocols that satisfy $\varepsilon$-LDP w.r.t. $X$. These certainly work in our situation, but they might not be ideal, because they are designed to obfuscate all information about $X$, rather than just the part that relates to $S$. For this reason, we introduce Conditional Reporting (CR), a privacy protocol that focuses on hiding $S$ rather than $X$, in Section VI. Finding the appropriate CR protocol for a given probability distribution and privacy level can be done fast numerically.
I-B Related work
The Privacy Funnel (PF) setting was introduced in [20], to provide a framework for obfuscating data in such a way that the obfuscated data remains as faithful as possible to the original, while ensuring that the information leakage about a latent variable is limited. The Privacy Funnel is related to the Information Bottleneck (IB) [25], a problem from machine learning that seeks to compress data as much as possible, while retaining a minimal threshold of information about a latent variable. In PF as well as IB, both utility and leakage are measured via mutual information. Many approaches to finding the optimal protocols in PF also work for IB and vice versa [17][6]. A wider range of privacy metrics for the Privacy Funnel, and their relation to Differential Privacy, is discussed in [24].
Local Differential Privacy (LDP) was introduced in [15]. It is an adaptation of Differential Privacy [9] to a setting where there is no trusted central party to obfuscate the data. As a privacy metric, it has the advantage that it offers a privacy guarantee in any case, not just the average case, and that it does not depend on the data distribution. On the downside, it can be difficult to fulfill such a stringent definition of privacy, and many relaxations of (Local) Differential Privacy have been proposed [5][10][8][21]. We are particularly interested in Local Information Privacy (LIP) [13][24], also called Removal Local Differential Privacy [11]. LIP retains the worst-case guarantees of LDP, but is less restrictive, and can take advantage of a known distribution. In the context where only part of the data is considered secret, many privacy metrics fall under the umbrella of Pufferfish Privacy [16].
In [14], a method was introduced for finding optimal LDP-protocols for a wide variety of utility metrics, including mutual information. The method relies on finding the vertices of a polytope, but since this is the well-studied Differential Privacy polytope, its vertices can be described explicitly [12]. Similarly, [23] uses a vertex enumeration method to find the optimal protocol in the perfect privacy situation, i.e. when the released data is independent of the secret data. The complexity of vertex enumeration is discussed in [1][2].
II Mathematical Setting
The database $\vec{X} = (X_1,\dots,X_n)$ consists of a data item $X_i$ for each user $i$, each an element of a given finite set $\mathcal{X}$. Furthermore, each user has sensitive data $S_i \in \mathcal{S}$, which is correlated with $X_i$; again we assume $\mathcal{S}$ to be finite (see Figure 1). We assume that each pair $(X_i,S_i)$ is drawn independently from the same distribution $P_{X,S}$ on $\mathcal{X}\times\mathcal{S}$, which is known to the aggregator through observing the data (if one allows for non-independent $(X_i,S_i)$, then differential privacy is no longer an adequate privacy metric [5][24]). The aggregator, who has access to $\vec{X}$, sanitises the database by applying a sanitisation protocol $\mathcal{Q}$ (i.e., a random function $\mathcal{Q}\colon\mathcal{X}\to\mathcal{Y}$) to each $X_i$, outputting $Y_i = \mathcal{Q}(X_i)$. The aggregator’s goal is to find a $\mathcal{Q}$ that maximises the information about $X$ preserved in $Y$ (measured as $I(X;Y)$) while leaking only minimal information about $S$.
Without loss of generality we write $\mathcal{X} = \{1,\dots,|\mathcal{X}|\}$, $\mathcal{S} = \{1,\dots,|\mathcal{S}|\}$, and $\mathcal{Y} = \{1,\dots,|\mathcal{Y}|\}$. We omit the subscript $i$ from $X_i$, $S_i$, $Y_i$, as no probabilities depend on it, and we write such probabilities as $p_x$, $p_s$, $p_{x,s}$, etc., which form vectors $p_{\mathcal{X}}$, $p_{\mathcal{S}}$, etc., and matrices $P_{\mathcal{X},\mathcal{S}}$, etc.
As noted before, instead of looking at the mutual information $I(S;Y)$, we consider two different, related measures of sensitive information leakage known from the literature. The first one is an adaptation of LDP, the de facto standard in information privacy [15]:
Definition 1.
($\varepsilon$-LDP) Let $\varepsilon \geq 0$. We say that $\mathcal{Q}$ satisfies $\varepsilon$-LDP w.r.t. $S$ if for all $s, s' \in \mathcal{S}$ and all $y \in \mathcal{Y}$ one has

$P(Y=y \mid S=s) \leq e^{\varepsilon}\, P(Y=y \mid S=s').$  (1)
Most literature on LDP considers LDP w.r.t. $X$, i.e. $P(Y=y \mid X=x) \leq e^{\varepsilon} P(Y=y \mid X=x')$ for all $x, x' \in \mathcal{X}$ and $y \in \mathcal{Y}$. Throughout this paper, by $\varepsilon$-LDP we always mean $\varepsilon$-LDP w.r.t. $S$, unless otherwise specified.
The LDP metric reflects the fact that we are only interested in hiding sensitive data, rather than all data; it is a specific case of what has been named ‘pufferfish privacy’ [16]. The advantage of LDP compared to mutual information is that it gives privacy guarantees for the worst case, not just the average case. This is desirable in the database setting, as a worst-case metric guarantees the security of the private data of all users, while average-case metrics are only concerned with the average user. Another useful privacy metric is Local Information Privacy (LIP) [13][24], also called Removal Local Differential Privacy [11]:
Definition 2.
($\varepsilon$-LIP) Let $\varepsilon \geq 0$. We say that $\mathcal{Q}$ satisfies $\varepsilon$-LIP w.r.t. $S$ if for all $s \in \mathcal{S}$ and all $y \in \mathcal{Y}$ one has

$e^{-\varepsilon} \leq \dfrac{P(Y=y \mid S=s)}{P(Y=y)} \leq e^{\varepsilon}.$  (2)
Compared to LDP, the disadvantage of LIP is that it depends on the distribution of $(X,S)$; this is not a problem in our scenario, as the aggregator, who chooses $\mathcal{Q}$, has access to this distribution. The advantage of LIP is that it is more closely related to an attacker’s capabilities: since $\frac{P(Y=y\mid S=s)}{P(Y=y)} = \frac{P(S=s\mid Y=y)}{P(S=s)}$, satisfying $\varepsilon$-LIP means that an attacker’s posterior distribution of $S$ given $Y$ does not deviate from their prior distribution by more than a factor $e^{\varepsilon}$. The following lemma outlines the relations between LDP, LIP and mutual information (see Figure 2).
Lemma 1.
(See [24]) Let $\mathcal{Q}$ be a sanitisation protocol, and let $\varepsilon \geq 0$.

1. If $\mathcal{Q}$ satisfies $\varepsilon$-LDP, then it satisfies $\varepsilon$-LIP.
2. If $\mathcal{Q}$ satisfies $\varepsilon$-LIP, then it satisfies $2\varepsilon$-LDP, and $I(S;Y) \leq \varepsilon$.
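Definitions 1 and 2 translate directly into small numerical checks. The following sketch (our own illustration, not the paper's Matlab implementation; it assumes strictly positive probabilities to avoid division by zero) computes the exact LDP and LIP levels of a protocol matrix, so the relations of Lemma 1 can be verified numerically:

```python
import numpy as np

def channel_outputs(P_xs, Q):
    """Distributions induced by applying Q to X.

    P_xs : (a, b) array, P_xs[x, s] = P(X=x, S=s)
    Q    : (a, c) array, Q[x, y]   = P(Y=y | X=x)
    Returns P(Y=y | S=s) as a (b, c) array and P(Y=y) as a (c,) array.
    """
    p_s = P_xs.sum(axis=0)                    # marginal of S
    P_x_given_s = P_xs / p_s                  # (a, b): P(X=x | S=s)
    P_y_given_s = P_x_given_s.T @ Q           # (b, c)
    p_y = P_xs.sum(axis=1) @ Q                # (c,)
    return P_y_given_s, p_y

def ldp_level(P_xs, Q):
    """Smallest eps such that Q is eps-LDP w.r.t. S (Definition 1)."""
    P_ys, _ = channel_outputs(P_xs, Q)
    ratios = P_ys[:, None, :] / P_ys[None, :, :]   # P(y|s) / P(y|s')
    return float(np.log(ratios.max()))

def lip_level(P_xs, Q):
    """Smallest eps such that Q is eps-LIP w.r.t. S (Definition 2)."""
    P_ys, p_y = channel_outputs(P_xs, Q)
    return float(np.abs(np.log(P_ys / p_y)).max())
```

On any strictly positive example, the two levels satisfy $\varepsilon_{\mathrm{LIP}} \leq \varepsilon_{\mathrm{LDP}} \leq 2\varepsilon_{\mathrm{LIP}}$, matching Lemma 1.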
Remark 1.
One gets robust equivalents of LDP and LIP by demanding that $\mathcal{Q}$ satisfy $\varepsilon$-LIP ($\varepsilon$-LDP) for a set $\mathcal{P}$ of distributions, instead of only a single distribution [16]. Letting $\mathcal{P}$ range over all possible distributions on $\mathcal{X}\times\mathcal{S}$ yields LIP (LDP) w.r.t. $X$.
In this notation, instead of Problem 1 we consider the following problem:
Problem 2.
Suppose $P_{X,S}$ is known to the aggregator, and let $\varepsilon \geq 0$. Then, find the sanitisation protocol $\mathcal{Q}$ such that $I(X;Y)$ is maximised while $\mathcal{Q}$ satisfies $\varepsilon$-LDP ($\varepsilon$-LIP, respectively) with respect to $S$.
Note that this problem does not depend on the number of users $n$, and as such this approach will find solutions that are scalable in $n$.
III Optimizing for $\varepsilon$-LDP
Our goal is now to find the optimal $\mathcal{Q}$, i.e., the protocol that maximises $I(X;Y)$ while satisfying $\varepsilon$-LDP, for a given $\varepsilon$. We can represent any sanitisation protocol as a matrix $Q \in \mathbb{R}^{\mathcal{X}\times\mathcal{Y}}$, where $Q_{x,y} = P(Y=y \mid X=x)$. Then, $\varepsilon$-LDP is satisfied if and only if

$Q_{x,y} \geq 0$ for all $x, y$;  (3)
$\sum_{y} Q_{x,y} = 1$ for all $x$;  (4)
$\sum_{x} P(X=x\mid S=s)\, Q_{x,y} \leq e^{\varepsilon} \sum_{x} P(X=x\mid S=s')\, Q_{x,y}$ for all $y, s, s'$.  (5)
As such, for a given $\varepsilon$, the set of $\varepsilon$-LDP-satisfying sanitisation protocols can be considered a closed, bounded, convex polytope $\mathcal{C}$ in $\mathbb{R}^{\mathcal{X}\times\mathcal{Y}}$. This fact allows us to efficiently find optimal protocols.
Theorem 2.
Let $\varepsilon \geq 0$, and let $\mathcal{Q}^{*}$ be a $\varepsilon$-LDP protocol that maximises $I(X;Y)$, i.e., the protocol that solves Problem 2 w.r.t. LDP.

1. One can take $|\mathcal{Y}| \leq |\mathcal{X}|$.
2. Let $\mathcal{C}$ be the polytope described above, for this choice of $\mathcal{Y}$. Then the optimal $\mathcal{Q}^{*}$ corresponds to one of the vertices of $\mathcal{C}$.
Proof.
The first result is obtained by generalising the results of [14]: there this is proven for regular $\varepsilon$-LDP (i.e., w.r.t. $X$), but the arguments given in that proof hold just as well in our situation; the only difference is that their polytope is defined by the $\varepsilon$-LDP conditions w.r.t. $X$, but this has no impact on the proof. The second statement follows from the fact that $I(X;Y)$ is a convex function of $Q$; therefore its maximum on a bounded polytope is attained in one of the vertices. ∎
This theorem reduces the search for the optimal LDP protocol to enumerating the set of vertices of $\mathcal{C}$, a convex polytope in $|\mathcal{X}|\,|\mathcal{Y}|$ dimensions.
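As an illustration of Theorem 2, the following toy sketch (our own; a brute-force enumerator that is only feasible for very small alphabets, unlike the pivoting algorithms of [1]) builds the constraint system (3–5), enumerates the vertices of the resulting polytope, and picks the vertex maximising $I(X;Y)$:

```python
import itertools
import numpy as np

def enumerate_vertices(A_ub, b_ub, A_eq, b_eq, tol=1e-9):
    """Brute force: a vertex is a feasible point where d independent
    constraints (equalities plus tight inequalities) intersect."""
    d = A_ub.shape[1]
    n_eq = A_eq.shape[0]
    rows, rhs = np.vstack([A_eq, A_ub]), np.concatenate([b_eq, b_ub])
    verts = []
    for extra in itertools.combinations(range(n_eq, len(rows)), d - n_eq):
        idx = list(range(n_eq)) + list(extra)
        M = rows[idx]
        if np.linalg.matrix_rank(M) < d:
            continue
        x = np.linalg.solve(M, rhs[idx])
        if (A_ub @ x <= b_ub + tol).all() and not any(np.allclose(x, v) for v in verts):
            verts.append(x)
    return verts

def mutual_information(p_x, Q):
    """I(X;Y) in nats for channel Q[x, y] = P(Y=y | X=x)."""
    p_y = p_x @ Q
    joint = p_x[:, None] * Q
    with np.errstate(divide="ignore", invalid="ignore"):
        t = joint * np.log(joint / (p_x[:, None] * p_y[None, :]))
    return float(np.nansum(t))

def optimal_ldp_protocol(P_x_given_s, p_x, eps):
    """Constraints (3)-(5) with |Y| = |X|, maximised over the vertices
    of the polytope C (Theorem 2)."""
    a, ns = P_x_given_s.shape          # P_x_given_s[x, s] = P(X=x | S=s)
    dim = a * a                        # variables Q[x, y], flattened x-major
    A_eq = np.zeros((a, dim))          # (4): rows of Q sum to 1
    for x in range(a):
        A_eq[x, x * a:(x + 1) * a] = 1.0
    ldp_rows = []                      # (5), one row per (y, s, s')
    for y in range(a):
        for s in range(ns):
            for s2 in range(ns):
                if s == s2:
                    continue
                row = np.zeros(dim)
                for x in range(a):
                    row[x * a + y] = P_x_given_s[x, s] - np.exp(eps) * P_x_given_s[x, s2]
                ldp_rows.append(row)
    A_ub = np.vstack([-np.eye(dim), np.array(ldp_rows)])   # (3) and (5)
    b_ub = np.zeros(len(A_ub))
    verts = enumerate_vertices(A_ub, b_ub, A_eq, np.ones(a))
    best = max(verts, key=lambda v: mutual_information(p_x, v.reshape(a, a)))
    return best.reshape(a, a)
```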
One might argue that, since the optimal $\mathcal{Q}$ depends on $P_{X,S}$, the publication of $\mathcal{Q}$ might provide an attacker with information about the distribution of $(X,S)$. However, information on the distribution (as opposed to information on individual users’ data) is not considered sensitive [19]. In fact, the reason why the aggregator sanitises the data is that an attacker is assumed to have knowledge of the correlation between $X$ and $S$: revealing too much information about $X$ would allow such an attacker to infer information about $S$.
IV Optimizing for $\varepsilon$-LIP
If one uses $\varepsilon$-LIP as a privacy metric, one can find the optimal sanitisation protocol in a similar fashion. To do this, we again describe $\mathcal{Q}$ in matrix form, but this time via a different parametrisation. Let $p_{\mathcal{S}}$ be the probability mass function of $S$, and let $P \in \mathbb{R}^{\mathcal{X}\times\mathcal{S}}$ be given by $P_{x,s} = P(S=s \mid X=x)$; we denote its $x$-th row by $P_x$. Then, a pair $(\vec{\lambda}, (v^y)_{y\in\mathcal{Y}})$, with $\lambda_y = P(Y=y)$ and $v^y_x = P(X=x \mid Y=y)$, defines a sanitisation protocol satisfying $\varepsilon$-LIP if and only if

$\lambda_y \geq 0$ for all $y$;  (6)
$\sum_{y} \lambda_y v^y = p_{\mathcal{X}}$;  (7)
$v^y_x \geq 0$ for all $x, y$;  (8)
$\sum_{x} v^y_x = 1$ for all $y$;  (9)
$e^{-\varepsilon} p_s \leq \sum_{x} P_{x,s}\, v^y_x \leq e^{\varepsilon} p_s$ for all $s, y$.  (10)

Note that (10) defines the $\varepsilon$-LIP condition, since for a given $y$ we have $P(S=s \mid Y=y) = \sum_x P_{x,s} v^y_x$. (In)equalities (8–10) can be expressed as saying that for every $y$ one has $v^y \in \Delta$, where $\Delta$ is the convex closed bounded polytope in $\mathbb{R}^{\mathcal{X}}$ given by

$\Delta = \left\{ v \in \mathbb{R}^{\mathcal{X}} : v \geq 0,\ \sum_x v_x = 1,\ e^{-\varepsilon} p_s \leq (P^{\top}v)_s \leq e^{\varepsilon} p_s \text{ for all } s \right\}.$  (11)
As in Theorem 2, we can use this polytope to find optimal protocols:
Theorem 3.
Let $\varepsilon \geq 0$, and let $\Delta$ be the polytope above. Let $\{v^1,\dots,v^N\}$ be its set of vertices. For each vertex $v^t$, let $H(v^t)$ be its entropy, i.e.

$H(v^t) = -\sum_{x\in\mathcal{X}} v^t_x \log v^t_x.$  (12)

Let $\vec{\lambda}^{*}$ be the solution to the optimisation problem

minimise $\sum_{t=1}^{N} \lambda_t H(v^t)$  (13)
subject to $\sum_{t=1}^{N} \lambda_t v^t = p_{\mathcal{X}}$,  (14)
$\lambda_t \geq 0$ for all $t$.  (15)

Then the $\varepsilon$-LIP protocol $\mathcal{Q}$ (with $\mathcal{Y} = \{1,\dots,N\}$) that maximises $I(X;Y)$ is given by

$\mathcal{Q}(y \mid x) = \dfrac{\lambda^{*}_{y} v^{y}_{x}}{p_x},$  (16)
$P(Y=y) = \lambda^{*}_{y},$  (17)
$P(X=x \mid Y=y) = v^{y}_{x},$  (18)

for all $x \in \mathcal{X}$ and all $y \in \mathcal{Y}$. One has $I(X;Y) = H(X) - \sum_{t}\lambda^{*}_{t}H(v^t)$.
Proof.
The proof is obtained by adapting the vertex enumeration method that [23] introduced for perfect privacy to the polytope $\Delta$; see also Section I-A. ∎

Since linear optimisation problems can be solved fast, the optimisation problem again reduces to finding the vertices of a polytope. The advantage of using LIP instead of LDP is that $\Delta$ is a polytope in $|\mathcal{X}|$ dimensions, while the $\mathcal{C}$ of Section III has $|\mathcal{X}|\,|\mathcal{Y}|$ dimensions. The time complexity of vertex enumeration is linear in the number of vertices [1], while the number of vertices can grow exponentially in the dimension of the polyhedron [2]. Together, this means that the dimension plays a huge role in the time complexity, hence we expect finding the optimum under LIP to be significantly faster than under LDP.
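To make Theorem 3 concrete, here is a minimal sketch (our own, limited to binary $X$, where $\Delta$ is a line segment whose two endpoints are its only vertices; it assumes the prior $p_{\mathcal{X}}$ lies strictly inside $\Delta$, which always holds for $\varepsilon > 0$):

```python
import numpy as np

def optimal_lip_binary(P_xs, eps):
    """Optimal eps-LIP protocol for binary X via the vertices of Delta.

    P_xs[x, s] = P(X=x, S=s).  Each output y corresponds to a vertex v
    of Delta (a posterior on X); the weights lambda_y = P(Y=y) must
    average the vertices back to the prior p_X, as in (14).
    """
    p_x = P_xs.sum(axis=1)
    p_s = P_xs.sum(axis=0)
    P_sx = P_xs / p_x[:, None]          # P(S=s | X=x), rows indexed by x
    # Delta = {(v, 1-v)}: intersect the LIP constraints into an interval.
    lo, hi = 0.0, 1.0
    for s in range(len(p_s)):
        a, b = P_sx[0, s], P_sx[1, s]   # posterior of s is b + (a-b)*v
        for bound, upper in [(np.exp(eps) * p_s[s], True),
                             (np.exp(-eps) * p_s[s], False)]:
            if abs(a - b) < 1e-12:
                continue
            t = (bound - b) / (a - b)
            if (a > b) == upper:
                hi = min(hi, t)
            else:
                lo = max(lo, t)
    v1, v2 = lo, hi                     # the two vertices of the segment
    lam1 = (p_x[0] - v2) / (v1 - v2)    # solve lam1*v1 + (1-lam1)*v2 = p_x[0]
    lam = np.array([lam1, 1 - lam1])
    V = np.array([[v1, 1 - v1], [v2, 1 - v2]])   # vertex y -> posterior on X
    Q = (lam[None, :] * V.T) / p_x[:, None]      # (16): Q[x,y] = lam_y v^y_x / p_x
    return Q, lam, V
```

For binary $X$ the programme (13–15) has a unique feasible $\vec\lambda$, so no LP solver is needed; for larger alphabets one would enumerate the vertices of $\Delta$ and pass (13–15) to a linear programming solver.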
V Multiple Attributes
An often-occurring scenario is that a user’s data consists of multiple attributes, i.e., $\mathcal{X} = \mathcal{X}^1 \times \dots \times \mathcal{X}^m$. This can be problematic for our approach for two reasons:
1. Such a large $\mathcal{X}$ can be problematic, since the computing time for optimisation both under LDP and LIP depends heavily on $|\mathcal{X}|$.
2. In practice, an attacker might sometimes utilise side channels to access some subsets of attributes for some users. For these users, a sanitisation protocol can leak more information (w.r.t. the attacker’s updated prior information) than its LDP/LIP parameter would suggest.
To see how the second problem might arise in practice, suppose that $X^1$ is the height of individual $i$, $X^2$ is their weight, and $S$ is whether $i$ is obese or not. Since height is only lightly correlated with obesity, taking $Y = X^1$ would satisfy $\varepsilon$-LIP for some reasonably small $\varepsilon$. However, suppose that an attacker has access to $X^2$ via a side channel. While knowing $i$’s weight gives the attacker some, but not perfect, knowledge about $i$’s obesity, the combination of the weight from the side channel and the height from $Y$ allows the attacker to calculate $i$’s BMI, giving much more information about $i$’s obesity. Therefore, the given protocol gives much less privacy in the presence of this side channel.
To solve the second problem, we introduce a more stringent privacy notion called Side-channel Resistant LIP (SRLIP), which ensures that no matter which attributes an attacker has access to, the protocol still satisfies $\varepsilon$-LIP with respect to the attacker’s new prior distribution. One could similarly introduce SRLDP, and many results will still hold for this privacy measure; nevertheless, since we concluded that LIP is preferable to LDP, we focus on SRLIP. For any subset $A \subseteq \{1,\dots,m\}$, we write $\mathcal{X}^A = \prod_{j\in A}\mathcal{X}^j$, and we write its elements as $x^A$.
Definition 3.
($\varepsilon$-SRLIP). Let $\varepsilon \geq 0$, and let $\mathcal{X} = \prod_{j=1}^{m} \mathcal{X}^j$. We say that $\mathcal{Q}$ satisfies $\varepsilon$-SRLIP if for every $A \subseteq \{1,\dots,m\}$, for every $x^A \in \mathcal{X}^A$, for every $s \in \mathcal{S}$, and for every $y \in \mathcal{Y}$ one has

$e^{-\varepsilon} \leq \dfrac{P(Y=y \mid S=s,\, X^A = x^A)}{P(Y=y \mid X^A = x^A)} \leq e^{\varepsilon}.$  (19)
In terms of Remark 1, $\mathcal{Q}$ satisfies $\varepsilon$-SRLIP if and only if it satisfies $\varepsilon$-LIP w.r.t. the conditional distribution $P_{(X,S)\mid X^A = x^A}$ for all $A$ and all $x^A \in \mathcal{X}^A$. Taking $A = \emptyset$ gives us the regular definition of $\varepsilon$-LIP, proving the following Lemma:
Lemma 2.
Let $\varepsilon \geq 0$. If $\mathcal{Q}$ satisfies $\varepsilon$-SRLIP, then $\mathcal{Q}$ satisfies $\varepsilon$-LIP.
While SRLIP is stricter than LIP itself, it has the advantage that even when an attacker has access to some data of a user, the sanitisation protocol still does not leak an unwanted amount of information beyond the knowledge the attacker has gained via the side channel. Another advantage is that, contrary to LIP itself, SRLIP satisfies an analogue of the concept of privacy budget [9]:
Theorem 4.
Let $\varepsilon_1,\dots,\varepsilon_m \geq 0$, and for every $j \leq m$, let $\mathcal{Q}^j\colon \mathcal{X}^j \to \mathcal{Y}^j$ be a sanitisation protocol. Let $\mathcal{Q} = (\mathcal{Q}^1,\dots,\mathcal{Q}^m)$, applied attribute-wise. Suppose that for every $j$, for every $A \subseteq \{1,\dots,m\}\setminus\{j\}$, and every $x^A \in \mathcal{X}^A$, $\mathcal{Q}^j$ satisfies $\varepsilon_j$-LIP w.r.t. $P_{(X^j,S)\mid X^A = x^A}$. Then $\mathcal{Q}$ satisfies $\left(\sum_j \varepsilon_j\right)$-SRLIP.
The proof is presented in Appendix A. This theorem tells us that to find a $\varepsilon$-SRLIP protocol for $\mathcal{X}$, it suffices to find a sanitisation protocol for each $\mathcal{X}^j$ that is $\varepsilon_j$-LIP w.r.t. a number of prior distributions. Unfortunately, the method of finding an optimal $\varepsilon$-LIP protocol w.r.t. one prior of Theorem 3 does not transfer to the multiple prior setting. This is because this method only finds one weight vector $\vec\lambda$, while by (7) we need a different $\vec\lambda$ for each prior distribution. Therefore, we are forced to adopt an approach similar to the one in Theorem 2. The matrix $Q^j$ (given by $Q^{j}_{x^j,y^j} = P(Y^j = y^j \mid X^j = x^j)$) corresponding to $\mathcal{Q}^j$ satisfies the criteria of Theorem 4 if and only if the following criteria are satisfied:
$Q^{j}_{x^j,y^j} \geq 0$ for all $x^j, y^j$;  (20)
$\sum_{y^j} Q^{j}_{x^j,y^j} = 1$ for all $x^j$;  (21)
$\sum_{x^j} P(x^j \mid s, x^A)\, Q^{j}_{x^j,y^j} \leq e^{\varepsilon_j} \sum_{x^j} P(x^j \mid x^A)\, Q^{j}_{x^j,y^j}$ for all $A, x^A, s, y^j$;  (22)
$\sum_{x^j} P(x^j \mid s, x^A)\, Q^{j}_{x^j,y^j} \geq e^{-\varepsilon_j} \sum_{x^j} P(x^j \mid x^A)\, Q^{j}_{x^j,y^j}$ for all $A, x^A, s, y^j$.  (23)
Similar to Theorem 2, we can find the optimal $Q^j$ satisfying these conditions by finding the vertices of the polytope defined by (20–23). In terms of time complexity, the comparison between finding the optimal $\varepsilon$-LIP protocol via Theorem 3 and finding a $\varepsilon$-SRLIP protocol via Theorem 4 is not straightforward. The complexity of enumerating the vertices of a polytope is $O(ndv)$, where $n$ is the number of inequalities, $d$ is the dimension, and $v$ is the number of vertices [1]. For the $\Delta$ of Theorem 3 we have $d = |\mathcal{X}| = \prod_j |\mathcal{X}^j|$ and $n = 2|\mathcal{S}| + |\mathcal{X}|$. In contrast, the polytope defined by (20–23) has, for each $j$, dimension $d = |\mathcal{X}^j|\,|\mathcal{Y}^j|$, while its number of inequalities grows with the number of pairs $(A, x^A)$. Finding $v$ for both these polytopes is difficult, but in general $v$ can grow exponentially in $d$. Since $|\mathcal{X}|$ grows exponentially in $m$, we expect Theorem 4 to be faster when the $|\mathcal{X}^j|$ are small compared to $|\mathcal{X}|$, i.e., when $m$ is large. We will investigate this experimentally in the next section.
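As a sanity check of Definition 3, the following sketch (our own; brute force over all side-channel sets $A$, written out for $m = 2$ attributes) computes the exact SRLIP level of a protocol, i.e. the smallest $\varepsilon$ for which (19) holds:

```python
import itertools
import numpy as np

def srlip_level(P, Q):
    """Smallest eps such that Q satisfies eps-SRLIP (Definition 3), m = 2.

    P : (n1, n2, ns) array, P[x1, x2, s] = P(X1=x1, X2=x2, S=s)
    Q : (n1, n2, ny) array, Q[x1, x2, y] = P(Y=y | X1=x1, X2=x2)
    """
    n1, n2, ns = P.shape
    worst = 0.0
    for A in [(), (0,), (1,), (0, 1)]:
        free = [range(n1) if 0 in A else [None],
                range(n2) if 1 in A else [None]]
        for x1, x2 in itertools.product(*free):
            # restrict to the side-channel observation X^A = x^A
            sl = (slice(None) if x1 is None else x1,
                  slice(None) if x2 is None else x2)
            sub = np.asarray(P[sl]).reshape(-1, ns)       # joint of remaining X, S
            Qsub = np.asarray(Q[sl]).reshape(-1, Q.shape[-1])
            p_y_s = (sub / sub.sum(axis=0)).T @ Qsub      # P(Y=y | S=s, X^A=x^A)
            p_y = (sub.sum(axis=1) / sub.sum()) @ Qsub    # P(Y=y | X^A=x^A)
            worst = max(worst, float(np.abs(np.log(p_y_s / p_y)).max()))
    return worst
```

Note that the case $A = (0, 1)$ conditions on all of $X$, where the ratio in (19) is always one, since $Y$ is independent of $S$ given $X$.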
VI Explicit protocols
The methods of Sections III and IV allow us to find the optimal LDP and LIP protocols. Their complexity depends heavily on $|\mathcal{X}|$ and $|\mathcal{S}|$, and can become computationally infeasible for large alphabets. For such datasets, one has to rely on predetermined privacy algorithms. We consider two approaches: as a benchmark, we discuss how ‘standard’ LDP protocols can be applied to the Privacy Funnel situation, and we introduce a new method, Conditional Reporting, that is meant to address the shortcomings of standard LDP protocols. As in the previous section, we focus on LIP, but much of the discussion carries over to LDP as well.
VI-A Standard LDP protocols
In the literature, there are many examples of protocols $\mathcal{Q}_{\varepsilon'}$, depending on a privacy parameter $\varepsilon'$, whose output satisfies $\varepsilon'$-LDP with respect to $X$; for an overview see [28]. Such a protocol automatically satisfies $\varepsilon'$-LDP, hence certainly $\varepsilon'$-LIP, with respect to $S$. However, because $S$ is only indirectly correlated with the output, such a protocol’s actual LIP value may be lower. We can find the privacy of such a protocol by computing its exact LIP level

$\hat\varepsilon(\varepsilon') = \max_{s\in\mathcal{S},\, y\in\mathcal{Y}} \left|\log\dfrac{P(Y=y \mid S=s)}{P(Y=y)}\right|;$  (24)

then $\mathcal{Q}_{\varepsilon'}$ satisfies $\varepsilon$-LIP if and only if $\hat\varepsilon(\varepsilon') \leq \varepsilon$.
For this paper we are mainly interested in two protocols. The first one is Generalised Rapid Response (GRR) [27]. We are interested in GRR because for large enough $\varepsilon'$ it maximises $I(X;Y)$ [14]. Given $\varepsilon' \geq 0$, GRR is a privacy protocol $\mathcal{Q}\colon\mathcal{X}\to\mathcal{X}$ given by

$\mathrm{GRR}_{\varepsilon'}(y \mid x) = \begin{cases} \dfrac{e^{\varepsilon'}}{e^{\varepsilon'}+|\mathcal{X}|-1}, & \text{if } y = x,\\ \dfrac{1}{e^{\varepsilon'}+|\mathcal{X}|-1}, & \text{if } y \neq x.\end{cases}$  (25)

A direct calculation then shows that

$\dfrac{P(Y=y \mid S=s)}{P(Y=y)} = \dfrac{1+(e^{\varepsilon'}-1)\,P(X=y \mid S=s)}{1+(e^{\varepsilon'}-1)\,p_y}.$  (26)
If we want GRR to satisfy $\varepsilon$-LIP, we then need to solve $\hat\varepsilon(\varepsilon') = \varepsilon$ for $\varepsilon'$, where $\hat\varepsilon(\varepsilon')$ is the LIP level obtained by maximising the absolute logarithm of (26) over $y$ and $s$. Since $\hat\varepsilon$ is increasing in $\varepsilon'$, this can be done fast computationally.
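This root-finding step can be sketched as follows (our own illustration; the LIP level follows (26), and the bisection relies on it being increasing in $\varepsilon'$):

```python
import numpy as np

def grr_lip_level(P_xs, eps_prime):
    """LIP level of GRR_{eps'} via (26): worst |log P(Y=y|S=s)/P(Y=y)|.

    P_xs[x, s] = P(X=x, S=s); for GRR, Y takes values in the same set as X.
    """
    p_x_given_s = P_xs / P_xs.sum(axis=0)      # P(X=y | S=s), rows indexed by y
    p_y = P_xs.sum(axis=1)
    e = np.exp(eps_prime)
    num = 1 + (e - 1) * p_x_given_s            # proportional to P(Y=y | S=s)
    den = 1 + (e - 1) * p_y[:, None]           # proportional to P(Y=y)
    return float(np.abs(np.log(num / den)).max())

def grr_parameter_for_lip(P_xs, eps, tol=1e-10):
    """Find eps' with grr_lip_level(P_xs, eps') == eps by bisection."""
    lo, hi = 0.0, 1.0
    while grr_lip_level(P_xs, hi) < eps:
        hi *= 2.0
        if hi > 1e6:        # eps unreachable: X and S are nearly independent
            return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if grr_lip_level(P_xs, mid) < eps else (lo, mid)
    return (lo + hi) / 2
```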
The second protocol that is relevant to this paper is Optimised Unary Encoding (OUE) [26]. This protocol is notable for being one of the protocols with the lowest known variance in frequency estimation [26]. For a choice of $\varepsilon' \geq 0$ as privacy parameter, and an input $x \in \mathcal{X}$, the output of $\mathrm{OUE}_{\varepsilon'}(x)$ is a vector $(B_1,\dots,B_{|\mathcal{X}|})$ of independent Bernoulli variables, satisfying

$P(B_j = 1) = \begin{cases} \tfrac12, & \text{if } j = x,\\ \dfrac{1}{e^{\varepsilon'}+1}, & \text{if } j \neq x. \end{cases}$  (27)

In other words, if we identify an output with a subset $B \subseteq \mathcal{X}$ (so $|B|$ denotes its cardinality), we get

$P(Y = B \mid X = x) = \dfrac{e^{\varepsilon'(|\mathcal{X}|-|B|-\mathbb{1}_{x\notin B})}}{2\,(e^{\varepsilon'}+1)^{|\mathcal{X}|-1}}.$  (28)

It follows that

$\dfrac{P(Y=B \mid S=s)}{P(Y=B)} = \dfrac{1+(e^{\varepsilon'}-1)\,P(X\in B \mid S=s)}{1+(e^{\varepsilon'}-1)\,P(X\in B)}.$  (29)
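A sketch of the OUE output distribution (27–28) and the resulting LIP level via (24) (our own code; outputs are enumerated as all $2^{|\mathcal{X}|}$ subsets, so this is only feasible for small $|\mathcal{X}|$):

```python
import itertools
import numpy as np

def oue_output_distribution(a, eps_prime):
    """P(Y=B | X=x) for OUE with a = |X| inputs; each output is a subset
    of X, encoded as a binary tuple."""
    q = 1.0 / (np.exp(eps_prime) + 1)        # P(B_j = 1) for j != x, per (27)
    subsets = list(itertools.product([0, 1], repeat=a))
    dist = np.zeros((a, len(subsets)))
    for x in range(a):
        for k, B in enumerate(subsets):
            p = 0.5                          # the own bit is Bernoulli(1/2)
            for j in range(a):
                if j != x:
                    p *= q if B[j] == 1 else 1 - q
            dist[x, k] = p
    return subsets, dist

def oue_lip_level(P_xs, eps_prime):
    """Exact LIP level (24) of OUE for the joint distribution P_xs."""
    _, dist = oue_output_distribution(P_xs.shape[0], eps_prime)
    p_y_s = (P_xs / P_xs.sum(axis=0)).T @ dist   # P(Y=B | S=s)
    p_y = P_xs.sum(axis=1) @ dist                # P(Y=B)
    return float(np.abs(np.log(p_y_s / p_y)).max())
```

Consistent with (29), the computed LIP level never exceeds $\varepsilon'$ itself.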
VI-B Conditional Reporting
In general, a generic LDP protocol will not be ideal for our situation, since such protocols are designed to obscure all information about $X$, rather than just the part that holds information about $S$. To address this shortcoming, we introduce the Conditional Reporting (CR) protocol in Algorithm 1. This mechanism needs both $X$ and $S$ as input; hence it differs from the other protocols discussed in this paper, which only take $X$ as input. The value of $S$ is masked by Randomised Response, yielding a value $\hat{S}$. If the output $\hat{S}$ equals $S$, we return the true value of $X$. If not, we output a random value, whose probability distribution is given by $P_{X\mid S=\hat{S}}$.
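Since Algorithm 1 is not reproduced here, the following sketch reflects our reading of the description above (in particular, the replacement value is assumed to be drawn from $P_{X \mid S=\hat{S}}$):

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_reporting(x, s, P_xs, eps_prime, rng=rng):
    """One draw of CR_{eps'}(x, s).

    P_xs[x, s] = P(X=x, S=s).  Randomised response masks s; the true x
    is reported only if the masked value equals s, otherwise a fresh
    value is drawn from P(X | S = s_hat).
    """
    b = P_xs.shape[1]
    e = np.exp(eps_prime)
    # Randomised response on S: keep s w.p. e/(e+b-1), else any other s'.
    probs = np.full(b, 1.0 / (e + b - 1))
    probs[s] = e / (e + b - 1)
    s_hat = rng.choice(b, p=probs)
    if s_hat == s:
        return x
    p_x_given_shat = P_xs[:, s_hat] / P_xs[:, s_hat].sum()
    return rng.choice(P_xs.shape[0], p=p_x_given_shat)
```

For $\varepsilon' \to \infty$ the randomised response almost always keeps $s$, so CR reduces to releasing $X$ unchanged.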
$\mathrm{CR}_{\varepsilon'}$ certainly satisfies $\varepsilon'$-LDP, hence $\varepsilon'$-LIP, w.r.t. $S$. However, if $X$ and $S$ are not perfectly correlated, we can get better privacy, as outlined by the proposition below.
Proposition 1.
Given a probability distribution $P_{X,S}$ and a $\varepsilon' \geq 0$, let $Y = \mathrm{CR}_{\varepsilon'}(X,S)$ and define

$\varepsilon'' = \max_{s\in\mathcal{S},\,y}\left|\log\dfrac{P(Y=y \mid S=s)}{P(Y=y)}\right|.$  (30)

Then $\mathrm{CR}_{\varepsilon'}$ satisfies $\varepsilon$-LIP if and only if $\varepsilon \geq \varepsilon''$.
The proof is presented in Appendix A. One can use this proposition to find the $\varepsilon'$ needed to have $\mathrm{CR}_{\varepsilon'}$ satisfy $\varepsilon$-LIP, by solving $\varepsilon'' = \varepsilon$ for $\varepsilon'$. At the very least one has the following upper bound:
Proposition 2.
The protocol $\mathrm{CR}_{\varepsilon'}$ satisfies $\varepsilon'$-LDP. In particular, it satisfies $\varepsilon'$-LIP, and $\varepsilon'' \leq \varepsilon'$.
Proof.
For all $s \in \mathcal{S}$ and all $y$ we have, following equation (46) in Appendix A, that

$P(Y=y \mid S=s) = \dfrac{e^{\varepsilon'}\,P(X=y \mid S=s) + \sum_{s'\neq s} P(X=y \mid S=s')}{e^{\varepsilon'}+|\mathcal{S}|-1}.$  (31)

It follows that, for all $s, \tilde{s} \in \mathcal{S}$,

$\dfrac{P(Y=y \mid S=s)}{P(Y=y \mid S=\tilde{s})} \leq \dfrac{e^{\varepsilon'}\sum_{s'} P(X=y \mid S=s')}{\sum_{s'} P(X=y \mid S=s')} = e^{\varepsilon'},$  (32–34)

so $\mathrm{CR}_{\varepsilon'}$ satisfies $\varepsilon'$-LDP w.r.t. $S$; the remaining claims follow from Lemma 1 and Proposition 1. ∎
VII Experiments
We test the feasibility of the different methods by performing small-scale experiments on synthetic data and real-world data. All experiments are implemented in Matlab and conducted on a PC with Intel Core i7-7700HQ 2.8GHz and 32GB memory.
VII-A Synthetic data: LDP vs LIP
We compare the computing time for finding optimal $\varepsilon$-LDP and $\varepsilon$-LIP protocols for a range of $\varepsilon$, for 10 random distributions $P_{X,S}$, obtained by generating each entry uniformly from $[0,1]$ and then normalising. The results are in Figure 3. As one can see, Theorem 3 gives significantly faster results than Theorem 2; the average computing time for Theorem 2 is 133s, while for Theorem 3 this is 0.0206s. With regard to the utility $I(X;Y)$: since $\varepsilon$-LDP implies $\varepsilon$-LIP, the optimal $\varepsilon$-LIP protocol will have at least the utility of the optimal $\varepsilon$-LDP protocol. However, as can be seen from the figure, the difference in utility is relatively low.
Note that for bigger $\varepsilon$, both the difference in computing time and the difference in $I(X;Y)$ between LDP and LIP become smaller. This is because of the probabilistic relation between $X$ and $S$: for $\varepsilon$ large enough, any sanitisation protocol satisfies $\varepsilon$-LIP and $\varepsilon$-LDP. This means that as $\varepsilon$ grows, the resulting polytopes will have fewer defining inequalities, hence they will have fewer vertices. This results in lower computation times, which affects LDP more than LIP. At the same time, the fact that every protocol is both $\varepsilon$-LIP and $\varepsilon$-LDP will result in the same optimal utility.
In Figure 4, we compare optimal $2\varepsilon$-LDP protocols to optimal $\varepsilon$-LIP protocols. Again, LIP is significantly faster than LDP. Since $\varepsilon$-LIP implies $2\varepsilon$-LDP, the optimal $2\varepsilon$-LDP protocol has higher utility; again the difference is low.
VII-B Synthetic data: LIP vs SRLIP
We also perform similar comparisons for multiple attributes, comparing the methods of Theorems 3 and 4. The results are presented in Figure 5. As one can see, Theorem 4 is significantly slower than Theorem 3. There is a sizable difference in utility, caused on the one hand by the fact that $\varepsilon$-SRLIP is a stricter privacy requirement than $\varepsilon$-LIP, and on the other hand by the fact that Theorem 4 does not give us the optimal $\varepsilon$-SRLIP protocol.
VII-C Adult dataset
We also test the utility of Conditional Reporting (CR), both on real-world and synthetic data. We consider the well-known Adult dataset [7], which contains demographic data from the 1994 US census. For our tests, we take education and occupation as the data $X$, and income as the secret $S$. Based on our findings in the previous sections, we take LIP as a privacy measure, and $I(X;Y)$ as a utility measure. We compare CR on the one hand with the optimal method (Opt-LIP) found in Section IV, and on the other hand with the established LDP protocols GRR and OUE. The results are shown in Figure 6. For education, the mutual information for OUE was infeasible to compute. Similarly, for occupation, some cases of Opt-LIP failed to compute within a reasonable timeframe. Nevertheless, we can conclude that GRR and CR both perform somewhere between Opt-LIP and OUE. As the LIP value $\varepsilon$ grows larger, GRR and CR grow close to Opt-LIP. At the same time, OUE falls off for large $\varepsilon$, having $\tfrac12 H(X)$ as its limit. This is because OUE only has probability $\tfrac12$ of transmitting the true $x$ (as an element of the set $B$). The difference between GRR and CR is less clear, and it appears to depend on the joint distribution $P_{X,S}$ which protocol gives the best utility.
VII-D Synthetic data: GRR vs CR
To investigate the difference between GRR and CR, we apply both methods to synthetic data. We disregard OUE as it performs worse than the other two protocols, especially in the low privacy regime. For a fixed choice of $|\mathcal{X}|$ and $|\mathcal{S}|$, we draw a number of probability distributions from the Jeffreys prior on the probability simplex over $\mathcal{X}\times\mathcal{S}$, i.e. the symmetric Dirichlet distribution with parameter $\tfrac12$. We fix a set of LIP values $\varepsilon$, and for each of these and each probability distribution, we solve equations (26) and (30), setting the resulting LIP level equal to $\varepsilon$ and solving for the parameters $\varepsilon'$ of GRR and CR, respectively. We then calculate the mutual information $I(X;Y)$, which we normalise by dividing by $H(X)$. The resulting averages and standard deviations are displayed in Figure 7. On the whole, we see that the larger $|\mathcal{X}|$ is compared to $|\mathcal{S}|$, the more utility CR provides compared to GRR. However, this does not tell the whole story, as the difference between datasets has more impact on the utility than the difference between methods.
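The sampling-and-normalisation step of this experiment can be sketched as follows (our own code; `sample_jeffreys` and the normalisation by $H(X)$ follow the description above):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_jeffreys(a, b, rng=rng):
    """Draw P_{X,S} from the Jeffreys prior on the (a*b)-simplex,
    i.e. the symmetric Dirichlet distribution with parameter 1/2."""
    return rng.dirichlet(np.full(a * b, 0.5)).reshape(a, b)

def normalised_mi(P_xs, Q):
    """I(X;Y) / H(X) for a protocol matrix Q[x, y] = P(Y=y | X=x)."""
    p_x = P_xs.sum(axis=1)
    p_y = p_x @ Q
    joint = p_x[:, None] * Q
    with np.errstate(divide="ignore", invalid="ignore"):
        mi = np.nansum(joint * np.log(joint / (p_x[:, None] * p_y[None, :])))
        h_x = -np.sum(p_x * np.log(p_x))
    return float(mi / h_x)
```

The normalised utility is $1$ for the identity protocol (release $X$ unchanged) and $0$ for any protocol whose output is independent of $X$.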
VII-E The GRR and CR parameter $\varepsilon'$
To investigate what property of the probability distribution causes CR to outperform GRR, we consider the parameters $\varepsilon'_{\mathrm{GRR}}$ and $\varepsilon'_{\mathrm{CR}}$ that govern the privacy protocols GRR and CR. Both of these have the property that the higher their value, the less ‘random’ the protocols are, resulting in a better utility. Since these are found from $\varepsilon$ through different equations, the difference in utility of GRR and CR for different probability distributions may be explained by a difference in $\varepsilon'$. We test this assertion for 100 randomly generated distributions in Figure 8. As can be seen, the difference in mutual information can for a large part be explained by the difference $\varepsilon'_{\mathrm{GRR}} - \varepsilon'_{\mathrm{CR}}$. In Figure 9, we plot the relation between $\varepsilon'$ and the LIP value $\varepsilon$ for the experiments in 6(b) and 6(d). The fact that $\varepsilon'_{\mathrm{GRR}} \geq \varepsilon'_{\mathrm{CR}}$ in 9(a) corresponds to the fact that GRR outperforms CR in 6(b), and the opposite relation holds between 9(b) and 6(d).
Unfortunately, we were not able to relate the difference in the parameter $\varepsilon'$ to other properties of the distribution. Without presenting details, we mention that several natural properties of the distribution do not appear to have an impact on the difference in utility between GRR and CR.
VIII Conclusions and future work
Local data sanitisation protocols have the advantage of being scalable to large numbers of users. Furthermore, the advantage of using differential privacy-like privacy metrics is that they provide worst-case guarantees, ensuring that the privacy of every user is sufficiently protected. For both $\varepsilon$-LDP and $\varepsilon$-LIP we have derived methods to find optimal sanitisation protocols. Within this setting, we have observed that $\varepsilon$-LIP has two main advantages over $\varepsilon$-LDP. First, it fits better within the privacy funnel setting, where the distribution $P_{X,S}$ is (at least approximately) known to the aggregator. Second, finding the optimal protocol is significantly faster than under LDP, especially for small $\varepsilon$. If one nevertheless prefers $\varepsilon$-LDP as a privacy metric, then it is still worthwhile to find the optimal LIP protocol, as this can be found significantly faster, at a low utility penalty.
In the multiple attributes setting, we have shown that $\varepsilon$-SRLIP provides additional privacy guarantees compared to $\varepsilon$-LIP, since without this requirement a protocol can lose all its privacy protection in the presence of side channels. Unfortunately, experiments show that we pay for this both in computation time and in utility.
With regard to the specific protocols, we have found that the newly introduced protocol, CR, generally outperforms OUE, especially for high values of the LIP parameter $\varepsilon$. It behaves roughly similarly to GRR, and which of these two protocols performs best depends on properties of the joint distribution $P_{X,S}$. In particular, it largely depends on which of the two protocols has the highest value of their governing parameter $\varepsilon'$. Also, we have seen that CR performs better on average if $|\mathcal{X}|$ is large compared to $|\mathcal{S}|$.
For further research, a number of important avenues remain to be explored. First, the aggregator’s knowledge about $P_{X,S}$ may not be perfect, as they may only learn about it through observing the data. Incorporating this uncertainty leads to robust optimisation [3], which would give stronger privacy guarantees.
Second, it might be possible to improve the method of obtaining $\varepsilon$-SRLIP protocols via Theorem 4. Examining its proof shows that lower values of the $\varepsilon_j$ may suffice to still ensure $\varepsilon$-SRLIP. Furthermore, the optimal choice of the $\varepsilon_j$ such that $\sum_j \varepsilon_j = \varepsilon$ might not be $\varepsilon_j = \varepsilon/m$. However, it is computationally prohibitive to perform the vertex enumeration for many different choices of $(\varepsilon_1,\dots,\varepsilon_m)$, and as such a new theoretical approach is needed to determine the optimal $(\varepsilon_1,\dots,\varepsilon_m)$ from $P_{X,S}$ and $\varepsilon$.
Third, it would be interesting to see if there are other ways to close the gap between the theoretically optimal protocol, which may be hard to compute in practice, and general LDP protocols, which do not distinguish between sensitive and non-sensitive information. This is relevant because CR needs both $X$ and $S$ as input, and there may be situations where access to $S$ is not available.
Finally, although CR outperforms GRR and OUE for some datasets, it does not do so consistently. More research into the properties of the distributions on which CR fails to provide a significant advantage might lead to improved privacy protocols.
Acknowledgements
This work was supported by NWO grant 628.001.026 (Dutch Research Council, the Hague, the Netherlands).
References
- [1] David Avis and Komei Fukuda “A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra” In Discrete & Computational Geometry 8.3 Springer, 1992, pp. 295–313
- [2] Imre Bárány and Attila Pór “On 0-1 polytopes with many facets” In Advances in Mathematics 161.2 Academic Press, 2001, pp. 209–228
- [3] Dimitris Bertsimas, Vishal Gupta and Nathan Kallus “Data-driven robust optimization” In Mathematical Programming 167.2 Springer, 2018, pp. 235–292
- [4] Flavio du Pin Calmon et al. “Principal inertia components and applications” In IEEE Transactions on Information Theory 63.8 IEEE, 2017, pp. 5011–5038
- [5] Paul Cuff and Lanqing Yu “Differential privacy as a mutual information constraint” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 43–54
- [6] Ni Ding and Parastoo Sadeghi “A Submodularity-based Agglomerative Clustering Algorithm for the Privacy Funnel” Preprint In arXiv:1901.06629, 2019
- [7] Dheeru Dua and Casey Graff “UCI Machine Learning Repository”, 2017 URL: http://archive.ics.uci.edu/ml/datasets/Adult
- [8] Cynthia Dwork and Guy N Rothblum “Concentrated differential privacy” Preprint In arXiv:1603.01887, 2016
- [9] Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam Smith “Calibrating noise to sensitivity in private data analysis” In Theory of cryptography conference, 2006, pp. 265–284 Springer
- [10] Cynthia Dwork et al. “Our data, ourselves: Privacy via distributed noise generation” In Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2006, pp. 486–503 Springer
- [11] Úlfar Erlingsson et al. “Encode, shuffle, analyze privacy revisited: formalizations and empirical evaluation” Preprint In arXiv:2001.03618, 2020
- [12] Naoise Holohan, Douglas J Leith and Oliver Mason “Extreme points of the local differential privacy polytope” In Linear Algebra and its Applications 534 Elsevier, 2017, pp. 78–96
- [13] Bo Jiang, Ming Li and Ravi Tandon “Local Information Privacy with Bounded Prior” In ICC 2019-2019 IEEE International Conference on Communications (ICC), 2019, pp. 1–7 IEEE
- [14] Peter Kairouz, Sewoong Oh and Pramod Viswanath “Extremal mechanisms for local differential privacy” In Advances in neural information processing systems, 2014, pp. 2879–2887
- [15] Shiva Prasad Kasiviswanathan et al. “What can we learn privately?” In SIAM Journal on Computing 40.3 SIAM, 2011, pp. 793–826
- [16] Daniel Kifer and Ashwin Machanavajjhala “Pufferfish: A framework for mathematical privacy definitions” In ACM Transactions on Database Systems (TODS) 39.1 ACM New York, NY, USA, 2014, pp. 1–36
- [17] SY Kung “A compressive privacy approach to generalized information bottleneck and privacy funnel problems” In Journal of the Franklin Institute 355.4 Elsevier, 2018, pp. 1846–1872
- [18] Milan Lopuhaä-Zwakenberg “The Privacy Funnel from the Viewpoint of Local Differential Privacy” In Fourteenth International Conference on the Digital Society, 2020, pp. 19–24
- [19] Milan Lopuhaä-Zwakenberg, Boris Škorić and Ninghui Li “Information-theoretic metrics for Local Differential Privacy protocols” Preprint In arXiv:1910.07826, 2019
- [20] Ali Makhdoumi, Salman Salamatian, Nadia Fawaz and Muriel Médard “From the information bottleneck to the privacy funnel” In 2014 IEEE Information Theory Workshop (ITW 2014), 2014, pp. 501–505 IEEE
- [21] Ilya Mironov “Rényi differential privacy” In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), 2017, pp. 263–275 IEEE
- [22] Fabian Prasser, Florian Kohlmayer, Ronald Lautenschlaeger and Klaus A Kuhn “ARX: a comprehensive tool for anonymizing biomedical data” In AMIA Annual Symposium Proceedings 2014, 2014, pp. 984 American Medical Informatics Association
- [23] Borzoo Rassouli and Deniz Gunduz “On perfect privacy” In 2018 IEEE International Symposium on Information Theory (ISIT), 2018, pp. 2551–2555 IEEE
- [24] Salman Salamatian et al. “Privacy-Utility Tradeoff and Privacy Funnel” unpublished preprint, 2020 URL: http://www.mit.edu/~salmansa/files/privacy_TIFS.pdf
- [25] Naftali Tishby, Fernando C Pereira and William Bialek “The information bottleneck method” Preprint In arXiv:physics/0004057, 2000
- [26] Tianhao Wang, Jeremiah Blocki, Ninghui Li and Somesh Jha “Locally differentially private protocols for frequency estimation” In 26th USENIX Security Symposium (USENIX Security 17), 2017, pp. 729–745
- [27] Stanley L Warner “Randomized response: A survey technique for eliminating evasive answer bias” In Journal of the American Statistical Association 60.309 Taylor & Francis, 1965, pp. 63–69
- [28] Mengmeng Yang et al. “Local Differential Privacy and Its Applications: A Comprehensive Survey” Preprint In arXiv:2008.03686, 2020
Appendix A Proofs
Proof of Theorem 4.
[Derivation via the chain of (in)equalities (35)–(42).] The remaining statement is proven analogously. ∎
Proof of Proposition 1.
[Computation (43)–(53), in which δ denotes the Kronecker delta.] From (53) it follows that the protocol satisfies ε-LIP if and only if the stated bound holds. ∎