Privately Learning Markov Random Fields
Abstract
We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy. Our learning goals include both structure learning, where we try to estimate the underlying graph structure of the model, as well as the harder goal of parameter learning, in which we additionally estimate the parameter on each edge. We provide algorithms and lower bounds for both problems under a variety of privacy constraints – namely pure, concentrated, and approximate differential privacy. While non-privately, both learning goals enjoy roughly the same complexity, we show that this is not the case under differential privacy. In particular, only structure learning under approximate differential privacy maintains the non-private logarithmic dependence on the dimensionality of the data, while a change in either the learning goal or the privacy notion would necessitate a polynomial dependence. As a result, we show that the privacy constraint imposes a strong separation between these two learning problems in the high-dimensional data regime.
1 Introduction
In this chapter, we continue to study the problem of private distribution estimation. However, we focus on a richer class of distributions: Markov Random Fields, which are distributions defined with respect to an underlying graph.
Graphical models are a common structure used to model high-dimensional data, which find a myriad of applications in diverse research disciplines, including probability theory, Markov Chain Monte Carlo, computer vision, theoretical computer science, social network analysis, game theory, and computational biology [LevinPW09, Chatterjee05, Felsenstein04, DaskalakisMR11, GemanG86, Ellison93, MontanariS10]. While statistical tasks involving general distributions over variables often run into the curse of dimensionality (i.e., an exponential sample complexity in ), Markov Random Fields (MRFs) are a particular family of undirected graphical models which are parameterized by the “order” of their interactions. Restricting the order of interactions allows us to capture most distributions which may naturally arise, and also avoids this severe dependence on the dimension (i.e., we often pay an exponential dependence on instead of ). An MRF is defined as follows; see Section 2 for more precise definitions and the notation we will use in this chapter.
Definition 1.1.
Let , be a graph on nodes, and be the set of cliques of size at most in . A Markov Random Field with alphabet size and -order interactions is a distribution over such that
where depends only on variables in .
The case when corresponds to the prototypical example of an MRF, the Ising model [Ising25] (Definition 2.1). More generally, if , we call the model pairwise (Definition 2.2), and if but is unrestricted, we call the model a binary MRF (Definition 2.4). In this chapter, we mainly look at these two special cases of MRFs.
Given the wide applicability of these graphical models, there has been a great deal of work on the problem of graphical model estimation [RavikumarWL10, SanthanamW12, Bresler15, VuffrayMLC16, KlivansM17, HamiltonKM17, RigolletH17, LokhovVMC18, WuSD19]. That is, given a dataset generated from a graphical model, can we infer properties of the underlying distribution? Most of the attention has focused on two learning goals: structure learning, where the goal is to recover the underlying graph, and parameter learning, where the goal is to additionally recover the weight on each edge.
It is clear that structure learning is easier than parameter learning. Nonetheless, the sample complexity of both learning goals is known to be roughly equivalent. That is, both can be performed using a number of samples which is only logarithmic in the dimension (assuming a model of bounded “width,” a common parameterization of the problem which roughly corresponds to the graph having bounded degree; see Section 2 for more details), thus facilitating estimation in very high-dimensional settings.
Our goal is to design algorithms which guarantee both:
• Accuracy: With probability greater than , the algorithm learns the underlying graphical model;
• Privacy: The algorithm satisfies differential privacy, even when the dataset is not drawn from a graphical model.
Thematically, we investigate the following question: how much additional data is needed to learn Markov Random Fields under the constraint of differential privacy? As mentioned before, absent privacy constraints, the sample complexity is logarithmic in . Can we guarantee privacy with comparable amounts of data? Or if more data is needed, how much more?
1.1 Results and Techniques
We proceed to describe our results on privately learning Markov Random Fields. In this section, we will assume familiarity with some of the most common notions of differential privacy: pure -differential privacy, -zero-concentrated differential privacy, and approximate -differential privacy. In particular, one should know that these are in (strictly) decreasing order of strength (i.e., an algorithm which satisfies pure DP gives more privacy to the dataset than one which satisfies concentrated DP); formal definitions appear in Section 2. Furthermore, in order to be precise, some of our theorem statements will use notation which is defined later (Section 2) – these may be skipped on a first reading, as our prose will not require this knowledge.
Upper Bounds.
Our first upper bounds are for parameter learning. First, we have the following theorem, which gives an upper bound for parameter learning pairwise graphical models under concentrated differential privacy, showing that this learning goal can be achieved with samples. In particular, this includes the special case of the Ising model, which corresponds to an alphabet size . Note that this implies the same result if one relaxes the learning goal to structure learning, or the privacy notion to approximate DP, as these modifications only make the problem easier. Further details are given in Section 3.3.
Theorem 1.2.
There exists an efficient -zCDP algorithm which learns the parameters of a pairwise graphical model to accuracy with probability at least , which requires a sample complexity of
This result can be seen as a private adaptation of the elegant work of [WuSD19] (which in turn builds on the structural results of [KlivansM17]). Wu, Sanghavi, and Dimakis [WuSD19] show that -constrained logistic regression suffices to learn the parameters of all pairwise graphical models. We first develop a private analog of this method, based on the private Frank-Wolfe method of Talwar, Thakurta, and Zhang [TalwarTZ14, TalwarTZ15], which is of independent interest. This method is studied in Section 3.1.
Theorem 1.3.
If we consider the problem of private sparse logistic regression, there exists an efficient -zCDP algorithm that produces a parameter vector , such that with probability at least , the empirical risk
We note that Theorem 1.3 avoids a polynomial dependence on the dimension in favor of a polynomial dependence on the “sparsity” parameter . The greater dependence on which arises in Theorem 1.2 is from applying Theorem 1.3 and then using composition properties of concentrated DP.
We go on to generalize the results of [WuSD19], showing that -constrained logistic regression can also learn the parameters of binary -wise MRFs. This result is novel even in the non-private setting. Further details are presented in Section 4.
The following theorem shows that we can learn the parameters of binary -wise MRFs with samples.
Theorem 1.4.
Let be an unknown binary -wise MRF with associated polynomial . Then there exists an -zCDP algorithm which, with probability at least , learns the maximal monomials of to accuracy , given i.i.d. samples , where
To obtain the rate above, our algorithm uses the Private Multiplicative Weights (PMW) method by [HardtR10] to estimate all parity queries of all orders no more than . The PMW method runs in time exponential in , since it maintains a distribution over the data domain. We can also obtain an oracle-efficient algorithm that runs in polynomial time when given access to an empirical risk minimization oracle over the class of parities. By replacing PMW with such an oracle-efficient algorithm sepFEM in [VietriTBSW20], we obtain a slightly worse sample complexity
For the special case of structure learning under approximate differential privacy, we provide a significantly better algorithm. In particular, we can achieve an sample complexity, which improves exponentially on the above algorithm’s sample complexity of . The following is a representative theorem statement for pairwise graphical models, though we derive similar statements for binary MRFs of higher order.
Theorem 1.5.
There exists an efficient -differentially private algorithm which, with probability at least , learns the structure of a pairwise graphical model, which requires a sample complexity of
This result can be derived using stability properties of non-private algorithms. In particular, in the non-private setting, algorithms for this problem recover the entire graph exactly with constant probability. This allows us to derive private algorithms at a multiplicative cost of samples, using either the propose-test-release framework [DworkL09] or stability-based histograms [KorolovaKMN09, BunNSV15]. Further details are given in Section 6.
Lower Bounds.
We note the significant gap between the aforementioned upper bounds: in particular, our more generally applicable upper bound (Theorem 1.2) has a dependence on the dimension, whereas the best known lower bound is [SanthanamW12]. However, we show that our upper bound is tight. That is, even if we relax the privacy notion to approximate differential privacy, or relax the learning goal to structure learning, the sample complexity is still . Perhaps surprisingly, if we perform both relaxations simultaneously, this falls into the purview of Theorem 1.5, and the sample complexity drops to .
First, we show that even under approximate differential privacy, learning the parameters of a graphical model requires samples. The formal statement is given in Section 5.
Theorem 1.6 (Informal).
Any algorithm which satisfies approximate differential privacy and learns the parameters of a pairwise graphical model with probability at least requires samples.
This result is proved by constructing a family of instances of binary pairwise graphical models (i.e., Ising models) which encode product distributions. Specifically, we consider the set of graphs formed by a perfect matching with edges for . In order to estimate the parameter on every edge, one must estimate the correlation between each such pair of nodes, which can be shown to correspond to learning the mean of a particular product distribution in -distance. This problem is well-known to have a gap between the non-private and private sample complexities, due to methods derived from fingerprinting codes [BunUV14, DworkSSUV15, SteinkeU17a], and differentially private Fano’s inequality.
Second, we show that learning the structure of a graphical model, under either pure or concentrated differential privacy, requires samples. The formal theorem appears in Section 7.
Theorem 1.7 (Informal).
Any algorithm which satisfies pure or concentrated differential privacy and learns the structure of a pairwise graphical model with probability at least requires samples.
We derive this result via packing arguments [HardtT10, BeimelBKN14], and differentially private Fano’s inequality, by showing that there exists a large number (exponential in ) of different binary pairwise graphical models which must be distinguished. The construction of a set of size implies lower bounds of and for learning under pure and concentrated differential privacy, respectively.
1.1.1 Summary and Discussion
We summarize our findings on privately learning Markov Random Fields in Table 1, focusing on the specific case of the Ising model. We note that qualitatively similar relationships between problems also hold for general pairwise models as well as higher-order binary Markov Random Fields. Each cell denotes the sample complexity of a learning task, which is a combination of an objective and a privacy constraint. Problems become harder as we go down (as the privacy requirement is tightened) and to the right (structure learning is easier than parameter learning).
The top row shows that both learning goals require only samples to perform absent privacy constraints, and are thus tractable even in very high-dimensional settings or when data is limited. However, if we additionally wish to guarantee privacy, our results show that this logarithmic sample complexity is only achievable when one considers structure learning under approximate differential privacy. If one changes the learning goal to parameter learning, or tightens the privacy notion to concentrated differential privacy, then the sample complexity jumps to become polynomial in the dimension, in particular . Nonetheless, we provide algorithms which match this dependence, giving a tight bound on the sample complexity.
|  | Structure Learning | Parameter Learning |
| --- | --- | --- |
| Non-private | (folklore) | (folklore) |
| Approximate DP | (Corollary 6.3) | (Theorems 3.8 and 5.1) |
| Zero-concentrated DP | (Theorems 3.8 and 7.1) | (Theorems 3.8 and 5.1) |
| Pure DP | (Theorem 7.1) | (Theorem 7.1) |

Table 1: The sample complexity of learning the Ising model under various privacy constraints and learning goals.
1.2 Related Work
As mentioned before, there has been significant work in learning the structure and parameters of graphical models; see, e.g., [ChowL68, CsiszarT06, AbbeelKN06, RavikumarWL10, JalaliJR11, JalaliRVS11, SanthanamW12, BreslerGS14b, Bresler15, VuffrayMLC16, KlivansM17, HamiltonKM17, RigolletH17, LokhovVMC18, WuSD19]. Perhaps a turning point in this literature is the work of Bresler [Bresler15], who showed for the first time that general Ising models of bounded degree can be learned in polynomial time. Following this result, subsequent works have focused on both generalizing these results to broader settings (including MRFs with higher-order interactions and non-binary alphabets) as well as simplifying existing arguments. There has also been work on learning, testing, and inferring other statistical properties of graphical models [BhattacharyaM16, MartindelCampoCU16, DaskalakisDK17, MukherjeeMY18, Bhattacharya19]. In particular, learning and testing Ising models in statistical distance have also been explored [DaskalakisDK18, GheissariLP18, DevroyeMR20, DaskalakisDK19, BezakovaBCSV19], and are interesting questions under the constraint of privacy.
Recent investigations at the intersection of graphical models and differential privacy include [BernsteinMSSHM17, ChowdhuryRJ20, McKennaSM19]. Bernstein et al. [BernsteinMSSHM17] privately learn graphical models by adding noise to the sufficient statistics and use an expectation-maximization based approach to recover the parameters. However, the focus is somewhat different, as they do not provide finite sample guarantees for the accuracy when performing parameter recovery, nor consider structure learning at all. Chowdhury, Rekatsinas, and Jha [ChowdhuryRJ20] study differentially private learning of Bayesian Networks, another popular type of graphical model which is incomparable with Markov Random Fields. McKenna, Sheldon, and Miklau [McKennaSM19] apply graphical models in place of full contingency tables to privately perform inference.
Graphical models can be seen as a natural extension of product distributions, which correspond to the case when the order of the MRF is . There has been significant work in differentially private estimation of product distributions [BlumDMN05, BunUV14, DworkMNS06, SteinkeU17a, KamathLSU18, CaiWZ19, BunKSW2019]. Recently, this investigation has been broadened into differentially private distribution estimation, including sample-based estimation of properties and parameters, see, e.g., [NissimRS07, Smith11, BunNSV15, DiakonikolasHS15, KarwaV18, AcharyaKSZ18, KamathLSU18, BunKSW2019]. For further coverage of differentially private statistics, see [KamathU20].
2 Preliminaries and Notation
In order to distinguish between the vector coordinate and the sample, we use a different notation in this chapter. Given a set of points , we use superscripts, i.e., to denote the -th datapoint. Given a vector , we use subscripts, i.e., to denote its -th coordinate. We also use to denote the vector after deleting the -th coordinate, i.e. .
2.1 Markov Random Field Preliminaries
We first introduce the definition of the Ising model, which is a special case of general MRFs when .
Definition 2.1.
The -variable Ising model is a distribution on that satisfies
where is a symmetric weight matrix with and is a mean-field vector. The dependency graph of is an undirected graph , with vertices and edges . The width of is defined as
Let be the minimum edge weight in absolute value, i.e.,
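For reference, a standard way to write this definition (using $p$ for the number of nodes, $A$ for the weight matrix, and $\theta$ for the mean-field vector; these symbols are an assumption of this restatement) is
\[
\Pr_{X \sim D}[X = x] \;\propto\; \exp\Big( \sum_{1 \le i < j \le p} A_{i,j}\, x_i x_j \;+\; \sum_{i=1}^{p} \theta_i x_i \Big), \qquad x \in \{\pm 1\}^p,
\]
with width $\lambda(A, \theta) = \max_{i} \big( \sum_{j \ne i} |A_{i,j}| + |\theta_i| \big)$ and minimum edge weight $\eta(A) = \min_{(i,j) \in E} |A_{i,j}|$.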
We note that the Ising model is supported on . A natural generalization is to extend its support to , while maintaining pairwise correlations.
Definition 2.2.
The -variable pairwise graphical model is a distribution on that satisfies
where is a set of weight matrices satisfying , and is a set of mean-field vectors. The dependency graph of is an undirected graph , with vertices and edges . The width of is defined as
Define .
Both the models above only consider pairwise interactions between nodes. In order to capture higher-order interactions, we examine the more general model of Markov Random Fields (MRFs). In this chapter, we will restrict our attention to MRFs over a binary alphabet (i.e., distributions over ). In order to define binary -wise MRFs, we first need the following definition of multilinear polynomials, partial derivatives and maximal monomials.
Definition 2.3.
A multilinear polynomial is defined as such that , where denotes the coefficient of the monomial with respect to the variables . Let denote the partial derivative of with respect to . Similarly, for , let denote the partial derivative of with respect to the variables . We say is a maximal monomial of if for all .
Now we are able to formally define binary -wise MRFs.
Definition 2.4.
For a graph on vertices, let denote all cliques of size at most in G. A binary -wise Markov random field on is a distribution on which satisfies
and each is a multilinear polynomial that depends only on the variables in .
We call the dependency graph of the MRF and the factorization polynomial of the MRF. The width of is defined as , where .
Now we introduce the definition of a -unbiased distribution and its properties. The proofs appear in [KlivansM17].
Definition 2.5 (-unbiased).
Let be the alphabet set, e.g., for binary -wise MRFs and for pairwise graphical models. A distribution on is -unbiased if for , , and any assignment to , .
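In symbols (writing the alphabet as $\mathcal{A}$ and the dimension as $p$, both assumed here), the condition reads
\[
\forall\, i \in [p],\ \forall\, x_{-i}: \qquad \min_{a \in \mathcal{A}} \Pr_{X \sim D}\big[ X_i = a \mid X_{-i} = x_{-i} \big] \;\ge\; \delta .
\]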
The marginal distribution of a -unbiased distribution also satisfies -unbiasedness.
Lemma 2.6.
Let be a -unbiased distribution on , with alphabet set . For , , the distribution of is also -unbiased.
The following lemmas provide -unbiased guarantees for various graphical models.
Lemma 2.7.
Let be a pairwise graphical model with alphabet size and width . Then is -unbiased with . In particular, an Ising model is -unbiased.
Lemma 2.8.
Let be a binary -wise MRF with width . Then is -unbiased with .
Finally, we define two possible goals for learning graphical models. First, the easier goal is structure learning, which involves recovering the set of non-zero edges.
Definition 2.9.
An algorithm learns the structure of a graphical model if, given samples , it outputs a graph over such that , the set of edges in the dependency graph of .
The more difficult goal is parameter learning, which requires the algorithm to learn not only the location of the edges, but also their parameter values.
Definition 2.10.
An algorithm learns the parameters of an Ising model (resp. pairwise graphical model) if, given samples , it outputs a matrix (resp. set of matrices ) such that (resp. , ).
Definition 2.11.
An algorithm learns the parameters of a binary -wise MRF with associated polynomial if, given samples , it outputs another multilinear polynomial such that for all maximal monomials , .
2.2 Privacy Preliminaries
A dataset is a collection of points from some universe . In this chapter we consider a few different variants of differential privacy. The first is the standard notion of differential privacy, which has been heavily used in the previous chapters. The second is concentrated differential privacy [DworkR16]. In this chapter, we specifically consider its refinement, zero-concentrated differential privacy [BunS16].
Definition 2.12 (Zero-Concentrated Differential Privacy (zCDP) [BunS16]).
A randomized algorithm satisfies -zCDP if for every pair of neighboring datasets ,
where is the -Rényi divergence between and .
The following lemma quantifies the relationships between -DP, -zCDP and -DP.
Lemma 2.13 (Relationships Between Variants of DP [BunS16]).
For every ,
1. If satisfies -DP, then is -zCDP.
2. If satisfies -zCDP, then satisfies -DP for every .
Roughly speaking, pure DP is stronger than zero-concentrated DP, which is stronger than approximate DP.
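Quantitatively, the conversions behind Lemma 2.13 from [BunS16] (written here with generic symbols $\varepsilon$, $\rho$, $\delta$, which are an assumption of this restatement) are
\[
\varepsilon\text{-DP} \;\Longrightarrow\; \tfrac{1}{2}\varepsilon^2\text{-zCDP},
\qquad
\rho\text{-zCDP} \;\Longrightarrow\; \big(\rho + 2\sqrt{\rho \log(1/\delta)},\, \delta\big)\text{-DP} \quad \text{for every } \delta > 0 .
\]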
A crucial property of all the variants of differential privacy is that they can be composed adaptively. By adaptive composition, we mean a sequence of algorithms where the algorithm may also depend on the outcomes of the algorithms .
Lemma 2.14 (Composition of zero-concentrated DP [BunS16]).
If is an adaptive composition of differentially private algorithms , and are -zCDP respectively, then is -zCDP for .
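In symbols (with $M_1, \dots, M_k$ and $\rho_1, \dots, \rho_k$ used here as assumed notation for the composed algorithms and their budgets): if each $M_i$ is $\rho_i$-zCDP, then the adaptive composition is $\rho$-zCDP with
\[
\rho \;=\; \sum_{i=1}^{k} \rho_i .
\]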
3 Parameter Learning of Pairwise Graphical Models
3.1 Private Sparse Logistic Regression
As a subroutine of our parameter learning algorithm, we consider the following problem: given a training data set consisting of n pairs of data , where and , a constraint set , and a loss function , we want to find under a zCDP constraint. This problem was previously studied in [TalwarTZ14]. Before stating their results, we need the following two definitions. The first definition concerns Lipschitz continuity.
Definition 3.1.
A function is -Lipschitz with respect to the norm if the following holds.
The performance of the algorithm also depends on the “curvature” of the loss function, which is defined below, based on the definitions in [Clarkson10, Jaggi13]. As a side remark, this is a strictly weaker constraint than smoothness [TalwarTZ14].
Definition 3.2 (Curvature constant).
For , is defined as
Now we are able to introduce the algorithm and its theoretical guarantees.
Lemma 3.3 (Theorem 5.5 from [TalwarTZ14]).
Algorithm 1 satisfies -zCDP. Furthermore, let , be defined as in Algorithm 1. Let be an upper bound on the curvature constant for the loss function for all and be the number of extreme points in . If we set , then with probability at least over the randomness of the algorithm,
Proof.
The utility guarantee is proved in [TalwarTZ14]. Therefore, it is enough to prove that the algorithm satisfies -zCDP. By the guarantee of the Laplace mechanism, every iteration of the algorithm satisfies -DP, which implies -zCDP by Lemma 2.13. Then, by the composition theorem of zCDP (Lemma 2.14), the algorithm satisfies -zCDP. ∎
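To make the mechanism concrete, the following is a minimal sketch (not the exact Algorithm 1) of a differentially private Frank-Wolfe iteration for $\ell_1$-constrained logistic regression: compute the gradient of the empirical loss, score every vertex of the $\ell_1$ ball, add Laplace noise to the scores (report-noisy-min over the extreme points), and take the standard convex averaging step. The function name, parameterization, and in particular the calibration of `noise_scale` to the privacy budget and number of iterations are assumptions of this sketch; the precise calibration follows [TalwarTZ14].

```python
import numpy as np


def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def dp_frank_wolfe_logistic(X, y, l1_radius, num_iters, noise_scale, rng=None):
    """Noisy Frank-Wolfe over the l1 ball for logistic regression (sketch).

    X: (n, d) features with bounded entries; y: labels in {-1, +1}.
    noise_scale must be calibrated to the privacy budget and num_iters
    (see [TalwarTZ14]); that calibration is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    for t in range(num_iters):
        # Gradient of the empirical logistic loss (1/n) sum log(1 + exp(-y <w, x>)).
        margins = y * (X @ w)
        grad = -(X * (y * _sigmoid(-margins))[:, None]).mean(axis=0)
        # Linear scores of the 2d vertices +/- l1_radius * e_j, plus Laplace noise
        # (report-noisy-min over the extreme points of the constraint set).
        scores = np.concatenate([l1_radius * grad, -l1_radius * grad])
        scores = scores + rng.laplace(scale=noise_scale, size=2 * d)
        j = int(np.argmin(scores))
        vertex = np.zeros(d)
        vertex[j % d] = l1_radius if j < d else -l1_radius
        # Standard Frank-Wolfe averaging step towards the chosen vertex.
        mu = 2.0 / (t + 2.0)
        w = (1.0 - mu) * w + mu * vertex
    return w
```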
If we consider the specific problem of sparse logistic regression, we will get the following corollary.
Corollary 3.4.
If we consider the problem of sparse logistic regression, i.e., , with the constraint that , and we further assume that , let , then with probability at least over the randomness of the algorithm,
Furthermore, the time complexity of the algorithm is .
Proof.
First, let us show that . If we fix a sample , then for any ,
Since , we have .
Next, we wish to show . We use the following lemma from [TalwarTZ14].
Lemma 3.5 (Remark 4 in [TalwarTZ14]).
For any such that , is upper bounded by , where .
If we take , then , where
We have , since , and ,
Finally, given , the number of extreme points of equals . By plugging all these parameters into Lemma 3.3, we prove the loss guarantee in the corollary.
With respect to the time complexity, we note that the time complexity of each iteration is and there are iterations in total. ∎
Now, if we further assume that the data set is drawn i.i.d. from some underlying distribution , the following result from learning theory relates the true risk to the empirical risk; it will be used heavily in the following sections.
Theorem 3.6.
If we consider the same problem setting and assumptions as in Corollary 3.4, and we further assume that the training data set is drawn i.i.d. from some unknown distribution , then with probability at least over the randomness of the algorithm and the training data set,
where .
Proof.
By triangle inequality,
Now we need to bound each term. We first bound the first and last terms simultaneously: by the generalization error bound (Lemma 7 from [WuSD19]), they are both bounded by with probability greater than . Then we turn to the second term: by Corollary 3.4, it is bounded by with probability greater than . Finally, we bound the third term: by the definition of , it is at most 0. Therefore, by a union bound, with probability greater than , . ∎
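Written out with assumed notation ($\hat w$ the private output, $\hat w_{\mathrm{erm}}$ the empirical risk minimizer over the constraint set, $w^{*}$ the population risk minimizer, and $L$, $\hat L$ the population and empirical risks), the decomposition used in this proof is
\[
L(\hat w) - L(w^{*})
= \underbrace{\big(L(\hat w) - \hat L(\hat w)\big)}_{\text{generalization}}
+ \underbrace{\big(\hat L(\hat w) - \hat L(\hat w_{\mathrm{erm}})\big)}_{\text{Corollary 3.4}}
+ \underbrace{\big(\hat L(\hat w_{\mathrm{erm}}) - \hat L(w^{*})\big)}_{\le\, 0}
+ \underbrace{\big(\hat L(w^{*}) - L(w^{*})\big)}_{\text{generalization}} .
\]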
3.2 Privately Learning Ising Models
We first consider the problem of estimating the weight matrix of the Ising model. To be precise, given i.i.d. samples generated from an unknown distribution , our goal is to design an -zCDP estimator such that with probability at least , .
An observation about the Ising model is that for any node , the probability of conditioned on the values of the remaining nodes is given by a sigmoid function. The next lemma, from [KlivansM17], formalizes this observation.
Lemma 3.7.
Let and , then , ,
where , and .
Proof.
The proof is from [KlivansM17], and we include it here for completeness. According to the definition of the Ising model,
∎
By Lemma 3.7, we can estimate the weight matrix by solving a logistic regression for each node, which is utilized in [WuSD19] to design non-private estimators. Our algorithm uses the private Frank-Wolfe method to solve the per-node logistic regression problem, achieving the following theoretical guarantee.
Theorem 3.8.
Let be an unknown -variable Ising model with . There exists an efficient -zCDP algorithm which outputs a weight matrix such that with probability greater than , if the number of i.i.d. samples satisfies
Proof.
We first prove that Algorithm 2 satisfies -zCDP. Notice that in each iteration, the algorithm solves a private sparse logistic regression under -zCDP. Therefore, Algorithm 2 satisfies -zCDP by composition (Lemma 2.14).
For the accuracy analysis, we start by looking at the first iteration () and showing that , , with probability greater than .
Given a random sample , we let , . From Lemma 3.7, , where . We also note that , as a consequence of the width constraint of the Ising model.
For i.i.d. samples drawn from the Ising model, let and ; it is easy to check that each is a realization of . Let be the output of , where . By Theorem 3.6, when , with probability greater than ,
We will use the following lemma from [WuSD19]. Roughly speaking, with the assumption that the samples are generated from an Ising model, any estimator which achieves a small error in the loss guarantees an accurate parameter recovery in distance.
Lemma 3.9.
Let be a distribution on . Given , suppose for . If the marginal distribution of on is -unbiased, and for some , and , then
By Lemma 2.6, Lemma 2.7 and Lemma 3.9, if , we have . By replacing , we prove that with probability greater than . A similar argument works for the other iterations, and non-overlapping parts of the matrix are recovered in different iterations. By a union bound over the iterations, we prove that with probability at least , .
Finally, we note that the time complexity of the algorithm is since the private Frank-Wolfe algorithm is time efficient by Corollary 3.4. ∎
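As an illustration, the per-node reduction used in the proof above can be sketched as follows. The solver is passed in as a parameter (for instance, the `dp_frank_wolfe_logistic` sketch from Section 3.1 with its budget set so that the calls compose to the desired overall zCDP guarantee). The factor-of-two rescaling corresponds to the standard sigmoid form in Lemma 3.7, the mean-field (intercept) term is omitted for brevity, and the function name is an assumption of this sketch.

```python
import numpy as np


def learn_ising_weights(samples, solve_sparse_logreg):
    """Per-node reduction for the Ising model (sketch).

    samples: (n, p) array with entries in {-1, +1}.
    solve_sparse_logreg: any (private) l1-constrained logistic regression
    solver taking (X, y) and returning a weight vector of length p - 1.
    """
    n, p = samples.shape
    A_hat = np.zeros((p, p))
    for i in range(p):
        # Under the standard parameterization, Pr[x_i = 1 | x_{-i}] is a
        # sigmoid of 2 <A_i, x_{-i}> (plus a mean-field term, omitted here),
        # so row i is recovered, up to a factor of 2, by logistic regression.
        X = np.delete(samples, i, axis=1)   # features: the other p - 1 coordinates
        y = samples[:, i]                   # labels in {-1, +1}
        w = solve_sparse_logreg(X, y)
        A_hat[i, np.arange(p) != i] = w / 2.0
    # Each edge is estimated twice (once per endpoint); average the two.
    return (A_hat + A_hat.T) / 2.0
```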
3.3 Privately Learning Pairwise Graphical Models
Next, we study parameter learning for pairwise graphical models over a general alphabet. Given i.i.d. samples drawn from an unknown distribution , we want to design an -zCDP estimator such that with probability at least , . To facilitate our presentation, we assume that , every row (and column) vector of has zero mean. (This centering assumption is without loss of generality and widely used in the literature [KlivansM17, WuSD19]; we present the argument here for completeness. Suppose the -th row of is not centered, i.e., ; then we can define and , and the probability distribution remains unchanged.)
Analogous to Lemma 3.7 for the Ising model, a pairwise graphical model has the following property, which can be utilized to recover its parameters.
Lemma 3.10 (Fact 2 of [WuSD19]).
Let and . For any , any , and any ,
Now we introduce our algorithm. Without loss of generality, we consider estimating for all as a running example. We fix a pair of values , where and . Let be the samples where . In order to utilize Lemma 3.10, we perform the following transformation on the samples in : for the -th sample , let if , else . Moreover, is the one-hot encoding of the vector , where is a mapping from to , and the -th row is the -th standard basis vector given . Then we define as follows:
Lemma 3.10 implies that , , where is the element-wise multiplication of matrices. According to the definition of the width of , . Now we can apply the sparse logistic regression method of Algorithm 3 to the samples in .
Suppose is the output of the private Frank-Wolfe algorithm; we define as follows: ,
(1)
can be seen as a “centered” version of (for the first rows). It is not hard to see that , so is also a minimizer of the sparse logistic regression.
For now, assume that , is a “good” approximation of , which we will show later. If we sum over , it can be shown that is also a “good” approximation of , for all , and , because of the centering assumption of , i.e., . With these considerations in mind, we are able to introduce our algorithm.
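A minimal sketch of the per-pair data transformation just described appears below; the private sparse logistic regression solver is then run on the transformed data. The function name and the explicit $\{0, \dots, k-1\}$ encoding of the alphabet are assumptions of this sketch.

```python
import numpy as np


def pairwise_regression_data(samples, u, alpha, beta, k):
    """Per-pair transformation for a general alphabet {0, ..., k-1} (sketch).

    Keep the samples whose u-th coordinate lies in {alpha, beta}, label them
    +1 / -1 accordingly, and one-hot encode every remaining coordinate, so
    that Lemma 3.10 turns parameter recovery into a logistic regression.
    """
    samples = np.asarray(samples)
    mask = np.isin(samples[:, u], [alpha, beta])
    S = samples[mask]
    y = np.where(S[:, u] == alpha, 1.0, -1.0)
    rest = np.delete(S, u, axis=1)                 # the other p - 1 coordinates
    m, q = rest.shape
    X = np.zeros((m, q * k))
    # Column block j*k ... j*k + k - 1 holds the one-hot encoding of node j.
    X[np.arange(m)[:, None], np.arange(q)[None, :] * k + rest] = 1.0
    return X, y
```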
The following theorem is the main result of this section. Its proof is structurally similar to that of Theorem 3.8.
Theorem 3.11.
Let be an unknown -variable pairwise graphical model distribution, and suppose that has width . There exists an efficient -zCDP algorithm which outputs such that with probability greater than , , if the number of i.i.d. samples satisfies
Proof.
We consider estimating for all as an example. Fixing one pair , let be the samples whose first element is either or , and be the number of samples in . We perform the following transformation on the samples in : for the sample , let if , else , and let be the one-hot encoding of the vector .
Suppose the underlying joint distribution of and is , i.e., , then by Theorem 3.6, when , with probability greater than ,
The following lemma, which is analogous to Lemma 3.9 for the Ising model, appears in [WuSD19].
Lemma 3.12.
Let be a -unbiased distribution on . For , denotes the one-hot encoding of . Let be two matrices where and for all . Let be a distribution such that given , for . Suppose for , and , then
By Lemma 2.6, Lemma 2.7 and Lemma 3.12, if we substitute , when
(2)
By a union bound, Equation (2) holds for all pairs simultaneously with probability greater than . If we sum over and use the fact that , we have
Note that we need to guarantee that we obtain samples for each pair . Since is -unbiased, given , for all , . By Hoeffding’s inequality, when , with probability greater than , we have enough samples for all pairs simultaneously. Substituting , we have
The same argument holds for other entries of the matrix. We conclude the proof by a union bound over iterations.
Finally, we note that the time complexity of the algorithm is since the private Frank-Wolfe algorithm is time efficient by Corollary 3.4. ∎
4 Privately Learning Binary -wise MRFs
Let be a -wise MRF on with underlying dependency graph and factorization polynomial . We assume that the width of is bounded by , i.e., , where . Similar to [KlivansM17], given i.i.d. samples generated from an unknown distribution , we consider the following two related learning objectives, under the constraint of -zCDP:
1. find a multilinear polynomial such that with probability greater than , ;
2. find a multilinear polynomial such that with probability greater than , for every maximal monomial of , .
We note that our first objective can be viewed as parameter estimation in distance, where only an average performance guarantee is provided. In the second objective, the algorithm recovers every maximal monomial, which can be viewed as parameter estimation in distance. These two objectives are addressed in Sections 4.1 and 4.2, respectively.
4.1 Parameter Estimation in Distance
The following property of MRFs, from [KlivansM17], plays a critical role in our algorithm. The proof is similar to that of Lemma 3.7.
Lemma 4.1 (Lemma 7.6 of [KlivansM17]).
Let be a -wise MRF on with underlying dependency graph and factorization polynomial , then
Lemma 4.1 shows that, as with pairwise graphical models, sparse logistic regression also suffices to learn the parameters of a binary -wise MRF.
Theorem 4.2.
There exists a -zCDP algorithm which, with probability at least , finds a multilinear polynomial such that given i.i.d. samples , where
Proof.
Similar to the previous proof, we start by fixing . Given a random sample , let and . According to Lemma 4.1, we know that , where . Furthermore, by the width constraint. Now, given i.i.d. samples drawn from , it is easy to check that for any given , its corresponding is one realization of . Let be the output of , where and . By Theorem 3.6, with probability greater than , assuming .
Now we need the following lemma from [KlivansM17], which is analogous to Lemma 3.9 for the Ising model.
Lemma 4.3 (Lemma 6.4 of [KlivansM17]).
Let be a distribution on . Given multilinear polynomial , for . Suppose the marginal distribution of on is -unbiased, and for another multilinear polynomial , where , then
By substituting , we have that with probability greater than , . We note that the coefficients of different monomials are recovered in each iteration. Therefore, by a union bound over iterations, we prove the desired result. ∎
4.2 Parameter Estimation in Distance
In this section, we introduce a slightly modified version of the algorithm in the last section.
We first show that if the estimates for the parity queries are sufficiently accurate, Algorithm 5 solves the estimation problem, as long as the sample size is large enough.
Lemma 4.4.
Suppose that the estimates satisfy for all such that and . Then with probability at least , Algorithm 5 outputs a multilinear polynomial such that for every maximal monomial of , given i.i.d. samples , as long as
Proof.
We will condition on the event that is a “good” estimate of : for all such that . Let us fix . Let , , and we know that , where . Now given i.i.d. samples drawn from , let be the output of , where and . Similarly, with probability at least ,
as long as .
Now we utilize Lemma 6.4 from [KlivansM17], which states that if , given a random sample , for any maximal monomial of ,
By replacing , we have , as long as . Accordingly, for any maximal monomial , . By Hoeffding's inequality, given , for each maximal monomial , with probability greater than , . Note that , so . Therefore,
Finally, by a union bound over iterations and all the maximal monomials, we prove the desired results.∎
We now consider two private algorithms for releasing the parity queries. The first is Private Multiplicative Weights (PMW) [HardtR10], which provides a better accuracy guarantee but runs in time exponential in the dimension . The following lemma can be viewed as a zCDP version of Theorem 4.3 in [Vadhan17]: during the analysis, every iteration satisfies -DP, which implies -zCDP, and the strong composition theorem of -DP is replaced with the composition theorem of zCDP (Lemma 2.14).
Lemma 4.5 (Sample complexity of PMW, modification of Theorem 4.3 of [Vadhan17]).
The PMW algorithm satisfies -zCDP and releases such that with probability greater than , for all with , as long as the size of the data set
The second algorithm sepFEM (Separator-Follow-the-perturbed-leader with exponential mechanism) has slightly worse sample complexity, but runs in polynomial time when it has access to an optimization oracle that does the following: given as input a weighted dataset , find ,
The oracle essentially solves cost-sensitive classification problems over the set of parity functions [ZLA03], and it can be implemented with an integer program solver [VietriTBSW20, GaboardiAHRW14].
Lemma 4.6 (Sample complexity of sepFEM, [VietriTBSW20]).
The sepFEM algorithm satisfies -zCDP and releases such that with probability greater than , for all with , as long as the size of the data set
The algorithm runs in polynomial time given access to the optimization oracle defined above.
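For intuition, a much simpler baseline than either PMW or sepFEM is to release every low-order parity directly with the Gaussian mechanism; it satisfies zCDP by the standard Gaussian-mechanism analysis, but its error grows polynomially with the number of queries, which is why the chapter uses PMW or sepFEM instead. The sketch below (names and parameterization assumed) shows this Gaussian baseline, not the algorithms of Lemmas 4.5 and 4.6.

```python
import itertools

import numpy as np


def noisy_parities(samples, k, rho, rng=None):
    """Gaussian-mechanism baseline for releasing low-order parities (sketch).

    samples: (n, p) array with entries in {-1, +1}.  For every subset S with
    1 <= |S| <= k, the empirical mean of chi_S(x) = prod_{i in S} x_i is
    released with Gaussian noise.  Changing one sample moves each query by at
    most 2/n, so the vector of m queries has l2 sensitivity (2/n) * sqrt(m),
    and noise with sigma = sensitivity / sqrt(2 * rho) gives rho-zCDP.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = np.asarray(samples)
    n, p = samples.shape
    subsets = [S for r in range(1, k + 1) for S in itertools.combinations(range(p), r)]
    sensitivity = (2.0 / n) * np.sqrt(len(subsets))
    sigma = sensitivity / np.sqrt(2.0 * rho)
    return {
        S: samples[:, list(S)].prod(axis=1).mean() + rng.normal(scale=sigma)
        for S in subsets
    }
```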
Theorem 4.7.
Algorithm 5 is a -zCDP algorithm which, with probability at least , finds a multilinear polynomial such that for every maximal monomial of , given i.i.d. samples , and
1. if it uses PMW for releasing , it has a sample complexity of
and a runtime complexity that is exponential in ;
2. if it uses sepFEM for releasing , it has a sample complexity of
and runs in polynomial time whenever .
5 Lower Bounds for Parameter Learning
The lower bound for parameter estimation is based on mean estimation in distance.
Theorem 5.1.
Suppose is an -differentially private algorithm that takes i.i.d. samples drawn from any unknown -variable Ising model and outputs such that . Then .
Proof.
Consider an Ising model with defined as follows: for , and for all other pairs of . This construction divides the nodes into pairs, such that there is no correlation between nodes belonging to different pairs. It follows that
For each observation , we obtain an observation such that . Then each observation is distributed according to a product distribution in such that the mean of each coordinate is .
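Concretely, for a single matched pair with edge weight $a$ and no mean-field term (symbols assumed here), the calculation behind this reduction is
\[
\Pr[x_{2i-1} = x_{2i}] = \frac{e^{a}}{e^{a} + e^{-a}},
\qquad
\mathbb{E}[x_{2i-1} x_{2i}] = \tanh(a),
\]
so each product $z_i = x_{2i-1} x_{2i}$ is a $\pm 1$ random variable whose mean is determined by the corresponding edge weight, and accurate parameter estimation forces accurate estimation of the mean of a product distribution over the pairs.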
Suppose that an -differentially private algorithm takes observations drawn from any such Ising model distribution and outputs a matrix such that . Let be the value of rounded into the range , so that . It follows that
where the last step follows from the fact that for any . Thus, such a private algorithm can also estimate the mean of the product distribution accurately:
Now we will use the following sample complexity lower bound on private mean estimation on product distributions.
Lemma 5.2 (Lemma 6.2 of [KamathLSU18]).
If is -differentially private, and for every product distribution over such that the mean of each coordinate satisfies ,
then .
Then our stated bound follows by instantiating and in Lemma 5.2.∎
6 Structure Learning of Graphical Models
In this section, we will give an -differentially private algorithm for learning the structure of a Markov Random Field. The dependence on the dimension will be only logarithmic, in comparison to the complexity of privately learning the parameters. As we have shown in Section 5, this dependence is necessarily polynomial in , even under approximate differential privacy. Furthermore, as we will show in Section 7, if we wish to learn the structure of an MRF under more restrictive notions of privacy (such as pure or concentrated), the complexity also becomes polynomial in . Thus, in very high-dimensional settings, learning the structure of the MRF under approximate differential privacy is essentially the only notion of private learnability which is tractable.
The following lemma is immediate from stability-based mode arguments (see, e.g., Proposition 3.4 of [Vadhan17]).
Lemma 6.1.
Suppose there exists a (non-private) algorithm which takes sampled i.i.d. from some distribution , and outputs some fixed value (which may depend on ) with probability at least . Then there exists an -differentially private algorithm which takes samples and outputs with probability at least .
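A minimal sketch of the stability-based release behind this lemma: run the non-private learner on disjoint sub-samples of the data, take the modal output, and release it only if its noisy count margin over the runner-up clears a threshold. The function name and the exact threshold constants are assumptions of this sketch; the formal guarantee follows the stability-based histogram and propose-test-release analyses cited in this section.

```python
from collections import Counter

import numpy as np


def stable_mode_release(outputs, eps, delta, rng=None):
    """Stability-based release of a non-private learner's output (sketch).

    outputs: the (hashable) structures returned by running the non-private
    structure learner on disjoint sub-samples, e.g. frozensets of edges.
    The mode is released only if its noisy count margin over the runner-up
    clears a threshold; otherwise the algorithm reports failure.  The
    threshold constants here are illustrative, not the chapter's exact ones.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = Counter(outputs)
    (mode, count), *rest = counts.most_common(2)
    runner_up = rest[0][1] if rest else 0
    noisy_margin = (count - runner_up) + rng.laplace(scale=2.0 / eps)
    if noisy_margin > 2.0 * np.log(1.0 / delta) / eps + 1.0:
        return mode
    return None  # "fail": the non-private learner was not stable enough
```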
We can now directly import the following theorem from [WuSD19].
Theorem 6.2 ([WuSD19]).
There exists an algorithm which, with probability at least , learns the structure of a pairwise graphical model. It requires samples.
This gives us the following private learning result as a corollary.
Corollary 6.3.
There exists an -differentially private algorithm which, with probability at least , learns the structure of a pairwise graphical model. It requires samples.
For binary MRFs of higher-order, we instead import the following theorem from [KlivansM17]:
Theorem 6.4 ([KlivansM17]).
There exists an algorithm which, with probability at least , learns the structure of a binary -wise MRF. It requires samples.
This gives us the following private learning result as a corollary.
Corollary 6.5.
There exists an -differentially private algorithm which, with probability at least , learns the structure of a binary -wise MRF. It requires
samples.
7 Lower Bounds for Structure Learning of Graphical Models
In this section, we will prove structure learning lower bounds under pure DP and zero-concentrated DP. The graphical models we consider are the Ising model and pairwise graphical models. However, we note that all the lower bounds for the Ising model also hold for binary -wise MRFs, since the Ising model is a special case of binary -wise MRFs corresponding to . We will show that under -DP or -zCDP, a polynomial dependence on the dimension is unavoidable in the sample complexity.
In Section 7.1, we assume that our samples are generated from an Ising model. In Section 7.2, we extend our lower bounds to pairwise graphical models.
7.1 Lower Bounds for Structure Learning of Ising Models
Theorem 7.1.
Any -DP algorithm which learns the structure of an Ising model with minimum edge weight with probability at least requires samples. Furthermore, at least samples are required for the same task under -zCDP.
Proof.
Our lower bound argument is in two steps. The first step is to construct a set of distributions, consisting of different Ising models such that any feasible structure learning algorithm should output different answers for different distributions. In the second step, we utilize our Private Fano’s inequality, or the packing argument for zCDP [BunS16] to get the desired lower bound.
To start, we would like to use the following binary code to construct the distribution set. Let . Given , we construct the corresponding distribution with defined as follows: for , and 0 elsewhere. By construction, we divide the nodes into different pairs, such that there is no correlation between nodes belonging to different pairs. Furthermore, for pair , if , which means the value of node is independent of node , it is not hard to show
On the other hand, if ,
The Chi-squared distance between these two distributions is
Now we want to upper bound the total variation distance between and for any . Let and denote the joint distribution of node and node corresponding to and . We have that
where the first inequality is by Pinsker’s inequality, and the last inequality comes from the fact that the KL divergence is always upper bounded by the Chi-squared distance.
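Concretely, for a single pair with edge weight $\eta$ versus an independent (uniform) pair (call these two-node distributions $P_\eta$ and $P_0$; both the symbols and the assumption of no mean-field term are assumptions of this restatement), the quantities in this step work out to
\[
\chi^2(P_\eta \,\|\, P_0) = \tanh^2(\eta) \le \eta^2,
\qquad
d_{\mathrm{TV}}(P_\eta, P_0) \le \sqrt{\tfrac{1}{2}\,\mathrm{KL}(P_\eta \,\|\, P_0)} \le \sqrt{\tfrac{1}{2}\,\chi^2(P_\eta \,\|\, P_0)} \le \tfrac{\eta}{\sqrt{2}} .
\]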
In order to attain pure DP lower bounds, we utilize the corollary of DP Fano’s inequality for estimation (Theorem LABEL:coro:fano).
For any , we have . By the property of maximal coupling [Hollander12], there must exist some coupling between and with expected Hamming distance smaller than . Therefore, we have , and accordingly, .
Now we move to zCDP lower bounds. We utilize a different version of the packing argument [BunS16], which works under zCDP.
Lemma 7.2.
Let be a set of distributions over . Let be a collection of disjoint subsets of . If there exists an -zCDP algorithm such that for every , given , , then
By Lemma 7.2, we derive and accordingly. ∎
7.2 Lower Bounds for Structure Learning of Pairwise Graphical Models
Similar techniques can be used to derive lower bounds for pairwise graphical models.
Theorem 7.3.
Any -DP algorithm which learns the structure of the -variable pairwise graphical models with minimum edge weight with probability at least requires samples. Furthermore, at least samples are required for the same task under -zCDP.
Proof.
Similar to before, we start by constructing a distribution set consisting of different pairwise graphical models such that any accurate structure learning algorithm must output different answers for different distributions.
Let be the real symmetric matrix with each value constrained to either or , i.e., . Without loss of generality, we assume is even. Given , where , we construct the corresponding distribution with defined as follows: for , and for other pairs , . Similarly, by this construction we divide nodes into different pairs, and there is no correlation between nodes belonging to different pairs.
We first prove lower bounds under -DP. By Theorem LABEL:coro:fano, , since for any two -sample distributions, the expected coupling distance can always be upper bounded by . We also note that . Therefore, we have . At the same time, is another lower bound, inherited from the easier task of learning Ising models.
With respect to zCDP, we utilize Lemma 7.2 and obtain . Therefore, we have . ∎
Acknowledgments
The authors would like to thank Kunal Talwar for suggesting the study of this problem, and Adam Klivans, Frederic Koehler, Ankur Moitra, and Shanshan Wu for helpful and inspiring conversations. GK would like to thank Chengdu Style Restaurant (古月飘香) in Berkeley for inspiration in the conception of this project.