Nearly Tight Black-Box Auditing of
Differentially Private Machine Learning
(To appear at NeurIPS 2024; please cite accordingly.)
Abstract
This paper presents an auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model that is substantially tighter than prior work. The main intuition is to craft worst-case initial model parameters, as DP-SGD’s privacy analysis is agnostic to the choice of the initial model parameters. For models trained on MNIST and CIFAR-10 at theoretical , our auditing procedure yields empirical estimates of and , respectively, on a 1,000-record sample and and on the full datasets. By contrast, previous audits were only (relatively) tight in stronger white-box threat models, where the adversary can access the model’s inner parameters and insert arbitrary gradients. Overall, our auditing procedure can offer valuable insight into how the privacy analysis of DP-SGD could be improved and detect bugs and DP violations in real-world implementations. The source code needed to reproduce our experiments is available at https://github.com/spalabucr/bb-audit-dpsgd.
1 Introduction
Differentially Private Stochastic Gradient Descent (DP-SGD) [1] allows training machine learning models while providing formal Differential Privacy (DP) guarantees [17]. DP-SGD is a popular algorithm supported in many open-source libraries like Opacus [43], TensorFlow [21], and JAX [6]. Progress in privacy accounting and careful hyper-parameter tuning has helped reduce the gap in model performance compared to non-private training [13]. Furthermore, pre-training models on large public datasets and then fine-tuning them on private datasets using DP-SGD can also be used to improve utility [13].
Implementing DP-SGD correctly can be challenging – as with most DP algorithms [7, 28] – and bugs that lead to DP violations have been found in several DP-SGD implementations [9, 24]. These violations not only break the formal DP guarantees but can also result in realistic privacy threats [19, 36]. This prompts the need to audit DP-SGD implementations to verify whether the theoretical DP guarantees hold in practice. Auditing usually involves running membership inference attacks [35], whereby an adversary infers whether or not a sample was used to train the model. The adversary’s success provides an estimate of the empirical privacy leakage from DP-SGD, which is then compared to the theoretical DP bound. If the former is “far” from the latter, the auditing procedure is loose and may not be exploiting the maximum possible privacy leakage. Conversely, when the empirical and theoretical estimates are “close,” the audit is tight and can be used to identify bugs and violations [31].
Although techniques to tightly audit DP-SGD exist in the literature [31, 32], they do so only in active white-box threat models, where the adversary can observe and insert arbitrary gradients into the intermediate DP-SGD steps. By contrast, in the more restrictive black-box threat model, where the adversary can only insert an input canary and observe the final trained model, prior work has only achieved loose audits [22, 31, 32]. In fact, prior work has suggested that the privacy analysis of DP-SGD in the black-box threat model can be tightened, but only in limited settings (i.e., for strongly convex, smooth loss functions) [4, 11, 42]. Therefore, it has so far remained an open research question [31] whether or not it is possible to tightly audit DP-SGD in the black-box model.
In this paper, we introduce a novel auditing procedure that achieves substantially tighter black-box audits of DP-SGD than prior work, even for natural (i.e., not adversarially crafted) datasets. The main intuition behind our auditing procedure is to craft worst-case initial model parameters, as DP-SGD’s privacy analysis is agnostic to the choice of initial model parameters. We show that the gap between worst-case empirical and theoretical privacy leakage in the black-box threat model is much smaller than previously thought [22, 31, 32]. When auditing a convolutional neural network trained on MNIST and CIFAR-10 at theoretical , our audits achieve an empirical privacy leakage estimate of and , respectively, compared to and when the models were initialized to average-case initial model parameters, as done in prior work [13, 22, 31].
In the process, we identify and analyze the key factors affecting the tightness of black-box auditing: 1) dataset size and 2) gradient clipping norm. Specifically, we find that the audits are looser for larger datasets and larger gradient clipping norms. We believe this is due to more noise, which reduces the empirical privacy leakage. When considering smaller datasets with 1,000 samples, our audits are tighter – for , we obtain and , respectively, for MNIST and CIFAR-10.
Finally, we present tight audits for models with only the last layer fine-tuned using DP-SGD. Specifically, we audit a 28-layer Wide-ResNet model pre-trained on ImageNet-32 [13] with the last layer privately fine-tuned on CIFAR-10. With , we achieve an empirical privacy leakage estimate of compared to when the models are initialized to average-case initial parameters (as in previous work [13, 22, 31]).
Overall, the ability to perform rigorous audits of differentially private machine learning techniques allows shedding light on the tightness of theoretical guarantees in different settings. At the same time, it provides critical diagnostic tools to verify the correctness of implementations and libraries.
2 Related Work
DP-SGD. Abadi et al. [1] introduce DP-SGD and use it to train a feed-forward neural network model classifying digits on the MNIST dataset. On the more complex CIFAR-10 dataset, they instead fine-tune a pre-trained convolutional neural network, incurring a significant performance loss. Tramèr and Boneh [38] improve on [1] using hand-crafted feature extraction techniques but also suggest that end-to-end differentially private training may be inherently difficult without additional data. Papernot et al. [33] and Dörmann et al. [16] also present improvements considering different activation functions and larger batch sizes. Finally, De et al. [13] achieve state-of-the-art model performance on CIFAR-10 both while training from scratch and in fine-tuning regimes; they do so by using large batch sizes, augmenting training data, and parameter averaging. Our work focuses on auditing realistic models with high utility that might be used in practice; thus, we audit models with utility similar to [13] and the same models pre-trained in [13] in the fine-tuning regime.
Membership Inference Attacks (MIAs).
Shokri et al. [35] propose the first black-box MIAs against machine learning models using shadow models. Carlini et al. [8] improve on this by fitting a Gaussian distribution on the loss of the target sample and using fewer models to achieve equivalent attack accuracies. Sablayrolles et al. [34] consider per-example thresholds to improve the performance of the attacks in a low false-positive rate setting [8]. By factoring in various uncertainties in the MIA “game,” Ye et al. [41] construct an optimal generic attack. Concurrent work [18, 27, 40] uses poisoned pre-trained models, which is somewhat similar to our approach of using worst-case initial model parameters, but with a different focus. Specifically, Liu et al. [27] and Wen et al. [40] show that privacy leakage from MIAs is amplified when pre-trained models are poisoned but, crucially, do not consider DP auditing. Additionally, while Feng and Tramèr [18] achieve tight audits with poisoned pre-trained models, their strategy is geared towards the last-layer-only fine-tuning regime. By contrast, we focus on auditing DP-SGD with full model training to estimate the empirical worst-case privacy leakage in the black-box setting.
Auditing DP-SGD.
Jayaraman and Evans [23] derive empirical privacy guarantees from DP-SGD, observing a large gap from the theoretical upper bounds. Jagielski et al. [22] use data poisoning in the black-box model to audit Logistic Regression and Fully Connected Neural Network models and produce tighter audits than using MIAs. They initialize models to fixed (average-case) initial parameters, but the resulting empirical privacy guarantees are still far from the theoretical bounds. They also adapt their procedure to the fine-tuning regime, again producing loose empirical estimates.
Nasr et al. [32] are the first to audit DP-SGD tightly; they do so by using adversarially crafted datasets and active white-box adversaries that insert canary gradients into the intermediate steps of DP-SGD. In follow-up work, [31] presents a tight auditing procedure also for natural datasets, again in the active white-box model. Zanella-Béguelin et al. [45] present tighter auditing procedures using Bayesian techniques, needing fewer models to audit larger privacy levels. Finally, De et al. [13] perform black-box auditing of their implementation of DP-SGD in the JAX framework [6]; they also use fixed (average-case) initialization of models as in [22], similarly achieving loose audits.
Overall, while prior work [13, 22, 32] showed that reducing the randomness in the initial model parameters results in tighter empirical privacy leakage estimates, the audits remained loose as the fixed model parameters were still initialized randomly. Specifically, designing an effective adversarial strategy that provides tight audits in the black-box setting without destroying utility has remained, to the best of our knowledge, an open problem.
3 Background
3.1 Differential Privacy (DP)
Definition 1 (Differential Privacy (DP) [17]).
A randomized mechanism $\mathcal{M} \colon \mathcal{D} \to \mathcal{R}$ is $(\varepsilon, \delta)$-differentially private if, for any two neighboring datasets $D, D' \in \mathcal{D}$ and any set of outputs $S \subseteq \mathcal{R}$, it holds:

$$\Pr[\mathcal{M}(D) \in S] \leq e^{\varepsilon} \Pr[\mathcal{M}(D') \in S] + \delta.$$
Put simply, DP guarantees a formal upper bound (constrained by the privacy parameter ) on the probability that any adversary observing the output of the mechanism can distinguish between two neighboring inputs to – i.e., two datasets differing in only one record.
3.2 Differentially Private Machine Learning (DPML)
DP-SGD.
Differentially Private Stochastic Gradient Descent (DP-SGD) [1] is a popular algorithm for training machine learning models while satisfying DP. In this paper, we consider classification tasks on samples from the domain $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is the features domain and $\mathcal{Y}$ is the labels domain.
DP-SGD, reviewed in Algorithm 1, introduces two hyper-parameters: the gradient clipping norm $C$ and the noise multiplier $\sigma$. Typically, $C$ is set to 1, while $\sigma$ is calibrated to the required privacy level based on the number of iterations, $T$, and the batch size, $B$. Early DP-SGD work [1] set batch sizes to be small and similar to those of non-private SGD; however, recent work has highlighted the advantage of large batch sizes [13, 16] by training state-of-the-art models with them. Following this trend, and to ease auditing, we use full batches ($B = n$), as also done in prior work [31]. Finally, while the initial model parameters $\theta_0$ are typically sampled randomly (e.g., using Xavier initialization [20]), they can also be set arbitrarily (and even adversarially) without affecting the privacy guarantees provided by DP-SGD.
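To make the training loop concrete, below is a minimal full-batch DP-SGD sketch in PyTorch. This is our own illustration rather than the implementation audited in this paper: the function and argument names are ours, and per-example gradients are computed with a naive loop instead of the vectorized machinery of Opacus [43] or JAX [6].

```python
import torch

def dp_sgd(model, loss_fn, X, y, T, lr, C, sigma):
    """Full-batch DP-SGD (B = n): clip per-example gradients to norm C,
    add Gaussian noise N(0, sigma^2 C^2), and take an averaged step."""
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(T):
        summed = [torch.zeros_like(p) for p in params]
        for xi, yi in zip(X, y):                              # per-example gradients
            model.zero_grad()
            loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
            grads = [p.grad.detach().clone() for p in params]
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(C / (norm + 1e-12), max=1.0)  # clip to norm C
            for s, g in zip(summed, grads):
                s.add_(g * scale)
        with torch.no_grad():
            for p, s in zip(params, summed):
                noise = torch.randn_like(p) * sigma * C       # Gaussian noise
                p.add_(-lr * (s + noise) / len(X))            # noisy average step
    return model
```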
Fine-tuning regime.
In this setting, one first pre-trains a model on a (large) public dataset and then fine-tunes it on the private data. When a suitable public dataset is available, prior work has shown that fine-tuning can achieve much higher utility than training from scratch using DP-SGD [13].
There are two common approaches to fine-tuning: updating all layers or only the last one. In both cases, the initial model parameters $\theta_0$ are set to the pre-trained model parameters (rather than randomly) before running DP-SGD. However, while the former updates all model parameters during DP-SGD, the latter “freezes” all parameters up to the last (fully connected) layer, which are not updated during DP-SGD. This setting treats the pre-trained model as a feature extractor and trains a Logistic Regression model using DP-SGD on the extracted features. As prior work [13] shows that this setting produces models with accuracy closer to the non-private state of the art (while also incurring smaller computational costs), we also fine-tune just the last layer, as sketched below.
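For illustration, last-layer-only fine-tuning can be set up in PyTorch roughly as follows. This is a sketch under the assumption that the pre-trained network exposes its final fully connected layer as an attribute we call `classifier`; the attribute and function names are hypothetical.

```python
import torch.nn as nn

def freeze_all_but_last(model: nn.Module, last_layer: nn.Linear) -> nn.Module:
    # Treat the pre-trained network as a frozen feature extractor: only the
    # last (fully connected) layer is updated by DP-SGD, i.e., a DP logistic
    # regression trained on the extracted features.
    for p in model.parameters():
        p.requires_grad = False
    for p in last_layer.parameters():
        p.requires_grad = True
    return model

# Example usage (assuming the last layer is named `classifier`):
# freeze_all_but_last(pretrained_model, pretrained_model.classifier)
```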
3.3 DP Auditing
As mentioned above, DP provides a theoretical limit on the adversary’s ability – bounded by the privacy parameter – to distinguish between $D$ and $D'$. With auditing, one assesses this limit empirically to verify whether the theoretical guarantees provided by the mechanism are also met in practice. There may be several reasons why this does not happen, e.g., the mechanism’s privacy analysis is not tight [32] or there are bugs in its implementation [31, 39]. Therefore, with DP mechanisms increasingly deployed in real-world settings, auditing them becomes increasingly important.
Deriving .
When auditing a DP mechanism, one runs it repeatedly on two neighboring inputs, $D$ and $D'$, times each, and the outputs are given to an adversary, who tries to distinguish between $D$ and $D'$; this determines a false positive rate (FPR) and a false negative rate (FNR). For a given number of outputs and confidence level, empirical upper bounds $\overline{\text{FPR}}$ and $\overline{\text{FNR}}$ can be computed using Clopper-Pearson confidence intervals [31, 32]. Finally, the empirical upper bounds are converted to an empirical lower bound using the privacy region of the mechanism (see below) and can be compared to the theoretical . We consider an audit tight if . To ease presentation, we abstract away the details and use EstimateEps to denote the process of estimating an empirical lower bound on $\varepsilon$ from a given FPR and FNR. (Please refer to https://github.com/spalabucr/bb-audit-dpsgd for details.)
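As a concrete, simplified sketch of the first step (ours, not the authors’ exact code), the one-sided Clopper-Pearson upper bounds on FPR and FNR can be computed from the observed error counts as follows:

```python
from scipy.stats import beta

def clopper_pearson_upper(k: int, n: int, alpha: float = 0.05) -> float:
    # One-sided (1 - alpha) upper confidence bound on a binomial proportion,
    # applied to the attack's observed false positives (or false negatives)
    # k out of n trials.
    return 1.0 if k >= n else float(beta.ppf(1 - alpha, k + 1, n - k))
```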
Privacy region of DP-SGD.
Given a mechanism, its privacy region defines the FPR and FNR values attainable by any adversary aiming to distinguish between $D$ and $D'$. Nasr et al. [31] note that the privacy region of DP-SGD corresponds (tightly) to that of $\mu$-Gaussian Differential Privacy ($\mu$-GDP) [15], which can be used to first compute a lower bound on $\mu$:

$$\mu_{\text{emp}} = \Phi^{-1}\big(1 - \overline{\text{FPR}}\big) - \Phi^{-1}\big(\overline{\text{FNR}}\big) \qquad (1)$$

where $\Phi$ denotes the standard normal CDF.
Then, a lower bound on $\mu$ corresponds to a lower bound on $\varepsilon$ by means of the following theorem:

Theorem 1 ($\mu$-GDP to $(\varepsilon, \delta)$-DP conversion [15]).

A mechanism is $\mu$-GDP if and only if it is $(\varepsilon, \delta(\varepsilon))$-DP for all $\varepsilon \geq 0$, where:

$$\delta(\varepsilon) = \Phi\Big(-\frac{\varepsilon}{\mu} + \frac{\mu}{2}\Big) - e^{\varepsilon}\,\Phi\Big(-\frac{\varepsilon}{\mu} - \frac{\mu}{2}\Big) \qquad (2)$$
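Putting Eqs. (1) and (2) together, a simplified sketch of the EstimateEps step (our rendering, not the authors’ exact code) first computes the GDP parameter from the FPR/FNR upper bounds and then numerically inverts the conversion to obtain an empirical $\varepsilon$ at a fixed $\delta$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def gdp_mu(fpr_bar: float, fnr_bar: float) -> float:
    # Eq. (1): empirical lower bound on the GDP parameter mu.
    return norm.ppf(1 - fpr_bar) - norm.ppf(fnr_bar)

def gdp_delta(eps: float, mu: float) -> float:
    # Eq. (2): delta(eps) for a mu-GDP mechanism.
    return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

def estimate_eps(fpr_bar: float, fnr_bar: float, delta: float) -> float:
    mu = gdp_mu(fpr_bar, fnr_bar)
    if mu <= 0:
        return 0.0
    # Invert delta(eps) = delta numerically (the bracket assumes a moderate mu).
    return brentq(lambda e: gdp_delta(e, mu) - delta, 0.0, 100.0)
```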
4 Auditing Procedure
In this section, we present our auditing procedure. We discuss the threat model in which we operate, the key intuition for using worst-case initial parameters, and how we craft them to maximize the empirical privacy leakage.
Threat model.
We consider a simple black-box model where the adversary only sees the final model parameters, , from DP-SGD. Specifically, unlike prior work on tight audits [31, 32], the adversary cannot observe intermediate model parameters or insert canary gradients. We do so to consider more realistic adversaries that may exist in practice. Additionally, we assume that the adversary can choose a worst-case target sample as is standard for auditing DP mechanisms [22]. Finally, we assume that the adversary can choose (possibly adversarial) initial model parameters. Note that this is not only an assumption allowed by DP-SGD’s privacy analysis, but, in the context of the popular public pre-training, private fine-tuning regime [13], this is also a reasonable assumption.
Auditing.
Our DP-SGD auditing procedure is outlined in Algorithm 2. As we operate in the black-box threat model, the procedure only makes calls to the DP-SGD algorithm – once again, the adversary cannot insert canary gradients or observe intermediate updates. As mentioned, the adversary can set the initial model parameters $\theta_0$, which are fixed for all models. Finally, we assume that the final model parameters are given to the adversary, who calculates the loss of the target sample as “observations.” These are then thresholded to compute FPRs and FNRs and estimate the empirical $\varepsilon$ using the EstimateEps process (see Section 3.3). This threshold, which we denote as $\tau$, must be computed on a separate set of observations (e.g., a validation set) for the estimate to constitute a technically valid lower bound. However, as any decision threshold for GDP is equally likely to maximize $\varepsilon$ [31], we follow common practice [29, 31, 32] and find the threshold that maximizes the value of $\varepsilon$, as sketched below.
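A sketch of this decision step, reusing the helpers from Section 3.3 and assuming (our choice) that membership is predicted when the target sample’s loss falls below the threshold; variable names are ours:

```python
import numpy as np

def audit_from_losses(losses_in, losses_out, delta, alpha=0.05):
    # losses_in: target-sample losses of models trained WITH the target sample;
    # losses_out: target-sample losses of models trained WITHOUT it.
    # Sweep candidate thresholds and keep the one maximizing epsilon,
    # following common practice [29, 31, 32].
    losses_in, losses_out = np.asarray(losses_in), np.asarray(losses_out)
    best = 0.0
    for tau in np.concatenate([losses_in, losses_out]):
        fp = int(np.sum(losses_out <= tau))   # non-members flagged as members
        fn = int(np.sum(losses_in > tau))     # members that are missed
        fpr_bar = clopper_pearson_upper(fp, len(losses_out), alpha)
        fnr_bar = clopper_pearson_upper(fn, len(losses_in), alpha)
        best = max(best, estimate_eps(fpr_bar, fnr_bar, delta))
    return best
```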
Crafting worst-case initial parameters.
Recall that DP-SGD’s privacy analysis holds not only for randomly initialized models but also for arbitrary fixed parameters. Indeed, this was noted in prior work [13, 22, 31] and led to using initial model parameters that are fixed for all models. In our work, we consider worst-case, rather than average-case, initial parameters that minimize the gradients of normal samples in the dataset. This makes the target sample’s gradient much more distinguishable. To do so, we pre-train the model using non-private SGD on an auxiliary dataset that follows the same distribution as the target dataset. Specifically, for MNIST, we pre-train the model on half of the full dataset (reserving the other half to be used for DP-SGD) for 5 epochs with a batch size of 32 and a learning rate of 0.01. For CIFAR-10, we first pre-train the model on the CIFAR-100 dataset for 300 epochs with batch size 128 and a learning rate of 0.1. Then, we (non-privately) fine-tune the model on half of the full dataset for 100 epochs, with a batch size of 256 and a learning rate of 0.1. Ultimately, we show that the gradients of normal samples are indeed minimized, achieving significantly tighter audits of DP-SGD in the black-box model.
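A sketch of this pre-training step for MNIST (non-private SGD on the auxiliary half of the dataset for 5 epochs, batch size 32, learning rate 0.01, as described above); the function name and the use of cross-entropy loss are our choices:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def craft_worst_case_init(model, aux_dataset, epochs=5, batch_size=32, lr=0.01):
    # Non-private pre-training on the auxiliary dataset so that the gradients
    # of "normal" samples are small, making the target sample's gradient
    # stand out once DP-SGD starts from these parameters.
    loader = DataLoader(aux_dataset, batch_size=batch_size, shuffle=True)
    opt = optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Used as the fixed initial parameters for every audited model.
    return model.state_dict()
```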
5 Experiments
In this section, we evaluate our auditing procedure using image datasets commonly used to benchmark differentially private classifiers and CNNs that achieve reasonable utility on these datasets.
Due to computational constraints, we only train 200 models (100 with the target sample and 100 without) to determine the empirical privacy estimates in the full model training setting, and train models for the last-layer-only fine-tuning setting. In theory, we could achieve tighter audits by increasing the number of models; however, this would require significantly more computational resources and time (see below). Since our work focuses on the (broader) research questions of whether tight audits are possible in the black-box setting and how various factors affect this tightness, we believe this provides a reasonable tradeoff between tightness and computational overhead.
In our evaluation, we report lower bounds with 95% confidence (Clopper-Pearson [12]) and report the mean and standard deviation values of over five independent runs. (The source code needed to reproduce our experiments is available from https://github.com/spalabucr/bb-audit-dpsgd.)
5.1 Experimental Setup
Datasets.
We experiment with the MNIST [26] and CIFAR-10 [25] datasets. The former includes 60,000 training and 10,000 testing 28x28 grayscale images of hand-written digits in one of ten classes; often referred to as a “toy dataset,” it is commonly used to benchmark DP machine learning models. CIFAR-10 is a more complex dataset containing 50,000 training and 10,000 testing 32x32 RGB images, also spanning ten classes. Note that the accuracy of DP models trained on CIFAR-10 has only recently started to approach non-private model performance [13, 38]. Whether on MNIST or CIFAR-10, prior black-box audits (using average-case initial parameters) have not been tight. More precisely, for theoretical , De et al. [13] and Nasr et al. [31] only achieve empirical estimates of and , respectively, on MNIST and CIFAR-10.
To ensure a fair comparison between the average-case and worst-case initial parameter settings, we split the training data in two and privately train on only half of each dataset (30,000 images for MNIST and 25,000 for CIFAR-10). The other half of the training dataset is used as the auxiliary dataset to non-privately pre-train the models and craft the worst-case initial parameters (see Section 4). We note that for CIFAR-10, we additionally augment the pre-training dataset with the CIFAR-100 dataset to boost the model accuracy further (see Appendix A for further details).
Models.
When training the model fully, we use a (shallow) Convolutional Neural Network (CNN) for both datasets. We do so because we find accuracy to be poor with models like Logistic Regression and Fully Connected Neural Networks, while training hundreds of deep neural networks is impractical due to the large computational cost. Therefore, we draw on prior work by Dörmann et al. [16] and train shallow CNNs that achieve acceptable model accuracy while being relatively fast to train (we report the exact model architectures in Appendix A). For the last-layer-only fine-tuning setting, by contrast, we follow [13] and use the Wide-ResNet (WRN-28-10) [44] model pre-trained on ImageNet-32 [14].
Experimental Testbed.
All our experiments are run on a cluster using 4 NVIDIA A100 GPUs, 64 CPU cores, and 100GB of RAM. Auditing just one model took approximately 16 GPU hours on MNIST and 50 GPU hours on CIFAR-10.
5.2 Full Model Training
Table 1: Accuracy (%) of the final models at increasing theoretical ε.

| Dataset | | | | |
|---|---|---|---|---|
| MNIST | 95.4 | 95.7 | 95.9 | 95.9 |
| CIFAR-10 | 46.0 | 51.2 | 52.2 | 53.6 |
To train models with an acceptable accuracy on MNIST and CIFAR-10, we first tune the hyper-parameters (i.e., learning rate and number of iterations ); due to space limitations, we defer discussion to Appendix A.
Model Accuracy.
In Table 1, we report the accuracy of the final models for the different values of -s we experiment with. While accuracy on MNIST is very good ( for all ), for CIFAR-10, we are relatively far from the state of the art ( at vs. at in [13]). This is due to two main reasons. First, as we need to train hundreds of models, we are computationally limited to using shallow CNNs rather than deep neural networks. Second, we need to consider a much larger batch size to ease auditing [31]; thus, the models converge much more slowly. In Appendix B, we show that, given enough iterations (), the model reaches an accuracy of at , almost approaching state-of-the-art. As a result, we audit models striking a reasonable trade-off between utility and computational efficiency, as discussed above.
Impact of initialization.
In Figure 1, we compare the difference in empirical privacy leakage between worst-case initial parameters and average-case parameters (as done in prior work [13, 22, 31]). In the latter, all models are initialized to the same randomly chosen parameters (using Xavier initialization [20]), while, in the former, they are initialized to the parameters of a model pre-trained on an auxiliary dataset. In the rest of this section, the target sample is set to the blank sample, which is known to produce the tightest audits in previous work [13, 31].
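For reference, the blank canary and the average-case (Xavier) initialization can be set up as follows; this is a sketch, and the label of the blank sample and the seed are arbitrary choices of ours:

```python
import torch
from torch import nn

# Blank (all-zero) canary, reported to yield the tightest audits [13, 31].
x_target = torch.zeros(1, 1, 28, 28)   # blank MNIST image (1x3x32x32 for CIFAR-10)
y_target = torch.tensor([0])

def average_case_init(model, seed=0):
    # The same fixed, randomly chosen initial parameters for all models,
    # as done in prior work [13, 22, 31].
    torch.manual_seed(seed)
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.xavier_uniform_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
    return model.state_dict()
```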
With worst-case initial parameters, we achieve significantly tighter audits for both MNIST and CIFAR-10. Specifically, at , the empirical privacy leakage estimates reach and for MNIST and CIFAR-10, respectively. In comparison, with average-case initial parameters as in prior work, the empirical privacy leakage estimates were significantly looser, reaching only and for MNIST and CIFAR-10, respectively. Notably, at larger privacy levels and , for both MNIST and CIFAR-10, the empirical privacy leakage estimates when auditing with worst-case initial parameters are not only larger than the average-case, but are outside of the s.d. range. This shows that our auditing procedure not only produces significantly tighter audits than prior work, especially at larger s, but also that, at lower -s, our evaluations are currently limited by the number of models used to audit (resulting in large s.d.).
To shed light on why worst-case initial parameters improve tightness, we vary the amount of pre-training done to craft and, in Figure 2, plot the corresponding -s and the average gradient norm of all samples (after clipping) in the first iteration of DP-SGD. For computational efficiency, we focus on MNIST and set the theoretical to . Recall that, regardless of the initial model parameters, all models achieve accuracy on the test set. As the number of pre-training epochs increases from 0 (no pre-training) to 5, the average gradient norm steadily decreases from to , as expected. On the other hand, the empirical privacy leakage increases from to . In other words, with more pre-training, the impact of the other samples in the dataset reduces, making the target sample much more distinguishable, thus yielding tighter audits. Although there appears to be an anomaly in the obtained when the model is pre-trained for 3 epochs, this is within the standard deviation.
Remarks. With lower values, the empirical estimates are still relatively far from the theoretical bounds. This is because, in each run, we train a small number of models and report the average across all five runs. In fact, the maximum obtained over the five runs amounts to and , respectively, on MNIST and CIFAR-10, for . Nevertheless, even on the averages, our audits are still appreciably tighter than prior work – approximately by a factor of 3 for MNIST [13] and CIFAR-10 [31]. Overall, this confirms that the tightness of the audits can vary across datasets, possibly due to differences in the difficulty of the classification task. For instance, MNIST classification is known to be easy, with DP-SGD already reaching close to non-private model performance, even at low -s. By contrast, CIFAR-10 is much more challenging, with a considerable gap still existing between private and non-private model accuracy [13].
Impact of size of dataset.
In the auditing literature, the empirical privacy leakage is commonly estimated on (smaller) dataset samples, both to achieve tighter audits and for computational efficiency [13]. In Figure 3, we evaluate the impact of the dataset size () on the tightness of auditing. While smaller datasets generally yield tighter audits, the size’s impact also depends on the dataset itself. For instance, at , auditing with results in and , for MNIST and CIFAR-10, respectively, compared to and on the full dataset. In other words, with MNIST, the dataset size does not significantly affect tightness. However, for CIFAR-10, smaller datasets substantially improve the audit’s tightness.
Sensitivity to clipping norm.
Finally, we evaluate the impact of the gradient clipping norm on the tightness of black-box audits. In the black-box setting, unlike in white-box, the adversary cannot arbitrarily insert gradients of the same scale as the gradient clipping norm $C$ [31]. Thus, they are restricted to gradients that naturally arise from samples. In Figure 4, we report the values obtained with varying clipping norms $C$. For computational efficiency, the results on CIFAR-10 are for models trained on datasets of size .
When the gradient clipping norm is small (), the audits are tighter. This is because the gradient norms are typically very small in the worst-case initial parameter setting, leading to a much higher “signal-to-noise” ratio. While this might suggest that black-box audits may not be as effective with larger gradient clipping norms, we observe that, with , model accuracy is poor too—more precisely, with , accuracy reaches , respectively, compared to with on MNIST. This indicates that audits are looser with because the model is not learning effectively, as there is too much noise. In fact, this could suggest that the privacy analysis for the setting could be further tightened in the black-box threat model, which we leave to future work.
5.3 Fine-Tuning the Last Layer Only
We now experiment with the setting where a model is first non-privately pre-trained on a large public dataset, and only the last layer is fine-tuned on the private dataset. For this setting, we choose the WRN-28-10 model pre-trained on ImageNet-32 and fine-tune the last layer on CIFAR-10 using DP-SGD, as done in [13]. At , fine-tuning the last layer yields model accuracies of and for average-case and worst-case initial model parameters, respectively. In this section, we run the ClipBKD procedure from [22] to craft a target sample, as it has been shown to produce tighter empirical privacy guarantees for Logistic Regression models.
Once again, we compare the tightness of the audits with respect to the initial model parameters. In Figure 5, we report the empirical privacy leakage estimates obtained at various levels of when the last layer is initialized to average- and worst-case (through pre-training) initial model parameters. Similar to the full model training setting, model initialization can impact audit tightness, albeit not as significantly. Specifically, at , we obtain and , respectively, when the last layers are initialized to average- and worst-case initial model parameters.
Overall, the audits in this setting are considerably tight, and increasingly so for larger , as we obtain empirical privacy leakage estimates of for , respectively. The maximum across five independent runs is also much tighter, namely, for , respectively. Although the maximum exceeding the theoretical at 1.0, 2.0, and 4.0 may suggest a DP violation, it is, in fact, within the standard deviation of and , respectively.
We also evaluated the impact of the dataset size and clipping norm in this setting; however, these factors do not significantly impact audit tightness, as this is a much simpler setting compared to training a CNN model fully. For more details, please see Appendix C.
6 Conclusion
This paper presented a novel auditing procedure for Differentially Private Stochastic Gradient Descent (DP-SGD). By crafting worst-case initial model parameters, we achieved empirical privacy leakage estimates substantially tighter than prior work for DP-SGD in the black-box model and for natural (i.e., not adversarially crafted) datasets. At , we achieve nearly tight estimates of and for datasets consisting of 1,000 samples from MNIST and CIFAR-10, respectively. While we achieve slightly weaker estimates of and on the full datasets, these still represent a roughly 3x improvement over the estimates achieved in prior work [13, 31].
Naturally, our work is not without limitations – the main one being the computational cost of auditing. Black-box auditing typically requires training hundreds of models to empirically estimate FPRs/FNRs with good accuracy and confidence. This can take hundreds of GPU hours, even for the shallow CNNs we audit in our work. Thus, auditing deep neural networks (e.g., WideResNet) trained on large datasets (e.g., ImageNet) may be computationally challenging for entities that are not huge corporations. However, as our main objective is addressing the open research question of whether tight DP-SGD audits are feasible in the black-box model, we focus on shallow models. Our results are very promising as, thanks to the intuition of using worst-case initial parameters, we do achieve nearly tight audits. Nevertheless, we leave it to future work to reduce the computational cost of the auditing procedure, e.g., by training significantly fewer models.
Relatedly, recent work [3, 37] has begun to focus on auditing within a single training run, although it has thus far achieved only loose empirical estimates. One interesting direction would be to combine the insights from this paper (i.e., using worst-case initial model parameters) with one-shot auditing techniques. However, this may not be trivial as these techniques typically employ a large number of canary samples, each with large gradient norms, which can potentially interfere with our goal to reduce the gradient norm of “other” samples.
Furthermore, we only consider full batch gradient descent (), i.e., without sub-sampling. Note that this is standard practice and eases auditing by enabling GDP auditing, which requires fewer models and is much less computationally intensive compared to auditing DP-SGD with sub-sampling using Privacy Loss Distributions (PLDs) [31]. Nevertheless, when auditing DP-SGD with sub-sampling in the “hidden-state” threat model where the adversary can only observe the final trained model but can insert gradient canaries, recent work has shown that there is a significant gap between the theoretical upper bound and the empirical lower bound privacy leakage achieved [10]. Even though this suggests a privacy amplification phenomenon in the hidden-state setting (which would extend to black-box as well), for general non-convex loss functions, prior work has also shown that the privacy analysis of DP-SGD with sub-sampling is tight even in the hidden-state [4]. Therefore, whether audits can be tight in the black-box threat model for DP-SGD with sub-sampling under realistic loss functions and datasets remains an open question for future work.
Acknowledgments. This work has partly been supported by a National Science Scholarship (PhD) from the Agency for Science Technology and Research, Singapore (A*STAR). We also wish to thank Jamie Hayes for providing ideas and feedback throughout the project.
References
- [1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep Learning with Differential Privacy. In CCS, 2016.
- [2] J. M. Abowd, R. Ashmead, R. Cumings-Menon, S. Garfinkel, M. Heineck, C. Heiss, R. Johns, D. Kifer, P. Leclerc, A. Machanavajjhala, et al. The 2020 census disclosure avoidance system topdown algorithm. Harvard Data Science Review, 2022.
- [3] G. Andrew, P. Kairouz, S. Oh, A. Oprea, H. B. McMahan, and V. Suriyakumar. One-shot Empirical Privacy Estimation for Federated Learning. In ICLR, 2024.
- [4] M. S. M. S. Annamalai. It’s Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss. In AISec, 2024.
- [5] Apple Differential Privacy Team. Learning with Privacy at Scale. https://docs-assets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf, 2017.
- [6] B. Balle, L. Berrada, S. De, S. Ghalebikesabi, J. Hayes, A. Pappu, S. L. Smith, and R. Stanforth. JAX-Privacy: Algorithms for Privacy-Preserving Machine Learning in JAX. https://github.com/google-deepmind/jax_privacy, 2022.
- [7] B. Bichsel, S. Steffen, I. Bogunovic, and M. Vechev. DP-Sniper: Black-Box Discovery of Differential Privacy Violations using Classifiers. In IEEE S&P, 2021.
- [8] N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis, and F. Tramer. Membership Inference Attacks From First Principles. In IEEE S&P, 2022.
- [9] T. Cebere. Privacy Leakage at low sample size. https://github.com/pytorch/opacus/issues/571, 2023.
- [10] T. Cebere, A. Bellet, and N. Papernot. Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model. arXiv:2405.14457, 2024.
- [11] R. Chourasia, J. Ye, and R. Shokri. Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent. NeurIPS, 2021.
- [12] C. J. Clopper and E. S. Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 1934.
- [13] S. De, L. Berrada, J. Hayes, S. L. Smith, and B. Balle. Unlocking High-Accuracy Differentially Private Image Classification through Scale. arXiv:2204.13650, 2022.
- [14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
- [15] J. Dong, A. Roth, and W. J. Su. Gaussian Differential Privacy. arXiv:1905.02383, 2019.
- [16] F. Dörmann, O. Frisk, L. N. Andersen, and C. F. Pedersen. Not All Noise is Accounted Equally: How Differentially Private Learning Benefits from Large Sampling Rates. In International Workshop on Machine Learning for Signal Processing, 2021.
- [17] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating Noise to Sensitivity in Private Data Analysis. In Theory of Cryptography, 2006.
- [18] S. Feng and F. Tramèr. Privacy Backdoors: Stealing Data with Corrupted Pretrained Models. In ICML, 2024.
- [19] A. Gadotti, F. Houssiau, M. S. M. S. Annamalai, and Y.-A. de Montjoye. Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple’s Count Mean Sketch in Practice. In USENIX Security, 2022.
- [20] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
- [21] Google. TensorFlow Privacy. https://github.com/tensorflow/privacy, 2019.
- [22] M. Jagielski, J. Ullman, and A. Oprea. Auditing Differentially Private Machine Learning: How Private is Private SGD? In NeurIPS, 2020.
- [23] B. Jayaraman and D. Evans. Evaluating differentially private machine learning in practice. In USENIX Security, 2019.
- [24] M. Johnson. fix prng key reuse in differential privacy example. https://github.com/google/jax/pull/3646, 2020.
- [25] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf, 2009.
- [26] Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
- [27] R. Liu, T. Wang, Y. Cao, and L. Xiong. PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps. In CCS, 2024.
- [28] J. Lokna, A. Paradis, D. I. Dimitrov, and M. Vechev. Group and Attack: Auditing Differential Privacy. In CCS, 2023.
- [29] S. Maddock, A. Sablayrolles, and P. Stock. CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning. In ICLR, 2023.
- [30] Microsoft. IOM and Microsoft release first-ever differentially private synthetic dataset to counter human trafficking. https://www.microsoft.com/en-us/research/blog/iom-and-microsoft-release-first-ever-differentially-private-synthetic-dataset-to-counter-human-trafficking/, 2022.
- [31] M. Nasr, J. Hayes, T. Steinke, B. Balle, F. Tramèr, M. Jagielski, N. Carlini, and A. Terzis. Tight Auditing of Differentially Private Machine Learning. In USENIX Security, 2023.
- [32] M. Nasr, S. Song, A. Thakurta, N. Papernot, and N. Carlini. Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. In IEEE S&P, 2021.
- [33] N. Papernot, A. Thakurta, S. Song, S. Chien, and Ú. Erlingsson. Tempered Sigmoid Activations for Deep Learning with Differential Privacy. In AAAI, 2021.
- [34] A. Sablayrolles, M. Douze, C. Schmid, Y. Ollivier, and H. Jégou. White-box vs Black-box: Bayes Optimal Strategies for Membership Inference. In ICML, 2019.
- [35] R. Shokri, M. Stronati, C. Song, and V. Shmatikov. Membership Inference Attacks against Machine Learning Models. In IEEE S&P, 2017.
- [36] T. Stadler, B. Oprisanu, and C. Troncoso. Synthetic Data – Anonymisation Groundhog Day. In USENIX Security, 2022.
- [37] T. Steinke, M. Nasr, and M. Jagielski. Privacy Auditing with One (1) Training Run. NeurIPS, 2024.
- [38] F. Tramer and D. Boneh. Differentially Private Learning Needs Better Features (or Much More Data). In ICLR, 2021.
- [39] F. Tramer, A. Terzis, T. Steinke, S. Song, M. Jagielski, and N. Carlini. Debugging Differential Privacy: A Case Study for Privacy Auditing. arXiv:2202.12219, 2022.
- [40] Y. Wen, L. Marchyok, S. Hong, J. Geiping, T. Goldstein, and N. Carlini. Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models. NeurIPS, 2024.
- [41] J. Ye, A. Maddi, S. K. Murakonda, V. Bindschaedler, and R. Shokri. Enhanced Membership Inference Attacks against Machine Learning Models. In CCS, 2022.
- [42] J. Ye and R. Shokri. Differentially Private Learning Needs Hidden State (Or Much Faster Convergence). NeurIPS, 2022.
- [43] A. Yousefpour, I. Shilov, A. Sablayrolles, D. Testuggine, K. Prasad, M. Malek, J. Nguyen, S. Ghosh, A. Bharadwaj, J. Zhao, et al. Opacus: User-friendly Differential Privacy Library in PyTorch. arXiv:2109.12298, 2021.
- [44] S. Zagoruyko and N. Komodakis. Wide Residual Networks. The British Machine Vision Conference, 2016.
- [45] S. Zanella-Béguelin, L. Wutschitz, S. Tople, A. Salem, V. Rühle, A. Paverd, M. Naseri, B. Köpf, and D. Jones. Bayesian Estimation of Differential Privacy. In ICML, 2023.
Appendix A Model Architectures and Hyper-parameters
In this work, we use the shallow Convolutional Neural Networks used by Dörmann et al. [16], who make minor modifications to the CNNs used earlier by Tramèr and Boneh [38] to achieve good model utilities on MNIST and CIFAR-10. Exact model architectures for MNIST and CIFAR-10 are reported in Tables 2 and 3, respectively.
Layer | Parameters |
---|---|
Convolution | 16 filters of 5x5 |
Max-Pooling | 2x2 |
Convolution | 32 filters of 4x4 |
Max-Pooling | 2x2 |
Fully connected | 32 units |
Fully connected | 10 units |
Layer | Parameters |
---|---|
Convolution | 32 filters of 3x3, stride 1, padding 1 |
Convolution | 32 filters of 3x3, stride 1, padding 1 |
Max-Pooling | 2x2, stride 2, padding 0 |
Convolution | 64 filters of 3x3, stride 1, padding 1 |
Convolution | 64 filters of 3x3, stride 1, padding 1 |
Max-Pooling | 2x2, stride 2, padding 0 |
Convolution | 128 filters of 3x3, stride 1, padding 1 |
Convolution | 128 filters of 3x3, stride 1, padding 1 |
Max-Pooling | 2x2, stride 2, padding 0 |
Fully connected | 128 units |
Fully connected | 10 units |
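For convenience, the Table 2 (MNIST) architecture rendered in PyTorch. Strides, padding, and activation functions are not specified in the table, so stride 1, no padding, and Tanh activations (as in Tramèr and Boneh [38]) are our assumptions:

```python
import torch.nn as nn

mnist_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5), nn.Tanh(),   # 16 filters of 5x5
    nn.MaxPool2d(2),                              # 2x2 max-pooling
    nn.Conv2d(16, 32, kernel_size=4), nn.Tanh(),  # 32 filters of 4x4
    nn.MaxPool2d(2),                              # 2x2 max-pooling
    nn.Flatten(),
    nn.LazyLinear(32), nn.Tanh(),                 # fully connected, 32 units
    nn.Linear(32, 10),                            # fully connected, 10 units
)
```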
Hyper-parameter tuning.
To achieve the best model utilities, we tune the hyper-parameters of DP-SGD (i.e., the number of iterations $T$ and the learning rate $\eta$). For MNIST, we train for iterations, with a learning rate of for both average-case and worst-case initial model parameters. For CIFAR-10, we train for iterations, with a learning rate of for the average-case initial model parameter setting. For the worst-case initial model parameter setting, we use a learning rate of instead, as this resulted in the best model performance for CIFAR-10. When fine-tuning just the last layer, we train for iterations, with a learning rate of . Note that, for the experiments varying the dataset size, the learning rate was scaled accordingly so that the size of the dataset does not affect the “influence” of each sample (i.e., ). All clipping norms are set to as done in prior work [13], and batch sizes are set to the dataset size to ease auditing [31]. Finally, the noise multiplier is calculated from the batch size and the number of iterations using the Privacy Loss Random Variable (PRV) accountant provided by Opacus [43].
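For illustration, the noise multiplier can be obtained with Opacus’s `get_noise_multiplier` helper roughly as below; the target ε and δ values shown are placeholders (not the ones used in the paper), and the keyword arguments reflect recent Opacus versions:

```python
from opacus.accountants.utils import get_noise_multiplier

T = 100  # number of DP-SGD iterations (illustrative value)

# Full-batch DP-SGD means a sampling rate of B / n = 1.
sigma = get_noise_multiplier(
    target_epsilon=10.0,   # placeholder theoretical epsilon
    target_delta=1e-5,     # placeholder delta
    sample_rate=1.0,
    steps=T,
    accountant="prv",      # Privacy loss Random Variable accountant [43]
)
```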
Appendix B Model Convergence
In Figure 6, we plot the test accuracies obtained when training the CNN model on the CIFAR-10 dataset for a varying number of iterations $T$ and learning rates $\eta$, at a fixed theoretical . Specifically, we train with full batch gradient descent (i.e., $B = n$) and re-calculate the noise multiplier for each $T$ and $\eta$ such that all points in the figure correspond to the same privacy level.
First, we observe that and result in comparable test accuracies that increase with the number of iterations, reaching close to 70% accuracy at ( and , respectively). Therefore, even though at and , the models we audit only achieve accuracy, this is a computational limitation, and the model can, in fact, achieve good model utilities given enough iterations. However, increasing the learning rate to results in poor model performance.
In their original work, Dörmann et al. achieve a test accuracy of 70.1% at by training the model with a batch size of 8,500 (sampling rate, ). However, following prior work [31], we choose not to consider sub-sampling as this makes tight auditing difficult. This is because the theoretical privacy region of the sub-sampled Gaussian mechanism (the underlying mechanism for DP-SGD with sub-sampling) is loose compared to the empirical privacy region observed, thus making audits in this setting inherently weaker.
Appendix C Last layer only fine-tuning
Finally, we evaluate the impact of the clipping norm and dataset size on the tightness of auditing in the last layer only fine-tuning setting. Overall, we find that these factors have little to no impact, as tight audits are achieved in the most difficult settings considered.
In Figures 7(a) and 7(b), we report the empirical privacy estimates obtained when using different gradient clipping norms and when training on datasets with different sizes, respectively. While there may be some minor differences between the empirical obtained in different settings, overall, the gradient clipping norm and dataset size do not significantly affect tightness. We believe this is because it is much simpler to fine-tune the last (Logistic Regression) layer of a CNN than to train the full model. Here, the empirical privacy leakage is already maximized ( for theoretical ) in the most difficult setting (, ). Therefore, “relaxing the setting” by using smaller clipping norms and dataset sizes does not significantly improve the privacy leakage.