Differentially Private Adaptation of Diffusion Models via
Noisy Aggregated Embeddings
Abstract
We introduce a novel method for adapting diffusion models under differential privacy (DP) constraints, enabling privacy-preserving style and content transfer without fine-tuning model weights. Traditional approaches to private adaptation, such as DP-SGD, incur significant computational and memory overhead when applied to large, complex models. In addition, when adapting to small, specialized datasets, DP-SGD requires injecting a large amount of noise, which significantly degrades performance. Our approach instead leverages an embedding-based technique derived from Textual Inversion (TI), adapted with differentially private mechanisms. We apply TI to Stable Diffusion for style adaptation using two private datasets: a collection of artworks by a single artist and pictograms from the Paris 2024 Olympics. Experimental results show that the TI-based adaptation achieves superior fidelity in style transfer, even under strong privacy guarantees.

1 Introduction
In recent years, diffusion models (Ho et al., 2020; Song et al., 2021b), particularly latent diffusion models (Rombach et al., 2022), have spearheaded high-quality text-to-image generation and have been widely adopted by researchers and the general public alike. Trained on massive datasets like LAION-5B (Schuhmann et al., 2022), these models have developed a broad understanding of visual concepts, enabling new creative and practical applications. Notably, tools such as Stable Diffusion (Rombach et al., 2022; Podell et al., 2023) have been made readily accessible for general use. Building on this foundation, efficient adaptation methods such as parameter-efficient fine-tuning (PEFT) (Hu et al., 2022; von Platen et al., 2022; Ruiz et al., 2023), guidance-based approaches (Ho & Salimans, 2021; Kim et al., 2022; Bansal et al., 2024), and pseudo-word generation (Gal et al., 2023) enable users to leverage this extensive pretraining to customize models that specialize on downstream tasks with smaller datasets.
The rapid adoption of diffusion models has raised significant privacy and legal concerns. These models are vulnerable to privacy attacks, such as membership inference (Duan et al., 2023), where attackers determine if a specific data point was used for training, and data extraction (Carlini et al., 2023), which enables reconstruction of training data. This risk is amplified during fine-tuning on smaller, domain-specific datasets, where each record has a greater impact. Additionally, reliance on large datasets scraped without consent raises copyright concerns (Vyas et al., 2023), as diffusion models can reproduce original artworks without credit or compensation. These issues highlight the urgent need for privacy-preserving technologies and clearer ethical and legal guidelines for generative models.
Differential privacy (DP) (Dwork et al., 2006; Dwork, 2006) is a widely adopted approach to addressing these challenges, where controlled noise is added during training or inference to prevent information leakage from individual data points while still enabling the model to learn effectively from the overall dataset. One standard approach for ensuring DP in deep learning is Differentially Private Stochastic Gradient Descent (DP-SGD) (Abadi et al., 2016), which modifies traditional SGD by adding noise to clipped gradients.
However, applying DP-SGD to train diffusion models poses several challenges. It introduces significant computational and memory overhead due to per-sample gradient clipping (Hoory et al., 2021), which is essential for bounding gradient sensitivity (Dwork et al., 2006; Abadi et al., 2016). DP-SGD is also incompatible with batch-wise operations like batch normalization, as these link samples and hinder sensitivity analysis. Furthermore, training large models with DP-SGD often leads to substantial performance degradation, particularly under realistic privacy budgets, since the required noise scales with the gradient norm. Consequently, existing diffusion models trained with DP-SGD are limited to relatively small-scale images (Dockhorn et al., 2023; Ghalebikesabi et al., 2023).
As a result, recent research has focused on privacy-preserving strategies for adapting diffusion models without the need for full DP-SGD training. One approach adapts large, publicly pre-trained models to new domains under DP constraints, leveraging their representational strengths while reducing computational and memory costs (Ghalebikesabi et al., 2023). Similarly, PEFT methods like DP-LoRA (Yu et al., 2022) fine-tune a small subset of parameters, enabling efficient adaptation with lower privacy costs. Methods like DP-RDM (Lebensold et al., 2024) avoid direct model updates by using retrieval mechanisms that condition image generation on private data retrieved during inference. However, these alternative approaches often fail to capture fine stylistic detail, underscoring the challenges of balancing privacy, efficiency, and generative performance.
Independent of privacy concerns, Textual Inversion (TI) (Gal et al., 2023) provides an effective method for adapting diffusion models to specific styles or content without modifying the model. Instead, TI learns an external embedding vector that captures the style or content of a target image set, which is then incorporated into text prompts to guide the model’s outputs. A key advantage of TI is its ability to compress a style into a compact vector, reducing computational and memory demands while simplifying the application of privacy-preserving mechanisms, as privacy constraints can be applied directly to embeddings rather than the full model. Additionally, since TI avoids direct model optimization, it remains efficient and compatible with DP constraints on smaller datasets.
In this work, we propose a novel privacy-preserving adaptation method for smaller datasets, leveraging TI to avoid the extensive model updates required by DP-SGD or DP-PEFT approaches. Standard TI inherently compresses the target dataset into a single, low-dimensional vector, providing some obfuscation benefits, but it does not offer formal privacy guarantees. To address this limitation, we introduce a private variant of TI, called Differentially Private Aggregation via Textual Inversion (DPAgg-TI) and summarize it in Figure 1. Our method decouples interactions among samples by learning a separate embedding for each target image, which are then aggregated into a noisy centroid. This approach ensures efficient and secure adaptation to private datasets.
Our experiments demonstrate the effectiveness of DPAgg-TI, showing that TI remains robust in preserving stylistic fidelity even under privacy constraints. Applying our method to a private artwork collection by @eveismyname and to the Paris 2024 Olympics pictograms (Paris 2024), we show that DPAgg-TI captures nuanced stylistic elements while ensuring privacy. We observe a trade-off between privacy (controlled by the DP parameter ε) and image quality: lower ε reduces fidelity, but the target style is maintained under moderate noise. Subsampling further amplifies privacy by reducing sensitivity to individual data points, mitigating the impact of noise on image quality. This framework enables privacy-preserving adaptation of diffusion models to new styles and domains while protecting sensitive data.
Our contributions can be summarized as follows:
• We propose DPAgg-TI, which ensures privacy by learning separate embeddings for individual images and aggregating them into a noisy centroid.
• Our approach enables style adaptation without extensive model updates, reducing computational overhead while preserving privacy.
• We analyze the trade-off between privacy and image quality, showing that moderate noise maintains stylistic fidelity while protecting sensitive data.
• We validate our method on diverse datasets, demonstrating its effectiveness in capturing stylistic elements under privacy constraints.
2 Preliminaries and Related Work
2.1 Diffusion Models
Diffusion models (Ho et al., 2020; Song et al., 2021b, a; Rombach et al., 2022) leverage an iterative denoising process to generate high-quality images that align with a given conditional input from random noise. In text-to-image generation, this conditional input is based on a textual description (a prompt) that guides the model in shaping the image to reflect the content and style specified by the text. To convert the text prompt into a suitable conditional format, it is first broken down into discrete tokens, each representing a word or sub-word unit. These tokens are then converted into a sequence of embedding vectors that encapsulate the meaning of each token within the model’s semantic space. Next, these embeddings pass through a transformer text encoder, such as CLIP (Radford et al., 2021), outputting a single text-conditional vector that serves as the conditioning input. This vector is then incorporated at each denoising step, guiding the model to align the output image with the specific details outlined in the prompt.
The image generation process, also known as the reverse diffusion process, comprises T discrete timesteps and starts from pure Gaussian noise x_T. At each decreasing timestep t, the denoising model, which often utilizes a U-Net structure with cross-attention layers, takes a noisy image x_t and the text conditioning c as inputs and predicts the noise component ε_θ(x_t, t, c), where θ denotes the denoising model’s parameters. The predicted noise is then used to take a reverse diffusion step from x_t to x_{t−1}, iteratively refining the noisy image toward a coherent output that aligns with the text conditioning c.
The objective function for a text-conditioned diffusion model, given both the noisy image x_t and the text conditioning c, is typically the mean squared error (MSE) between the true noise ε and the predicted noise ε_θ(x_t, t, c). The denoising model is therefore trained by solving the following optimization problem:
\theta^{*} \;=\; \arg\min_{\theta}\; \mathbb{E}_{x,\, c,\, \varepsilon \sim \mathcal{N}(0, I),\, t}\Big[\, \big\|\varepsilon - \varepsilon_{\theta}(x_t, t, c)\big\|_2^2 \,\Big] \qquad (1)
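For concreteness, a minimal sketch of one Monte-Carlo estimate of this objective is shown below, assuming a PyTorch-style denoising network; `eps_model` and the cumulative noise schedule `alpha_bar` are illustrative names rather than any particular implementation.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(eps_model, x0, cond, alpha_bar):
    """One Monte-Carlo estimate of the denoising MSE objective in Eq. (1).

    eps_model : callable(x_t, t, cond) -> predicted noise (illustrative signature)
    x0        : clean images, shape (B, C, H, W)
    cond      : text-conditioning vectors for the batch
    alpha_bar : cumulative noise schedule, shape (T,)
    """
    B, T = x0.shape[0], alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)      # random timesteps
    eps = torch.randn_like(x0)                            # true noise
    a = alpha_bar[t].view(B, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps          # forward (noising) process
    eps_hat = eps_model(x_t, t, cond)                     # predicted noise
    return F.mse_loss(eps_hat, eps)
```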
Textual Inversion.
Textual Inversion (TI) (Gal et al., 2023) is an adaptation technique that enables personalization using a small dataset of typically 3-5 images. This approach essentially learns a new token that encapsulates the semantic meaning of the training images, allowing the model to associate specific visual features with a custom token.
To achieve this, TI trains a new token embedding, denoted v_*, representing a placeholder token, denoted S_*. During training, images are conditioned on phrases such as “A photo of S_*” or “A painting in the style of S_*”. However, unlike the fixed embeddings of typical tokens, v_* is a learnable parameter. Let c_θ(v_*) denote the text conditioning vector resulting from a prompt containing the token S_*. Through gradient descent, TI minimizes the diffusion model loss given in (1) with respect to v_*, while keeping the diffusion model parameters θ fixed, iteratively refining this embedding to capture the unique characteristics of the training images. The resulting optimal embedding v_* is formalized as follows:
v_{*} \;=\; \arg\min_{v}\; \mathbb{E}_{x,\, \varepsilon \sim \mathcal{N}(0, I),\, t}\Big[\, \big\|\varepsilon - \varepsilon_{\theta}\big(x_t,\, t,\, c_{\theta}(v)\big)\big\|_2^2 \,\Big] \qquad (2)
Hence, v_* represents an optimized placeholder token S_*, which can be employed in prompts such as “A photo of S_* floating in space” or “A drawing of a capybara in the style of S_*”, enabling the generation of personalized images that reflect the learned visual characteristics.
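The following sketch outlines the TI optimization loop under the assumption that the diffusion model is frozen and only the new embedding receives gradients; `loss_fn`, `sample_batch`, and `build_conditioning` are hypothetical callables standing in for the actual TI pipeline (e.g., the loss in Eq. (1) evaluated with a prompt containing the placeholder token).

```python
import torch

def textual_inversion(loss_fn, sample_batch, build_conditioning,
                      embed_dim=768, steps=3000, lr=5e-3):
    """Learn a single placeholder-token embedding v* as in Eq. (2).

    loss_fn            : callable(x0, cond) -> scalar diffusion loss (frozen model inside)
    sample_batch       : callable() -> batch of training images
    build_conditioning : callable(v) -> text conditioning for a prompt containing S*
    """
    v = torch.randn(embed_dim, requires_grad=True)   # learnable embedding v*
    opt = torch.optim.AdamW([v], lr=lr)
    for _ in range(steps):
        x0 = sample_batch()
        cond = build_conditioning(v)                 # inject v at the placeholder position
        loss = loss_fn(x0, cond)                     # Eq. (1); gradients flow only into v
        opt.zero_grad()
        loss.backward()
        opt.step()
    return v.detach()
```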
2.2 Differential Privacy
In this work, we adopt differential privacy (DP) (Dwork et al., 2006; Dwork, 2006) as our privacy framework. Over the past decade, DP has become the gold standard for privacy protection in both research and industry. It measures the stability of a randomized algorithm with respect to changes in an input instance, thereby quantifying the extent to which an adversary can infer the existence of a specific input based on the algorithm’s output.
Definition 2.1 ((Approximate) Differential Privacy).
For ε ≥ 0 and δ ∈ [0, 1], a randomized mechanism M satisfies (ε, δ)-DP if, for all neighboring datasets D, D′ which differ in a single record (i.e., d_H(D, D′) = 1, where d_H is the Hamming distance) and all measurable sets S in the range of M, we have that

\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta.
When δ = 0, we say M satisfies pure DP (or ε-DP).
To achieve DP, the Gaussian mechanism is often applied (Dwork et al., 2014; Balle & Wang, 2018), adding Gaussian noise scaled by the sensitivity of the function and the privacy parameters ε and δ. Specifically, noise with standard deviation σ = Δ₂f · √(2 ln(1.25/δ)) / ε is added to the output f(D) (Balle & Wang, 2018), where Δ₂f represents the ℓ₂-sensitivity of the target function f; in practice, we use a numerical privacy accountant such as (Balle & Wang, 2018; Mironov, 2017) to calibrate the noise. When the context is clear, we may omit the subscript 2. This mechanism enables a smooth privacy-utility tradeoff and is widely used in privacy-preserving machine learning, including in DP-SGD (Abadi et al., 2016), which applies Gaussian noise during model updates to achieve DP.
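As an illustration, the sketch below uses the classical calibration above (intended for ε ≤ 1) to privately release a vector-valued query; the analytic calibration of Balle & Wang (2018) would replace `classical_sigma` with a numerical search, and both function names are ours.

```python
import numpy as np

def classical_sigma(sensitivity, eps, delta):
    """Classical Gaussian-mechanism noise scale (intended for eps <= 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def gaussian_mechanism(value, sensitivity, eps, delta, rng=None):
    """Release a numpy array `value` with (eps, delta)-DP via additive Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    value = np.asarray(value, dtype=np.float64)
    sigma = classical_sigma(sensitivity, eps, delta)
    return value + rng.normal(0.0, sigma, size=value.shape)
```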
Privacy Amplification by Subsampling.
Subsampling is a standard technique in DP, where a full dataset of size n is first subsampled to m records without replacement (typically with m ≪ n) before the privatization mechanism (such as the Gaussian mechanism) is applied. Specifically, if a mechanism provides (ε, δ)-DP on a dataset of size m, then the composition of subsampling and that mechanism achieves (ε′, δ′)-DP with respect to the full dataset, where δ′ = (m/n)·δ and
\varepsilon' \;=\; \log\!\Big(1 + \frac{m}{n}\big(e^{\varepsilon} - 1\big)\Big) \qquad (3)
This result is well known (Steinke, 2022, Theorem 29), with tighter amplification bounds available for the Gaussian mechanism (Mironov, 2017).
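A minimal sketch of this bound (function name ours): for example, with n = 158 and m = 8, an (ε = 1, δ)-DP mechanism yields ε′ = ln(1 + (8/158)(e − 1)) ≈ 0.083 after subsampling.

```python
import numpy as np

def amplified_privacy(eps, delta, m, n):
    """Privacy of an (eps, delta)-DP mechanism applied to a without-replacement
    subsample of m out of n records (Eq. 3); p = m/n is the sampling rate."""
    p = m / n
    eps_prime = np.log(1.0 + p * np.expm1(eps))   # log(1 + p * (e^eps - 1))
    delta_prime = p * delta
    return eps_prime, delta_prime
```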
2.3 Differentially Private Adaptation of Diffusion Models
Recent advancements in applying DP to diffusion models have aimed to balance privacy preservation with the high utility of generative outputs. Dockhorn et al. (2023) proposed a Differentially Private Diffusion Model (DPDM) that enables privacy-preserving generation of realistic samples, setting a foundational approach for adapting diffusion processes using DP-SGD. Another common strategy involves training a model on a large public dataset, followed by differentially private fine-tuning on a private dataset, as explored by Ghalebikesabi et al. (2023). While effective in certain contexts, this approach raises privacy concerns, particularly around risks of information leakage during the fine-tuning phase (Tramèr et al., 2024).
In response to these limitations, various adaptation techniques have emerged. Although not specific to diffusion models, some methods focus on training models on synthetic data followed by DP-constrained fine-tuning, as in the VIP approach (Yu et al., 2024), which demonstrates the feasibility of applying DP in later adaptation stages. Other approaches explore differentially private learning of feature representations (Sander et al., 2024), aiming to distill private information into a generalized embedding space while maintaining DP guarantees. Although these adaptations are not yet implemented for diffusion models, they lay essential groundwork for developing secure and efficient privacy-preserving generative models.

3 Differentially Private Adaptation via Textual Inversion
TI is inherently parameter-efficient and offers certain privacy benefits, as information from an entire dataset of images is compressed into a single token embedding vector. This compression limits the model’s capacity to memorize specific images, making data extraction attacks difficult. However, this privacy benefit is merely heuristic and carries no formal guarantee, so TI may still be vulnerable to privacy attacks such as membership inference. An adaptation technique with similar efficiency but formal privacy guarantees is therefore desirable.
Let D = {x_i}_{i=1}^n denote a target dataset of n images whose characteristics we wish to privately adapt our image generation towards. Instead of training a single token embedding v_* on the entire dataset as in regular TI, we train a separate embedding v_i on each x_i to obtain a set of embeddings {v_1, …, v_n}, as illustrated in Figure 1. We can formalize the encoding process as follows:
v_i \;=\; \arg\min_{v}\; \mathbb{E}_{\varepsilon \sim \mathcal{N}(0, I),\, t}\Big[\, \big\|\varepsilon - \varepsilon_{\theta}\big((x_i)_t,\, t,\, c_{\theta}(v)\big)\big\|_2^2 \,\Big], \qquad i = 1, \dots, n \qquad (4)
Then, we aggregate the embeddings by computing their centroid. The purpose of this aggregation is to limit the sensitivity of the final output to each individual v_i. To provide DP guarantees, we also add isotropic Gaussian noise to the centroid. The resulting embedding vector v̄ is therefore defined as follows:
\bar{v} \;=\; \frac{1}{n}\sum_{i=1}^{n} v_i + z, \qquad z \sim \mathcal{N}(0, \sigma^2 I) \qquad (5)
where the minimum noise scale σ required to provide (ε, δ)-DP is given by the following expression based on Balle & Wang (2018, Theorem 1):
\sigma \;=\; \min\Big\{\sigma > 0 \;:\; \Phi\Big(\tfrac{\Delta}{2\sigma} - \tfrac{\varepsilon\sigma}{\Delta}\Big) - e^{\varepsilon}\,\Phi\Big(-\tfrac{\Delta}{2\sigma} - \tfrac{\varepsilon\sigma}{\Delta}\Big) \le \delta\Big\} \qquad (6)
Here Φ denotes the standard Gaussian CDF and Δ the ℓ₂-sensitivity of the aggregation. In the context of our problem, Δ is governed by how much the centroid can change when a single embedding v_i is replaced, which scales with the maximum embedding norm divided by the number of aggregated embeddings. Since our embedding vectors are directional, we can normalize each v_i to unit norm, which fixes Δ to a known constant.
The noisy centroid embedding v̄ can then be used to adapt the downstream image generation process. Similar to regular TI’s v_*, we can use v̄ to represent a new placeholder token S_* that can be incorporated into prompts for personalized image generation. While v̄ may not exactly solve the TI optimization problem presented in (2), it provides provable privacy guarantees, with only a minimal trade-off in how accurately it represents the style of the target dataset.
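A sketch of the noisy-centroid aggregation in (5), assuming the per-image embeddings have already been learned and that `sigma` has been calibrated to the target (ε, δ) via (6); the function name and the explicit normalization step are illustrative rather than taken from our codebase.

```python
import numpy as np

def dp_aggregate(embeddings, sigma, rng=None):
    """Noisy centroid over per-image TI embeddings (Eq. 5).

    embeddings : array of shape (n, d), one row per per-image embedding v_i
    sigma      : Gaussian noise scale calibrated to the target (eps, delta)
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(embeddings, dtype=np.float64)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # unit-normalize to bound sensitivity
    centroid = v.mean(axis=0)
    return centroid + rng.normal(0.0, sigma, size=centroid.shape)
```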
To reduce the amount of noise needed to provide the same level of DP, we employ subsampling: instead of computing the centroid over all n embedding vectors, we randomly sample m embedding vectors without replacement and compute the centroid over only the sampled vectors. The standard privacy amplification by subsampling bounds (such as (3)) can then be applied. Formally, we sample S ⊆ {1, …, n} with |S| = m, and compute the output embedding as follows:
\bar{v} \;=\; \frac{1}{m}\sum_{i \in S} v_i + z, \qquad z \sim \mathcal{N}(0, \sigma_m^2 I) \qquad (7)
where σ_m can be computed numerically for any target (ε, δ) and subsampling rate m/n.
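The corresponding sketch for the subsampled variant in (7), with σ_m assumed to be precomputed from the amplification bound (again, names are illustrative):

```python
import numpy as np

def dp_aggregate_subsampled(embeddings, m, sigma_m, rng=None):
    """Noisy centroid over a without-replacement subsample of m embeddings (Eq. 7)."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(embeddings, dtype=np.float64)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)     # unit-normalize to bound sensitivity
    idx = rng.choice(v.shape[0], size=m, replace=False)  # sample S with |S| = m
    centroid = v[idx].mean(axis=0)
    return centroid + rng.normal(0.0, sigma_m, size=centroid.shape)
```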
4 Experimental Results
4.1 Datasets
We compiled two datasets to evaluate our style adaptation method, specifically selecting content unlikely to be recognized by Stable Diffusion v1.5, our base model.
The first dataset consists of 158 artworks by the artist @eveismyname, who has granted consent for non-commercial use. This dataset allows us to assess whether models can capture artistic styles without memorizing individual works. While some of these artworks may have been publicly accessible on social media, making incidental inclusion in Stable Diffusion’s pretraining possible, the artist’s limited recognition and relatively small portfolio reduce the likelihood that the model has internalized her unique style. This dataset serves as a controlled test for privacy-preserving style transfer on individual artistic collections.
The second dataset contains 47 pictograms from the Paris 2024 Olympics (Paris 2024, ), permitted strictly for non-commercial editorial use (International Olympic Committee, ). These pictograms were officially released in February 2023, several months after the release of Stable Diffusion v1.5, ensuring they were absent from the model’s pretraining data. This dataset allows us to assess how well our approach adapts to newly introduced visual styles that the base model has never encountered.
Both datasets are used to test the ability of our method to extract and transfer stylistic elements while preserving privacy. Representative samples are shown in Figure 2.
4.2 Style Transfer Results


Using both the @eveismyname and Paris 2024 pictogram datasets, we trained embeddings on Stable Diffusion v1.5 (Rombach et al., 2022) with both regular TI (Gal et al., 2023) and DPAgg-TI. Our primary goal is to investigate how DP configurations, specifically the privacy budget ε and subsampling size m, affect generated image quality and privacy resilience. For regular TI, we use the default process to embed the private dataset without any additional noise. For DPAgg-TI, we test multiple configurations of ε and m to analyze the trade-off between image fidelity and privacy.
Figures 3 and 4 present generated images across two key configurations: (1) regular TI without DP, and (2) DPAgg-TI at different values of ε and m. We used the same random seed to generate embeddings, subsample images, and sample DP noise for ease of visual comparison between configurations. Following common practice, we fix δ to a small constant. Since the required noise scale is undefined for ε = 0, we approximate this case (in other words, infinite noise) by replacing the aggregated embedding with pure Gaussian noise; its purpose is to show the image generated when the embedding contains zero information about the target dataset. Images generated without DP closely resemble the unique stylistic elements of the target dataset. In particular, images adapted using @eveismyname images displayed crisp details and nuanced color gradients characteristic of the artist’s work, while those of the Paris 2024 pictograms captured the logos’ original structure. In contrast, DP configurations introduce a discernible degradation in image quality, with lower ε values and smaller subsampling sizes resulting in more noticeable noise and diminished stylistic fidelity.
As ε → 0, the resulting token embedding gradually loses its semantic meaning, leading to a loss of stylistic fidelity. In particular, the conditioning tends towards one that is independent of the learnable embedding. In our results, this manifests as a painting of Taylor Swift devoid of the artist-specific stylistic elements, or a generic icon of a dragon (in color, as opposed to the black-and-white design of the pictograms). With this in mind, the noise scale can be interpreted as a drift parameter, representing the progression from the optimal embedding towards pure noise, gradually steering the generated image away from the target style in exchange for stronger privacy guarantees. We also observe instances of a temporary drop in prompt fidelity (e.g., at intermediate values in Figures 3 and 4) which is restored as the embedding drifts even further from its optimal value. We hypothesize that this occurs because the drifted embedding temporarily captures a different meaning unrelated to the prompt, before losing any meaning that could be interpreted by Stable Diffusion’s text encoder; once meaningless, the placeholder token is effectively disregarded and prompt fidelity is restored. Another possible explanation is that the drift path of the embedding passes through non-linear regions of the embedding space. We leave further investigation of this observation to future work.
Meanwhile, reducing the subsampling size m also reduces the sensitivity of the generated image to the privacy budget: on both datasets, at small subsampling sizes, image generation tolerates substantially lower ε without significant changes in visual characteristics, and retains stylistic elements of the target dataset even at very low ε. This strong boost in robustness comes at a small cost in base style-capture fidelity. As observed in Figures 3 and 4, we can also treat subsampling itself as an introduction of noise. Mathematically, the subsample centroid is an unbiased estimate of the true centroid, so the subsampling process defines a distribution centered at the true centroid. However, the amount of noise introduced by subsampling is limited by the individual image embeddings, since a subsample centroid can only stray from the true centroid as far as the biggest outlier in the dataset.
4.3 Quantitative Evaluation
4.3.1 User Study
To evaluate the utility of our approach under different DP and subsampling configurations, we conducted a user study with 25 participants. Each participant was shown reference images from the target dataset and asked to compare pairs of generated images, selecting the one that better captured the style of the reference images. Images were generated using 10 prompts and adapted TI embeddings for the @eveismyname and Paris 2024 Pictogram datasets, resulting in 20 groups of images. Each participant evaluated two groups, one randomly selected from each dataset, with comparisons focusing on model configurations differing by DP noise and subsampling size.
Survey results, summarized in Table 3 in Appendix A, align with our design goals. Participants showed no clear preference between regular TI and DPAgg-TI, suggesting that our privacy-preserving approach maintains perceptual quality. As expected, both DP noise and reduced subsampling size degraded style fidelity, consistent with the trade-offs inherent in differential privacy. Preferences under DP noise were split between configurations with and without subsampling, but subsampling was generally favored, reinforcing its role in reducing noise impact while preserving style.
4.3.2 Kernel Inception Distance
The Kernel Inception Distance (KID) (Bińkowski et al., 2018) is a metric for evaluating generative models by measuring the difference between the distributions of generated and training images in an embedding space. To compute KID, images generated by the model and real training images are passed through an Inception network (Szegedy et al., 2015), and their distributional differences are estimated. Unlike the more commonly used Fréchet Inception Distance (FID) (Heusel et al., 2017), KID is an unbiased estimator of the true divergence between the learned and target distributions (Jayasumana et al., 2024), making it more suitable for smaller datasets, as in our case.
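For reference, KID is the unbiased MMD² estimate between Inception features under a cubic polynomial kernel; a small sketch of that estimator, assuming the features have already been extracted, is shown below. In practice the estimate is usually averaged over several random feature subsets.

```python
import numpy as np

def polynomial_kernel(x, y):
    """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3 used by KID."""
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid(feats_real, feats_gen):
    """Unbiased MMD^2 estimate between real and generated Inception features."""
    n, m = feats_real.shape[0], feats_gen.shape[0]
    k_rr = polynomial_kernel(feats_real, feats_real)
    k_gg = polynomial_kernel(feats_gen, feats_gen)
    k_rg = polynomial_kernel(feats_real, feats_gen)
    # drop diagonal terms for the unbiased estimator
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (n * (n - 1))
    term_gg = (k_gg.sum() - np.trace(k_gg)) / (m * (m - 1))
    return term_rr + term_gg - 2.0 * k_rg.mean()
```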
We report KID scores for different parameter settings in Tables 1 and 2, showing that DPAgg-TI maintains the style transfer fidelity of TI while ensuring differential privacy. Further discussion of these results is provided in Appendix B.
Table 1: KID scores for the @eveismyname dataset (rows: subsampling size m; columns: privacy budget ε).

m \ ε | No DP | 5.0 | 2.0 | 1.0 | 0.5 | 0.1 | 0
---|---|---|---|---|---|---|---
– | 0.0444 | 0.0794 | 0.0422 | 0.0529 | 0.0690 | 0.1117 | 0.0654
32 | 0.0752 | 0.0845 | 0.0865 | 0.1167 | 0.0300 | 0.0649 | 0.0657
16 | 0.0351 | 0.0379 | 0.0430 | 0.0663 | 0.1309 | 0.0438 | 0.0658
8 | 0.0359 | 0.0366 | 0.0350 | 0.0366 | 0.0396 | 0.0530 | 0.0658
4 | 0.0245 | 0.0250 | 0.0249 | 0.0250 | 0.0258 | 0.0314 | 0.0653
ctrl | 0.0318 | – | – | – | – | – | –
Table 2: KID scores for the Paris 2024 pictogram dataset (rows: subsampling size m; columns: privacy budget ε).

m \ ε | No DP | 5.0 | 2.0 | 1.0 | 0.5 | 0.1 | 0
---|---|---|---|---|---|---|---
– | 0.1146 | 0.1202 | 0.1368 | 0.1314 | 0.1389 | 0.1209 | 0.1274
32 | 0.1220 | 0.1036 | 0.1258 | 0.1377 | 0.1307 | 0.1245 | 0.1259
16 | 0.1311 | 0.1424 | 0.1170 | 0.1311 | 0.1381 | 0.1335 | 0.1278
8 | 0.1317 | 0.1307 | 0.1220 | 0.1117 | 0.1295 | 0.1313 | 0.1272
4 | 0.1141 | 0.1094 | 0.1137 | 0.1190 | 0.1194 | 0.1583 | 0.1259
ctrl | 0.1388 | – | – | – | – | – | –
4.4 Ablation Study
4.4.1 Textual Inversion with DP-SGD

A natural question that arises is how well our approach compares to the naive method of applying DP-SGD to regular TI training. We therefore integrated DP-SGD into the TI codebase using the Opacus library and trained comparable embeddings on the @eveismyname and Paris 2024 datasets. We found that in most cases, notably on the @eveismyname dataset, the amount of noise required for DP-SGD to achieve a reasonable privacy budget ε is so high that the resulting embedding contains negligible information about the training dataset. In particular, the results at practical values of ε are almost indistinguishable from the ε = 0 case, as shown in Figure 5. We believe this is simply because DP-SGD is not designed to handle such small datasets, on the order of 100 images. Additional results can be found in Appendix D.
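For reference, attaching DP-SGD to a training loop with Opacus amounts to wrapping the model, optimizer, and data loader as sketched below; the hyperparameters shown are placeholders rather than the exact settings used in our runs.

```python
import torch
from opacus import PrivacyEngine

def make_dp_sgd(model, data_loader, lr=5e-3, noise_multiplier=1.0, max_grad_norm=1.0):
    """Wrap a model/optimizer/data loader so gradients are clipped and noised per sample."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    privacy_engine = PrivacyEngine()
    model, optimizer, data_loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=data_loader,
        noise_multiplier=noise_multiplier,   # noise added to the summed clipped gradients
        max_grad_norm=max_grad_norm,         # per-sample gradient clipping bound
    )
    return model, optimizer, data_loader, privacy_engine
```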
4.4.2 Differentially Private Adaptation Using Style Guidance

We extend our approach to style guidance (SG) by leveraging the framework of Universal Guidance (Bansal et al., 2024). Specifically, we focus on CLIP-based style guidance, which optimizes the similarity between the CLIP embeddings of a target image and the generated image.
We encode each target image x_i as e_i via a CLIP image encoder, then aggregate the embeddings into ē using (5) or (7), depending on whether subsampling is applied. The aggregated embedding is then incorporated into the reverse diffusion process as a style guide. Further implementation details are provided in Appendix C.
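A sketch of the encoding step using the Hugging Face CLIP interface as one possible image encoder (the checkpoint name is illustrative); the resulting normalized embeddings can then be aggregated with the same noisy-centroid routine as in DPAgg-TI.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

@torch.no_grad()
def clip_style_embeddings(pil_images, model_name="openai/clip-vit-large-patch14"):
    """Encode target style images with a CLIP image encoder and L2-normalize them."""
    model = CLIPModel.from_pretrained(model_name)
    processor = CLIPProcessor.from_pretrained(model_name)
    inputs = processor(images=pil_images, return_tensors="pt")
    feats = model.get_image_features(**inputs)          # shape (n, d)
    return feats / feats.norm(dim=-1, keepdim=True)
```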
We apply our SG-based approach to both datasets. While it provides privacy protection by obfuscating embedding details, the resulting images capture only generalized stylistic elements and lack the detailed fidelity and coherence achieved with the TI-based method. As shown in Figure 6, this highlights the superiority of TI in balancing privacy and high-quality image generation.
The reduced effectiveness of SG for style transfer may stem from its sensitivity to hyperparameters such as the guidance weight w, leading to instability. Although Bansal et al. (2024) proposed remedies, namely backward guidance and per-step self-recurrence, these proved insufficient for our application. Additionally, the CLIP embeddings may not retain enough stylistic detail after aggregation.
5 Conclusion
We presented a differentially private adaptation method for diffusion models using Textual Inversion for privacy-preserving style transfer. Experiments on private artwork and Paris 2024 pictograms showed TI preserves stylistic fidelity and outperforms Style Guidance. Our results demonstrate embedding-driven methods as efficient, scalable alternatives to DP-SGD, balancing style quality and privacy.
Impact Statement
The use of images without owner consent raises significant ethical concerns, particularly regarding the exploitation of intellectual property. This work introduces a method for visual generative models to adapt to new styles and classes while ensuring privacy and copyright protection for data owners. By providing a framework for privacy-preserving adaptation, this technology aims to respect intellectual property and address ethical challenges in generative AI. While it does not eliminate the need for consent from data owners, we hope that it represents a step toward balancing innovation with ethical considerations in AI development. Beyond creative applications, the proposed method has broader potential uses, including synthetic data generation, privacy-preserving personalization, and fine-tuning diffusion models for private or domain-specific tasks.
Acknowledgements
We sincerely thank Tatchamon Wongworakul (@eveismyname) for providing her artwork for use in this study. We are also grateful to Anwar Hithnawi and Varun Chandrasekaran for their insightful discussions and feedback, as well as to all participants in our user study. Sanmi Koyejo acknowledges support by NSF 2046795 and 2205329, IES R305C240046, the MacArthur Foundation, Stanford HAI, OpenAI, and Google.
References
- Abadi et al. (2016) Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. Deep learning with differential privacy. In ACM SIGSAC, 2016.
- Balle & Wang (2018) Balle, B. and Wang, Y.-X. Improving the gaussian mechanism for differential privacy: Analytical calibration and optimal denoising. In ICML, 2018.
- Bansal et al. (2024) Bansal, A., Chu, H.-M., Schwarzschild, A., Sengupta, S., Goldblum, M., Geiping, J., and Goldstein, T. Universal guidance for diffusion models. In ICLR, 2024.
- Bińkowski et al. (2018) Bińkowski, M., Sutherland, D. J., Arbel, M., and Gretton, A. Demystifying mmd gans, 2018.
- Carlini et al. (2023) Carlini, N., Hayes, J., Nasr, M., Jagielski, M., Sehwag, V., Tramer, F., Balle, B., Ippolito, D., and Wallace, E. Extracting training data from diffusion models. In USENIX Security, 2023.
- Dockhorn et al. (2023) Dockhorn, T., Cao, T., Vahdat, A., and Kreis, K. Differentially private diffusion models. TMLR, 2023.
- Duan et al. (2023) Duan, J., Kong, F., Wang, S., Shi, X., and Xu, K. Are diffusion models vulnerable to membership inference attacks? In ICML, 2023.
- Dwork (2006) Dwork, C. Differential privacy. In ICALP, 2006.
- Dwork et al. (2006) Dwork, C., McSherry, F., Nissim, K., and Smith, A. Calibrating noise to sensitivity in private data analysis. In TCC, 2006.
- Dwork et al. (2014) Dwork, C., Roth, A., et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
- Gal et al. (2023) Gal, R., Alaluf, Y., Atzmon, Y., Patashnik, O., Bermano, A. H., Chechik, G., and Cohen-or, D. An image is worth one word: Personalizing text-to-image generation using textual inversion. In ICLR, 2023.
- Ghalebikesabi et al. (2023) Ghalebikesabi, S., Berrada, L., Gowal, S., Ktena, I., Stanforth, R., Hayes, J., De, S., Smith, S. L., Wiles, O., and Balle, B. Differentially private diffusion models generate useful synthetic images. arXiv preprint arXiv:2302.13861, 2023.
- Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/8a1d694707eb0fefe65871369074926d-Paper.pdf.
- Ho & Salimans (2021) Ho, J. and Salimans, T. Classifier-free diffusion guidance. In NeurIPS Workshop on Deep Generative Models and Downstream Applications, 2021.
- Ho et al. (2020) Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. NeurIPS, 2020.
- Hoory et al. (2021) Hoory, S., Feder, A., Tendler, A., Erell, S., Peled-Cohen, A., Laish, I., Nakhost, H., Stemmer, U., Benjamini, A., Hassidim, A., et al. Learning and evaluating a differentially private pre-trained language model. In Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 1178–1189, 2021.
- Hu et al. (2022) Hu, E. J., yelong shen, Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. In ICLR, 2022.
- (18) Innat. Van gogh paintings. https://www.kaggle.com/datasets/ipythonx/van-gogh-paintings.
- (19) International Olympic Committee. Olympic properties. https://olympics.com/ioc/olympic-properties.
- Jayasumana et al. (2024) Jayasumana, S., Ramalingam, S., Veit, A., Glasner, D., Chakrabarti, A., and Kumar, S. Rethinking fid: Towards a better evaluation metric for image generation. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9307–9315. IEEE, June 2024. doi: 10.1109/cvpr52733.2024.00889. URL http://dx.doi.org/10.1109/CVPR52733.2024.00889.
- Kim et al. (2022) Kim, G., Kwon, T., and Ye, J. C. Diffusionclip: Text-guided diffusion models for robust image manipulation. In CVPR, 2022.
- Lebensold et al. (2024) Lebensold, J., Sanjabi, M., Astolfi, P., Romero-Soriano, A., Chaudhuri, K., Rabbat, M., and Guo, C. Dp-rdm: Adapting diffusion models to private domains without fine-tuning. arXiv preprint arXiv:2403.14421, 2024.
- Mironov (2017) Mironov, I. Rényi differential privacy. In CSF, 2017.
- (24) Paris 2024. Paris 2024 - pictograms. https://olympics.com/en/paris-2024/the-games/the-brand/pictograms.
- Podell et al. (2023) Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., and Rombach, R. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023.
- Radford et al. (2021) Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML, 2021.
- Rombach et al. (2022) Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
- Ruiz et al. (2023) Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., and Aberman, K. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 2023.
- Sander et al. (2024) Sander, T., Yu, Y., Sanjabi, M., Durmus, A. O., Ma, Y., Chaudhuri, K., and Guo, C. Differentially private representation learning via image captioning. In ICML, 2024.
- Schuhmann et al. (2022) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al. Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS, 2022.
- Song et al. (2021a) Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In ICLR, 2021a.
- Song et al. (2021b) Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In ICLR, 2021b.
- Steinke (2022) Steinke, T. Composition of differential privacy & privacy amplification by subsampling. arXiv preprint arXiv:2210.00597, 2022.
- Szegedy et al. (2015) Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision, 2015. URL https://arxiv.org/abs/1512.00567.
- Tramèr et al. (2024) Tramèr, F., Kamath, G., and Carlini, N. Position: Considerations for differentially private learning with large-scale public pretraining. In ICML, 2024.
- von Platen et al. (2022) von Platen, P., Patil, S., Lozhkov, A., Cuenca, P., Lambert, N., Rasul, K., Davaadorj, M., Nair, D., Paul, S., Berman, W., Xu, Y., Liu, S., and Wolf, T. Diffusers: State-of-the-art diffusion models. https://github.com/huggingface/diffusers, 2022.
- Vyas et al. (2023) Vyas, N., Kakade, S. M., and Barak, B. On provable copyright protection for generative models. In ICML, 2023.
- Yu et al. (2022) Yu, D., Naik, S., Backurs, A., Gopi, S., Inan, H. A., Kamath, G., Kulkarni, J., Lee, Y. T., Manoel, A., Wutschitz, L., et al. Differentially private fine-tuning of language models. In ICLR, 2022.
- Yu et al. (2024) Yu, Y., Sanjabi, M., Ma, Y., Chaudhuri, K., and Guo, C. Vip: A differentially private foundation model for computer vision. In ICML, 2024.
Appendix A User Study
A.1 Study Design and Objective
The user study aimed to assess the utility of our approach under different DP and subsampling configurations by evaluating the models’ ability to adapt to novel styles. The study involved 25 participants, each of whom was tasked with comparing images generated using various configurations and selecting the one that better captured the style of reference images.
A.2 Experimental Setup
Participants were shown reference images from two datasets:
• The @eveismyname dataset of private artwork.
• The Paris 2024 Pictogram dataset.
For each dataset, 10 prompts were used to generate images, resulting in 20 groups of images (10 prompts per dataset). Each group included images generated using the same prompt and dataset but with different model configurations. Configurations varied in the addition of DP noise and the size of subsampling. The following configurations were evaluated:
• Original Textual Inversion (TI)
• DPAgg-TI (no DP) without subsampling
• DPAgg-TI (with DP) without subsampling
• No Adaptation
• DPAgg-TI (no DP) with subsampling
• DPAgg-TI (with DP) with subsampling
• Style Guidance (SG)
A.3 Survey Procedure
Participants were asked to evaluate two groups of images: one randomly selected from the @eveismyname dataset and one from the Paris 2024 Pictogram dataset. For each group:
1. Participants were shown reference images from the target dataset.
2. They were presented with pairs of images generated using different model configurations for the same prompt.
3. Participants selected the image they felt better captured the style of the reference images.
A.4 Evaluation Metrics
The study focused on assessing:
• Participants’ preference between regular TI and DPAgg-TI for style adaptation.
• The impact of DP noise and subsampling size on the perceived utility of style transfer.
A.5 Results and Analysis
The results are summarized in Table 3. Key observations include:
• Participants showed no clear preference between regular TI and DPAgg-TI in capturing styles for either dataset.
• Both DP noise and reduced subsampling size decreased the perceived quality of style transfer.
• Preferences were split between configurations with and without subsampling, though subsampling generally had favorable outcomes.
These findings highlight the trade-off between increased DP robustness and reduced utility, suggesting that the optimal configuration may depend on subjective preferences and specific application requirements.
Table 3: User study results. Each sub-table reports the number of participants preferring each option (or unsure) when comparing two configurations.

Dataset | regular TI | No Adaptation | Unsure
---|---|---|---
@eveismyname | 19 | 4 | 2
Paris 2024 | 16 | 6 | 3

Dataset | DPAgg-TI (no DP, no subsampling) | No Adaptation | Unsure
---|---|---|---
@eveismyname | 16 | 9 | 0
Paris 2024 | 15 | 4 | 6

Dataset | regular TI | DPAgg-TI (no DP, no subsamp.) | Unsure
---|---|---|---
@eveismyname | 12 | 13 | 0
Paris 2024 | 9 | 10 | 6

Dataset | regular TI | DPAgg-TI (no DP, subsamp.) | Unsure
---|---|---|---
@eveismyname | 16 | 6 | 3
Paris 2024 | 7 | 13 | 5

Dataset | DPAgg-TI (no DP, no subsampling) | DPAgg-TI (no DP, subsamp.) | Unsure
---|---|---|---
@eveismyname | 18 | 4 | 3
Paris 2024 | 10 | 8 | 7

Dataset | DPAgg-TI (with DP, no subsampling) | DPAgg-TI (with DP, subsamp.) | Unsure
---|---|---|---
@eveismyname | 14 | 10 | 1
Paris 2024 | 3 | 16 | 6

Dataset | DPAgg-TI (no DP, no subsampling) | Style Guidance | Unsure
---|---|---|---
@eveismyname | 16 | 8 | 1
Paris 2024 | 20 | 2 | 3

Dataset | DPAgg-TI (with DP, subsamp.) | Style Guidance | Unsure
---|---|---|---
@eveismyname | 16 | 8 | 1
Paris 2024 | 19 | 2 | 4

Dataset | DPAgg-TI (no DP, subsamp.) | DPAgg-TI (with DP, subsamp.) | Unsure
---|---|---|---
@eveismyname | 8 | 5 | 12
Paris 2024 | 15 | 4 | 6

Appendix B Kernel Inception Distance Discussion
Our results indicate that DPAgg-TI preserves the style transfer fidelity of TI while also ensuring differential privacy. Notably, for the @eveismyname dataset at low privacy budgets, we observe even lower KID values than standard TI, suggesting enhanced style alignment. Similarly, results for the Paris 2024 dataset follow a comparable trend, with DPAgg-TI achieving KID scores similar to TI at low privacy budgets. However, the overall KID scores for this dataset remain high within the context of diffusion model style transfer.
Upon inspecting the generated images (Figure 8), we hypothesize that the abstract and out-of-distribution nature of the Paris 2024 images poses a challenge for the Inception network, leading to less meaningful feature embeddings. This likely inflates the measured embedding distances between generated and reference images, resulting in higher-than-expected KID values.
For KID evaluations, we used prompts similar to those employed during TI training: “A painting/icon in the style of S_*”. Consistent with the training image captions, these prompts do not specify a subject.

Appendix C Style Guidance
C.1 Background: Denoising Diffusion Implicit Models
Denoising Diffusion Implicit Models (DDIM) sampling (Song et al., 2021a) uses the predicted noise ε_θ(x_t, t) and a noise schedule represented by an array of scalars {α_t}_{t=1}^T to first predict a clean image x̂_0, and then takes a small step in the direction of ε_θ to obtain x_{t−1}. The reverse diffusion process for DDIM sampling can be formalized as follows:
\hat{x}_0 \;=\; \frac{x_t - \sqrt{1 - \alpha_t}\;\varepsilon_{\theta}(x_t, t)}{\sqrt{\alpha_t}} \qquad (8)
x_{t-1} \;=\; \sqrt{\alpha_{t-1}}\;\hat{x}_0 + \sqrt{1 - \alpha_{t-1}}\;\varepsilon_{\theta}(x_t, t) \qquad (9)
C.2 Implementation
We follow the style guidance process introduced by Bansal et al. (2024), modifying it to include differential privacy mechanisms. Let x_style denote the target style image, x_t the noisy image at step t, and E the CLIP image encoder. The forward guidance process is defined as follows:
\hat{\varepsilon}_{\theta}(x_t, t, c) \;=\; \varepsilon_{\theta}(x_t, t, c) + w\,\nabla_{x_t}\, \ell\big(E(x_{\mathrm{style}}),\, E(\hat{x}_0)\big) \qquad (10)
where w is a guidance weight and ℓ is the negative cosine similarity loss. For a detailed description of Universal Guidance, including the backward guidance process and per-step self-recurrence, we refer the reader to the original paper. The reverse diffusion step then uses ε̂_θ in place of ε_θ, generating an image that aligns with the text conditioning c while incorporating the stylistic characteristics of x_style.
To integrate differential privacy, we encode each target image x_i into e_i = E(x_i) and aggregate these embeddings into ē using the noisy centroid method. The aggregated ē then guides the reverse diffusion process:
\hat{\varepsilon}_{\theta}(x_t, t, c) \;=\; \varepsilon_{\theta}(x_t, t, c) + w\,\nabla_{x_t}\, \ell\big(\bar{e},\, E(\hat{x}_0)\big) \qquad (11)
This ensures privacy-preserving style transfer while maintaining high stylistic fidelity.
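A simplified sketch of one guided DDIM step combining (8)-(11) is given below. The gradient scaling here is deliberately minimal; Bansal et al. (2024) apply an additional per-step scaling and self-recurrence, and `eps_model` and `clip_image_encoder` are placeholders for differentiable modules.

```python
import torch

def guided_ddim_step(eps_model, clip_image_encoder, x_t, t, t_prev, cond,
                     style_embedding, alpha_bar, w=1.0):
    """One DDIM step with CLIP-based forward style guidance (cf. Eqs. 8-11).

    style_embedding    : aggregated (optionally noised) CLIP embedding, unit-normalized
    clip_image_encoder : differentiable callable mapping images to CLIP embeddings
    """
    x_t = x_t.detach().requires_grad_(True)
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]

    eps = eps_model(x_t, t, cond)
    x0_hat = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()           # Eq. (8)

    gen = clip_image_encoder(x0_hat)
    gen = gen / gen.norm(dim=-1, keepdim=True)
    loss = -(gen * style_embedding).sum(dim=-1).mean()               # negative cosine similarity
    grad = torch.autograd.grad(loss, x_t)[0]

    eps_hat = eps + w * grad                                         # forward guidance, Eq. (11)
    x0_hat = (x_t - (1.0 - a_t).sqrt() * eps_hat) / a_t.sqrt()
    return a_prev.sqrt() * x0_hat + (1.0 - a_prev).sqrt() * eps_hat  # Eq. (9)
```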
C.3 Ablation


To better understand the limited effectiveness of style guidance in our experiments, despite its success in Bansal et al. (2024), we applied our approach to a dataset of 143 paintings from Van Gogh’s Saint-Paul Asylum, Saint-Rémy collection (Innat) (Figure 9). Unlike the @eveismyname and Paris 2024 datasets, it is highly likely that Stable Diffusion has been trained on these images. Additionally, Bansal et al. (2024) demonstrated successful adaptation towards the style of Van Gogh’s Starry Night from a single reference image, making this dataset a reasonable interpolation between their successful results and our more limited findings.
Without DP noise or subsampling, we obtained reasonable style transfer results, as shown in Figure 10. This suggests that style guidance struggles when applied to previously unseen target styles, and that its effectiveness may depend on prior exposure within the pre-training data.
Appendix D Additional Style Transfer and Ablation Results


