HandCraft: Anatomically Correct Restoration of Malformed Hands
in Diffusion Generated Images
Abstract
Generative text-to-image models, such as Stable Diffusion, have demonstrated a remarkable ability to generate diverse, high-quality images. However, they are surprisingly inept when it comes to rendering human hands, which are often anatomically incorrect or reside in the “uncanny valley”. In this paper, we propose HandCraft, a method for restoring such malformed hands. It automatically constructs masks and depth images for hands as conditioning signals using a parametric model, allowing a diffusion-based image editor to fix the hand’s anatomy and adjust its pose while seamlessly integrating the changes into the original image, preserving pose, color, and style. Our plug-and-play hand restoration solution is compatible with existing pretrained diffusion models and requires no fine-tuning or training of the diffusion models, which facilitates adoption. We also contribute the MalHand datasets, which contain generated images with a wide variety of malformed hands in several styles, for hand detector training and hand restoration benchmarking, and demonstrate through qualitative and quantitative evaluation that HandCraft not only restores anatomical correctness but also maintains the integrity of the overall image.








1 Introduction
Text-to-image diffusion models, such as Stable Diffusion [25], have gained wide popularity due to their remarkable capability to generate diverse, high-quality images across a wide range of styles [27, 23]. However, they struggle to accurately render human hands, often producing anatomically incorrect or highly unusual forms [22]. These errors can include hands with supernumerary or missing digits, atypical relative finger lengths, and other distortions. Fig. 1 illustrates two cases of such malformed hands, with a missing finger in the top row and abnormal relative finger lengths in the bottom row. These examples highlight the discrepancy between the generated depictions and human anatomy.
Due to humans’ high sensitivity to deviations from the expected human form, generating malformed hands often leads to an “uncanny valley” [21] effect, which affects the realism of these images. This in turn hinders the use of these models as artistic tools. We note here that we do not use the term “malformed” in the pejorative sense, since we recognize that a wide variety of hand shapes are naturally present in the human population or may arise from misadventure. That is, the model is inadvertently forming the hands atypically, rather than intentionally depicting the difference that exists in the human population.
Diffusion models’ propensity for generating malformed hands has been widely recognized [19, 22, 3]. There has been growing interest in techniques to repair these malformed hands, reflected by a large number of tutorials across various languages for this purpose [2, 7, 15, 1]. However, the restoration methods proposed in these tutorials often necessitate human intervention; for instance, repeatedly inpainting manually-annotated affected areas until a satisfactory outcome is achieved [12]. The requirement for human involvement makes the correction process laborious. Prompt engineering has also emerged as a popular strategy to mitigate the issue of malformed hands in images generated by diffusion models [17, 26]. By meticulously designing and refining text prompts, users attempt to guide the model towards generating more anatomically accurate hands [5]. Despite these efforts, even well-crafted prompts often fail to prevent the occurrence of malformed hands [4].
We introduce an end-to-end framework designed to repair malformed hands in generated images while minimizing the need for human intervention. To achieve this, we propose an approach for generating a hand shape as a conditioning image to guide ControlNet [29], a diffusion-based image editing method, in correcting malformed hands. Our method adaptively adjusts the size and angle of the hand shape, ensuring that the restored hand integrates seamlessly with the original human figure while leaving the surrounding regions of the image unaltered. Experimental results demonstrate the robustness of our approach. Furthermore, our restoration process is designed to be plug-and-play, requiring no further fine-tuning or training of the ControlNet, and is therefore easy to integrate into various diffusion models. Our contributions are:
1. HandCraft, a framework for detecting and restoring malformed hands generated by diffusion models while minimizing alterations to other image regions;
2. a simple yet robust control image generation method that constructs a mask and an aligned depth image for the hand region as conditioning signals, enabling a diffusion-based image editor to restore malformed hands; and
3. the MalHand datasets, comprising portraits with malformed hands across diverse styles, which can be used to train a malformed hand detector and to thoroughly evaluate baseline models.
HandCraft achieves state-of-the-art performance on both the MalHand-realistic and MalHand-artistic datasets.
2 Related Work
In this section, we provide a brief overview of image synthesis and editing techniques before discussing approaches for restoring malformed hands in generated images.
Image Synthesis.
After earlier successes with Variational Autoencoders (VAEs) [14] and Generative Adversarial Networks (GANs) [8], diffusion models [9] have emerged as a powerful new class of generative models. They are characterized by their ability to map noise into complex images through a gradual denoising process. This technique was refined by the development of Latent Diffusion Models (LDMs) [25], which tackle the computational challenges by operating in a latent space, significantly improving both efficiency and the quality of generated images. By leveraging pretrained autoencoders, LDMs offer a versatile and flexible architecture that supports a wide range of conditioning inputs, such as text descriptions. This advance enables efficient and adaptable image synthesis models like Stable Diffusion [25]. However, despite the impressive capabilities of these models, generated images of humans often exhibit malformed hands. The complex structure and fine details of hands pose a challenge for these models, often resulting in anatomically incorrect hand representations [22].
Image Editing
has become a key application of generative models, allowing users to modify images as desired. Early methods used encoder-decoder architectures to encode, manipulate, and decode images [11, 28]. Recent techniques employ GANs for editing [30, 10]. ControlNet [29] uses diffusion models with spatially-localized conditioning to enhance editing, allowing conditions like Canny edges and human poses to guide image generation and editing.
Malformed Hand Restoration.
Contemporary research addresses the correction of malformed hands in Stable Diffusion-generated images. HanDiffuser [22] generates humans with correct hands directly from text. HandRefiner [19] modifies entire images to fix hands, potentially altering unintended aspects. Our research, however, targets only the malformed hand area, aiming for precise corrections with minimal impact on the rest of the image, thus preserving the artwork’s integrity and originality.

3 Detecting and Restoring Malformed Hands
In this section, we detail the proposed HandCraft framework. We begin by establishing the notation and providing an overview of the framework’s pipeline. Subsequently, we delve into the generation of a conditional hand shape that ensures anatomical plausibility and accurate positioning. This is followed by a discussion on defining the restoration region within the image for localized correction while preserving the overall image integrity.
3.1 The HandCraft Framework
Our HandCraft framework is designed to restore malformed hands in images generated by diffusion models. The framework can also edit well-formed hands, allowing us to modify them into desired hand gestures. As illustrated in Fig. 2, it consists of three stages: hand detection, conditioning signal generation, and hand restoration.
At the hand detection stage, a hand detector is applied to the input image $I$ to identify the region of interest, producing a bounding box mask $M_B$ for the hand. Any pretrained hand detection model, such as YOLOv8 [18], can be used as the hand detector, but it will detect both standard and malformed hands. To avoid modifying images whose generated hands are not malformed, we can fine-tune the hand detector on stylized images to detect only malformed hands. In addition to the hand detector, a body pose estimator (Mediapipe [20]) is used to predict the body pose $P$, which includes the hand keypoints that facilitate the correct positioning and orientation of the hand template $T$. Mediapipe exhibits robust capability in estimating keypoints of malformed hands, despite not being specifically designed for this purpose. Its strength lies in leveraging estimations of other body parts to inform hand keypoint predictions, allowing for reasonable estimates even in cases of malformed hands. To assess Mediapipe’s success rate in detecting malformed hand keypoints, we randomly selected 500 images containing malformed hands and applied Mediapipe for hand keypoint detection. Mediapipe successfully provided keypoints for all malformed hands, demonstrating its effectiveness in this challenging scenario.
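As a concrete illustration of this stage, the sketch below runs an off-the-shelf detector and pose estimator; the fine-tuned checkpoint name `hand_detector.pt` and the input file name are illustrative placeholders, not part of the released method.

```python
# Sketch of the hand detection stage (assumptions: a YOLOv8 checkpoint
# fine-tuned for malformed hands, here named "hand_detector.pt", and an
# input image "portrait.png"; both names are illustrative).
import cv2
import mediapipe as mp
from ultralytics import YOLO

def detect_hands_and_pose(image_path):
    image = cv2.imread(image_path)

    # 1. Bounding boxes around (possibly malformed) hands -> mask M_B.
    detector = YOLO("hand_detector.pt")
    boxes = detector(image)[0].boxes.xyxy.cpu().numpy()  # (N, 4) [x1, y1, x2, y2]

    # 2. Body pose P, whose wrist/hand landmarks later anchor the template T.
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        landmarks = pose.process(rgb).pose_landmarks

    return boxes, landmarks

boxes, landmarks = detect_hands_and_pose("portrait.png")
```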
At the conditioning signal generation stage, the primary objective is to create a control image $C$ that guides the hand restoration process and a corresponding control mask $M$ that defines the restoration region. To this end, a control image generator aligns a predefined hand template $T$, which is a depth map of a hand, using the extracted body pose $P$ to create the control image $C$, together with a template mask $M_T$. The control mask $M$ is obtained as the union of the bounding box mask $M_B$ extracted from the original image and the hand template mask $M_T$, which precisely localizes the hand. The hand depth image and mask are crucial conditioning signals for the editing process to achieve a seamless integration of the restored hand.
The final stage is hand restoration. A pretrained ControlNet [29] model with frozen weights is provided with the control image $C$ and control mask $M$ to adjust the input image $I$, given a text prompt that describes the shape of the hand template $T$. This restoration process is training-free and thus compatible with various diffusion models. It focuses only on the hand region, while preserving the integrity of the rest of the image. The output of this stage is the restored image $\hat{I}$, in which the malformed hand has been restored to match the desired shape and pose specified by the hand template and text prompt. The restored hand blends with the original image’s style and aesthetics, resulting in a more realistic and anatomically correct representation.
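The sketch below shows one way to realize this stage with an off-the-shelf depth-conditioned ControlNet and the Hugging Face diffusers library; the checkpoint names, file names, and prompt are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Minimal restoration sketch using diffusers; the control image, mask, and
# prompt come from the earlier stages. Checkpoints and file names are illustrative.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

image = Image.open("input.png")             # generated image with a malformed hand
mask = Image.open("restoration_mask.png")   # control mask M = M_B union M_T
control = Image.open("hand_depth.png")      # aligned depth render of the template T

restored = pipe(
    prompt="a person with a natural, anatomically correct open hand",
    image=image, mask_image=mask, control_image=control,
    num_inference_steps=30).images[0]
restored.save("restored.png")
```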
Our framework’s versatility is evidenced by its successful application to various instances of Stable Diffusion models, demonstrating its efficacy across diverse image styles.



3.2 Control Image Generation
The two main challenges when generating the conditioning signals for hand restoration are (1) ensuring the hand template $T$ is anatomically plausible for the body pose; and (2) accurately positioning $T$ in the input image $I$ to make a seamless generation, inclusive of its rotation and handedness (left or right hand). Our detailed solutions to address these two challenges are provided below.
Ensuring Anatomical Plausibility.
To guarantee the anatomical plausibility of $T$, we utilize a predefined library [12] to select an appropriate hand template. This hand template serves as a novel control image, guiding ControlNet to generate anatomically accurate images. This significantly enhances the restoration’s anatomical accuracy, which is critical since methods like mesh fitting (e.g., using MeshFormer [16]) to severely deformed hands in $I$ can lead to unnatural hand shapes that deviate from typical human anatomy. Such deviations are evident for inputs with malformed hands, in which extra or missing fingers may appear; visualization examples are provided in the supplement. In contrast, the template-based approach of aligning $T$ within the region defined by $M_B$ and subsequently adjusting it within the union mask $M$ ensures a more faithful restoration, demonstrating the method’s efficacy in maintaining anatomical fidelity.
To select a hand template, we propose two methods: (1) silhouette-consistent and (2) random selection. Silhouette-consistent selection aims to encourage geometric consistency with the original malformed hand, insofar as is possible while still correcting anatomical issues. To do so, we generate multiple renders with different hand template parameters and automatically select the render that most closely matches the silhouette, as shown in Fig. 3. In contrast, random selection from the template library encourages hand gesture diversity. The intended use-case for this approach is for a user to select their preferred gesture from a set of renders. While we do not consider this co-design element in the experiments of this paper, it is part of our creativity-enabling software package. Note that the primary goal of HandCraft is to ensure anatomical correctness of hand renders and seamless integration with the original image, rather than consistency with the original (potentially corrupt) hand render.
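A minimal sketch of silhouette-consistent selection is given below; it assumes each candidate template render provides a binary silhouette mask and scores candidates by intersection-over-union (IoU) against the detected hand silhouette, which is one reasonable way to implement the matching.

```python
# Silhouette-consistent template selection (sketch): pick the candidate whose
# binary silhouette best overlaps the silhouette of the detected hand region.
import numpy as np

def silhouette_iou(hand_mask: np.ndarray, template_mask: np.ndarray) -> float:
    """IoU between two binary silhouettes of the same shape."""
    inter = np.logical_and(hand_mask, template_mask).sum()
    union = np.logical_or(hand_mask, template_mask).sum()
    return float(inter) / union if union > 0 else 0.0

def select_template(hand_mask, candidate_masks):
    """Return the index and score of the best-matching candidate silhouette."""
    scores = [silhouette_iou(hand_mask, m) for m in candidate_masks]
    best = int(np.argmax(scores))
    return best, scores[best]
```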
Finally, we note that artifacts may emerge at the border of the regenerated area. To mitigate these, we generate multiple results where the repainted region has slightly different scales and select the result with the highest realism according to our metrics. This approach ensures that the final output maintains a high level of realism and seamlessly blends with the original image.



Accurate Positioning and Orientation.
The restoration of deformed hands necessitates that the hand is not only anatomically accurate but also precisely positioned and oriented. This means that the hand template should align correctly in terms of location, rotation, and handedness (left or right hand), corresponding to the detected deformation. Fig. 5 illustrates that an inaccurate rotation will result in misalignment between the hand and the wrist, and that the wrong handedness (chirality) leads to generation errors.
As shown in Fig. 4, the procedure is as follows.
1. Identification of Keypoints: Utilizing pose estimation (e.g., MediaPipe [20]), we identify keypoints $K_I$ on the deformed hand within $I$. Corresponding keypoints are also defined on the hand template $T$, denoted $K_T$.
2. Scaling: The hand template $T$ is scaled to match the size of the detected hand, based on the ratio of distances between keypoint pairs in $K_I$ and $K_T$, ensuring that $T$ fits the size of the deformation in $I$.
3. Translation: $T$ is then translated such that its reference point (e.g., the base of the palm) aligns with the equivalent point in $K_I$.
4. Rotation: To correct for orientation, a rotation matrix $R(\theta)$ is applied to $T$, where $\theta$ is the angle calculated from the orientation discrepancy between $K_I$ and $K_T$. For handedness correction, a conditional mirroring transformation may also be applied if necessary.
This procedure ensures that the transformed hand template $T$ aligns accurately with the orientation and position of the detected hand within the image $I$, and it highlights the significance of precise alignment in hand restoration, where even minor deviations can lead to an unnatural appearance. By applying these steps, we ensure that the restored hand is correctly aligned and integrated within the input image, maintains anatomical accuracy, and blends seamlessly into the original scene, thus preserving the overall authenticity of the image.
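The sketch below implements the above alignment under simplifying assumptions: the scale, rotation, and translation steps are estimated jointly as a least-squares similarity transform between the two keypoint sets, and handedness is handled by an optional mirroring flag; the output resolution and the function signature are illustrative.

```python
# Template alignment sketch: mirror (if handedness differs), then warp the
# template depth map with a similarity transform (scale + rotation +
# translation) fitted to corresponding keypoints K_T -> K_I.
import cv2
import numpy as np

def align_template(template_depth, template_kps, image_kps, image_size, flip=False):
    """template_kps, image_kps: (N, 2) pixel coordinates; image_size: (W, H)."""
    if flip:  # handedness correction
        template_depth = cv2.flip(template_depth, 1)
        template_kps = template_kps.copy()
        template_kps[:, 0] = template_depth.shape[1] - 1 - template_kps[:, 0]

    # Least-squares similarity transform (4 DOF) between the keypoint sets.
    M, _ = cv2.estimateAffinePartial2D(
        template_kps.astype(np.float32), image_kps.astype(np.float32))
    return cv2.warpAffine(template_depth, M, image_size)
```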
3.3 Hand Restoration
To ensure that the hand restoration process is targeted and does not inadvertently modify other parts of the image, we introduce the concept of a restoration region. This guides ControlNet [29] to concentrate exclusively on the identified deformity. Such a strategy reflects our goal of maintaining the original image’s integrity, ensuring only the malformed hand is rectified.
The restoration region should: (1) encompass the entire area of the malformed hand, ensuring comprehensive correction, as represented by the bounding box mask $M_B$; and (2) be sufficiently large to accommodate the corrected hand shape, i.e., the hand template $T$, as represented by the template mask $M_T$. This ensures that the restoration does not introduce any spatial constraints that could compromise the correction’s effectiveness. The final restoration region, denoted $M$, is then given by the union of $M_B$ and $M_T$ ($M = M_B \cup M_T$). This union ensures that the restoration region is optimally sized to cover both the detected deformity and the area required for the corrected hand shape.
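A minimal sketch of this union, assuming both masks are binary arrays at the resolution of the input image:

```python
# Restoration region M = M_B union M_T (sketch); inputs are binary HxW arrays.
import numpy as np

def restoration_region(box_mask: np.ndarray, template_mask: np.ndarray) -> np.ndarray:
    """Return the union mask as an 8-bit image (255 inside the region)."""
    return np.logical_or(box_mask > 0, template_mask > 0).astype(np.uint8) * 255
```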
This methodical definition of the restoration region is pivotal, as it directly influences the restoration’s quality and the preservation of the image’s overall composition. Experimental analyses affirm the efficacy of this dual-region approach, highlighting its superiority in achieving precise and aesthetically cohesive hand restorations within the broader context of the original images.








4 Experiments
In this section, we present our experiments, where we evaluate the performance of HandCraft qualitatively and quantitatively and compare it to a baseline method.
4.1 Dataset
We propose the MalHand datasets, comprising a training set and an evaluation set. The evaluation set is divided into photorealistic images (MalHand-realistic) and stylized artistic images (MalHand-artistic). Here, we outline the process of generating these datasets.
Training data.
While a pretrained hand detection model, such as YOLOv8 [18], provides satisfactory performance for detecting malformed hands, greater accuracy can be achieved by fine-tuning the model. To do so, it is necessary to compile a training dataset with malformed hands and their locations. For this purpose, we utilize the HaGRID dataset [13], which contains portrait photos featuring hands, along with bounding boxes that mark the hand positions.
To create instances of malformed hands for our training data, we leveraged Stable Diffusion [25] to modify the hands within the provided bounding boxes, using “hands” as the guiding text prompt. This process aimed to generate a variety of hand abnormalities similar to those encountered in images produced by Stable Diffusion models. After generating these modified hands, we manually selected images that clearly displayed malformed hands. This selection process resulted in a dataset comprising 60,000 images, each featuring at least one malformed hand.
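The sketch below illustrates this corruption step with a standard Stable Diffusion inpainting checkpoint; the checkpoint, file names, and box coordinates are illustrative, and the manual filtering of outputs is not shown.

```python
# Synthesizing malformed-hand training images (sketch): inpaint the HaGRID
# hand bounding box with the prompt "hands" to induce hand abnormalities.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image, ImageDraw

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")

image = Image.open("hagrid_portrait.jpg").convert("RGB").resize((512, 512))
x1, y1, x2, y2 = 180, 300, 310, 430  # hand box from the HaGRID annotation (example values)
mask = Image.new("L", image.size, 0)
ImageDraw.Draw(mask).rectangle([x1, y1, x2, y2], fill=255)

corrupted = pipe(prompt="hands", image=image, mask_image=mask).images[0]
corrupted.save("malformed_candidate.png")
```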
Additionally, to ensure the model can distinguish between malformed and normal hands, we also evaluate our metrics on the unaltered images from HaGRID [13]. The bounding boxes from the original dataset were preserved to provide the locations of both normal and malformed hands.
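Given the resulting annotated images, fine-tuning the detector follows the standard Ultralytics workflow; the dataset configuration file name and hyperparameters below are illustrative.

```python
# Fine-tuning the YOLOv8 hand detector on the MalHand training split (sketch).
# "malhand.yaml" is an assumed dataset config in the standard Ultralytics format.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                              # pretrained starting point
model.train(data="malhand.yaml", epochs=50, imgsz=640)  # illustrative hyperparameters
metrics = model.val()                                   # metrics on the held-out split
```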
Evaluation data.
To assess the effectiveness of our algorithm in restoring malformed hands across various styles, we compiled a dataset comprising 1,500 portrait images featuring malformed hands. This dataset is divided into two categories: 1,000 images in a realistic style (MalHand-realistic) and 500 images in artistic styles (MalHand-artistic). By incorporating a mix of styles, we aim to evaluate the algorithm’s robustness and adaptability to different visual representations. The realistic images are sourced from the HaGRID dataset [13] and generated in the same way as the training data, using a different subset of images. These images are used for quantitative evaluation.
We also generated artistic-style portrait images using Stable Diffusion for qualitative evaluation, including Japanese anime and Disney cartoon styles, among others. Complete prompts and instructions for generation are provided in the supplement. We then manually identified portraits with malformed hands. Bounding boxes for these malformed hands were obtained via crowdsourcing, with detailed instructions provided to ensure consistency and accuracy. These instructions are included in the supplement.
4.2 Evaluation Metrics
Mean hand pose confidence.
This metric assesses the naturalness of the hand’s anatomy by averaging the confidence scores predicted by Mediapipe [20] across all detected hand keypoints. Let $c_{h,i}$ denote the confidence of the $i$-th hand keypoint out of the set $\mathcal{K}_h$ of detected keypoint indices for hand $h$ in a dataset containing $N$ hands. Then the mean hand pose confidence is given by

$$c^{\text{pose}} = \frac{1}{N} \sum_{h=1}^{N} \frac{1}{|\mathcal{K}_h|} \sum_{i \in \mathcal{K}_h} c_{h,i}. \tag{1}$$
Mean hand classifier confidence.
This metric assesses the model’s ability to generate hands that can be confidently classified as non-malformed by our YOLOv8-based hand detector [18]. Let $s_h$ denote the confidence of the hand classifier for hand $h$ in a dataset containing $N$ hands. Then the mean hand classifier confidence is given by

$$c^{\text{cls}} = \frac{1}{N} \sum_{h=1}^{N} s_h. \tag{2}$$
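A small sketch of Eqs. (1) and (2), assuming per-hand arrays of Mediapipe keypoint confidences and per-hand confidences from the YOLOv8-based classifier:

```python
# Mean hand pose confidence (Eq. 1) and mean hand classifier confidence (Eq. 2).
import numpy as np

def mean_hand_pose_confidence(keypoint_confs):
    """keypoint_confs: list of 1-D arrays, one array of keypoint scores per hand."""
    return float(np.mean([np.mean(c) for c in keypoint_confs]))

def mean_hand_classifier_confidence(detector_confs):
    """detector_confs: one detector confidence score per hand."""
    return float(np.mean(detector_confs))
```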
Masked peak signal-to-noise ratio (PSNR) / masked structural similarity index measure (SSIM).
These metrics assess how well the image outside the restored area is preserved, at a per-pixel and structural level: whether the restoration has altered or corrupted the rest of the image.
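The sketch below shows one way to compute these masked metrics with scikit-image; zeroing the restoration region in both images before scoring is a simple implementation choice, assumed here for illustration.

```python
# Masked PSNR / SSIM (sketch): evaluate preservation of pixels outside the
# restoration region by zeroing that region in both images before scoring.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def masked_psnr_ssim(original, restored, region_mask):
    """original, restored: HxWx3 uint8; region_mask: HxW bool (True = hand region)."""
    keep = (~region_mask)[..., None].astype(np.uint8)
    orig, rest = original * keep, restored * keep
    psnr = peak_signal_noise_ratio(orig, rest, data_range=255)
    ssim = structural_similarity(orig, rest, channel_axis=-1, data_range=255)
    return psnr, ssim
```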


Validation.
User studies were conducted to investigate whether the confidence scores (i.e., $c^{\text{pose}}$ and $c^{\text{cls}}$) correlate with naturalness. Seven unaffiliated individuals rated the naturalness of hands in each image, from a stratified random subset of 200 restored images, on a scale of 1 to 5, where 1 is least natural and 5 is most natural. As shown in Fig. 6 (opacity 1%), both the hand pose confidence score $c^{\text{pose}}$ and the hand classifier confidence score $c^{\text{cls}}$ correlate with perceived naturalness. The average Pearson’s correlation coefficient between $c^{\text{pose}}$ and human-rated naturalness scores is 0.44, and the average correlation for $c^{\text{cls}}$ is 0.82.


4.3 Comparison with Non-Malformed Images
We quantitatively compare the anatomical accuracy of HandCraft’s restored images to a control group of images without malformed hands. Two sets are assembled for evaluation: a control set composed of real images from the HaGRID dataset [13] and a set composed of realistic images with malformed hands from the MalHand-realistic dataset, which have undergone hand restoration with HandCraft. For each image, we calculate the hand pose confidence $c^{\text{pose}}$ and the hand classifier confidence $c^{\text{cls}}$, reflecting anatomical correctness as perceived by hand detection and pose estimation algorithms, with higher scores indicating closer resemblance to authentic hand anatomy.
Fig. 7 shows overlapping hand pose confidence intervals between the restored and Non-Malformed images. While there is a statistically significant difference between the two groups, this is partly due to inherent differences between real and restored generated images. The overlap in mean hand pose confidence scores between HandCraft restorations and the Non-Malformed group indicates HandCraft’s superior performance in restoring anatomical correctness compared to HandRefiner.
Method | $c^{\text{pose}}$ ↑ | $c^{\text{cls}}$ ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---
Null Intervention | 0.68 | 0.00 | N/A | N/A |
CtrlNetHP [29] | 0.57 | 0.09 | 12.26 | 0.5848 |
CtrlNetSeg [29] | 0.61 | 0.17 | 14.62 | 0.6261 |
HandRefiner [19] | 0.54 | 0.25 | 12.93 | 0.3839 |
HandCraft (Ours) | 0.79 | 0.25 | 23.40 | 0.6462 |
4.4 Comparison with Baselines
To demonstrate the effectiveness of HandCraft, we compare its performance with several ControlNet [29] baselines, including ControlNet Hand Pose (CtrlNetHP), and ControlNet Segmentation (CtrlNetSeg), as well as the current state-of-the-art, HandRefiner [19], on the MalHand-realistic dataset. The results in Tab. 1 demonstrate HandCraft’s superior performance. It achieves the highest hand pose confidence scores, indicating more natural hand postures. HandCraft also outperforms the null intervention and ControlNet variants in hand classifier confidence, while being comparable to HandRefiner, suggesting realistic, non-malformed hand restorations. Moreover, HandCraft attains the highest PSNR and SSIM scores in masked non-hand regions, showcasing its ability to maintain image integrity while correcting hand anatomy. The results are further supported by qualitative comparisons conducted on the MalHand-artistic dataset, as shown in Fig. 8. These visual examples demonstrate that HandCraft not only corrects the malformed hands but also does so with a seamless integration into the original portrait, surpassing the performance of HandRefiner [19]. More visual results can be found in the supplement.
Fig. 8: Qualitative comparison on MalHand-artistic. Rows: inputs with malformed hands, HandRefiner [19] restorations, and HandCraft (ours) restorations.
4.5 Ablation Study
$M_B$ | $M_T$ | S | T | R | H | $c^{\text{pose}}$ ↑ | $c^{\text{cls}}$ ↑
---|---|---|---|---|---|---|---
✗ | ✓ | ✓ | ✓ | ✓ | ✓ | 0.63 | 0.11 |
✓ | ✗ | ✓ | ✓ | ✓ | ✓ | 0.72 | 0.24 |
✓ | ✓ | ✗ | ✓ | ✓ | ✓ | 0.43 | 0.02 |
✓ | ✓ | ✓ | ✗ | ✓ | ✓ | 0.38 | 0.00 |
✓ | ✓ | ✓ | ✓ | ✗ | ✓ | 0.59 | 0.21 |
✓ | ✓ | ✓ | ✓ | ✓ | ✗ | 0.77 | 0.25 |
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 0.79 | 0.25 |
As shown in Tab. 2, we ablate several components of our method. We demonstrate that removing either component of our mask input, i.e., the bounding box mask $M_B$ or the template mask $M_T$, results in poorer restoration quality. We also show that removing any of the template alignment components, scaling (S), translation (T), rotation (R), or handedness correction (H), significantly degrades performance. See the supplement for a qualitative ablation.
5 Discussion and Conclusion
Despite advances in generative image models, the issue of malformed hands has persisted across generations of such models, even in recently-released state-of-the-art systems like Stable Diffusion XL [24] and Sora [6]. We provide visualizations in the supplement highlighting instances of anatomically incorrect hands produced by these latest models. As such, we expect HandCraft to remain useful for some time, since this issue appears to be quite persistent. Even if this challenge is overcome by a new generative model, our method provides functionality to change hand gestures and poses for creative control, offering utility beyond just correcting malformed hands.
Our HandCraft framework addresses the challenge of correcting malformed hands in images generated by text-to-image diffusion models. Through the use of a parametric hand model to guide an image editor, we achieve seamless anatomical corrections that integrate with the original image’s aesthetics. Our approach is effective and accessible, requiring no additional training. The MalHand datasets further provide resources for training and benchmarking. Comparisons with HandRefiner [19] showed that HandCraft surpasses it in restoring hand anatomy while maintaining the integrity of the rest of the image.
References
- [1] Stable diffusion finger repair method with various gesture posture adjustments (in Chinese), 2024. https://youtu.be/MRiVy2MgWCU?si=Wjp72LIblKrNjDmb.
- [2] Ahfaz Ahmed. How to fix hands in Stable Diffusion. OpenAI Journey, Feb 2024.
- [3] Endangered AI. Ultimate way to fix hands, 2024. https://youtu.be/sKaEd2XfqZE?si=7KI-qIHar7tNopI9.
- [4] Mikhail Avady. Bad hands, 2024. https://civitai.com/models/116230/bad-hands-5.
- [5] Mikhail Avady. Stable diffusion negative prompts, 2024. https://github.com/mikhail-bot/stable-diffusion-negative-prompts.
- [6] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. OpenAI, 2024.
- [7] DigitalDreamer. Fix hand, 2023. https://civitai.com/models/103942/fix-hand.
- [8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
- [9] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020.
- [10] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Eur. Conf. Comput. Vis., pages 172–189, 2018.
- [11] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):1–14, 2017.
- [12] Sebastian Kamph. ControlNet guidance tutorial. Fixing hands?, 2023. https://youtu.be/wNOzW1N_Fxw.
- [13] Alexander Kapitanov, Karina Kvanchiani, Alexander Nagaev, Roman Kraynov, and Andrei Makhliarchuk. HaGRID – hand gesture recognition image dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 4572–4581, 2024.
- [14] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
- [15] Sujeet Kumar. Ways to fix hand in Stable Diffusion, 2024. https://www.pixcores.com/2023/05/fix-hand-in-stable-diffusion.
- [16] Yuan Li, Xiangyang He, Yankai Jiang, Huan Liu, Yubo Tao, and Lin Hai. MeshFormer: High-resolution mesh segmentation with graph transformer. In Computer Graphics Forum, volume 41, pages 37–49. Wiley Online Library, 2022.
- [17] Vivian Liu and Lydia B. Chilton. Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–23, 2022.
- [18] Ultralytics LLC. Ultralytics YOLOv8, 2023. https://github.com/ultralytics/ultralytics.
- [19] Wenquan Lu, Yufei Xu, Jing Zhang, Chaoyue Wang, and Dacheng Tao. HandRefiner: Refining malformed hands in generated images by diffusion-based conditional inpainting. arXiv preprint arXiv:2311.17957, 2023.
- [20] Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, et al. MediaPipe: A framework for building perception pipelines. arXiv preprint arXiv:1906.08172, 2019.
- [21] Masahiro Mori, Karl F. MacDorman, and Norri Kageki. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2):98–100, 2012.
- [22] Supreeth Narasimhaswamy, Uttaran Bhattacharya, Xiang Chen, Ishita Dasgupta, and Minh Hoai. HanDiffuser: Text-to-image generation with realistic hand appearances. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
- [23] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205, 2023.
- [24] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023.
- [25] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
- [26] Zhendong Wang, Yifan Jiang, Yadong Lu, Pengcheng He, Weizhu Chen, Zhangyang Wang, Mingyuan Zhou, et al. In-context learning unlocked for diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
- [27] Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 56(4):1–39, 2023.
- [28] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. Generative image inpainting with contextual attention. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5505–5514, 2018.
- [29] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In IEEE International Conference on Computer Vision (ICCV), 2023.
- [30] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), pages 2242–2251, 2017.