Diff-PCC: Diffusion-based Neural Compression for 3D Point Clouds
Abstract
Diffusion models have emerged as a groundbreaking development for their ability to produce realistic and detailed visual content. This characteristic renders them ideal decoders, capable of producing high-quality and aesthetically pleasing reconstructions. In this paper, we introduce the first diffusion-based point cloud compression method, dubbed Diff-PCC, to leverage the expressive power of the diffusion model for generative and aesthetically superior decoding. Departing from the conventional autoencoder paradigm, we devise a dual-space latent representation, in which a compressor composed of two independent encoding backbones extracts expressive shape latents from distinct latent spaces. At the decoding side, a diffusion-based generator produces high-quality reconstructions by using the shape latents as guidance to stochastically denoise noisy point clouds. Experiments demonstrate that the proposed Diff-PCC achieves state-of-the-art compression performance (e.g., 7.711 dB BD-PSNR gains over the latest G-PCC standard at ultra-low bitrates) while attaining superior subjective quality. The source code will be made publicly available.
1 Introduction
Point clouds, composed of numerous discrete points with coordinates (x, y, z) and optional attributes, offer a flexible representation of diverse 3D shapes and are extensively applied in various fields such as autonomous driving fan20244d, game rendering virtanen2020interactive, robotics ebadi2023present, and others. With the rapid advancement of point cloud acquisition technologies and 3D applications, effective point cloud compression techniques have become indispensable to reduce transmission and storage costs.

1.1 Background
Prior to the widespread adoption of deep learning techniques, the most prominent traditional point cloud compression methods were G-PCC GPCCdescription and V-PCC VPCCdescription, proposed by the Moving Picture Experts Group (MPEG). G-PCC compresses point clouds by converting them into a compact tree structure, whereas V-PCC projects point clouds onto a 2D plane for compression. In recent years, numerous deep learning-based methods have been proposed zhang2022transformer ; you2022ipdae ; he2022density ; huang20223qnet ; ebadi2023present ; song2023ehem ; you2024pointsoup ; 2024ecmopcc ; xue2024neri, which primarily employ the Variational Autoencoder (VAE) balle2016end ; balle2018variational architecture. By learning a prior distribution over the data, the VAE projects the original input into a latent space and reconstructs the latent representation using a posterior distribution. However, previous VAE-based point cloud compression architectures still face recognized limitations: 1) assuming a single Gaussian distribution in the latent space may prove inadequate to capture the intricate diversity of point cloud shapes, yielding blurry and detail-deficient reconstructions zhao2017towards ; hao2023coupled ; 2) Multilayer Perceptron (MLP) based decoders zhang2022transformer ; you2022ipdae ; he2022density ; huang20223qnet ; you2024pointsoup suffer from feature homogenization, which leads to point clustering and detail degradation on decoded point cloud surfaces, limiting their ability to produce high-quality reconstructions. Recently, diffusion models (DMs) diffusion_survey have attracted considerable attention in generative modeling ulhaq2022efficient ; zhang2023text ; wu2023not ; liu2024cdformer owing to their outstanding performance in generating high-quality samples and adapting to intricate data distributions, presenting a novel and exciting opportunity for neural compression theis2022lossy ; yang2024lossy ; pezone2024semantic. By generating more refined and realistic 3D point cloud shapes, DMs offer a distinctive approach to reducing the heavy dependence of reconstruction quality on the information loss of the bottleneck layer.
1.2 Our Approach
Building on the preceding discussion, we introduce Diff-PCC, a novel lossy point cloud compression framework that leverages diffusion models to achieve superior rate-distortion performance with exceptional reconstruction quality. Specifically, to enhance the representation ability of simplistic Gaussian priors in VAEs, this paper devises a dual-space latent representation that employs two independent encoding backbones to extract complementary shape latents from distinct latent spaces. At the decoding side, a diffusion-based generator is devised to produce high-quality reconstructions by considering the shape latents as guidance to stochastically denoise the noisy point clouds. Experiments demonstrate that the proposed Diff-PCC achieves state-of-the-art compression performance (e.g., 7.711 dB BD-PSNR gains against the latest G-PCC standard at ultra-low bitrate) while attaining superior subjective quality.
1.3 Contribution
Main contributions of this paper are summarized as follows:
• We propose Diff-PCC, a novel diffusion-based lossy point cloud compression framework. To the best of our knowledge, this study presents the first exploration of diffusion-based neural compression for 3D point clouds.
• We introduce a dual-space latent representation to enhance the representation ability of the conventional Gaussian priors in VAEs, enabling Diff-PCC to extract expressive shape latents and facilitate the subsequent diffusion-based decoding process.
• We devise an effective diffusion-based generator that produces high-quality reconstructions by using the shape latents as guidance to stochastically denoise noisy point clouds.
2 Related Work
2.1 Point Cloud Compression
Classic point cloud compression standards, such as G-PCC, employ octrees schnabel2006octree to compress point cloud geometry. In recent years, inspired by deep learning methods in point cloud analysis qi2017pointnet ; qi2017pointnet++ and image compression balle2016end ; balle2018variational ; minnen2018joint, researchers have turned their attention to learning-based point cloud compression. Current methods can be primarily divided into two branches: voxel-based and point-based approaches. Voxel-based methods further branch into sparse convolution wang2021multiscale ; wang2023lossless ; wang2022sparse ; zhang2023yoga ; zhang2023scalable ; zhang2023g and octree fu2022octattention ; nguyen2023lossless ; song2023efficient approaches. Sparse convolution derives from 2D pixel representations but is optimized for voxel sparsity, while octree-based methods utilize tree structures to eliminate redundant voxels, representing only the occupied ones. Point-based methods he2022density ; zhang2022transformer ; you2022ipdae ; you2024pointsoup draw inspiration from PointNet qi2017pointnet, utilizing symmetric operators (max pooling, average pooling, attention pooling) to handle permutation-invariant point clouds and capture geometric shapes. Depending on the quantization operations involved, point cloud compression divides into lossy and lossless types. In this paper, we focus on lossy compression, which achieves higher compression ratios by sacrificing some precision in the original data.
2.2 Diffusion Models for Point Cloud
Recently, diffusion models have ignited the image generation field zhu2023denoising ; lee2023score ; takagi2023high, inspiring researchers to explore their potential for point clouds. DPM luo2021diffusion pioneered the introduction of diffusion models in this domain. Building on DPM, PVD Zhou_2021_ICCV combines the strengths of point cloud and voxel representations, establishing a baseline based on PVCNN. LION zeng2022lion employs two diffusion models to separately learn shape representations in latent space and point representations in 3D space. DiT-3D mo2024dit integrates transformers into the DDPM, operating directly on voxelized point clouds during the denoising process. PDR Lyu2021ACP applies the diffusion model twice, first generating a coarse point cloud and then a refined one. Point·E nichol2022point utilizes three diffusion models for text-to-image generation, image-to-point cloud generation, and point cloud upsampling. PointInfinity huang2024pointinfinity uses a cross-attention mechanism to decouple a fixed-size shape latent from a variable-size positional latent, enabling training on low-resolution point clouds while generating high-resolution point clouds at inference. DiffComplete chu2024diffcomplete enhances control over the denoising process by incorporating ControlNet zhang2023adding, achieving new state-of-the-art performance. These advances demonstrate the promise of DMs for point cloud generation, motivating us to explore their applicability to point cloud compression. Our objective is to effectively utilize diffusion models for point cloud compression while preserving critical structural features.
3 Method
Figure 1 illustrates the pipeline of the proposed Diff-PCC, which also represents the general workflow of diffusion-based neural compression. A concise review of Denoising Diffusion Probabilistic Models (DDPMs) and neural network (NN) based point cloud compression is first provided in Sec. 3.1; the proposed Diff-PCC is detailed in Sec. 3.2.

3.1 Preliminaries
Denoising Diffusion Probabilistic Models (DDPMs) comprise two Markov chains of length $T$: a diffusion process and a denoising process. The diffusion process gradually adds noise to the clean data $x_0$, producing a series of noisy samples $x_1, \dots, x_T$; when $T$ is large enough, $x_T$ approaches $\mathcal{N}(0, \mathbf{I})$. The denoising process is the reverse, gradually removing the noise added during diffusion. We formulate them as follows:

$$q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\big) \quad (1)$$
$$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \sigma_t^2 \mathbf{I}\big) \quad (2)$$

where $\beta_t$ is a hyperparameter representing the noise level and $t$ represents the time step. Via the reparameterization trick, we can sample $x_t$ from $x_0$ and $x_{t-1}$ from $x_t$ as follows:

$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon \quad (3)$$
$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\Big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t)\Big) + \sigma_t z \quad (4)$$

where $\alpha_t = 1-\beta_t$, $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$, and $\epsilon, z$ denote random noise sampled from $\mathcal{N}(0, \mathbf{I})$. Note that $\epsilon_\theta$ is a neural network used to predict noise during the denoising process, and $x_t$ can be directly sampled from $x_0$ via Eq. (3).

DDPMs train the reverse process by optimizing the model parameters against noise distortion. The loss function is defined as the expected squared difference between the predicted noise and the actual noise:

$$\mathcal{L}_{\text{noise}} = \mathbb{E}_{t, x_0, \epsilon}\big[\,\|\epsilon - \epsilon_\theta(x_t, t)\|^2\,\big] \quad (5)$$
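To make the preliminaries concrete, the following PyTorch sketch implements the forward noising of Eq. (3) and the training loss of Eq. (5). It is a minimal illustration: the linear β-schedule and the `denoiser` callable are placeholders (our implementation uses a cosine schedule with T = 200; see Sec. 4.1).

```python
import torch

T = 200
betas = torch.linspace(1e-4, 0.02, T)       # linear schedule, for illustration only
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # \bar{alpha}_t = prod_{i<=t} alpha_i

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) via the reparameterization trick (Eq. 3)."""
    a_bar = alpha_bars[t].view(-1, 1, 1)    # broadcast over (B, N, 3) point clouds
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

def ddpm_loss(denoiser, x0):
    """Expected squared error between actual and predicted noise (Eq. 5)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))           # uniformly sampled time steps
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    return torch.mean((noise - denoiser(x_t, t)) ** 2)
```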
3.2 Diff-PCC
3.2.1 Overview
As shown in Fig. 2, two key components, i.e., the compressor and the generator, are utilized in the diffusion process and the denoising process, respectively. In Diff-PCC, the diffusion process serves as encoding: the compressor extracts latents from the point cloud and compresses them into bitstreams. At the decoding side, the generator takes the latents as a condition and gradually restores the point cloud shape from noisy samples.
3.2.2 Dual-Space Latent Encoding
Several studies have demonstrated that a simplistic Gaussian distribution in the latent space may be inadequate to capture complex visual signals zhao2017towards ; casale2018gaussian ; dai2019diagnosing ; hao2023coupled. Although previous works have addressed this problem using different techniques, such as non-Gaussian priors joo2020dirichlet or coupling between the prior and the data distribution hao2023coupled, these techniques may not be directly applicable to neural compression tasks.
In this paper, a simple yet effective compressor is introduced, which is composed of two independent encoding backbones that extract expressive shape latents from distinct latent spaces. Motivated by PointPN zhang2023parameter, which excels in capturing high-frequency 3D point cloud structures characterized by sharp variations, we design a dual-space latent encoding approach that utilizes PointNet to extract the low-frequency shape latent and leverages PointPN to characterize a complementary latent in the high-frequency domain. Let $X$ be the original input point cloud; we formulate the above process as:
$$z_l = \mathcal{E}_l(X), \qquad z_h = \mathcal{E}_h(X) \quad (6)$$

where $z_l$ and $z_h$ represent the low-frequency and high-frequency latent features, respectively; $\mathcal{E}_l$ and $\mathcal{E}_h$ refer to the PointNet and PointPN backbones, respectively. Next, the quantization process is applied to the obtained features $z_l$ and $z_h$, i.e.,

$$\hat{z}_l = Q(z_l), \qquad \hat{z}_h = Q(z_h) \quad (7)$$

where the function $Q(\cdot)$ refers to adding uniform noise during training balle2016end and rounding during testing.
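The quantization surrogate $Q(\cdot)$ of Eq. (7) admits a very small implementation; the sketch below follows the standard noise-for-rounding substitution of balle2016end.

```python
import torch

def quantize(z: torch.Tensor, training: bool) -> torch.Tensor:
    """Q(.) from Eq. (7): additive uniform noise in [-0.5, 0.5) during training
    (a differentiable proxy for rounding), hard rounding during testing."""
    if training:
        return z + torch.empty_like(z).uniform_(-0.5, 0.5)
    return torch.round(z)
```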
Then, the fully factorized density model balle2016end and the hyperprior density model balle2018variational are employed to fit the distributions of the quantized features $\hat{z}_l$ and $\hat{z}_h$, respectively. In particular, the hyperprior density model can be described as:

$$p_{\hat{z}_h \mid \hat{h}}(\hat{z}_h \mid \hat{h}) = \Big(\mathcal{N}(\mu, \sigma^2) * \mathcal{U}\big(-\tfrac{1}{2}, \tfrac{1}{2}\big)\Big)(\hat{z}_h) \quad (8)$$

where $\mathcal{U}(-\frac{1}{2}, \frac{1}{2})$ refers to uniform noise ranging from $-\frac{1}{2}$ to $\frac{1}{2}$; $\mathcal{N}(\mu, \sigma^2)$ refers to the normal distribution with expectation $\mu$ and standard deviation $\sigma$, which can be estimated by a hyperprior encoder $h_e$ and decoder $h_d$:

$$\hat{h} = Q\big(h_e(z_h)\big), \qquad (\mu, \sigma) = h_d(\hat{h}) \quad (9)$$

In this way, a triplet containing the quantized low-frequency feature $\hat{z}_l$, the quantized high-frequency feature $\hat{z}_h$, and the quantized hyperprior $\hat{h}$ is compressed into three separate streams. Let $p$ and $q$ respectively represent the actual distribution and the estimated distribution of the latent features; then the bitrate can be estimated as follows:

$$R = \mathbb{E}_{p}\big[-\log_2 q(\hat{z}_l)\big] + \mathbb{E}_{p}\big[-\log_2 q(\hat{z}_h \mid \hat{h})\big] + \mathbb{E}_{p}\big[-\log_2 q(\hat{h})\big] \quad (10)$$
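Since our implementation builds on CompressAI (Sec. 4.1), the rate estimation of Eqs. (8)-(10) might be realized roughly as sketched below with its stock entropy models. The module and variable names, the 1×1-convolution hyper transforms, and the dimensions are illustrative assumptions, not the exact architecture.

```python
import torch
import torch.nn as nn
from compressai.entropy_models import EntropyBottleneck, GaussianConditional

class DualSpaceRateEstimator(nn.Module):
    """Sketch of rate estimation for the triplet (z_l, z_h, h), Eqs. (8)-(10).
    hyper_enc / hyper_dec stand in for h_e / h_d; dims are illustrative."""
    def __init__(self, dim=288, hyper_dim=64):
        super().__init__()
        self.factorized = EntropyBottleneck(dim)          # fully factorized model for z_l
        self.hyper_bottleneck = EntropyBottleneck(hyper_dim)
        self.gaussian = GaussianConditional(None)         # N(mu, sigma) * U(-1/2, 1/2)
        self.hyper_enc = nn.Conv1d(dim, hyper_dim, 1)
        self.hyper_dec = nn.Conv1d(hyper_dim, 2 * dim, 1)

    def forward(self, z_l, z_h):                          # shapes: (B, C, N')
        zl_hat, zl_lik = self.factorized(z_l)
        h = self.hyper_enc(z_h)
        h_hat, h_lik = self.hyper_bottleneck(h)
        mu, sigma = self.hyper_dec(h_hat).chunk(2, dim=1)
        # scales are lower-bounded inside GaussianConditional
        zh_hat, zh_lik = self.gaussian(z_h, sigma, means=mu)
        # Eq. (10): bitrate as cross-entropy under the estimated densities
        rate = sum(-torch.log2(lik).sum() for lik in (zl_lik, zh_lik, h_lik))
        return zl_hat, zh_hat, rate
```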
3.2.3 Diffusion-based Generator
The generator takes the noisy point cloud $x_t$ at time $t$ and the necessary conditional information as input. We expect the generator to learn the positional distribution of $x_t$ and fully integrate it with the conditional information to predict the noise at time $t$. In this paper, we consider all information that could potentially guide the generator as conditional information, including the time step $t$, the class label, the noise coefficient, and the decoded latent features ($\hat{z}_l$ and $\hat{z}_h$).
DiffComplete chu2024diffcomplete uses ControlNet controlnet to achieve refined noise generation. However, the denoiser of DiffComplete is a 3D U-Net adapted from its 2D version krithika2022review. This structure is not suitable for our method, because we deal directly with points instead of voxels. We embrace this idea but specially design a hierarchical feature fusion mechanism to fit our method. Note that a 3D U-Net can directly downsample features through 3D convolutions with a stride greater than one, whereas achieving equivalent processing is considerably more complex for point-based methods. Therefore, we do not replicate the structure of DiffComplete, but instead inject conditional information directly via AdaLN, formulated as:
$$\mathrm{AdaLN}(f, c) = \gamma(c) \odot \mathrm{LN}(f) + \beta(c) \quad (11)$$

where $f$ denotes the original features in the generator, $c$ denotes the conditional information, $\mathrm{LN}(\cdot)$ is layer normalization, and $\gamma(\cdot)$, $\beta(\cdot)$ are learned scale and shift functions of the condition.
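Eq. (11) corresponds to the widely used adaptive layer normalization; a minimal PyTorch module might look as follows, where predicting (1 + scale) follows the common DiT-style parameterization and is an assumption on our part.

```python
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    """Adaptive LayerNorm (Eq. 11): the condition c predicts a per-channel
    scale and shift applied to the normalized features f."""
    def __init__(self, feat_dim, cond_dim):
        super().__init__()
        self.norm = nn.LayerNorm(feat_dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, f, c):          # f: (B, N, C), c: (B, cond_dim)
        scale, shift = self.to_scale_shift(c).chunk(2, dim=-1)
        return self.norm(f) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```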
Now we detail the structure. First, we extract the shape latent of the noisy point cloud $x_t$, choosing PointNet for structural consistency with the compressor. However, in the early stages of the denoising process, $x_t$ lacks a regular surface shape for the generator to learn. Therefore, we adopt the suggestion from PDR Lyu2021ACP, adding a positional encoding to each noisy point so that the generator can understand the absolute position of each point in 3D space. We then inject the shape latent from the compressor via AdaLN. We formulate the above process as:

$$f_1 = \mathrm{PointNet}(x_t) + \mathrm{PE}(x_t) \quad (12)$$
$$f_2 = \mathrm{AdaLN}(f_1, \hat{z}_l) \quad (13)$$
Next, we fuse the high-frequency features. We extract the local high-frequency features of $x_t$ using PointPN and add them to $f_2$ from the previous step; then we inject the high-frequency features from the compressor via AdaLN. We use a K-Nearest Neighbor (KNN) operation for local partitioning with the number of neighbor points set to 8, which allows the generator to learn local details. We formulate the above process as:

$$f_3 = \mathrm{PointPN}(x_t) + f_2 \quad (14)$$
$$f_4 = \mathrm{AdaLN}(f_3, \hat{z}_h) \quad (15)$$
After that, we use a self-attention mechanism to exchange information across different local areas, generate features for $n$ points through a feature up-sampling module, and finally output the predicted noise through a linear layer. We formulate the above process as:

$$f_5 = \mathrm{SelfAttn}(f_4) \quad (16)$$
$$f_6 = \mathrm{Upsample}(f_5) \quad (17)$$
$$\epsilon_\theta = \mathrm{Linear}(f_6) \quad (18)$$
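Putting Eqs. (12)-(18) together, the generator's forward pass can be sketched as below. The backbones are reduced to point-wise MLPs purely so the sketch runs; real PointNet/PointPN blocks (with KNN grouping, k = 8) and the feature up-sampling module would take their place, and `AdaLN` is the module from the previous sketch.

```python
import torch
import torch.nn as nn

class GeneratorSketch(nn.Module):
    """Schematic wiring of Eqs. (12)-(18); assumes the AdaLN module above."""
    def __init__(self, dim=288, cond_dim=288):
        super().__init__()
        # Point-wise MLP stand-ins for the PointNet / PointPN backbones.
        self.pointnet = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.pointpn = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.adaln_l = AdaLN(dim, cond_dim)
        self.adaln_h = AdaLN(dim, cond_dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 3)              # predicted per-point noise

    def forward(self, x_t, pe, zl_hat, zh_hat):    # x_t: (B, N, 3), pe: (B, N, dim)
        f = self.pointnet(x_t) + pe                # Eq. (12)
        f = self.adaln_l(f, zl_hat)                # Eq. (13)
        f = self.pointpn(x_t) + f                  # Eq. (14); real version groups via KNN
        f = self.adaln_h(f, zh_hat)                # Eq. (15)
        f, _ = self.attn(f, f, f)                  # Eq. (16)
        return self.head(f)                        # Eqs. (17)-(18); up-sampling omitted
```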
3.2.4 Training Objective
We follow the conventional rate-distortion trade-off to form our loss function:

$$\mathcal{L} = D + \lambda R \quad (19)$$

where $D$ refers to the evaluated distortion; $R$ represents the bitrate as shown in Eq. (10); $\lambda$ balances the distortion and the bitrate. Specifically, a combined form of distortion is used in this paper, which considers both the intermediate noises ($\epsilon$, $\epsilon_\theta$) and the global shapes ($X$, $\hat{X}$):

$$D = \|\epsilon - \epsilon_\theta\|_2^2 + \gamma\, d_{CD}(X, \hat{X}) \quad (20)$$

where $\|\cdot\|_2^2$ denotes the Mean Squared Error (MSE) distance; $d_{CD}(\cdot, \cdot)$ refers to the Chamfer Distance; $\gamma$ is the weighting factor. Here, the overall point cloud shape is additionally supervised under the Chamfer Distance to provide global optimization. The following function is utilized to predict the reconstructed point cloud in practice:

$$\hat{X} = \frac{1}{\sqrt{\bar{\alpha}_t}}\Big(x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(x_t, t, c)\Big) \quad (21)$$

where $\bar{\alpha}_t$ denotes the noise level; $x_t$ refers to the noisy point cloud at time step $t$; $\epsilon_\theta$ denotes the predicted noise from the generator; $c$ represents the conditional information we inject into the generator.
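The combined distortion of Eqs. (20)-(21) can be computed as in the sketch below; the brute-force `torch.cdist` Chamfer variant and the weighting value are our assumptions.

```python
import torch

def chamfer_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between point sets x, y of shape (B, N, 3)."""
    d = torch.cdist(x, y)                             # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def combined_distortion(noise, noise_pred, x_t, a_bar_t, x0, gamma=1.0):
    # Eq. (21): recover the predicted clean shape from the predicted noise.
    x0_pred = (x_t - (1.0 - a_bar_t).sqrt() * noise_pred) / a_bar_t.sqrt()
    # Eq. (20): MSE on the intermediate noise + weighted Chamfer on the shape.
    return torch.mean((noise - noise_pred) ** 2) + gamma * chamfer_distance(x0, x0_pred)
```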
4 Experiments
4.1 Experimental Setup
Datasets Following previous work, we use ShapeNet as our training set, sourced from luo2021diffusion. This dataset contains 51,127 point clouds across 55 categories, which we split in an 8:1:1 ratio for training, validation, and testing. Each point cloud has 15K points, and following the suggestions of qian2022pointnext, we randomly sample 2K points from each for training. Additionally, we use ModelNet10 and ModelNet40 as test sets, sourced from pointflow. These datasets contain 10 and 40 categories respectively, totaling 10,582 point clouds. During training and testing, we normalize the shape of each point cloud individually.
Baselines & Metrics We compare our method with the state-of-the-art non-learning-based method G-PCC and the latest learning-based methods from the past two years: IPDAE and PCT-PCC. Following you2022ipdae ; you2024pointsoup, we use the point-to-point PSNR to measure geometric accuracy and the number of bits per point (bpp) to measure the compression ratio.
Implementation
Our model is implemented using PyTorch and CompressAI, and trained on an NVIDIA RTX 4090 GPU (24 GB memory) for 80,000 steps with a batch size of 48. We use the Adam optimizer with an initial learning rate of 1e-4 and a decay factor of 0.5 every 30,000 steps, with $\beta_1$ set to 0.9 and $\beta_2$ set to 0.999. Since the positional encoding method requires the feature dimension to be a multiple of 6, we set the bottleneck layer size to 288. For diffusion, we employ a cosine noise schedule with $T = 200$ denoising steps for both training and testing.
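For reference, the optimizer and learning-rate decay described above map to the following PyTorch configuration; the model and loss here are dummy stand-ins so the snippet runs.

```python
import torch

model = torch.nn.Linear(3, 3)  # placeholder for the full Diff-PCC model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Halve the learning rate every 30,000 optimizer steps, as described above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30_000, gamma=0.5)

for step in range(80_000):
    loss = model(torch.randn(48, 3)).pow(2).mean()  # stands in for Eq. (19)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```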
Table 1: Quantitative comparison in BD-Rate and BD-PSNR (computed against the G-PCC anchor), together with average coding time.

| Dataset | Metric | G-PCC | IPDAE | PCT-PCC | Diff-PCC |
|---|---|---|---|---|---|
| ShapeNet | BD-Rate (%) | – | -34.594 | -87.563 | -99.999 |
| | BD-PSNR (dB) | – | +3.518 | +8.651 | +11.906 |
| ModelNet10 | BD-Rate (%) | – | -35.640 | -68.899 | -56.910 |
| | BD-PSNR (dB) | – | +4.060 | +6.333 | +5.876 |
| ModelNet40 | BD-Rate (%) | – | -53.231 | -34.127 | -56.451 |
| | BD-PSNR (dB) | – | +4.245 | +6.167 | +5.350 |
| Avg. | BD-Rate (%) | – | -41.550 | -63.530 | -71.117 |
| | BD-PSNR (dB) | – | +3.941 | +4.384 | +7.711 |
| Time (s/frame) | Encoding | 0.002 | 0.004 | 0.046 | 0.152 |
| | Decoding | 0.001 | 0.006 | 0.001 | 1.913 |

4.2 Baseline Comparisons
Objective Quality Comparison
Table 1 reports quantitative results in terms of BD-Rate and BD-PSNR, and Fig. 3 shows the rate-distortion curves of different methods. Under identical reconstruction quality, our method achieves superior rate-distortion performance, saving between 56% and 99% of the bitstream compared with G-PCC. At ultra-low bitrates, the point-to-point PSNR of our method surpasses that of G-PCC by 7.711 dB.
Subjective Quality Comparison
Fig. 4 presents the ground truth and the decoded point clouds from different methods. We choose three point clouds (airplane, chair, and mug) tested across a comparable bits-per-point (bpp) range. The comparison reveals that at the lowest code rate, our method best preserves the shape information of the ground truth while simultaneously achieving the highest point-to-point PSNR.
4.3 Ablation Studies
We conduct ablation studies to examine the impact of key components of the model. Specifically, we investigate the effectiveness of the low-frequency features, the high-frequency features, and the loss function designed in Sec. 3.2.4. As shown in Table 2, guiding the reconstruction of the diffusion model with low-frequency features alone reduces the code rate by about 20% but decreases reconstruction quality by 0.397 dB, indicating that high-frequency features play an effective role in guiding the model during reconstruction. Conversely, discarding the low-frequency features, which represent the shape of the point cloud, also reduces the code rate but significantly diminishes reconstruction quality; we therefore argue that losing the shape variable is not worthwhile. Lastly, we assess the impact of the Chamfer Distance term $d_{CD}$, and the results indicate that removing this loss marginally increases the bits per point (bpp) while diminishing reconstruction quality.

Table 2: Ablation studies on the dual-space latents and the Chamfer Distance loss term.

| $\mathcal{E}_l$ backbone | $\mathcal{E}_h$ backbone | $d_{CD}$ loss | BD-PSNR (dB) | BD-Rate (%) |
|---|---|---|---|---|
| ✔ | ✗ | ✔ | -0.397 | -20.637 |
| ✗ | ✔ | ✔ | -2.276 | -16.523 |
| ✔ | ✔ | ✗ | -0.132 | +4.658 |
5 Limitations
Although our method achieves advanced rate-distortion performance and excellent visual reconstruction results, several limitations warrant discussion. First, the encoding and decoding times are relatively long, which could potentially be improved by the acceleration techniques explored in prior work li2023q ; liu2024cdformer. Second, the model is currently limited to compressing small-scale point clouds, and further research is required to extend its capability to large-scale instances.
6 Conclusion
We propose a diffusion-based point cloud compression method, dubbed Diff-PCC, which leverages the expressive power of the diffusion model for generative and aesthetically superior decoding. We introduce a dual-space latent representation to enhance the representation ability of the conventional Gaussian priors in VAEs, enabling Diff-PCC to extract expressive shape latents and facilitate the subsequent diffusion-based decoding process. At the decoding side, an effective diffusion-based generator produces high-quality reconstructions by using the shape latents as guidance to stochastically denoise noisy point clouds. The proposed method achieves state-of-the-art compression performance while attaining superior subjective quality. Future work may include reducing the coding complexity and extending to large-scale point cloud instances.
References
- [1] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016.
- [2] Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018.
- [3] Francesco Paolo Casale, Adrian Dalca, Luca Saglietti, Jennifer Listgarten, and Nicolo Fusi. Gaussian process prior variational autoencoders. Advances in neural information processing systems, 31, 2018.
- [4] Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, and Jiaya Jia. Diffcomplete: Diffusion-based generative 3d shape completion. Advances in Neural Information Processing Systems, 36, 2024.
- [5] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850–10869, 2023.
- [6] Bin Dai and David Wipf. Diagnosing and enhancing vae models. arXiv preprint arXiv:1903.05789, 2019.
- [7] Kamak Ebadi, Lukas Bernreiter, Harel Biggie, Gavin Catt, Yun Chang, Arghya Chatterjee, Christopher E Denniston, Simon-Pierre Deschênes, Kyle Harlow, Shehryar Khattak, et al. Present and future of slam in extreme environments: The darpa subt challenge. IEEE Transactions on Robotics, 2023.
- [8] Lili Fan, Junhao Wang, Yuanmeng Chang, Yuke Li, Yutong Wang, and Dongpu Cao. 4d mmwave radar for autonomous driving perception: a comprehensive survey. IEEE Transactions on Intelligent Vehicles, 2024.
- [9] Chunyang Fu, Ge Li, Rui Song, Wei Gao, and Shan Liu. Octattention: Octree-based large-scale contexts model for point cloud compression. In Proceedings of the AAAI conference on artificial intelligence, pages 625–633, 2022.
- [10] Xiaoran Hao and Patrick Shafto. Coupled variational autoencoder. arXiv preprint arXiv:2306.02565, 2023.
- [11] Yun He, Xinlin Ren, Danhang Tang, Yinda Zhang, Xiangyang Xue, and Yanwei Fu. Density-preserving deep point cloud compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2333–2342, 2022.
- [12] Tianxin Huang, Jiangning Zhang, Jun Chen, Zhonggan Ding, Ying Tai, Zhenyu Zhang, Chengjie Wang, and Yong Liu. 3qnet: 3d point cloud geometry quantization compression network. ACM Transactions on Graphics (TOG), 41(6):1–13, 2022.
- [13] Zixuan Huang, Justin Johnson, Shoubhik Debnath, James M Rehg, and Chao-Yuan Wu. Pointinfinity: Resolution-invariant point diffusion models. arXiv preprint arXiv:2404.03566, 2024.
- [14] Yiqi Jin, Ziyu Zhu, Tongda Xu, Yuhuan Lin, and Yan Wang. Ecm-opcc: Efficient context model for octree-based point cloud compression. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7985–7989, 2024.
- [15] Weonyoung Joo, Wonsung Lee, Sungrae Park, and Il-Chul Moon. Dirichlet variational autoencoder. Pattern Recognition, 107:107514, 2020.
- [16] M Krithika Alias AnbuDevi and K Suganthi. Review of semantic segmentation of medical images using modified architectures of unet. Diagnostics, 12(12):3064, 2022.
- [17] Jin Sub Lee, Jisun Kim, and Philip M Kim. Score-based generative modeling for de novo protein design. Nature Computational Science, 3(5):382–392, 2023.
- [18] Xiuyu Li, Yijiang Liu, Long Lian, Huanrui Yang, Zhen Dong, Daniel Kang, Shanghang Zhang, and Kurt Keutzer. Q-diffusion: Quantizing diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17535–17545, 2023.
- [19] Qingguo Liu, Chenyi Zhuang, Pan Gao, and Jie Qin. Cdformer: When degradation prediction embraces diffusion model for blind image super-resolution. arXiv preprint arXiv:2405.07648, 2024.
- [20] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2021.
- [21] Zhaoyang Lyu, Zhifeng Kong, Xudong Xu, Liang Pan, and Dahua Lin. A conditional point diffusion-refinement paradigm for 3d point cloud completion. ArXiv, abs/2112.03530, 2021.
- [22] David Minnen, Johannes Ballé, and George D Toderici. Joint autoregressive and hierarchical priors for learned image compression. Advances in neural information processing systems, 31, 2018.
- [23] Shentong Mo, Enze Xie, Ruihang Chu, Lanqing Hong, Matthias Niessner, and Zhenguo Li. Dit-3d: Exploring plain diffusion transformers for 3d shape generation. Advances in Neural Information Processing Systems, 36, 2024.
- [24] Dat Thanh Nguyen and André Kaup. Lossless point cloud geometry and attribute compression using a learned conditional probability model. IEEE Transactions on Circuits and Systems for Video Technology, 2023.
- [25] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022.
- [26] Francesco Pezone, Osman Musa, Giuseppe Caire, and Sergio Barbarossa. Semantic-preserving image coding based on conditional diffusion models. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 13501–13505. IEEE, 2024.
- [27] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652–660, 2017.
- [28] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017.
- [29] Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [30] Ruwen Schnabel and Reinhard Klein. Octree-based point-cloud compression. PBG@SIGGRAPH, 3:111–121, 2006.
- [31] Rui Song, Chunyang Fu, Shan Liu, and Ge Li. Efficient hierarchical entropy model for learned point cloud compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14368–14377, 2023.
- [33] Yu Takagi and Shinji Nishimoto. High-resolution image reconstruction with latent diffusion models from human brain activity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14453–14463, 2023.
- [34] Lucas Theis, Tim Salimans, Matthew D Hoffman, and Fabian Mentzer. Lossy compression with gaussian diffusion. arXiv preprint arXiv:2206.08889, 2022.
- [35] Anwaar Ulhaq, Naveed Akhtar, and Ganna Pogrebna. Efficient diffusion models for vision: A survey. arXiv preprint arXiv:2210.09292, 2022.
- [36] Juho-Pekka Virtanen, Sylvie Daniel, Tuomas Turppa, Lingli Zhu, Arttu Julin, Hannu Hyyppä, and Juha Hyyppä. Interactive dense point clouds in a game engine. ISPRS Journal of Photogrammetry and Remote Sensing, 163:375–389, 2020.
- [37] Jianqiang Wang, Dandan Ding, Zhu Li, and Zhan Ma. Multiscale point cloud geometry compression. In 2021 Data Compression Conference (DCC), pages 73–82. IEEE, 2021.
- [38] Jianqiang Wang, Dandan Ding, and Zhan Ma. Lossless point cloud attribute compression using cross-scale, cross-group, and cross-color prediction. In 2023 Data Compression Conference (DCC), pages 228–237. IEEE, 2023.
- [39] Jianqiang Wang and Zhan Ma. Sparse tensor-based point cloud attribute compression. In 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR), pages 59–64. IEEE, 2022.
- [40] MPEG 3D Graphics and Haptics Coding WG 7. G-pcc 2nd edition codec description. ISO/IEC JTC 1/SC 29/WG 7, 2023.
- [41] MPEG 3D Graphics Coding WG 7. V-pcc codec description. ISO/IEC JTC 1/SC 29/WG 7, 2020.
- [42] Yankun Wu, Yuta Nakashima, and Noa Garcia. Not only generative art: Stable diffusion for content-style disentanglement in art analysis. In Proceedings of the 2023 ACM International conference on multimedia retrieval, pages 199–208, 2023.
- [43] Ruixiang Xue, Jiaxin Li, Tong Chen, Dandan Ding, Xun Cao, and Zhan Ma. Neri: Implicit neural representation of lidar point cloud using range image sequence. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8020–8024. IEEE, 2024.
- [44] Guandao Yang, Xun Huang, Zekun Hao, Ming-Yu Liu, Serge Belongie, and Bharath Hariharan. Pointflow: 3d point cloud generation with continuous normalizing flows. arXiv, 2019.
- [45] Ruihan Yang and Stephan Mandt. Lossy image compression with conditional diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
- [46] Kang You, Pan Gao, and Qing Li. Ipdae: Improved patch-based deep autoencoder for lossy point cloud geometry compression. In Proceedings of the 1st International Workshop on Advances in Point Cloud Compression, Processing and Analysis, pages 1–10, 2022.
- [47] Kang You, Kai Liu, Li Yu, Pan Gao, and Dandan Ding. Pointsoup: High-performance and extremely low-decoding-latency learned geometry codec for large-scale point cloud scenes. arXiv preprint arXiv:2404.13550, 2024.
- [48] Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, and Karsten Kreis. Lion: Latent point diffusion models for 3d shape generation. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [49] Chenshuang Zhang, Chaoning Zhang, Mengchun Zhang, and In So Kweon. Text-to-image diffusion model in generative ai: A survey. arXiv preprint arXiv:2303.07909, 2023.
- [50] Junteng Zhang, Tong Chen, Dandan Ding, and Zhan Ma. Yoga: Yet another geometry-based point cloud compressor. In Proceedings of the 31st ACM International Conference on Multimedia, pages 9070–9081, 2023.
- [51] Junteng Zhang, Gexin Liu, Dandan Ding, and Zhan Ma. Transformer and upsampling-based point cloud compression. In Proceedings of the 1st International Workshop on Advances in Point Cloud Compression, Processing and Analysis, pages 33–39, 2022.
- [52] Junteng Zhang, Jianqiang Wang, Dandan Ding, and Zhan Ma. Scalable point cloud attribute compression. IEEE Transactions on Multimedia, 2023.
- [53] Junzhe Zhang, Tong Chen, Dandan Ding, and Zhan Ma. G-pcc++: Enhanced geometry-based point cloud compression. In Proceedings of the 31st ACM International Conference on Multimedia, pages 1352–1363, 2023.
- [55] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836–3847, 2023.
- [56] Renrui Zhang, Liuhui Wang, Ziyu Guo, Yali Wang, Peng Gao, Hongsheng Li, and Jianbo Shi. Parameter is not all you need: Starting from non-parametric networks for 3d point cloud analysis. arXiv preprint arXiv:2303.08134, 2023.
- [57] Shengjia Zhao, Jiaming Song, and Stefano Ermon. Towards deeper understanding of variational autoencoding models. arXiv preprint arXiv:1702.08658, 2017.
- [58] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5826–5835, October 2021.
- [59] Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, and Luc Van Gool. Denoising diffusion models for plug-and-play image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1219–1229, 2023.