Latent Semantic Diffusion-based Channel Adaptive De-Noising SemCom for Future 6G Systems
Abstract
Compared with the current Shannon's Classical Information Theory (CIT) paradigm, semantic communication (SemCom) has recently attracted much attention, since it aims to transmit the meaning of information rather than bits, thus enhancing data transmission efficiency and supporting future human-centric, data-, and resource-intensive intelligent services in 6G systems. Nevertheless, channel noise is common and even severe in 6G-empowered scenarios, limiting the communication performance of SemCom, especially when the Signal-to-Noise Ratio (SNR) levels during the training and deployment stages differ; meanwhile, training multiple networks to cover a broad range of SNRs is computationally inefficient. Hence, we develop a novel De-Noising SemCom (DNSC) framework, where the designed de-noiser module eliminates noise interference from semantic vectors. Upon the designed DNSC architecture, we further combine adversarial learning, variational autoencoders, and diffusion models to propose the Latent Diffusion DNSC (Latent-Diff DNSC) scheme to realize intelligent online de-noising. During the offline training phase, noise is added to latent semantic vectors in a forward Markov diffusion manner and then eliminated in a reverse diffusion manner through the posterior distribution approximated by a U-shaped Network (U-Net), where the semantic de-noiser is optimized by maximizing the evidence lower bound (ELBO). Such a design can model real noisy channel environments with various SNRs and enables the receiver to adaptively remove noise from noisy semantic vectors during the online transmission phase. Simulations on an open-source image dataset demonstrate the superiority of the proposed Latent-Diff DNSC scheme in PSNR and SSIM over different SNRs compared with state-of-the-art schemes, including JPEG, Deep JSCC, and ADJSCC.
Index Terms:
Semantic communication, sixth-generation (6G), diffusion model, image transmission.

I Introduction
To support the transition from the Internet of Things (IoT) to the Internet of Everything (IoE), the future sixth-generation (6G) communication systems are expected to enable a wide range of services, including multisensory Extended Reality (XR), connected robotics and autonomous systems, wireless brain-computer interactions, and more. This will be achieved through emerging techniques such as higher millimeter wave (mmWave) frequencies, large intelligent surfaces, edge Artificial Intelligence (AI), and integrated terrestrial, airborne, and satellite networks [1]. However, the current Shannon's Classical Information Theory (CIT) paradigm faces several challenges in supporting human-centric, data-, and resource-intensive intelligent services. These challenges include the wireless transmission of massive amounts of data; the need for rapid system response as well as reliable and efficient information interaction; and the consumption of additional network resources for real-time information updates and analysis of user data [2].
In response, semantic communication (SemCom) is a promising revolutionary paradigm in 6G to break through the "Shannon trap" by filtering out redundant information and extracting the meaning of effective information. Owing to the development of distributed computation and the wide connectivity of ubiquitous intelligent devices, SemCom is expected to be deployed on a large scale in 6G. In particular, SemCom offers the following advantages: relieving the pressure of data transmission, improving network management efficiency, and enhancing resource allocation effectiveness [3][4].
Thanks to the rapid development of Deep Learning (DL) techniques for comprehending language, audio, and images, DL-based communication architectures have recently been developed. O'Shea et al. [5] design an Auto-Encoder (AE)-based end-to-end wireless communication system, where the channel AE comprises the encoder, channel regularizer, and decoder; convolutional layers and domain-specific regularizing effects are introduced to reduce the number of parameters and to avoid over-fitting, respectively. O'Shea et al. [5] further verify that the proposed system not only has performance comparable to modern systems, but also nearly reaches the Shannon capacity, while maintaining universality and low complexity.
More recently, joint source-channel coding (JSCC) schemes for structured sources in SemCom have attracted increasing attention, since such designs are more feasible for actual communication systems with constrained block lengths. Xie et al. [6] present a SemCom framework based on Transformers and transfer learning to extract the semantic information of texts in noisy environments. Xie et al. [6] also incorporate cross-entropy and mutual information into the loss function to maximize the system capacity and propose a novel metric to measure the semantic error of sentences. Weng et al. [7] exploit squeeze-and-excitation (SE) networks to learn essential speech semantic information and employ the attention mechanism to enhance the accuracy of signal recovery. Bourtsoulatze et al. [8] first develop a SemCom architecture based on Convolutional Neural Networks (CNNs) for the radio transmission of high-resolution images over additive white Gaussian noise (AWGN) and fading channels. Dong et al. [4] first design semantic slice-models to adaptively realize semantic transmission tasks in different circumstances. Du et al. [9] propose an AI-generated incentive mechanism based on contract theory to facilitate semantic information sharing among users, where the diffusion model is utilized to generate the optimal contract design, effectively promoting the sharing of semantic information between users.
Although many researchers pay attention to DL-enabled SemCom and utilize its potential in enhancing communication efficiency, SemCom is still facing some challenges. For example, channel noises severely affect the recovery and reception of semantic information. Hence, suppressing channel noises in the restoration of semantic information should be appropriately addressed to enhance transmission performance.
To resist channel noises, the schemes proposed in [6] and [8] introduce channels when training models to recover signals at a specific SNR. However, these schemes do not consider that the SNR levels during the training and deployment stages may differ. On the other hand, it is computationally inefficient to train multiple networks to cover a scenario with a wide range of SNRs. To realize highly robust SemCom across different SNRs, Hu et al. [10] develop a vector quantization-based semantic communication system, which utilizes adversarial networks to perform semantic de-noising for image classification tasks. For image transmission, Xu et al. [11] propose an Attention DL-based JSCC (ADJSCC) scheme that uses the attention mechanism to dynamically adjust SNRs during the training phase. Such a design can capture inherent channel characteristics at different SNRs and remove channel noises during actual deployment. Nevertheless, adaptive de-noising in different SNR environments remains a major challenge.
To support highly reliable image transmission for future 6G systems, we combine the Variational Autoencoder (VAE), adversarial learning, and the diffusion model to realize de-noising semantic communication under noisy channel environments. The main contributions are described as follows:
- We develop the De-Noising SemCom (DNSC) framework consisting of the encoder and decoder to achieve highly robust transmission for future 6G systems, where the proposed semantic de-noiser module in the decoder relieves the effects of channel noises on semantic transmission.
- To realize adaptive online semantic de-noising under various SNR environments, we further propose the Latent Diffusion DNSC (Latent-Diff DNSC) scheme upon the designed DNSC system. The constructed objective function for joint encoder-decoder training combines a reconstruction loss, an adversarial loss, and a regularization loss, which respectively optimize the VAEs, optimize the discriminator that differentiates original and generated images, and avoid arbitrarily scaled latent semantic spaces.
- The semantic de-noiser in the proposed Latent-Diff DNSC scheme is obtained through forward and reverse diffusion processes. The noises gradually added in the forward phase to model real channel noises are iteratively eliminated in the reverse phase by maximizing the logarithmic likelihood of the distribution predicted by the DL model. Under such a design, channel noises under different SNR conditions can be adaptively removed in the online inference stage without knowing the channel information in the offline training stage.
- The simulation results on the open-source LAION2B-EN dataset [12] demonstrate that, under the same compression ratio, the proposed Latent-Diff DNSC model achieves higher Peak Signal-to-Noise Ratio (PSNR) and higher Structural SIMilarity (SSIM) than the ADJSCC and JSCC models.
II System Model
As shown in Fig. 1, a typical SemCom architecture is considered, consisting of the transmitter, physical channels, and the receiver, which are detailed as follows.

II-A Transmitter
The input data $\mathbf{s}$ is processed by the joint semantic-channel encoder $E_{\phi}$ with parameters $\phi$, leading to the formation of a semantic latent vector $\mathbf{z}$:

$$\mathbf{z} = E_{\phi}(\mathbf{s}). \tag{1}$$

As shown in (2), the transmitted signal $\mathbf{x}$ is subject to an average power constraint of magnitude $P$ before being transmitted through the physical channel:

$$\mathbf{x} = \sqrt{kP}\,\frac{\mathbf{z}}{\sqrt{\mathbf{z}^{\mathsf{T}}\mathbf{z}}}, \tag{2}$$

where $k$ is the dimension of $\mathbf{z}$.
II-B Physical Channel
The physical channel is usually modeled as additive white Gaussian noise (AWGN), denoted by $\mathbf{n}\sim\mathcal{N}(\mathbf{0},\sigma^2\mathbf{I})$, where $\sigma^2$ represents the power of the noise. Hence, the received signal $\mathbf{y}$ at the receiver is denoted as

$$\mathbf{y} = \mathbf{h}\odot\mathbf{x} + \mathbf{n}, \tag{3}$$

where $\mathbf{h}$ represents the coefficients of the physical channel between the transmitter and receiver.
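To make the channel model concrete, the following minimal numpy sketch implements the power normalization of (2) and the AWGN channel of (3), assuming real-valued latents and $\mathbf{h}=\mathbf{1}$; the function name `transmit_awgn` and the unit default power are illustrative choices, not part of the original system.

```python
import numpy as np

def transmit_awgn(z: np.ndarray, snr_db: float, power: float = 1.0) -> np.ndarray:
    """Normalize a latent vector to average symbol power `power` (Eq. (2))
    and pass it through an AWGN channel (Eq. (3) with h = 1)."""
    k = z.size
    x = np.sqrt(k * power) * z / np.linalg.norm(z)   # satisfies E[|x|^2] = power
    sigma2 = power / (10 ** (snr_db / 10))           # from SNR = P / sigma^2
    n = np.sqrt(sigma2) * np.random.randn(*x.shape)
    return x + n

# Example: a 256-dimensional latent transmitted at 10 dB SNR.
y = transmit_awgn(np.random.randn(256), snr_db=10.0)
```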
II-C Receiver
Definition 1 (Semantic De-Noiser): Unlike existing SemCom frameworks, in the proposed DNSC architecture, the semantic de-noiser module at the receiver is defined to remove noise components from the noisy semantic vector. Such a design can be achieved by neural networks parameterized by $\theta$.
Guided by the predictions of the trained semantic de-noiser, the noisy latent semantic variable is progressively de-noised and eventually transformed into $\hat{\mathbf{z}}$. Then, the de-noised semantic vector $\hat{\mathbf{z}}$ is sent to the joint semantic-channel decoder $D_{\psi}$ with parameters $\psi$ to generate the reconstructed data $\hat{\mathbf{s}}$ as

$$\hat{\mathbf{s}} = D_{\psi}(\hat{\mathbf{z}}). \tag{4}$$
Overall, the proposed semantic de-noiser accurately models the noise characteristics of physical channels, leading to effective de-noising of the semantic information. In this way, the proposed DNSC architecture can intelligently remove noises from the latent semantic vectors to improve the robustness and generalization ability of SemCom models. The specific implementation of the semantic de-noiser is provided in Section III.
III The Proposed Latent-Diff DNSC System
As illustrated in Fig. 2, upon the designed DNSC system, we present the Latent-Diff DNSC scheme, including the offline training and online inference phases.


III-A Offline Training Phase
As illustrated in Algorithm 1, the offline training phase includes two training stages: in the first stage, the encoder and decoder are jointly trained; in the second stage, the semantic de-noiser is trained in the latent semantic space.
III-A1 The First Training Stage
During the initial phase, the encoder and decoder are jointly trained to obtain the distribution of the semantic latent space. As illustrated in Table I, the encoder-decoder structure in [13] is adopted.
Proposition 1: After the joint training, to learn the probability distribution of the latent semantic space, the model is trained with a loss function involving the reconstruction loss $\mathcal{L}_{\mathrm{rec}}$, the self-similarity (adversarial) loss $\mathcal{L}_{\mathrm{adv}}$, and the regularized Kullback-Leibler (KL) divergence loss $\mathcal{L}_{\mathrm{KL}}$ as

$$\mathcal{L} = \mathcal{L}_{\mathrm{rec}} + \lambda_{\mathrm{adv}}\mathcal{L}_{\mathrm{adv}} + \lambda_{\mathrm{KL}}\mathcal{L}_{\mathrm{KL}}, \tag{5}$$

where $\lambda_{\mathrm{adv}}$ and $\lambda_{\mathrm{KL}}$ denote the weights of the loss terms. $\mathcal{L}_{\mathrm{rec}}$ quantifies the disparity between the original and reconstructed images; $\mathcal{L}_{\mathrm{adv}}$, obtained through adversarial training, preserves local consistency; and $\mathcal{L}_{\mathrm{KL}}$, computed in the latent space, encourages the retention of maximal information while encoding images. They can be denoted as

$$\mathcal{L}_{\mathrm{rec}} = \lVert \mathbf{s} - \hat{\mathbf{s}} \rVert_2^2, \tag{6}$$

$$\mathcal{L}_{\mathrm{adv}} = \log D_{\eta}(\mathbf{s}) + \log\big(1 - D_{\eta}(\hat{\mathbf{s}})\big), \tag{7}$$

where $D_{\eta}$ is the discriminator with parameters $\eta$, and

$$\mathcal{L}_{\mathrm{KL}} = D_{\mathrm{KL}}\big(q_{\phi}(\mathbf{z}\mid\mathbf{s}) \,\|\, \mathcal{N}(\mathbf{0},\mathbf{I})\big). \tag{8}$$
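As a rough sketch of how the first-stage objective in (5)-(8) could be assembled, assuming a diagonal-Gaussian latent posterior (so the KL term of (8) has a closed form) and a non-saturating generator-side version of the adversarial term in (7); the names `mu`, `logvar`, `disc` and the weight values are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def first_stage_loss(s, s_hat, mu, logvar, disc, w_adv=0.5, w_kl=1e-6):
    """Combined objective of Eq. (5): reconstruction (6), generator-side
    adversarial term derived from (7), and closed-form KL of Eq. (8)."""
    l_rec = F.mse_loss(s_hat, s)
    # The generator is rewarded when the discriminator scores s_hat as real.
    l_adv = -disc(s_hat).mean()
    # KL(q(z|s) || N(0, I)) for a diagonal Gaussian posterior.
    l_kl = 0.5 * torch.mean(mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return l_rec + w_adv * l_adv + w_kl * l_kl
```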
TABLE I: The adopted encoder and decoder structures [13]

| Encoder | Decoder |
| --- | --- |
| Conv2D | Conv2D |
| ResBlock | |
| ResBlock | Non-Local |
| Non-Local | ResBlock |
| ResBlock | |
| GroupNorm, Swish, Conv2D | GroupNorm, Swish, Conv2D |
III-A2 The Second Training Stage
A forward diffusion chain progressively adds noise to the semantic latent vector $\mathbf{z}_0$ via a Markov chain, using an ascending variance schedule $\{\beta_t\}_{t=1}^{T}$ as

$$q(\mathbf{z}_{1:T}\mid\mathbf{z}_0) = \prod_{t=1}^{T} q(\mathbf{z}_t\mid\mathbf{z}_{t-1}), \tag{9}$$

where

$$q(\mathbf{z}_t\mid\mathbf{z}_{t-1}) = \mathcal{N}\big(\mathbf{z}_t;\,\sqrt{1-\beta_t}\,\mathbf{z}_{t-1},\,\beta_t\mathbf{I}\big). \tag{10}$$

After reparameterization, $\mathbf{z}_t$ can be obtained by

$$\mathbf{z}_t = \sqrt{\bar{\alpha}_t}\,\mathbf{z}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{11}$$

where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t}\alpha_i$.
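The forward chain of (9)-(11) reduces to a few lines of PyTorch; the following sketch assumes a linear schedule from $10^{-4}$ to $0.02$ over $T=1000$ steps, which are common DDPM defaults rather than values stated in this paper.

```python
import torch

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear ascending variance schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # \bar{alpha}_t = prod_i alpha_i

def q_sample(z0: torch.Tensor, t: int) -> torch.Tensor:
    """Draw z_t ~ q(z_t | z_0) via the reparameterization in Eq. (11)."""
    eps = torch.randn_like(z0)
    return alpha_bars[t].sqrt() * z0 + (1.0 - alpha_bars[t]).sqrt() * eps
```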
The training data is transformed into semantic vectors $\mathbf{z}_0$ with a distribution of $q(\mathbf{z}_0)$ through the trained encoder. Gaussian noises are also added to model real-world noise conditions. $\mathbf{z}_0$ is iteratively corrupted with noise for $T$ steps and gradually approaches a Gaussian distribution $\mathbf{z}_T\sim\mathcal{N}(\mathbf{0},\mathbf{I})$. In the reverse process, our goal is to train a U-shaped Network (U-Net) with parameters $\theta$ to learn the inverse distribution $p_{\theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_t)$, which enables us to gradually recover the original latent distribution of $\mathbf{z}_0$ from the diffused latent $\mathbf{z}_T$ through the reverse diffusion process. The reverse process is given by

$$p_{\theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_t) = \mathcal{N}\big(\mathbf{z}_{t-1};\,\mu_{\theta}(\mathbf{z}_t,t),\,\Sigma_{\theta}(\mathbf{z}_t,t)\big), \tag{12}$$

where $\mu_{\theta}(\mathbf{z}_t,t)$ and $\Sigma_{\theta}(\mathbf{z}_t,t)$ are the mean and covariance predicted by the U-Net. Although it is difficult to obtain the probability distribution of the reverse process directly, the posterior $q(\mathbf{z}_{t-1}\mid\mathbf{z}_t,\mathbf{z}_0)$ can be obtained by (13) if $\mathbf{z}_0$ is known:

$$q(\mathbf{z}_{t-1}\mid\mathbf{z}_t,\mathbf{z}_0) = \mathcal{N}\big(\mathbf{z}_{t-1};\,\tilde{\mu}_t(\mathbf{z}_t,\mathbf{z}_0),\,\tilde{\beta}_t\mathbf{I}\big), \tag{13}$$
where $\tilde{\mu}_t(\mathbf{z}_t,\mathbf{z}_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\mathbf{z}_0 + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\mathbf{z}_t$ and $\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$. The parameters $\theta$ can be optimized by maximizing the evidence lower bound (ELBO) as

$$\log p_{\theta}(\mathbf{z}_0) \ge \mathbb{E}_{q}\Big[\log p_{\theta}(\mathbf{z}_0\mid\mathbf{z}_1) - \sum_{t=2}^{T} D_{\mathrm{KL}}\big(q(\mathbf{z}_{t-1}\mid\mathbf{z}_t,\mathbf{z}_0)\,\|\,p_{\theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_t)\big)\Big] - D_{\mathrm{KL}}\big(q(\mathbf{z}_T\mid\mathbf{z}_0)\,\|\,p(\mathbf{z}_T)\big). \tag{14}$$
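For reference, the posterior parameters of (13) can be computed directly from the variance schedule; this helper reuses `betas`, `alphas`, and `alpha_bars` from the sketch above and assumes $t \ge 1$.

```python
def posterior_params(z0: torch.Tensor, zt: torch.Tensor, t: int):
    """Mean and variance of the posterior q(z_{t-1} | z_t, z_0) in Eq. (13)."""
    ab_t, ab_prev = alpha_bars[t], alpha_bars[t - 1]
    mean = (ab_prev.sqrt() * betas[t] / (1 - ab_t)) * z0 \
         + (alphas[t].sqrt() * (1 - ab_prev) / (1 - ab_t)) * zt
    var = (1 - ab_prev) / (1 - ab_t) * betas[t]        # \tilde{beta}_t
    return mean, var
```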
Proposition 2: Essentially, the proposed semantic de-noiser is a noise-prediction network $\boldsymbol{\epsilon}_{\theta}$ with parameters shared across all time steps, trained to estimate the added noise variable $\boldsymbol{\epsilon}$ for the input $\mathbf{z}_t$. Besides, considering that $\mathbf{z}_0$ is unknown, it is replaced by transforming (11), i.e., $\mathbf{z}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\big(\mathbf{z}_t - \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon}\big)$; we further obtain the loss function as

$$\mathcal{L}_t = \mathbb{E}_{\mathbf{z}_0,\boldsymbol{\epsilon},t}\Big[\big\lVert \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\theta}\big(\sqrt{\bar{\alpha}_t}\,\mathbf{z}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon},\, t\big)\big\rVert^2\Big]. \tag{15}$$
Hence, the semantic vector $\mathbf{z}_t$ is the input of the semantic de-noiser at time step $t$, and the corresponding noise variable $\boldsymbol{\epsilon}_{\theta}(\mathbf{z}_t, t)$ is estimated and output. After the training process converges by minimizing the loss function in (15), the trained semantic de-noiser can accurately eliminate noises.
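A possible training step for (15), reusing the schedule and imports from the sketches above; the U-Net call signature `unet(z_t, t)` is an assumption about the interface, not a detail given in the paper.

```python
def denoiser_training_step(unet, z0_batch: torch.Tensor) -> torch.Tensor:
    """One training step of Eq. (15): sample t and eps, diffuse z_0 to z_t,
    and regress the U-Net output onto the injected noise."""
    t = torch.randint(0, T, (z0_batch.shape[0],))
    eps = torch.randn_like(z0_batch)
    # Broadcast \bar{alpha}_t over all non-batch dimensions.
    ab = alpha_bars[t].view(-1, *([1] * (z0_batch.dim() - 1)))
    zt = ab.sqrt() * z0_batch + (1.0 - ab).sqrt() * eps
    return F.mse_loss(unet(zt, t), eps)     # || eps - eps_theta(z_t, t) ||^2
```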
III-B Online Inference Phase
The online inference process is illustrated in Algorithm 2. Adding noises in the training phase results in a higher overall noise level, which can negatively impact transmission accuracy. To mitigate this problem, power normalization is employed to ensure that the variance of the noise is proportional to the power of the signal, leading to a final noise power level of $1-\bar{\alpha}_T$ after $T$ steps of diffusion. Therefore, to eliminate semantic noise effectively during the inference phase, it is necessary to obtain the normalization factor at the receiver through SNR estimation, thus normalizing the power of the received signal as

$$\mathbf{z}_T = \sqrt{\bar{\alpha}_T}\,\mathbf{y}, \tag{16}$$

where $\mathbf{y}$ is the semantic vector transmitted through the channel in (3). Subsequently, the signal $\mathbf{z}_T$ is fed into the de-noiser for noise elimination. After $T$ iterations, the final output is the recovered semantic vector $\hat{\mathbf{z}}_0$, which is then input into the decoder to obtain the reconstructed image $\hat{\mathbf{s}}$.
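Putting the normalization of (16) and the reverse updates of (12) together, a sketch of the online inference loop might look as follows; matching the starting step `t_start` (with `t_start < T`) to the estimated SNR is our reading of the normalization step, not an explicitly stated procedure.

```python
@torch.no_grad()
def online_denoise(unet, y: torch.Tensor, t_start: int) -> torch.Tensor:
    """Sketch of Algorithm 2: map the received signal onto the diffusion
    trajectory via Eq. (16), then run t_start reverse steps of Eq. (12)."""
    z = alpha_bars[t_start].sqrt() * y               # power normalization
    for t in range(t_start, 0, -1):
        eps_hat = unet(z, torch.tensor([t]))         # predicted noise
        # DDPM mean update under the eps-parameterization.
        z = (z - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps_hat) \
            / alphas[t].sqrt()
        if t > 1:                                    # no noise on the last step
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return z                                         # recovered \hat{z}_0
```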
IV Simulation Results and Analysis
In this section, we evaluate the performance of the proposed Latent-Diff DNSC scheme on the AWGN channel with perfect SNR estimation. The proposed model is initialized with the pre-trained weights from [13]; the number of diffusion steps, the learning rate, and the training batch size follow the pre-trained configuration, and the noise schedule increases linearly with the step index $t$.
IV-A Open-Source Image Dataset
We select 660 images from the LAION2B-EN dataset [12] to compare the transmission performance of the proposed scheme with that of the baseline schemes. LAION2B-EN is an image-text dataset, and all images are resized to a common shape whose dimensions correspond to RGB channels, width, and height, respectively.
IV-B Baseline Schemes
We compare the performance of our proposed scheme with DeepJSCC [8], ADJSCC [11], and the traditional JPEG image compression algorithm. DeepJSCC is trained at two SNRs, 10 dB and 20 dB, while ADJSCC is trained with SNRs ranging from 0 to 28 dB. The Latent-Diff DNSC scheme without the de-noiser module is also included as a baseline to highlight the significance of the de-noiser.
IV-C Performance Comparison
PSNR and SSIM [14] are used as metrics, representing image quality and similarity, respectively. Fig. 3 visualizes a subset of images recovered under different SNRs using the proposed Latent-Diff DNSC scheme at a fixed compression ratio.
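For reproducibility, PSNR and SSIM as defined in [14] can be computed with scikit-image (version 0.19 or later for the `channel_axis` argument); this helper is an illustrative sketch rather than the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(original: np.ndarray, recovered: np.ndarray):
    """PSNR and SSIM for uint8 RGB images of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(original, recovered, data_range=255)
    ssim = structural_similarity(original, recovered,
                                 channel_axis=2, data_range=255)
    return psnr, ssim
```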

Fig. 4 presents the PSNR performance of the proposed Latent-Diff DNSC scheme and the four baseline schemes. The PSNR of the proposed Latent-Diff DNSC scheme is significantly better than that of all baseline schemes over different SNRs. Specifically, at high SNRs, DeepJSCC performs better when trained at an SNR of 20 dB than at 10 dB, while at low SNRs, DeepJSCC performs better when trained at 10 dB than at 20 dB. The reason is that DeepJSCC has difficulty capturing varying channel features online under a fixed offline training SNR. ADJSCC addresses this issue and outperforms DeepJSCC, whether the latter is trained at an SNR of 10 or 20 dB. Furthermore, our proposed Latent-Diff DNSC scheme, with its superior ability to capture channel characteristics and its de-noising capability, outperforms ADJSCC. Under low SNRs, Latent-Diff DNSC improves PSNR by approximately 3 dB compared to ADJSCC. As the SNR increases, this advantage becomes more pronounced, with a PSNR improvement of approximately 5.7 dB at an SNR of 30 dB. Additionally, the Latent-Diff DNSC scheme without the de-noiser performs poorly at low SNRs. This can be explained by the absence of added noises during the first training stage, which prevents the encoder and decoder from learning an accurate distribution of channel properties under various SNRs. However, it performs well at high SNRs, verifying the excellent image reconstruction ability of the Latent-Diff DNSC scheme under high SNRs.


Fig. 5 provides the SSIM performance of the proposed Latent-Diff DNSC scheme and the baseline schemes versus different SNRs. Similar to Fig. 4, the proposed Latent-Diff DNSC scheme outperforms all baseline schemes in different channel conditions. Furthermore, under low SNRs, the SSIM of the proposed Latent-Diff DNSC is approximately 0.1 higher than that of ADJSCC. As the SNR decreases, this advantage becomes even more pronounced.
IV-D De-noising Trade-off Between Semantic and Noise Components
Figs. 6 and 7 show the performance of the Latent-Diff DNSC system in terms of PSNR and SSIM under different numbers of de-noising steps. With 15 de-noising steps, the system performance is severely degraded, indicating that a large amount of noise in the semantic vectors remains. With 45 or 60 de-noising steps, the system performance is lower than with 30 steps. This is because each step of the iterative de-noising process subtracts the estimated noise component from the received semantic vector; excessive de-noising steps may remove semantic components that do not belong to the noise, resulting in a decrease in system performance.


V Conclusions
In this paper, we propose the DNSC framework to combat channel noises by introducing the semantic de-noiser. Based on the DNSC framework, we further design the Latent-Diff DNSC scheme, where the VAE serves as the encoder and decoder, and the semantic de-noiser is realized by a diffusion process in the latent space involving VAEs and adversarial learning. In this way, channel noises are modeled in a diffusion manner and gradually eliminated by variational inference. Such a design enables adaptive online de-noising under different SNRs estimated by an SNR estimator. Experimental results on the open-source dataset show that the proposed Latent-Diff DNSC approach achieves better transmission performance in PSNR and SSIM than the four baseline schemes under a high compression ratio. Therefore, the proposed method is a promising solution for image semantic communications under dynamic channel environments in future 6G systems.
References
- [1] W. Saad, M. Bennis, and M. Chen, "A vision of 6G wireless systems: Applications, trends, technologies, and open research problems," IEEE Network, vol. 34, no. 3, pp. 134–142, 2019.
- [2] W. Yang, H. Du, Z. Q. Liew, W. Y. B. Lim, Z. Xiong, D. Niyato, X. Chi, X. S. Shen, and C. Miao, “Semantic communications for future internet: Fundamentals, applications, and challenges,” IEEE Communications Surveys & Tutorials, 2022.
- [3] W. Yang, Z. Q. Liew, W. Y. B. Lim, Z. Xiong, D. Niyato, X. Chi, X. Cao, and K. B. Letaief, “Semantic communication meets edge intelligence,” IEEE Wireless Communications, vol. 29, no. 5, pp. 28–35, 2022.
- [4] C. Dong, H. Liang, X. Xu, S. Han, B. Wang, and P. Zhang, “Semantic communication system based on semantic slice models propagation,” IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 202–213, 2022.
- [5] T. J. O’Shea, K. Karra, and T. C. Clancy, “Learning to communicate: Channel auto-encoders, domain specific regularizers, and attention,” in 2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). IEEE, 2016, pp. 223–228.
- [6] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” IEEE Transactions on Signal Processing, vol. 69, pp. 2663–2675, 2021.
- [7] Z. Weng and Z. Qin, “Semantic communication systems for speech transmission,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 8, pp. 2434–2444, 2021.
- [8] E. Bourtsoulatze, D. B. Kurka, and D. Gündüz, “Deep joint source-channel coding for wireless image transmission,” IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 3, pp. 567–579, 2019.
- [9] H. Du, J. Wang, D. Niyato, J. Kang, Z. Xiong, and D. I. Kim, "AI-generated incentive mechanism and full-duplex semantic communications for information sharing," arXiv preprint arXiv:2303.01896, 2023.
- [10] Q. Hu, G. Zhang, Z. Qin, Y. Cai, G. Yu, and G. Y. Li, "Robust semantic communications with masked VQ-VAE enabled codebook," arXiv preprint arXiv:2206.04011, 2022.
- [11] J. Xu, B. Ai, W. Chen, A. Yang, P. Sun, and M. Rodrigues, “Wireless image transmission using deep source channel coding with attention modules,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 4, pp. 2315–2328, 2021.
- [12] C. Schuhmann, R. Beaumont, R. Vencu, C. Gordon, R. Wightman, M. Cherti, T. Coombes, A. Katta, C. Mullis, M. Wortsman et al., “Laion-5b: An open large-scale dataset for training next generation image-text models,” arXiv preprint arXiv:2210.08402, 2022.
- [13] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 10674–10685.
- [14] A. Horé and D. Ziou, “Image quality metrics: Psnr vs. ssim,” in 2010 20th International Conference on Pattern Recognition, 2010, pp. 2366–2369.