On Self-Adaptive Perception Loss Function for Sequential Lossy Compression

Sadaf Salehkalaibar    Buu Phan    Likun Cai    Joao Atz Dick    Wei Yu    Jun Chen    Ashish Khisti
Abstract

We consider causal, low-latency, sequential lossy compression, with mean squared error (MSE) as the distortion loss and a perception loss function (PLF) to enhance the realism of reconstructions. As the main contribution, we propose and analyze a new PLF that considers the joint distribution between the current source frame and the previous reconstructions. We establish the theoretical rate-distortion-perception function for first-order Markov sources and analyze the Gaussian model in detail. From a qualitative perspective, the proposed metric can simultaneously avoid the error-permanence phenomenon and better exploit the temporal correlation between high-quality reconstructions. The proposed metric is referred to as the self-adaptive perception loss function (PLF-SA), as its behavior adapts to the quality of reconstructed frames. We provide a detailed comparison of the proposed perception loss function with previous approaches through both information-theoretic analysis and experiments involving the moving MNIST and UVG datasets.

Keywords: Machine Learning, ICML

For first-order Markov sources, let the information RDP region, denoted by $\mathcal{RDP}$, be the set of all tuples $(\mathsf{R},\mathsf{D},\mathsf{P})$ which satisfy the following
\begin{IEEEeqnarray}{rCl}
R_1 &\geq& I(X_1;X_{r,1}), \\
R_2 &\geq& I(X_2;X_{r,2}|X_{r,1}), \\
R_3 &\geq& I(X_3;X_{r,3}|X_{r,1},X_{r,2}), \\
D_j &\geq& \mathbb{E}\big[\|X_j-\hat{X}_j\|^2\big], \\
P_j &\geq& \phi_j\big(P_{\hat{X}_1\ldots\hat{X}_{j-1}X_j},\, P_{\hat{X}_1\ldots\hat{X}_{j-1}\hat{X}_j}\big), \qquad j=1,2,3,
\end{IEEEeqnarray}
for auxiliary random variables $(X_{r,1},X_{r,2},X_{r,3})$ and $(\hat{X}_1,\hat{X}_2,\hat{X}_3)$ satisfying
\begin{IEEEeqnarray}{rCl}
\hat{X}_1 = \eta_1(X_{r,1}), \qquad \hat{X}_2 = \eta_2(X_{r,1},X_{r,2}), \qquad \hat{X}_3 = X_{r,3},
\end{IEEEeqnarray}
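To make the rate, distortion, and perception terms above concrete, the following is a minimal numerical sketch for the single-frame scalar Gaussian case. It is an illustration, not the paper's construction: it uses the classical Gaussian rate-distortion formula $R(D)=\tfrac{1}{2}\log_2(\sigma_X^2/D)$ for the mutual-information rate, the MSE for the distortion term, and the squared Wasserstein-2 distance between scalar Gaussians as one common (assumed) choice for the divergence $\phi_j$.

```python
import math

def gaussian_rate(var_x, distortion):
    # Rate-distortion function of a scalar Gaussian source:
    # R(D) = 0.5 * log2(var_x / D) for D < var_x, else 0.
    return 0.5 * math.log2(var_x / distortion) if distortion < var_x else 0.0

def w2_sq_gaussian(mu1, var1, mu2, var2):
    # Squared Wasserstein-2 distance between two scalar Gaussians:
    # W2^2 = (mu1 - mu2)^2 + (sqrt(var1) - sqrt(var2))^2.
    return (mu1 - mu2) ** 2 + (math.sqrt(var1) - math.sqrt(var2)) ** 2

var_x = 1.0   # source variance
D = 0.25      # target MSE distortion

# Mutual-information rate needed to achieve distortion D.
R = gaussian_rate(var_x, D)

# The MMSE reconstruction has variance var_x - D; its W2^2 gap to the
# source distribution serves as an illustrative perception penalty.
P = w2_sq_gaussian(0.0, var_x, 0.0, var_x - D)

print(f"rate = {R:.3f} bits, perception gap = {P:.5f}")
```

The sketch shows the qualitative tension the RDP region captures: the MMSE-optimal reconstruction is under-dispersed relative to the source, so driving the perception term to zero requires matching the output distribution (and hence paying extra rate or distortion).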
