Difference Statement for Learning Radiance Fields from a Single Snapshot Compressive Image

Yunhao Li, Xiang Liu, Xiaodong Wang, Xin Yuan and Peidong Liu

1 Statements

In this work, we recover the underlying 3D scene from a single snapshot compressive image (SCI), leveraging the powerful 3D scene representation capabilities of neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS). A preliminary version of this work was presented and published at CVPR 2024 by the authors [1], in which we proposed SCINeRF to represent the 3D scene from a single SCI image. In this work, we extend the original work in the following significant ways:
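To make the input modality concrete, the sketch below illustrates the standard video-SCI forward model commonly used in this line of work (an assumption about the paper's setup, not its exact implementation): T sharp frames are modulated by per-frame binary masks and summed into a single 2D snapshot measurement.

```python
import numpy as np

def sci_measurement(frames, masks):
    """Compress T frames of shape (T, H, W) into one snapshot of shape (H, W).

    Standard SCI forward model: Y = sum_t M_t * X_t, where M_t are
    per-frame modulation masks. Function and variable names here are
    illustrative, not taken from the paper's code.
    """
    assert frames.shape == masks.shape
    return np.sum(frames * masks, axis=0)

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4
frames = rng.random((T, H, W))                          # underlying sharp frames
masks = rng.integers(0, 2, (T, H, W)).astype(float)     # binary modulation masks
snapshot = sci_measurement(frames, masks)
print(snapshot.shape)  # (4, 4): a single 2D measurement encodes all T frames
```

Recovering the 3D scene (and hence the T frames) from this single measurement is the inverse problem both SCINeRF and SCISplat address.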

  1) We further enhance the scene reconstruction quality and training/rendering speed of SCINeRF by introducing SCISplat, a 3DGS-based approach that reconstructs the underlying 3D scene from a single SCI image.

  2) We propose a novel initialization protocol that kicks off 3DGS training by robustly estimating point clouds and camera poses from a single SCI image, which will benefit downstream 3D tasks on SCI measurements.

  3) All experiments related to SCISplat are newly conducted.

  4) Extensive experiments on both synthetic and real datasets show that SCISplat outperforms SCINeRF by 2.3 dB in image quality while achieving 820× faster inference/rendering and 10× faster training.

References

  • [1] Y. Li, X. Wang, P. Wang, X. Yuan, and P. Liu, “SCINeRF: Neural Radiance Fields from a Snapshot Compressive Image,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 10542–10552.