Analytic-Splatting: Anti-Aliased 3D Gaussian Splatting via Analytic Integration
Tencent AI Lab · City University of Hong Kong · School of Data Science, The Chinese University of Hong Kong, Shenzhen
Project page: https://lzhnb.github.io/project-pages/analytic-splatting/
Abstract
3D Gaussian Splatting (3DGS) has recently gained popularity by combining the advantages of both primitive-based and volumetric 3D representations, improving the quality and efficiency of 3D scene rendering. However, 3DGS is not alias-free: its renderings at varying resolutions can exhibit severe blurring or jaggies. This is because 3DGS treats each pixel as an isolated, single point rather than an area, making it insensitive to changes in pixel footprint; this discrete sampling scheme inevitably causes aliasing owing to the restricted sampling bandwidth. In this paper, we derive an analytical solution to this issue. Specifically, we use a conditioned logistic function as an analytic approximation of the cumulative distribution function (CDF) of a one-dimensional Gaussian signal and compute the Gaussian integral by subtracting CDFs. We then introduce this approximation into two-dimensional pixel shading and present Analytic-Splatting, which analytically approximates the Gaussian integral within the 2D pixel window area to better capture the intensity response of each pixel. Moreover, the approximated window integral response also participates in the transmittance calculation of volume rendering, making Analytic-Splatting sensitive to changes in pixel footprint across resolutions. Experiments on various datasets validate that our approach has better anti-aliasing capability, yielding more details and better fidelity.
Keywords: 3D Gaussian Splatting · Anti-Aliasing · View Synthesis · Cumulative Distribution Function (CDF) · Analytic Approximation
1 Introduction
Novel view synthesis of a scene captured from multiple images has achieved great progress due to the rapid advancements of neural rendering. As a prominent representative, Neural Radiance Field (NeRF) [23] models the scene using a neural volumetric representation, enabling photorealistic rendering of novel views via ray marching. Ray marching trades off rendering efficiency with quality, and subsequent works [33, 8, 24] are proposed to have a better quality-efficiency balance. More recently, 3D Gaussian Splatting (3DGS) [16] proposes a GPU-friendly differentiable rasterization pipeline that incorporates an explicit point-based representation, achieving high-quality and real-time renderings for novel view synthesis. In contrast to ray marching in NeRF, which renders a pixel by accumulating the radiance of samples along the ray that intersects the image plane at the pixel, 3DGS employs a forward-mapping technique that can be rasterized very efficiently. Specifically, 3DGS represents the scene as a set of anisotropic 3D Gaussians with scene properties; when rendering a pixel, 3DGS orders and projects these 3D Gaussians onto the image plane as 2D Gaussians, and then queries values and scene properties associated with the Gaussians that have overlaps with the pixel, and finally shades the pixel by cumulatively compositing these queried values and properties.
3DGS works well for scene representation learning and novel view synthesis at a constant resolution; however, its performance degrades greatly either when the multi-view images are captured at varying distances, or when the novel view to be rendered has a resolution different from those of the captured images. The main reason is that the footprint of the pixel (defined as the ratio between the pixel window area in screen space and the region of Gaussian signals it covers in world space) changes at different resolutions, and 3DGS is insensitive to such changes since it treats each pixel as an isolated point (i.e., merely the pixel center) when retrieving the corresponding Gaussian values; Fig. 1a gives an illustration. As a result, 3DGS can produce significant artifacts (e.g., blurring or jaggies), especially when pixel footprints change drastically (e.g., when synthesizing novel views with zoom-in and zoom-out effects).
By delving into the details, we know that 3DGS represents a continuous signal in image space as a set of α-blended 2D Gaussians, and pixel shading is a process of integrating the signal response within each pixel area; artifacts in 3DGS are caused by the limited sampling bandwidth for the Gaussians, which retrieves erroneous responses, especially when the pixel footprint changes drastically. It is possible to increase the sampling bandwidth (i.e., via super sampling) or use prefiltering techniques to alleviate this problem; for example, Mip-Splatting [36] employs a prefiltering technique and presents a hybrid filtering mechanism that regularizes the high-frequency components of 2D and 3D Gaussians to achieve anti-aliasing. While Mip-Splatting overcomes most aliasing in 3DGS, it is limited in capturing details and synthesizes over-smoothed results. Consequently, solving the integral of Gaussian signals within the pixel window area as the intensity response is crucial for both anti-aliasing and capturing details.
In this paper, we revisit pixel shading in 3DGS and introduce an analytic approximation of the window integral response of Gaussian signals for anti-aliasing. Rather than discrete sampling as in 3DGS or prefiltering as in Mip-Splatting, we analytically approximate the integral within each pixel area, as shown in Fig. 1b. We term our method Analytic-Splatting. Compared with Mip-Splatting, which approximates the pixel window as a 2D Gaussian low-pass filter, our method does not suppress the high-frequency components of Gaussian signals and better preserves high-quality details. Experiments show that our method removes the aliasing present in 3DGS and other methods while synthesizing more details with better fidelity. We summarize our contributions as follows.
- We revisit the causes of aliasing in 3D Gaussian Splatting from the perspective of signal window response and derive an analytic approximation of the window response for Gaussian signals;
- Based on this derivation, we present Analytic-Splatting, which improves the pixel shading in 3D Gaussian Splatting to achieve anti-aliasing and better detail fidelity;
- Our experiments on challenging datasets demonstrate the superiority of our method over other approaches in terms of anti-aliasing and synthesis quality.
2 Related Works
Neural Rendering. Recently, neural rendering techniques exemplified by Neural Radiance Field (NeRF) [23] have achieved impressive results in novel view synthesis and have further enhanced several advanced tasks [30, 34, 21, 14, 22, 25]. Nevertheless, the backward-mapping volume rendering used in NeRF hinders real-time rendering performance, restricting the application prospects of NeRF. While several NeRF variants adopt efficient sampling strategies [35, 24, 19] or use explicit/hybrid representations [8, 29, 5, 9] with higher capacities, they still suffer from the tough sampling problem and struggle with real-time rendering. To overcome these limitations, 3DGS [16] employs forward-mapping volume rendering and implements GPU-friendly tile-based rasterization to achieve real-time rendering and impressive rendering results. Due to these capabilities, 3DGS has been widely used in advanced tasks such as human/avatar modeling [27, 12, 38, 40], surface reconstruction [10, 6], inverse rendering [20, 15, 28], physical simulation [32, 7], etc. Although rasterization lets 3DGS avoid tough sampling problems along rays and achieve promising results, it also introduces aliasing caused by the restricted sampling bandwidth when shading pixels using 2D Gaussians. This aliasing becomes noticeable when the pixel footprint changes drastically (e.g., zooming in and out). In this paper, we study the errors introduced by the discrete sampling scheme used in 3DGS and introduce our solution.
Anti-Aliasing. Aliasing is the phenomenon of overlapping frequency components that occurs when the discrete sampling rate is below the Nyquist rate. Anti-aliasing is critical for rendering high-fidelity results and has been extensively explored in the computer graphics and vision community [1, 31, 18]. In the neural rendering context, MipNeRF [2] and Zip-NeRF [4] pioneer the use of prefiltering and multi-sampling to address the aliasing issue in neural radiance fields (NeRF). Recent works also explored anti-aliased NeRF for unbounded scenes [3], efficient reconstruction [13], and surface reconstruction [39]. All these works are built upon backward-mapping volume rendering to account for the pixel footprint, replacing the original ray-casting with cone-casting. However, backward-mapping volume rendering is too computationally expensive to achieve real-time rendering. On the other hand, 3DGS [16] introduced real-time forward-mapping volume rendering but suffers from aliasing artifacts due to discrete sampling when shading pixels using projected Gaussians. To this end, Mip-Splatting [36] presents a hybrid filtering mechanism that restricts the high-frequency components of 2D and 3D Gaussians to achieve anti-aliasing. Nevertheless, this low-pass filtering strategy hinders the capability to preserve high-quality details. In contrast, our approach introduces an analytic approximation of the integral within the pixel area to better capture the intensity response of each pixel, achieving both aliasing-free and detail-preserving rendering results.
3 Preliminary
In this section, we give the technical background necessary for presenting our proposed method.
3D Gaussian Splatting (3DGS) explicitly represents a 3D scene as a set of points $\{p_i\}$. Given any point $p_i$, 3DGS models it as a 3D Gaussian signal with mean vector $\boldsymbol{\mu}_i \in \mathbb{R}^3$ and covariance matrix $\boldsymbol{\Sigma}_i \in \mathbb{R}^{3\times 3}$:

$$G_i(\mathbf{x}) = \exp\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^\top \boldsymbol{\Sigma}_i^{-1} (\mathbf{x}-\boldsymbol{\mu}_i)\right), \tag{1}$$

where $\boldsymbol{\mu}_i$ is the position of point $p_i$, and $\boldsymbol{\Sigma}_i$ is an anisotropic covariance matrix, which is factorized into a scaling matrix $\mathbf{S}_i$ and a rotation matrix $\mathbf{R}_i$ as $\boldsymbol{\Sigma}_i = \mathbf{R}_i \mathbf{S}_i \mathbf{S}_i^\top \mathbf{R}_i^\top$. Note that $\mathbf{S}_i = \mathrm{diag}(\mathbf{s}_i)$ indicates a diagonal matrix and $\mathbf{R}_i$ refers to a rotation matrix constructed from a unit quaternion $\mathbf{q}_i$.
Given a viewing transformation with extrinsic matrix $\mathbf{W}$ and projection matrix $\mathbf{K}$, we get the projected position $\hat{\boldsymbol{\mu}}_i$ and covariance matrix $\hat{\boldsymbol{\Sigma}}_i$ in 2D screen space as:

$$\hat{\boldsymbol{\mu}}_i = \mathbf{K}\mathbf{W}\,\boldsymbol{\mu}_i, \qquad \hat{\boldsymbol{\Sigma}}_i = \mathbf{J}\mathbf{W}\,\boldsymbol{\Sigma}_i\,\mathbf{W}^\top\mathbf{J}^\top, \tag{2}$$

where $\mathbf{J}$ is the Jacobian matrix of the affine approximation of the perspective projection. Note that 3DGS only retains the 2D screen-space parts of $\hat{\boldsymbol{\mu}}_i$ and $\hat{\boldsymbol{\Sigma}}_i$, giving $\hat{\boldsymbol{\mu}}_i \in \mathbb{R}^2$ and $\hat{\boldsymbol{\Sigma}}_i \in \mathbb{R}^{2\times 2}$ respectively. The projected 2D Gaussian signal for the pixel $\mathbf{x}$ is given by:

$$\hat{G}_i(\mathbf{x}) = \exp\left(-\tfrac{1}{2}(\mathbf{x}-\hat{\boldsymbol{\mu}}_i)^\top \hat{\boldsymbol{\Sigma}}_i^{-1} (\mathbf{x}-\hat{\boldsymbol{\mu}}_i)\right). \tag{3}$$
Using the projected 2D Gaussian signal, 3DGS derives the volume transmittance and shades the color of pixel $\mathbf{x}$ through:

$$\mathbf{C}(\mathbf{x}) = \sum_{i=1}^{N} T_i\,\alpha_i\,\hat{G}_i(\mathbf{x})\,\mathbf{c}_i, \qquad T_i = \prod_{j=1}^{i-1}\left(1 - \alpha_j\,\hat{G}_j(\mathbf{x})\right), \tag{4}$$

where the symbols with subscript $i$ indicate the attributes related to the point $p_i$. Specifically, $\alpha_i$ and $\mathbf{c}_i$ respectively denote the opacity and view-dependent color of point $p_i$.
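The front-to-back compositing in Eq. 4 can be sketched in a few lines. This is a minimal NumPy sketch; the splat values below are hypothetical stand-ins for depth-sorted Gaussians overlapping one pixel:

```python
import numpy as np

def shade_pixel(gauss_vals, opacities, colors):
    """Front-to-back alpha compositing of depth-sorted 2D Gaussians.

    gauss_vals: (N,) Gaussian responses at the pixel.
    opacities:  (N,) per-Gaussian opacities.
    colors:     (N, 3) view-dependent colors.
    """
    color = np.zeros(3)
    transmittance = 1.0
    for g, a, c in zip(gauss_vals, opacities, colors):
        alpha = a * g                       # effective alpha of this splat
        color += transmittance * alpha * c  # accumulate weighted color
        transmittance *= 1.0 - alpha        # attenuate for splats behind
    return color

# Toy example: two splats fully covering the pixel.
c = shade_pixel(np.array([1.0, 1.0]),
                np.array([0.5, 1.0]),
                np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]))
# Front splat contributes 0.5 red; the remaining transmittance 0.5 goes to blue.
```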
For better understanding, we further formulate the 2D Gaussian signal in Eq. 3 as a flattened expression. Considering that $\hat{\boldsymbol{\Sigma}}_i$ is a real symmetric matrix, we numerically express $\hat{\boldsymbol{\Sigma}}_i$ and $\hat{\boldsymbol{\Sigma}}_i^{-1}$ as:

$$\hat{\boldsymbol{\Sigma}}_i = \begin{bmatrix} a & b \\ b & c \end{bmatrix}, \qquad \hat{\boldsymbol{\Sigma}}_i^{-1} = \frac{1}{ac-b^2}\begin{bmatrix} c & -b \\ -b & a \end{bmatrix}. \tag{5}$$

Given the pixel $\mathbf{x} = [u, v]^\top$ and the projected position $\hat{\boldsymbol{\mu}}_i = [\hat{u}_i, \hat{v}_i]^\top$ in 2D screen space, Eq. 3 can be rewritten as:

$$\hat{G}_i(\mathbf{x}) = \exp\left(-\frac{c\,(u-\hat{u}_i)^2 - 2b\,(u-\hat{u}_i)(v-\hat{v}_i) + a\,(v-\hat{v}_i)^2}{2\,(ac-b^2)}\right). \tag{6}$$
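As a quick sanity check of the flattened form, the sketch below verifies that the scalar expression of Eq. 6 agrees with the matrix form of Eq. 3; the covariance entries and helper names are hypothetical, chosen only for illustration:

```python
import numpy as np

def gauss_matrix(p, mu, cov):
    """Matrix form: exp(-0.5 * d^T cov^{-1} d)."""
    d = np.asarray(p, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

def gauss_flat(p, mu, a, b, c):
    """Flattened form with cov = [[a, b], [b, c]]: the inverse of a 2x2
    symmetric matrix swaps the diagonal and negates the off-diagonal,
    divided by the determinant a*c - b*b."""
    du, dv = p[0] - mu[0], p[1] - mu[1]
    det = a * c - b * b
    return float(np.exp(-0.5 * (c * du * du - 2.0 * b * du * dv + a * dv * dv) / det))

a, b, c = 0.9, 0.2, 0.6                     # a hypothetical 2D covariance
g1 = gauss_matrix([1.3, 0.4], [1.0, 0.0], np.array([[a, b], [b, c]]))
g2 = gauss_flat([1.3, 0.4], [1.0, 0.0], a, b, c)
# g1 and g2 agree to floating-point precision
```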
Remark. It is worth noting that 3DGS treats each pixel as an isolated, single point when calculating its corresponding Gaussian value, as shown in Eq. (6). This approximation functions effectively when the training and testing images capture the scene content from relatively consistent distances. However, when the pixel footprint changes due to focal length or camera distance adjustments, 3DGS renderings exhibit considerable artifacts, such as the thin Gaussians observed when zooming in. As a result, it becomes crucial to define the pixel as a window area and calculate the corresponding intensity response by integrating the Gaussian signal within this domain. Rather than resorting to the intuitive but time-consuming super sampling, it is better to tackle the problem analytically, given that the Gaussian signal is a continuous function.
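The footprint problem described above is easy to reproduce in one dimension: for a Gaussian narrower than a pixel, the center sample badly overestimates the response averaged over the pixel window. A small sketch (the σ value is an arbitrary illustration, and the exact integral uses `math.erf`):

```python
import math

def gauss(x, sigma):
    """Unit-height 1D Gaussian signal."""
    return math.exp(-0.5 * (x / sigma) ** 2)

def window_integral(x, sigma):
    """Exact integral of the unit-height Gaussian over [x - 1/2, x + 1/2],
    via the error function: integral = sigma*sqrt(pi/2)*(erf(.) - erf(.))."""
    s = sigma * math.sqrt(2.0)
    coef = sigma * math.sqrt(math.pi / 2.0)
    return coef * (math.erf((x + 0.5) / s) - math.erf((x - 0.5) / s))

sigma = 0.2                       # a "thin" Gaussian, narrower than one pixel
point = gauss(0.0, sigma)         # center sample: always 1.0
area = window_integral(0.0, sigma)  # true mean response over the 1-wide pixel
# The point sample (1.0) greatly overshoots the averaged response (~0.5),
# so the splat looks too opaque when zooming in.
```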
4 Methods
In Sec. 3, we observed that 3DGS ignores the window area of each pixel and only considers the Gaussian value at the pixel center as its intensity response. This approach inevitably produces artifacts due to fluctuations of the pixel footprint under different resolutions. To address this problem, we derive an analytical approximation of the integral of a 2D Gaussian signal within the pixel window area to accurately describe the intensity response of the imaging pixel, and apply this derived integral to replace the point-sampled Gaussian value in the 3DGS framework.
4.1 Revisiting the One-Dimensional Gaussian Signal Response
Before describing our proposed method, we first revisit the integrated response of a one-dimensional Gaussian signal within a window area. Given a Gaussian signal and a window area, we aim to obtain the response by integrating the signal within this domain, as shown in Fig. 2(a). For an unknown signal, Monte Carlo sampling within the window area is a feasible way to approximate the integral, as demonstrated in Fig. 2(b) and Fig. 2(c), and the approximation becomes more accurate as the number of samples increases. Nonetheless, increasing the number of samples (i.e., super sampling) significantly increases the computational burden.
Fortunately, our goal in the 3DGS framework is to obtain the intensity response of a Gaussian signal within the window area. Given that the Gaussian signal is a continuous real-valued function, it is natural to derive an analytical approximation to the Gaussian definite integral (Fig. 2(a)), which is more accurate than numerical integration (Fig. 2(b) and Fig. 2(c)). For instance, in Mip-Splatting, the window area is treated as a Gaussian kernel, and the integral is approximated by sampling after convolving the Gaussian signal with this kernel; note that the result of convolving a Gaussian signal with a Gaussian kernel is still a Gaussian signal, as shown in Fig. 2(d). While this prefiltering exploits the convolution properties of Gaussian signals, the approximation introduces a large gap when the Gaussian signal mainly consists of high-frequency components (i.e., has a small standard deviation).
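The gap introduced by such prefiltering can be quantified in one dimension. The sketch below compares the sample-after-convolution estimate against the exact window integral; the kernel variance s² = 1/12 (the variance of a 1-wide box) is our assumption for illustration, not necessarily the value used in Mip-Splatting:

```python
import math

def exact_avg(sigma):
    """Exact mean of the unit-height Gaussian exp(-x^2/(2 sigma^2))
    over the 1-wide window centered at x = 0."""
    return sigma * math.sqrt(math.pi / 2.0) * 2.0 * math.erf(0.5 / (sigma * math.sqrt(2.0)))

def prefilter_est(sigma, s2=1.0 / 12.0):
    """Center-sample estimate after convolving the unit-height Gaussian
    with a normalized Gaussian kernel N(0, s2): the convolved amplitude
    at x = 0 is sqrt(sigma^2 / (sigma^2 + s2))."""
    return math.sqrt(sigma ** 2 / (sigma ** 2 + s2))

# The gap is tiny for a wide Gaussian (sigma = 1.0) but grows markedly
# for a thin, high-frequency Gaussian (sigma = 0.2).
gap_wide = abs(prefilter_est(1.0) - exact_avg(1.0))
gap_thin = abs(prefilter_est(0.2) - exact_avg(0.2))
```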
To overcome these shortcomings, we calculate the integral within the window area analytically. Specifically, evaluating the definite integral over a window $[x_1, x_2]$ can be reduced to the subtraction of two improper integrals by the first part of the fundamental theorem of calculus. Let $\Phi(x)$ be the cumulative distribution function (CDF) of the standard Gaussian distribution, defined by

$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right)\mathrm{d}t, \tag{7}$$

and the definite integral of the standard Gaussian density over $[x_1, x_2]$ can be expressed as

$$\int_{x_1}^{x_2} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right)\mathrm{d}t = \Phi(x_2) - \Phi(x_1). \tag{8}$$
However, the CDF of the Gaussian function (expressed through the error function erf) has no closed form. We therefore start by approximating the CDF, leading to the following definition:

Definition 1. The conditioned logistic function $S(x)$ is an analytic approximation of the CDF $\Phi(x)$ of the Gaussian with unit standard deviation, defined as

$$S(x) = \frac{1}{1 + \exp\left(-1.6\,x - 0.07\,x^3\right)}, \tag{9}$$

which is derivative-friendly.
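To gauge the quality of such a conditioned logistic fit, the sketch below compares it against the exact CDF computed via `math.erf`. The coefficients 1.6 and 0.07 follow the well-known Bowling-style fit and are an assumption here; the paper's exact coefficients may differ:

```python
import math

def S(x):
    """Conditioned logistic approximation of the standard normal CDF
    (Bowling-style coefficients; assumed for illustration)."""
    return 1.0 / (1.0 + math.exp(-1.6 * x - 0.07 * x ** 3))

def Phi(x):
    """Exact standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Maximum absolute error over [-4, 4] is on the order of 1e-4.
max_err = max(abs(S(x) - Phi(x)) for x in (i / 100.0 for i in range(-400, 401)))
```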
This analytic approximation shares key properties with the CDF $\Phi(x)$:

1. $S(x)$ is non-decreasing and right-continuous, satisfying $\lim_{x\to-\infty} S(x) = 0$ and $\lim_{x\to+\infty} S(x) = 1$;
2. The curve of $S(x)$ has 2-fold rotational symmetry around the point $(0, \tfrac{1}{2})$, i.e., $S(x) + S(-x) = 1$.
For Gaussian signals with different standard deviations, we can approximate their CDFs by scaling $x$ in Eq. 9 by the reciprocals of their standard deviations. Once $x$ in $S(x)$ is scaled by the reciprocal of $\sigma$, we express it as $S_\sigma(x) = S(x/\sigma)$. For more details, please refer to the supplementary material.
In summary, given the sample $x$ and setting the window area as $[x - \tfrac{1}{2}, x + \tfrac{1}{2}]$, the integral of the Gaussian signal $g_\sigma$ within this area is approximated as:

$$\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} g_\sigma(t)\,\mathrm{d}t \approx \sqrt{2\pi}\,\sigma\left[S_\sigma\!\left(x+\tfrac{1}{2}\right) - S_\sigma\!\left(x-\tfrac{1}{2}\right)\right]. \tag{10}$$
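A quick numerical check of the CDF-subtraction rule in Eq. 10. This sketch assumes a unit-height Gaussian signal and Bowling-style logistic coefficients; the leading √(2π)σ factor converts the normalized CDF difference into the integral of the unit-height signal:

```python
import math

def S(x, sigma=1.0):
    """Scaled conditioned logistic (assumed Bowling-style coefficients)."""
    t = x / sigma
    return 1.0 / (1.0 + math.exp(-1.6 * t - 0.07 * t ** 3))

def window_response(x, sigma):
    """Eq. 10: approximate integral of the unit-height Gaussian over
    [x - 1/2, x + 1/2] via subtraction of approximated CDFs."""
    return sigma * math.sqrt(2.0 * math.pi) * (S(x + 0.5, sigma) - S(x - 0.5, sigma))

def exact(x, sigma):
    """Reference integral via the error function."""
    s = sigma * math.sqrt(2.0)
    return sigma * math.sqrt(math.pi / 2.0) * (math.erf((x + 0.5) / s) - math.erf((x - 0.5) / s))

# Worst-case gap for sigma = 0.5 over the sampled range stays well below 1e-2.
err = max(abs(window_response(x, 0.5) - exact(x, 0.5)) for x in (i / 10.0 for i in range(-15, 16)))
```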
In the following section, we will discuss how to employ the above approximation to 2D-pixel shading in Analytic-Splatting.
4.2 Analytic-Splatting
After revisiting one-dimensional Gaussian signal integration, we now approximate the integral of each projected 2D Gaussian within the pixel window area in Analytic-Splatting. Mathematically, we replace the point-sampled $\hat{G}_i(\mathbf{x})$ in Eq. 4 with the approximated integral $\mathcal{I}_i(\mathbf{x})$. For the pixel $\mathbf{x} = [u, v]^\top$ in 2D screen space, which corresponds to the window area $\Omega = [u-\tfrac{1}{2}, u+\tfrac{1}{2}] \times [v-\tfrac{1}{2}, v+\tfrac{1}{2}]$ as shown in Fig. 3(a), the integral of the Gaussian signal in Eq. 3 is:

$$\mathcal{I}_i(\mathbf{x}) = \int_{u-\frac{1}{2}}^{u+\frac{1}{2}} \int_{v-\frac{1}{2}}^{v+\frac{1}{2}} \hat{G}_i(\mathbf{x}')\,\mathrm{d}v'\,\mathrm{d}u', \qquad \mathbf{x}' = [u', v']^\top. \tag{12}$$
Notably, handling the correlation term in this integral is intractable. To remove the correlation term and make the integral tractable, we diagonalize the covariance matrix of the 2D Gaussian and slightly rotate the integration domain, as shown in Fig. 3(b), thereby approximating the integral as the product of two independent 1D Gaussian integrals.
In detail, we first perform eigenvalue decomposition on the covariance matrix $\hat{\boldsymbol{\Sigma}}_i$ (refer to Eq. 5) to obtain eigenvalues $\lambda_1, \lambda_2$ and the corresponding eigenvectors $\mathbf{e}_1, \mathbf{e}_2$. After diagonalization, for a clearer description, we take $\hat{\boldsymbol{\mu}}_i$ (the mean vector of the 2D Gaussian) as the origin and the eigenvectors as the axes to construct a new coordinate system.
In this coordinate system, given a pixel $\mathbf{x}$, we rewrite $\hat{G}_i(\mathbf{x})$ in Eq. 6 as the product of two independent 1D Gaussians:

$$\hat{G}_i(\mathbf{x}) = \exp\left(-\frac{\tilde{u}^2}{2\lambda_1}\right)\cdot\exp\left(-\frac{\tilde{v}^2}{2\lambda_2}\right), \tag{13}$$
where $\tilde{\mathbf{x}} = [\tilde{u}, \tilde{v}]^\top$ denotes the diagonalized coordinate of the pixel center. After diagonalization, we further rotate the pixel integration domain about the pixel center to align it with the eigenvectors, obtaining $\tilde{\Omega}$ for approximating the integral. Thus the integral in Eq. 12 can be approximated as:

$$\mathcal{I}_i(\mathbf{x}) \approx 2\pi\,\sigma_1\sigma_2\left[S_{\sigma_1}\!\left(\tilde{u}+\tfrac{1}{2}\right)-S_{\sigma_1}\!\left(\tilde{u}-\tfrac{1}{2}\right)\right]\left[S_{\sigma_2}\!\left(\tilde{v}+\tfrac{1}{2}\right)-S_{\sigma_2}\!\left(\tilde{v}-\tfrac{1}{2}\right)\right], \tag{14}$$
where the subscripts of $S_{\sigma_1}$ and $S_{\sigma_2}$ correspond to Gaussian signals with standard deviations $\sigma_1 = \sqrt{\lambda_1}$ and $\sigma_2 = \sqrt{\lambda_2}$, i.e., the standard deviations of the independent Gaussian signals along the two eigenvectors. In summary, the volume shading in Analytic-Splatting is given by:

$$\mathbf{C}(\mathbf{x}) = \sum_{i=1}^{N} T_i\,\alpha_i\,\mathcal{I}_i(\mathbf{x})\,\mathbf{c}_i, \qquad T_i = \prod_{j=1}^{i-1}\left(1 - \alpha_j\,\mathcal{I}_j(\mathbf{x})\right). \tag{15}$$
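Putting the pieces together, the sketch below implements the diagonalize-rotate-factorize approximation with NumPy and checks it against dense numerical quadrature over the axis-aligned pixel window. The logistic coefficients and helper names are assumptions for illustration, not the paper's CUDA implementation:

```python
import numpy as np

def logistic_cdf(t):
    """Conditioned logistic approximation of the standard normal CDF
    (assumed Bowling-style coefficients)."""
    return 1.0 / (1.0 + np.exp(-1.6 * t - 0.07 * t ** 3))

def analytic_response(pix, mu, cov):
    """Approximate the integral of the unit-height 2D Gaussian (mu, cov)
    over the 1x1 pixel window centered at `pix`: diagonalize cov, express
    the pixel center in the eigenbasis, and multiply two 1D
    CDF-subtraction integrals."""
    lam, vecs = np.linalg.eigh(cov)                   # eigenvalues, eigenvectors
    d = vecs.T @ (np.asarray(pix) - np.asarray(mu))   # rotated, centered coords
    resp = 1.0
    for di, li in zip(d, lam):
        sigma = np.sqrt(li)
        cdf = lambda z: logistic_cdf(z / sigma)
        # unit-height 1D Gaussian integral over a 1-wide window
        resp *= sigma * np.sqrt(2.0 * np.pi) * (cdf(di + 0.5) - cdf(di - 0.5))
    return float(resp)

def brute_force(pix, mu, cov, n=200):
    """Dense quadrature over the axis-aligned pixel window, for reference."""
    inv = np.linalg.inv(cov)
    us = np.linspace(pix[0] - 0.5, pix[0] + 0.5, n)
    vs = np.linspace(pix[1] - 0.5, pix[1] + 0.5, n)
    uu, vv = np.meshgrid(us, vs)
    d = np.stack([uu - mu[0], vv - mu[1]], axis=-1)
    expo = np.einsum('...i,ij,...j->...', d, inv, d)
    return float(np.exp(-0.5 * expo).mean())  # mean over a unit-area window

cov = np.array([[0.8, 0.3], [0.3, 0.5]])   # a correlated (rotated) 2D Gaussian
a = analytic_response([0.7, -0.2], [0.0, 0.0], cov)
b = brute_force([0.7, -0.2], [0.0, 0.0], cov)
# a and b are close; the residual gap comes from rotating the window domain.
```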
5 Experiments
5.1 Approximation Error Analysis
In this section, we first comprehensively explore the approximation errors in our scheme and then conduct an elaborate comparison against other schemes. It is worth noting that during training, the pruning and densification schemes proposed in 3DGS tend to keep the standard deviations of Gaussian signals within an appropriate range, and each Gaussian signal only responds, for shading, to pixels within its confidence interval. Therefore, we only consider Gaussian signals with standard deviations in this range, and only discuss the approximation error for pixels within the confidence interval.
Referring to Eq. 7 and Eq. 9, we obtain the approximation error function for the CDF of the Gaussian function:

$$E_{\mathrm{CDF}}(x) = \left|\Phi_\sigma(x) - S_\sigma(x)\right|, \tag{16}$$

and referring to Eq. 10 and Eq. 11, the approximation error function for the 1-width window integral response is:

$$E_{\mathcal{I}}(x) = \left|\mathcal{I}_\sigma(x) - \tilde{\mathcal{I}}_\sigma(x)\right|, \tag{17}$$

where $\mathcal{I}_\sigma$ denotes the exact window integral and $\tilde{\mathcal{I}}_\sigma$ its approximation. The approximation errors $E_{\mathrm{CDF}}$ and $E_{\mathcal{I}}$ for different standard deviations are shown in Fig. 4(a) and Fig. 4(b) respectively. (Since Eq. 16 and Eq. 17 are even functions, we only show results for the positive semi-axis in Fig. 4; note that the errors shown in Fig. 4 are scaled by a small factor.)
For approximating the integral response over the 1-width window area, our scheme significantly outperforms the other schemes. Fig. 5(a) and Fig. 5(b) respectively show the one-dimensional approximation error for different standard deviations and variable distributions. Our scheme is robust under both conditions; notably, as the standard deviations of the Gaussian signals become smaller, our advantage becomes more pronounced, which means our scheme is better at capturing the high-frequency signals (i.e., details) of the scene. Our subsequent experimental results also verify this.
| PSNR | SSIM | LPIPS |
| Full Res. | 1/2 Res. | 1/4 Res. | 1/8 Res. | Avg. | Full Res. | 1/2 Res. | 1/4 Res. | 1/8 Res. | Avg. | Full Res. | 1/2 Res. | 1/4 Res. | 1/8 Res. | Avg. |
NeRF w/o | 31.20 | 30.65 | 26.25 | 22.53 | 27.66 | 0.950 | 0.956 | 0.930 | 0.871 | 0.927 | 0.055 | 0.034 | 0.043 | 0.075 | 0.052 |
NeRF [23] | 29.90 | 32.13 | 33.40 | 29.47 | 31.23 | 0.938 | 0.959 | 0.973 | 0.962 | 0.958 | 0.074 | 0.040 | 0.024 | 0.039 | 0.044 |
MipNeRF [2] | 32.63 | 34.34 | 35.47 | 35.60 | 34.51 | 0.958 | 0.970 | 0.979 | 0.983 | 0.973 | 0.047 | 0.026 | 0.017 | 0.012 | 0.026 |
Plenoxels [8] | 31.60 | 32.85 | 30.26 | 26.63 | 30.34 | 0.956 | 0.967 | 0.961 | 0.936 | 0.955 | 0.052 | 0.032 | 0.045 | 0.077 | 0.051 |
TensoRF [5] | 32.11 | 33.03 | 30.45 | 26.80 | 30.60 | 0.956 | 0.966 | 0.962 | 0.939 | 0.956 | 0.056 | 0.038 | 0.047 | 0.076 | 0.054 |
Instant-NGP [24] | 30.00 | 32.15 | 33.31 | 29.35 | 31.20 | 0.939 | 0.961 | 0.974 | 0.963 | 0.959 | 0.079 | 0.043 | 0.026 | 0.040 | 0.047 |
Tri-MipRF [13] | 32.65 | 34.24 | 35.02 | 35.53 | 34.36 | 0.958 | 0.971 | 0.980 | 0.987 | 0.974 | 0.047 | 0.027 | 0.018 | 0.012 | 0.026 |
3DGS [16] | 28.79 | 30.66 | 31.64 | 27.98 | 29.77 | 0.943 | 0.962 | 0.972 | 0.960 | 0.960 | 0.065 | 0.038 | 0.025 | 0.031 | 0.040 |
3DGS-SS [16] | 32.05 | 33.78 | 33.92 | 31.12 | 32.71 | 0.964 | 0.975 | 0.980 | 0.977 | 0.974 | 0.039 | 0.021 | 0.016 | 0.020 | 0.024 |
Mip-Splatting [36] | 32.81 | 34.49 | 35.45 | 35.50 | 34.56 | 0.967 | 0.977 | 0.983 | 0.988 | 0.979 | 0.035 | 0.019 | 0.013 | 0.010 | 0.019 |
Ours | 33.22 | 34.92 | 35.98 | 36.00 | 35.03 | 0.967 | 0.977 | 0.984 | 0.989 | 0.979 | 0.033 | 0.019 | 0.012 | 0.010 | 0.018 |
Last but not least, we employ this scheme to approximate the window integral responses of two-dimensional Gaussian signals, which requires rotating the integration domain from $\Omega$ to $\tilde{\Omega}$ as shown in Fig. 3 and inevitably introduces additional errors. To study this error, we record the approximation errors caused by rotating the integration domain under different standard deviations of Gaussian signals and distributions, as shown in Fig. 5(c). (Since the eigenvalues can always be ordered in practice, only rotation angles from $0$ to $\pi/4$ need to be considered.) Although the approximation error of our scheme slightly increases as the rotation angle grows, our scheme still surpasses the other schemes.
5.2 Comparison
To verify the anti-aliasing capability and versatility of Analytic-Splatting, we conduct experiments against state-of-the-art methods under the multi-scale training & multi-scale testing (MTMT) setting on Blender Synthetic [23, 2] and Mip-NeRF 360 [3] datasets. We further evaluate the performance of 3DGS and its variants under the super-resolution setting.
Dataset & Metric. We conduct experiments using the benchmark multi-scale Blender Synthetic [23, 2] and multi-scale Mip-NeRF 360 [3] datasets. They contain 8 objects and 9 scenes respectively; each object and scene is compiled by downscaling the original images by factors of 2, 4, and 8 and combining them with the originals. For the Blender Synthetic dataset, each object contains 100 images for training and 200 images for testing. For the Mip-NeRF 360 dataset, we select 1 image out of every 8 for testing and use the remaining images for training. To verify the efficacy of our method, we evaluate the synthesized novel views at multiple scales on both datasets in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) [37].
Implementation. We implement Analytic-Splatting upon 3DGS [16] and implement our shading module as custom CUDA extensions. Following 3DGS, we train Analytic-Splatting using the same parameters, training schedule, and loss functions to ensure a fair validation of our scheme. To achieve super sampling of Gaussian signals, we implement 3DGS-SS, which first renders an image at twice the target resolution and then obtains the final image at the target resolution through average pooling. For the MTMT setting, we follow previous works [2, 13, 36] and sample more full-resolution images as supervision during training. Please refer to the supplementary material for more details on the backpropagation implementation of rendering.
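The 3DGS-SS downsampling step described above amounts to 2×2 average pooling, sketched minimally below (pooling only; the rendering itself is not shown):

```python
import numpy as np

def downsample_2x(img):
    """Average-pool a (2H, 2W, C) image to (H, W, C): render at twice the
    target resolution, then average each 2x2 block into one output pixel."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2, *img.shape[2:]).mean(axis=(1, 3))

hi_res = np.arange(16.0).reshape(4, 4, 1)   # a toy 4x4 "rendering"
lo_res = downsample_2x(hi_res)              # -> 2x2; top-left block mean is 2.5
```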
| PSNR | SSIM | LPIPS |
| Full Res. | 1/2 Res. | 1/4 Res. | 1/8 Res. | Avg. | Full Res. | 1/2 Res. | 1/4 Res. | 1/8 Res. | Avg. | Full Res. | 1/2 Res. | 1/4 Res. | 1/8 Res. | Avg. |
Mip-NeRF 360 [3] | 27.50 | 29.19 | 30.45 | 30.86 | 29.50 | 0.778 | 0.864 | 0.912 | 0.931 | 0.871 | 0.254 | 0.136 | 0.077 | 0.058 | 0.131 |
Mip-NeRF 360 + iNGP | 26.46 | 27.92 | 27.67 | 25.58 | 26.91 | 0.773 | 0.855 | 0.866 | 0.804 | 0.824 | 0.253 | 0.142 | 0.117 | 0.159 | 0.167 |
Zip-NeRF [4] | 28.25 | 30.01 | 31.56 | 32.52 | 30.58 | 0.822 | 0.891 | 0.933 | 0.955 | 0.900 | 0.198 | 0.099 | 0.056 | 0.038 | 0.098 |
3DGS [16] | 26.55 | 28.00 | 28.51 | 27.45 | 27.63 | 0.779 | 0.854 | 0.891 | 0.888 | 0.853 | 0.274 | 0.162 | 0.102 | 0.087 | 0.156 |
3DGS-SS [16] | 27.20 | 28.75 | 29.89 | 29.71 | 28.89 | 0.800 | 0.871 | 0.914 | 0.928 | 0.878 | 0.246 | 0.138 | 0.081 | 0.061 | 0.131 |
Mip-Splatting [36] | 27.20 | 28.74 | 29.90 | 30.66 | 29.12 | 0.802 | 0.870 | 0.915 | 0.944 | 0.883 | 0.244 | 0.146 | 0.090 | 0.056 | 0.134 |
Ours | 27.50 | 28.99 | 30.35 | 31.21 | 29.51 | 0.808 | 0.874 | 0.919 | 0.945 | 0.887 | 0.231 | 0.132 | 0.077 | 0.051 | 0.123 |
Evaluation on Blender Synthetic Dataset. We compare our Analytic-Splatting with several state-of-the-art methods, i.e., NeRF [23], MipNeRF [2], Plenoxels [8], TensoRF [5], Instant-NGP [24], Tri-MipRF [13], 3DGS [16] and its variants (i.e., 3DGS-SS and Mip-Splatting [36]), on the Blender Synthetic dataset. The quantitative results in Tab. 1 show that Analytic-Splatting outperforms the other methods in all aspects. The qualitative results in Fig. 6 demonstrate that Analytic-Splatting better captures high-frequency details while remaining anti-aliased. More results can be found in the supplementary material.
Evaluation on Mip-NeRF 360 Dataset. We compare our Analytic-Splatting with several cutting-edge methods, i.e., Mip-NeRF 360 and its iNGP-encoding version [3], Zip-NeRF [4], 3DGS [16] and its variants (i.e., 3DGS-SS and Mip-Splatting [36]), on the challenging Mip-NeRF 360 dataset. The results of Zip-NeRF are reported from the available official implementation (https://github.com/jonbarron/camp_zipnerf). Please note that Mip-NeRF 360 and Zip-NeRF struggle with real-time rendering; in particular, Zip-NeRF performs supersampling in the rendering phase. The quantitative results in Tab. 2 show that our method is second only to Zip-NeRF, and we outperform the other methods with real-time rendering capability (i.e., 3DGS and its variants). The qualitative comparisons in Fig. 7 demonstrate that our method has better anti-aliasing capability and detail fidelity even in complex scenes. More results can be found in the supplementary material.
Super-Resolution Evaluation on Mip-NeRF 360 Dataset. We further evaluate our method against 3DGS and its variants under the super-resolution setting on the Mip-NeRF 360 dataset. All methods are trained on the multi-scale Mip-NeRF 360 dataset. The quantitative results are shown in Tab. 3 and the qualitative results in Fig. 8. Both demonstrate the superior performance of our method, further supporting the capability of Analytic-Splatting in capturing details. Conversely, Mip-Splatting is insufficient in capturing details due to its pre-filtering.
PSNR | ||||||||||
bicycle | flowers | garden | stump | treehill | room | counter | kitchen | bonsai | Avg. | |
3DGS | 23.14 | 20.28 | 24.62 | 25.44 | 21.90 | 30.27 | 28.08 | 29.51 | 30.34 | 25.95 |
3DGS-SS | 23.98 | 20.84 | 25.48 | 26.24 | 22.12 | 30.90 | 28.69 | 30.53 | 31.34 | 26.68 |
Mip-Splatting | 23.82 | 20.60 | 24.97 | 25.78 | 21.82 | 30.95 | 28.68 | 30.45 | 31.07 | 26.46 |
Ours | 24.22 | 20.97 | 25.72 | 26.29 | 22.04 | 31.04 | 28.90 | 31.10 | 31.83 | 26.90 |
SSIM | ||||||||||
bicycle | flowers | garden | stump | treehill | room | counter | kitchen | bonsai | Avg. | |
3DGS | 0.639 | 0.492 | 0.707 | 0.706 | 0.578 | 0.902 | 0.891 | 0.893 | 0.916 | 0.747 |
3DGS-SS | 0.675 | 0.527 | 0.747 | 0.739 | 0.596 | 0.908 | 0.901 | 0.907 | 0.926 | 0.769 |
Mip-Splatting | 0.671 | 0.526 | 0.737 | 0.728 | 0.589 | 0.906 | 0.898 | 0.900 | 0.924 | 0.764 |
Ours | 0.683 | 0.535 | 0.761 | 0.739 | 0.596 | 0.910 | 0.904 | 0.911 | 0.930 | 0.774 |
LPIPS | ||||||||||
bicycle | flowers | garden | stump | treehill | room | counter | kitchen | bonsai | Avg. | |
3DGS | 0.382 | 0.471 | 0.325 | 0.378 | 0.462 | 0.324 | 0.314 | 0.241 | 0.321 | 0.358 |
3DGS-SS | 0.345 | 0.438 | 0.281 | 0.340 | 0.435 | 0.314 | 0.297 | 0.220 | 0.307 | 0.333 |
Mip-Splatting | 0.341 | 0.433 | 0.291 | 0.338 | 0.439 | 0.309 | 0.295 | 0.216 | 0.300 | 0.329 |
Ours | 0.342 | 0.429 | 0.268 | 0.338 | 0.434 | 0.307 | 0.291 | 0.209 | 0.300 | 0.324 |
6 Conclusion
In this paper, we first revisit the window response of one-dimensional Gaussian signals and derive an accurate analytical approximation using a conditioned logistic function. We then introduce this approximation into two-dimensional pixel shading and present Analytic-Splatting, which approximates the pixel-area integral response to achieve anti-aliasing and better detail fidelity. Our extensive experiments demonstrate the efficacy of Analytic-Splatting in achieving state-of-the-art novel view synthesis results under multi-scale and super-resolution settings.
Limitations. Compared with 3DGS and Mip-Splatting, our shading implementation introduces more root and exponential operations, which inevitably increases the computational burden and reduces the frame rate. Despite this, our frame rate is only slightly lower than that of Mip-Splatting, which is also an anti-aliasing approach.
References
- [1] Akeley, K.: Reality engine graphics. In: Conference on Computer graphics and interactive techniques (1993)
- [2] Barron, J.T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., Srinivasan, P.P.: Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 5855–5864 (2021)
- [3] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5470–5479 (2022)
- [4] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Zip-nerf: Anti-aliased grid-based neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2023)
- [5] Chen, A., Xu, Z., Geiger, A., Yu, J., Su, H.: Tensorf: Tensorial radiance fields. In: European Conference on Computer Vision. pp. 333–350. Springer (2022)
- [6] Chen, H., Li, C., Lee, G.H.: Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846 (2023)
- [7] Feng, Y., Feng, X., Shang, Y., Jiang, Y., Yu, C., Zong, Z., Shao, T., Wu, H., Zhou, K., Jiang, C., et al.: Gaussian splashing: Dynamic fluid synthesis with gaussian splatting. arXiv preprint arXiv:2401.15318 (2024)
- [8] Fridovich-Keil, S., Yu, A., Tancik, M., Chen, Q., Recht, B., Kanazawa, A.: Plenoxels: Radiance fields without neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5501–5510 (2022)
- [9] Gao, Q., Xu, Q., Su, H., Neumann, U., Xu, Z.: Strivec: Sparse tri-vector radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 17569–17579 (2023)
- [10] Guédon, A., Lepetit, V.: Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. arXiv preprint arXiv:2311.12775 (2023)
- [11] Hedman, P., Philip, J., Price, T., Frahm, J.M., Drettakis, G., Brostow, G.: Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (ToG) 37(6), 1–15 (2018)
- [12] Hu, L., Zhang, H., Zhang, Y., Zhou, B., Liu, B., Zhang, S., Nie, L.: Gaussianavatar: Towards realistic human avatar modeling from a single video via animatable 3d gaussians. arXiv preprint arXiv:2312.02134 (2023)
- [13] Hu, W., Wang, Y., Ma, L., Yang, B., Gao, L., Liu, X., Ma, Y.: Tri-miprf: Tri-mip representation for efficient anti-aliasing neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 19774–19783 (2023)
- [14] Huang, X., Zhang, Q., Feng, Y., Li, H., Wang, X., Wang, Q.: Hdr-nerf: High dynamic range neural radiance fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18398–18408 (2022)
- [15] Jiang, Y., Tu, J., Liu, Y., Gao, X., Long, X., Wang, W., Ma, Y.: Gaussianshader: 3d gaussian splatting with shading functions for reflective surfaces. arXiv preprint arXiv:2311.17977 (2023)
- [16] Kerbl, B., Kopanas, G., Leimkühler, T., Drettakis, G.: 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics 42(4) (2023)
- [17] Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (ToG) 36(4), 1–13 (2017)
- [18] Kuznetsov, A.: Neumip: Multi-resolution neural materials. ACM Transactions on Graphics (ToG) 40(4) (2021)
- [19] Li, R., Tancik, M., Kanazawa, A.: Nerfacc: A general nerf acceleration toolbox. arXiv preprint arXiv:2210.04847 (2022)
- [20] Liang, Z., Zhang, Q., Feng, Y., Shan, Y., Jia, K.: Gs-ir: 3d gaussian splatting for inverse rendering. arXiv preprint arXiv:2311.16473 (2023)
- [21] Lin, C.H., Ma, W.C., Torralba, A., Lucey, S.: Barf: Bundle-adjusting neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 5741–5751 (2021)
- [22] Ma, L., Li, X., Liao, J., Zhang, Q., Wang, X., Wang, J., Sander, P.V.: Deblur-nerf: Neural radiance fields from blurry images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12861–12870 (2022)
- [23] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: Nerf: Representing scenes as neural radiance fields for view synthesis. In: European Conference on Computer Vision. pp. 405–421 (2020)
- [24] Müller, T., Evans, A., Schied, C., Keller, A.: Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG) 41(4), 1–15 (2022)
- [25] Park, K., Sinha, U., Barron, J.T., Bouaziz, S., Goldman, D.B., Seitz, S.M., Martin-Brualla, R.: Nerfies: Deformable neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 5865–5874 (2021)
- [26] Petersen, K.B., Pedersen, M.S., et al.: The matrix cookbook. Technical University of Denmark 7(15), 510 (2008)
- [27] Saito, S., Schwartz, G., Simon, T., Li, J., Nam, G.: Relightable gaussian codec avatars. arXiv preprint arXiv:2312.03704 (2023)
- [28] Shi, Y., Wu, Y., Wu, C., Liu, X., Zhao, C., Feng, H., Liu, J., Zhang, L., Zhang, J., Zhou, B., et al.: Gir: 3d gaussian inverse rendering for relightable scene factorization. arXiv preprint arXiv:2312.05133 (2023)
- [29] Sun, C., Sun, M., Chen, H.T.: Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5459–5469 (2022)
- [30] Wang, P., Liu, L., Liu, Y., Theobalt, C., Komura, T., Wang, W.: Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689 (2021)
- [31] Wu, L., Zhao, S., Yan, L.Q., Ramamoorthi, R.: Accurate appearance preserving prefiltering for rendering displacement-mapped surfaces. ACM Transactions on Graphics (ToG) 38(4), 1–14 (2019)
- [32] Xie, T., Zong, Z., Qiu, Y., Li, X., Feng, Y., Yang, Y., Jiang, C.: Physgaussian: Physics-integrated 3d gaussians for generative dynamics. arXiv preprint arXiv:2311.12198 (2023)
- [33] Xu, Q., Xu, Z., Philip, J., Bi, S., Shu, Z., Sunkavalli, K., Neumann, U.: Point-nerf: Point-based neural radiance fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5438–5448 (2022)
- [34] Yariv, L., Gu, J., Kasten, Y., Lipman, Y.: Volume rendering of neural implicit surfaces. Advances in Neural Information Processing Systems 34, 4805–4815 (2021)
- [35] Yu, A., Li, R., Tancik, M., Li, H., Ng, R., Kanazawa, A.: Plenoctrees for real-time rendering of neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 5752–5761 (2021)
- [36] Yu, Z., Chen, A., Huang, B., Sattler, T., Geiger, A.: Mip-splatting: Alias-free 3d gaussian splatting. arXiv preprint arXiv:2311.16493 (2023)
- [37] Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 586–595 (2018)
- [38] Zheng, S., Zhou, B., Shao, R., Liu, B., Zhang, S., Nie, L., Liu, Y.: Gps-gaussian: Generalizable pixel-wise 3d gaussian splatting for real-time human novel view synthesis. arXiv preprint arXiv:2312.02155 (2023)
- [39] Zhuang, Y., Zhang, Q., Feng, Y., Zhu, H., Yao, Y., Li, X., Cao, Y.P., Shan, Y., Cao, X.: Anti-aliased neural implicit surfaces with encoding level of detail. In: SIGGRAPH Asia 2023 Conference Papers. pp. 1–10 (2023)
- [40] Zielonka, W., Bagautdinov, T., Saito, S., Zollhöfer, M., Thies, J., Romero, J.: Drivable 3d gaussian avatars. arXiv preprint arXiv:2311.08581 (2023)
Appendix 0.A Shading Module
Since we substantially modified the shading module, our implementation differs considerably from vanilla 3DGS, especially in the backward pass. In this section, we present the forward and backward propagation step by step so that readers can follow the details and reproduce our shading module easily.


0.A.1 Forward
Given a pixel with center $\hat{\mathbf{x}} = [\hat{x}, \hat{y}]^\top$, we use a 2D Gaussian signal to compute the intensity response of the pixel and shade it. The 2D Gaussian signal has a mean vector $\boldsymbol{\mu} \in \mathbb{R}^2$ and a real symmetric covariance matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{2\times 2}$, which we write as:

$$\boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}, \qquad \boldsymbol{\Sigma} = \begin{bmatrix} A & B \\ B & C \end{bmatrix}. \tag{18}$$
In Analytic-Splatting, we first perform eigendecomposition on $\boldsymbol{\Sigma}$ to achieve diagonalization. After the decomposition, we obtain the eigenvalues $\lambda_1 \ge \lambda_2 > 0$ and the corresponding unit eigenvectors $\mathbf{v}_1, \mathbf{v}_2$ of $\boldsymbol{\Sigma}$:

$$\boldsymbol{\Sigma} = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^\top, \qquad \boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1, \lambda_2), \qquad \mathbf{U} = [\mathbf{v}_1, \mathbf{v}_2], \tag{19}$$

where $\lambda_{1,2} = \frac{1}{2}\left(A + C \pm \sqrt{(A - C)^2 + 4B^2}\right)$ and $\mathbf{v}_2 \perp \mathbf{v}_1$.
Then we use the eigenvectors to construct a new coordinate system with its origin at $\boldsymbol{\mu}$ (refer to the yellow lines in Fig. 3(b)) to unravel the correlation in the covariance $\boldsymbol{\Sigma}$. In this way, the coordinate of the pixel center is transformed into $\tilde{\mathbf{x}} = [\tilde{x}, \tilde{y}]^\top$:

$$\tilde{\mathbf{x}} = \mathbf{U}^\top \left(\hat{\mathbf{x}} - \boldsymbol{\mu}\right), \tag{20}$$
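The eigendecomposition and change of basis above can be sketched in a few lines of scalar Python. This is an illustrative reimplementation with our own function name, not the rasterizer kernel of the released code:

```python
import math

def decompose_and_transform(px, py, mu, cov):
    """Closed-form eigendecomposition of the 2x2 symmetric covariance
    cov = ((A, B), (B, C)), then express the pixel center (px, py) in the
    eigenbasis centered at mu. Returns (lam1, lam2, v1, v2, x_t, y_t)
    with lam1 >= lam2 and orthonormal eigenvectors v1, v2."""
    (a, b), (_, c) = cov
    half_trace = 0.5 * (a + c)
    # Discriminant of the characteristic polynomial of cov.
    d = math.sqrt(0.25 * (a - c) ** 2 + b * b)
    lam1, lam2 = half_trace + d, half_trace - d
    # Unit eigenvector for lam1; lam2's eigenvector is its 90-degree rotation.
    if b != 0.0:
        v1 = (lam1 - c, b)
    else:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(*v1)
    v1 = (v1[0] / n, v1[1] / n)
    v2 = (-v1[1], v1[0])
    # Coordinates of the pixel center in the decorrelated frame.
    dx, dy = px - mu[0], py - mu[1]
    x_t = v1[0] * dx + v1[1] * dy
    y_t = v2[0] * dx + v2[1] * dy
    return lam1, lam2, v1, v2, x_t, y_t
```

For example, the covariance $[[2,1],[1,2]]$ has eigenvalues $3$ and $1$, and a pixel at $(1,1)$ with the mean at the origin lands at $(\sqrt{2}, 0)$ in the decorrelated frame.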
then the intensity response of the pixel center can be written as:

$$g(\hat{\mathbf{x}}) = \exp\!\left(-\frac{1}{2}\left(\hat{\mathbf{x}} - \boldsymbol{\mu}\right)^\top \boldsymbol{\Sigma}^{-1} \left(\hat{\mathbf{x}} - \boldsymbol{\mu}\right)\right) = \exp\!\left(-\frac{\tilde{x}^2}{2\lambda_1} - \frac{\tilde{y}^2}{2\lambda_2}\right). \tag{21}$$
In Sec. 4.1 of the main paper, we propose a conditioned logistic function $S(x)$ to approximate the cumulative distribution function (CDF) $\Phi(x)$ of the standard Gaussian distribution:

$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, \mathrm{d}t \approx S(x) = \frac{1}{1 + \exp\left(-1.6\,x - 0.07\,x^3\right)}, \tag{22}$$
and for a Gaussian distribution with standard deviation $\sigma$, we use the reciprocal of $\sigma$ to scale the input and express the logistic function as:

$$\Phi_\sigma(x) \approx S_\sigma(x) = S\!\left(\frac{x}{\sigma}\right) = \frac{1}{1 + \exp\left(-\frac{1.6\,x}{\sigma} - \frac{0.07\,x^3}{\sigma^3}\right)}. \tag{23}$$
Given the CDF approximation in Eq. 23, we approximate the response of the 1-width window around a sample $x$ as:

$$\mathcal{I}_\sigma(x) = \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{t^2}{2\sigma^2}\right) \mathrm{d}t \approx S_\sigma\!\left(x + \frac{1}{2}\right) - S_\sigma\!\left(x - \frac{1}{2}\right). \tag{24}$$
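Assuming the conditioned logistic uses a linear coefficient of 1.6 and a cubic coefficient of 0.07, the CDF approximation and the 1-width window response can be sketched and sanity-checked against the exact erf-based Gaussian CDF (an illustrative check, not the production kernel):

```python
import math

def S(x):
    """Conditioned logistic approximation of the standard normal CDF."""
    return 1.0 / (1.0 + math.exp(-1.6 * x - 0.07 * x ** 3))

def S_sigma(x, sigma):
    """CDF approximation for a zero-mean Gaussian with standard deviation sigma."""
    return S(x / sigma)

def window_response(x, sigma):
    """Approximate mass of N(0, sigma^2) inside the unit window
    [x - 1/2, x + 1/2] via a difference of approximate CDFs."""
    return S_sigma(x + 0.5, sigma) - S_sigma(x - 0.5, sigma)

def window_response_exact(x, sigma):
    """Reference via the exact Gaussian CDF expressed with erf."""
    phi = lambda t: 0.5 * (1.0 + math.erf(t / (sigma * math.sqrt(2.0))))
    return phi(x + 0.5) - phi(x - 0.5)
```

In a quick numerical check, the absolute error of the window response stays on the order of $10^{-3}$ or below across typical pixel-scale standard deviations.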
In Analytic-Splatting, we calculate the intensity response of a pixel by approximating the integral of the Gaussian signal in Eq. 21 over the window domain $\Omega$ in Fig. 9. In the decorrelated coordinate system, the integral over the 2D domain factorizes into the product of the integrals of two one-dimensional Gaussian signals:

$$\mathcal{I}_\Omega(\hat{\mathbf{x}}) = \int_{\Omega} g(\mathbf{x})\, \mathrm{d}\mathbf{x} \approx 2\pi \sigma_1 \sigma_2 \left[S_{\sigma_1}\!\left(\tilde{x} + \frac{1}{2}\right) - S_{\sigma_1}\!\left(\tilde{x} - \frac{1}{2}\right)\right] \left[S_{\sigma_2}\!\left(\tilde{y} + \frac{1}{2}\right) - S_{\sigma_2}\!\left(\tilde{y} - \frac{1}{2}\right)\right], \tag{25}$$

where $\sigma_1 = \sqrt{\lambda_1}$ and $\sigma_2 = \sqrt{\lambda_2}$ denote the standard deviations of the independent Gaussian signals along the two eigenvectors, respectively. In summary, the volume shading in Analytic-Splatting is given by:

$$\mathbf{c}(\hat{\mathbf{x}}) = \sum_{i=1}^{N} \mathbf{c}_i\, \alpha_i \prod_{j=1}^{i-1} \left(1 - \alpha_j\right), \qquad \alpha_i = o_i\, \mathcal{I}_\Omega^{i}(\hat{\mathbf{x}}), \tag{26}$$

where $o_i$ and $\mathbf{c}_i$ denote the opacity and view-dependent color of the $i$-th Gaussian.
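A minimal scalar sketch of the analytic pixel response and the front-to-back alpha blending it feeds into; the function names are ours, the color is a scalar for brevity, and the paper's implementation runs inside the splatting rasterizer:

```python
import math

def S(x):
    # Conditioned logistic approximation of the standard normal CDF.
    return 1.0 / (1.0 + math.exp(-1.6 * x - 0.07 * x ** 3))

def analytic_pixel_response(x_t, y_t, lam1, lam2):
    """Approximate integral of the unnormalized 2D Gaussian over the 1x1
    pixel window, expressed in the eigenbasis: the window center is
    (x_t, y_t) and the per-axis stds are sqrt(lam1), sqrt(lam2)."""
    s1, s2 = math.sqrt(lam1), math.sqrt(lam2)
    ix = S((x_t + 0.5) / s1) - S((x_t - 0.5) / s1)
    iy = S((y_t + 0.5) / s2) - S((y_t - 0.5) / s2)
    return 2.0 * math.pi * s1 * s2 * ix * iy

def composite(gaussians):
    """Front-to-back alpha blending over depth-sorted Gaussians; each entry
    is (color, opacity, response). Returns the shaded pixel color."""
    color, transmittance = 0.0, 1.0
    for c, o, resp in gaussians:
        alpha = min(o * resp, 0.999)  # clamp as in 3DGS-style rasterizers
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color
```

For an axis-aligned unit-variance Gaussian centered on the pixel, the analytic response agrees with the exact erf-based window integral to within a few thousandths.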
0.A.2 Backward
Derivation of the Conditioned Logistic Function. Before introducing the backpropagation in Analytic-Splatting, we first give the derivative of the conditioned logistic function in Eq. 22:

$$S'(x) = \frac{\mathrm{d} S(x)}{\mathrm{d} x} = \left(1.6 + 0.21\,x^2\right) S(x) \left(1 - S(x)\right), \tag{27}$$
and further, for the scaled version $S_\sigma$ in Eq. 23, we obtain $\frac{\partial S_\sigma}{\partial x}$ and $\frac{\partial S_\sigma}{\partial \sigma}$ through the chain rule:

$$\frac{\partial S_\sigma(x)}{\partial x} = \frac{1}{\sigma}\, S'\!\left(\frac{x}{\sigma}\right), \qquad \frac{\partial S_\sigma(x)}{\partial \sigma} = -\frac{x}{\sigma^2}\, S'\!\left(\frac{x}{\sigma}\right). \tag{28}$$
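These derivative identities can be verified numerically with central finite differences. A sketch assuming linear/cubic coefficients 1.6 and 0.07 in the conditioned logistic:

```python
import math

def S(x):
    # Conditioned logistic approximation of the standard normal CDF.
    return 1.0 / (1.0 + math.exp(-1.6 * x - 0.07 * x ** 3))

def dS_dx(x):
    """Derivative of the conditioned logistic:
    S'(x) = (1.6 + 0.21 x^2) * S(x) * (1 - S(x))."""
    s = S(x)
    return (1.6 + 0.21 * x * x) * s * (1.0 - s)

def dS_sigma(x, sigma):
    """Partials of S_sigma(x) = S(x / sigma) via the chain rule.
    Returns (dS_sigma/dx, dS_sigma/dsigma)."""
    g = dS_dx(x / sigma)
    return g / sigma, -x / (sigma * sigma) * g
```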
Backpropagation in Shading. Our backpropagation aims to derive the gradients of the pixel response $\mathcal{I}_\Omega$ with respect to the mean vector $\boldsymbol{\mu}$ and the covariance matrix $\boldsymbol{\Sigma}$, i.e. $\frac{\partial \mathcal{I}_\Omega}{\partial \boldsymbol{\mu}}$ and $\frac{\partial \mathcal{I}_\Omega}{\partial \boldsymbol{\Sigma}}$, so that for a rendering loss $\mathcal{L}$:

$$\frac{\partial \mathcal{L}}{\partial \boldsymbol{\mu}} = \frac{\partial \mathcal{L}}{\partial \mathcal{I}_\Omega} \frac{\partial \mathcal{I}_\Omega}{\partial \boldsymbol{\mu}}, \qquad \frac{\partial \mathcal{L}}{\partial \boldsymbol{\Sigma}} = \frac{\partial \mathcal{L}}{\partial \mathcal{I}_\Omega} \frac{\partial \mathcal{I}_\Omega}{\partial \boldsymbol{\Sigma}}. \tag{29}$$
It is quite difficult to express the above gradients directly. Still, we can use the chain rule to break them down layer by layer into tractable terms. Our key insight is that the new coordinate system is constructed from the mean vector $\boldsymbol{\mu}$, the eigenvalues $\lambda_1, \lambda_2$, and the eigenvectors $\mathbf{v}_1, \mathbf{v}_2$; therefore, we can use them as intermediaries to obtain the final gradients via the chain rule. For the gradient of $\mathcal{I}_\Omega$ with respect to the mean vector $\boldsymbol{\mu}$, according to Eq. 20 and Eq. 25, we have:

$$\frac{\partial \mathcal{I}_\Omega}{\partial \boldsymbol{\mu}} = \frac{\partial \mathcal{I}_\Omega}{\partial \tilde{x}} \frac{\partial \tilde{x}}{\partial \boldsymbol{\mu}} + \frac{\partial \mathcal{I}_\Omega}{\partial \tilde{y}} \frac{\partial \tilde{y}}{\partial \boldsymbol{\mu}} = -\frac{\partial \mathcal{I}_\Omega}{\partial \tilde{x}}\, \mathbf{v}_1 - \frac{\partial \mathcal{I}_\Omega}{\partial \tilde{y}}\, \mathbf{v}_2, \tag{30}$$

where

$$\frac{\partial \mathcal{I}_\Omega}{\partial \tilde{x}} = 2\pi \sigma_1 \sigma_2 \left[\frac{\partial S_{\sigma_1}}{\partial x}\!\left(\tilde{x} + \frac{1}{2}\right) - \frac{\partial S_{\sigma_1}}{\partial x}\!\left(\tilde{x} - \frac{1}{2}\right)\right] \left[S_{\sigma_2}\!\left(\tilde{y} + \frac{1}{2}\right) - S_{\sigma_2}\!\left(\tilde{y} - \frac{1}{2}\right)\right]$$

(and symmetrically for $\frac{\partial \mathcal{I}_\Omega}{\partial \tilde{y}}$), and the gradient $\frac{\partial S_\sigma}{\partial x}$ has been derived in Eq. 28.
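Putting the chain rule through the change of basis together, a hypothetical scalar routine (our own sketch, not the released kernel) can compute the pixel response and its gradient with respect to the mean, validated against finite differences:

```python
import math

def S(x):
    return 1.0 / (1.0 + math.exp(-1.6 * x - 0.07 * x ** 3))

def dS_dx(x):
    s = S(x)
    return (1.6 + 0.21 * x * x) * s * (1.0 - s)

def response_and_grad_mu(px, py, mux, muy, lam1, lam2, v1):
    """Pixel response for a Gaussian given in eigen form (lam1, lam2,
    unit eigenvector v1), plus its gradient w.r.t. the mean: the chain
    rule goes through the decorrelated coordinates (x_t, y_t), whose
    derivatives w.r.t. the mean are -v1 and -v2."""
    v2 = (-v1[1], v1[0])
    dx, dy = px - mux, py - muy
    x_t = v1[0] * dx + v1[1] * dy
    y_t = v2[0] * dx + v2[1] * dy
    s1, s2 = math.sqrt(lam1), math.sqrt(lam2)
    ix = S((x_t + 0.5) / s1) - S((x_t - 0.5) / s1)
    iy = S((y_t + 0.5) / s2) - S((y_t - 0.5) / s2)
    resp = 2.0 * math.pi * s1 * s2 * ix * iy
    # Partial derivatives of the response w.r.t. x_t and y_t.
    dix = (dS_dx((x_t + 0.5) / s1) - dS_dx((x_t - 0.5) / s1)) / s1
    diy = (dS_dx((y_t + 0.5) / s2) - dS_dx((y_t - 0.5) / s2)) / s2
    dr_dxt = 2.0 * math.pi * s1 * s2 * dix * iy
    dr_dyt = 2.0 * math.pi * s1 * s2 * ix * diy
    # d x_t / d mu = -v1 and d y_t / d mu = -v2.
    grad_mu = (-(dr_dxt * v1[0] + dr_dyt * v2[0]),
               -(dr_dxt * v1[1] + dr_dyt * v2[1]))
    return resp, grad_mu
```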
Appendix 0.B Additional Results
In this section, we report more detailed quantitative and qualitative results. In addition, we provide the results of our method combined with the 3D smoothing filter [36] for a better study. According to our quantitative results, the 3D smoothing filter slightly improves the single-scale training and testing results, but has no significant impact on the multi-scale training and testing results.
Mip-NeRF 360 | Tanks&Temples | Deep Blending |
PSNR | SSIM | LPIPS | PSNR | SSIM | LPIPS | PSNR | SSIM | LPIPS | |
Plenoxels [8] | 23.08 | 0.625 | 0.463 | 21.07 | 0.721 | 0.379 | 23.06 | 0.795 | 0.510 |
INGP-Base [24] | 25.30 | 0.671 | 0.371 | 21.72 | 0.734 | 0.330 | 23.62 | 0.797 | 0.423 |
INGP-Big [24] | 25.59 | 0.699 | 0.331 | 21.92 | 0.752 | 0.305 | 24.96 | 0.817 | 0.390 |
Mip-NeRF 360 [3] | 27.69 | 0.792 | 0.237 | 22.22 | 0.800 | 0.257 | 29.40 | 0.901 | 0.245 |
3DGS [16] | 27.21 | 0.815 | 0.214 | 23.14 | 0.844 | 0.183 | 29.41 | 0.903 | 0.243 |
Mip-Splatting [36] | 27.57 | 0.817 | 0.218 | 23.78 | 0.851 | 0.178 | 29.69 | 0.904 | 0.248 |
Ours | 27.58 | 0.816 | 0.217 | 23.84 | 0.851 | 0.177 | 29.75 | 0.905 | 0.248 |
Ours + 3D filter | 27.58 | 0.818 | 0.217 | 23.91 | 0.853 | 0.177 | 29.71 | 0.906 | 0.247 |
PSNR | SSIM | LPIPS | ||||||||||
Truck | Train | Johnson | Playroom | Truck | Train | Johnson | Playroom | Truck | Train | Johnson | Playroom | |
Plenoxels [8] | 23.22 | 18.93 | 23.14 | 22.98 | 0.774 | 0.663 | 0.787 | 0.802 | 0.335 | 0.422 | 0.521 | 0.499 |
INGP-Base [24] | 23.26 | 20.17 | 27.75 | 19.48 | 0.779 | 0.666 | 0.839 | 0.754 | 0.274 | 0.386 | 0.381 | 0.465 |
INGP-Big [24] | 23.83 | 20.46 | 28.26 | 21.67 | 0.800 | 0.689 | 0.854 | 0.779 | 0.249 | 0.360 | 0.352 | 0.428 |
Mip-NeRF 360 [3] | 24.91 | 19.52 | 29.14 | 29.66 | 0.857 | 0.660 | 0.901 | 0.900 | 0.159 | 0.354 | 0.237 | 0.252 |
3DGS [16] | 25.19 | 21.09 | 28.77 | 30.04 | 0.879 | 0.802 | 0.899 | 0.906 | 0.148 | 0.218 | 0.244 | 0.241 |
Mip-Splatting [36] | 25.48 | 22.09 | 29.09 | 30.28 | 0.884 | 0.818 | 0.901 | 0.908 | 0.149 | 0.208 | 0.248 | 0.248 |
Ours | 25.48 | 22.20 | 29.18 | 30.32 | 0.883 | 0.819 | 0.901 | 0.909 | 0.148 | 0.207 | 0.247 | 0.248 |
Ours + 3D filter | 25.60 | 22.22 | 29.17 | 30.25 | 0.887 | 0.820 | 0.903 | 0.909 | 0.147 | 0.206 | 0.247 | 0.248 |
PSNR | ||||||||||
bicycle | flowers | garden | stump | treehill | room | counter | kitchen | bonsai | Avg. | |
Plenoxels [8] | 21.91 | 20.10 | 23.49 | 20.66 | 22.25 | 27.59 | 23.62 | 23.42 | 24.67 | 23.08 |
INGP-Base [24] | 22.19 | 20.35 | 24.60 | 23.63 | 22.36 | 29.27 | 26.44 | 28.55 | 30.34 | 25.30 |
INGP-Big [24] | 22.17 | 20.65 | 25.07 | 23.47 | 22.37 | 29.69 | 26.69 | 29.48 | 30.69 | 25.59 |
Mip-NeRF 360 [3] | 24.37 | 21.73 | 26.98 | 26.40 | 22.87 | 31.63 | 29.55 | 32.23 | 33.46 | 27.69 |
3DGS [16] | 25.25 | 21.52 | 27.41 | 26.55 | 22.49 | 30.63 | 28.70 | 30.32 | 31.98 | 27.21 |
Mip-Splatting [36] | 25.31 | 21.62 | 27.45 | 26.62 | 22.62 | 31.62 | 29.11 | 31.53 | 32.30 | 27.57 |
Ours | 25.18 | 21.61 | 27.39 | 26.65 | 22.54 | 31.75 | 29.11 | 31.56 | 32.43 | 27.58 |
Ours + 3D filter | 25.32 | 21.64 | 27.51 | 26.68 | 22.59 | 31.66 | 29.04 | 31.50 | 32.30 | 27.58 |
3DGS* [16] | 25.63 | 21.77 | 27.70 | 26.87 | 22.75 | 31.69 | 29.08 | 31.56 | 32.29 | 27.76 |
Mip-Splatting* [36] | 25.72 | 21.93 | 27.76 | 26.94 | 22.98 | 31.74 | 29.16 | 31.55 | 32.31 | 27.79 |
Ours* | 25.63 | 21.92 | 27.73 | 26.92 | 22.79 | 31.89 | 29.24 | 31.66 | 32.60 | 27.82 |
Ours + 3D filter* | 25.70 | 21.92 | 27.78 | 26.98 | 22.95 | 31.86 | 29.17 | 31.74 | 32.55 | 27.85 |
SSIM | ||||||||||
bicycle | flowers | garden | stump | treehill | room | counter | kitchen | bonsai | Avg. | |
Plenoxels [8] | 0.496 | 0.431 | 0.606 | 0.523 | 0.509 | 0.842 | 0.759 | 0.648 | 0.814 | 0.625 |
INGP-Base [24] | 0.491 | 0.450 | 0.649 | 0.574 | 0.518 | 0.855 | 0.798 | 0.818 | 0.890 | 0.671 |
INGP-Big [24] | 0.512 | 0.486 | 0.701 | 0.594 | 0.542 | 0.871 | 0.817 | 0.858 | 0.906 | 0.699 |
Mip-NeRF 360 [3] | 0.685 | 0.583 | 0.813 | 0.744 | 0.632 | 0.913 | 0.894 | 0.920 | 0.941 | 0.792 |
3DGS [16] | 0.771 | 0.605 | 0.868 | 0.775 | 0.638 | 0.914 | 0.905 | 0.922 | 0.938 | 0.815 |
Mip-Splatting [36] | 0.767 | 0.608 | 0.868 | 0.776 | 0.636 | 0.920 | 0.909 | 0.928 | 0.943 | 0.817 |
Ours | 0.763 | 0.606 | 0.866 | 0.772 | 0.633 | 0.921 | 0.910 | 0.928 | 0.943 | 0.816 |
Ours + 3D filter | 0.769 | 0.608 | 0.869 | 0.773 | 0.636 | 0.921 | 0.909 | 0.928 | 0.943 | 0.817 |
3DGS* [16] | 0.777 | 0.620 | 0.871 | 0.784 | 0.655 | 0.927 | 0.916 | 0.933 | 0.948 | 0.825 |
Mip-Splatting* [36] | 0.780 | 0.623 | 0.875 | 0.786 | 0.655 | 0.928 | 0.916 | 0.933 | 0.948 | 0.827 |
Ours* | 0.777 | 0.623 | 0.873 | 0.783 | 0.651 | 0.929 | 0.917 | 0.933 | 0.943 | 0.826 |
Ours + 3D filter* | 0.780 | 0.624 | 0.876 | 0.785 | 0.654 | 0.929 | 0.917 | 0.934 | 0.949 | 0.827 |
LPIPS | ||||||||||
bicycle | flowers | garden | stump | treehill | room | counter | kitchen | bonsai | Avg. | |
Plenoxels [8] | 0.506 | 0.521 | 0.386 | 0.503 | 0.540 | 0.419 | 0.441 | 0.447 | 0.398 | 0.463 |
INGP-Base [24] | 0.487 | 0.481 | 0.312 | 0.450 | 0.489 | 0.301 | 0.342 | 0.254 | 0.227 | 0.371 |
INGP-Big [24] | 0.446 | 0.441 | 0.257 | 0.421 | 0.450 | 0.261 | 0.306 | 0.195 | 0.205 | 0.331 |
Mip-NeRF 360 [3] | 0.301 | 0.344 | 0.170 | 0.261 | 0.339 | 0.221 | 0.204 | 0.127 | 0.176 | 0.237 |
3DGS [16] | 0.205 | 0.336 | 0.103 | 0.210 | 0.317 | 0.220 | 0.204 | 0.129 | 0.205 | 0.214 |
Mip-Splatting [36] | 0.213 | 0.340 | 0.108 | 0.216 | 0.329 | 0.221 | 0.201 | 0.127 | 0.208 | 0.218 |
Ours | 0.212 | 0.336 | 0.110 | 0.218 | 0.328 | 0.221 | 0.200 | 0.127 | 0.206 | 0.217 |
Ours + 3D filter | 0.211 | 0.340 | 0.108 | 0.218 | 0.327 | 0.220 | 0.202 | 0.127 | 0.207 | 0.218 |
3DGS* [16] | 0.205 | 0.329 | 0.103 | 0.208 | 0.318 | 0.192 | 0.178 | 0.113 | 0.174 | 0.202 |
Mip-Splatting* [36] | 0.206 | 0.331 | 0.103 | 0.209 | 0.320 | 0.192 | 0.179 | 0.113 | 0.173 | 0.203 |
Ours* | 0.207 | 0.329 | 0.105 | 0.210 | 0.320 | 0.194 | 0.180 | 0.114 | 0.176 | 0.204 |
Ours + 3D filter* | 0.206 | 0.333 | 0.104 | 0.210 | 0.321 | 0.194 | 0.181 | 0.114 | 0.177 | 0.204 |
0.B.1 Single-scale Training and Single-scale Testing on Scene Datasets.
We evaluate our Analytic-Splatting against other methods on complex scene datasets (i.e. Mip-NeRF 360 [3], Tanks&Temples [17], and Deep Blending [11]) under the single-scale training and testing setting, which is the most widely used setting. The overall results are shown in Tab. 4: our method generalizes well across the different datasets and outperforms the other methods on almost all metrics.
Moreover, we report per-scene metrics for Tanks&Temples [17] and Deep Blending [11] in Tab. 5, and for Mip-NeRF 360 [3] in Tab. 6.
For Mip-NeRF 360, images from indoor and outdoor scenes are downsampled by 2× and 4×, respectively, as full-resolution input for training and testing. The official dataset provides these downsampled images in separate folders. Notably, the results reported in Mip-Splatting did not use the officially provided downsampled images as input, but instead used bicubically downsampled images. The quantitative results in Tab. 6 show that these two downsampling schemes greatly affect the metrics. Therefore, for fairness, we mark the methods that use bicubically downsampled images for training with * (i.e. 3DGS*, Mip-Splatting*, and Ours*); the remaining methods without * use the officially provided downsampled images for training.
0.B.2 Multi-scale Training and Multi-scale Testing on the Multi-scale Blender Synthetic Dataset
We evaluate our Analytic-Splatting against other cutting-edge methods on the Blender Synthetic dataset under the multi-scale training and testing setting. Since per-resolution metrics are reported in the main paper, we report per-object metrics in Tab. 8 for more comprehensive comparisons; more qualitative results are shown in Fig. 10. Our method surpasses the other methods on almost all objects and performs better in terms of detail capture and anti-aliasing. We further provide our method's per-resolution and per-object metrics in Tab. 7 so that subsequent methods can conveniently refer to our results.
PSNR | |||||||||
chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. | |
Full Res. | 35.76 | 26.16 | 35.76 | 37.54 | 35.06 | 29.59 | 35.18 | 30.76 | 33.22 |
1/2 Res. | 38.48 | 27.30 | 36.46 | 39.46 | 36.46 | 31.05 | 37.61 | 32.54 | 34.92 |
1/4 Res. | 39.58 | 28.61 | 36.21 | 40.64 | 36.53 | 32.77 | 39.33 | 34.12 | 35.97 |
1/8 Res. | 39.24 | 29.87 | 36.01 | 40.24 | 34.93 | 33.54 | 39.02 | 35.10 | 35.99 |
All. | 38.26 | 27.98 | 36.11 | 39.47 | 35.75 | 31.74 | 37.78 | 33.13 | 35.03 |
SSIM | |||||||||
chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. | |
Full Res. | 0.986 | 0.952 | 0.988 | 0.984 | 0.980 | 0.958 | 0.990 | 0.901 | 0.967 |
1/2 Res. | 0.993 | 0.960 | 0.993 | 0.990 | 0.989 | 0.974 | 0.994 | 0.926 | 0.977 |
1/4 Res. | 0.995 | 0.952 | 0.988 | 0.984 | 0.992 | 0.986 | 0.995 | 0.948 | 0.984 |
1/8 Res. | 0.995 | 0.977 | 0.994 | 0.995 | 0.992 | 0.992 | 0.997 | 0.967 | 0.989 |
All. | 0.992 | 0.964 | 0.992 | 0.991 | 0.988 | 0.977 | 0.994 | 0.936 | 0.979 |
LPIPS | |||||||||
chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. | |
Full Res. | 0.015 | 0.040 | 0.011 | 0.023 | 0.020 | 0.039 | 0.007 | 0.111 | 0.033 |
1/2 Res. | 0.007 | 0.029 | 0.006 | 0.011 | 0.009 | 0.018 | 0.004 | 0.065 | 0.019 |
1/4 Res. | 0.006 | 0.026 | 0.006 | 0.006 | 0.007 | 0.010 | 0.004 | 0.036 | 0.013 |
1/8 Res. | 0.005 | 0.022 | 0.006 | 0.005 | 0.008 | 0.008 | 0.004 | 0.021 | 0.010 |
All. | 0.008 | 0.029 | 0.007 | 0.011 | 0.011 | 0.018 | 0.005 | 0.058 | 0.018 |
PSNR | |||||||||
chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. | |
NeRF w/o | 29.92 | 23.27 | 27.15 | 32.00 | 27.75 | 26.30 | 28.40 | 26.46 | 27.66 |
NeRF [23] | 33.39 | 25.87 | 30.37 | 35.64 | 31.65 | 30.18 | 32.60 | 30.09 | 31.23 |
MipNeRF [2] | 37.14 | 27.02 | 33.19 | 39.31 | 35.74 | 32.56 | 38.04 | 33.08 | 34.51 |
Plenoxels [8] | 32.79 | 25.25 | 30.28 | 34.65 | 31.26 | 28.33 | 31.53 | 28.59 | 30.34 |
TensoRF [5] | 32.47 | 25.37 | 31.16 | 34.96 | 31.73 | 28.53 | 31.48 | 29.08 | 30.60 |
Instant-NGP [24] | 32.95 | 26.43 | 30.41 | 35.87 | 31.83 | 29.31 | 32.58 | 30.23 | 31.20 |
Tri-MipRF [13] | 37.67 | 27.35 | 33.57 | 38.78 | 35.72 | 31.42 | 37.63 | 32.74 | 34.36 |
3DGS [16] | 32.73 | 25.30 | 29.00 | 35.03 | 29.44 | 27.13 | 31.17 | 28.33 | 29.77 |
3DGS-SS [16] | 35.62 | 27.02 | 33.12 | 37.46 | 33.27 | 29.90 | 34.69 | 30.63 | 32.71 |
Mip-Splatting [36] | 37.48 | 27.74 | 34.71 | 39.15 | 35.07 | 31.88 | 37.68 | 32.80 | 34.56 |
Ours | 38.26 | 27.98 | 36.11 | 39.47 | 35.75 | 31.74 | 37.78 | 33.13 | 35.03 |
Ours + 3D filter | 37.53 | 27.77 | 35.85 | 39.17 | 35.26 | 31.80 | 37.61 | 32.95 | 34.74 |
SSIM | |||||||||
chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. | |
NeRF w/o | 0.944 | 0.891 | 0.942 | 0.959 | 0.926 | 0.934 | 0.958 | 0.861 | 0.927 |
NeRF [23] | 0.971 | 0.932 | 0.971 | 0.979 | 0.965 | 0.967 | 0.980 | 0.900 | 0.958 |
MipNeRF [2] | 0.988 | 0.945 | 0.984 | 0.988 | 0.984 | 0.977 | 0.993 | 0.922 | 0.973 |
Plenoxels [8] | 0.968 | 0.929 | 0.972 | 0.976 | 0.964 | 0.959 | 0.979 | 0.892 | 0.955 |
TensoRF [5] | 0.967 | 0.930 | 0.972 | 0.976 | 0.964 | 0.959 | 0.979 | 0.892 | 0.955 |
Instant-NGP [24] | 0.971 | 0.940 | 0.973 | 0.979 | 0.966 | 0.959 | 0.981 | 0.904 | 0.959 |
Tri-MipRF [13] | 0.990 | 0.951 | 0.985 | 0.988 | 0.986 | 0.969 | 0.992 | 0.929 | 0.974 |
3DGS [16] | 0.976 | 0.941 | 0.968 | 0.982 | 0.964 | 0.956 | 0.979 | 0.910 | 0.960 |
3DGS-SS [16] | 0.988 | 0.958 | 0.985 | 0.988 | 0.982 | 0.973 | 0.990 | 0.928 | 0.974 |
Mip-Splatting [36] | 0.991 | 0.963 | 0.990 | 0.990 | 0.987 | 0.978 | 0.994 | 0.936 | 0.979 |
Ours | 0.992 | 0.964 | 0.992 | 0.991 | 0.988 | 0.977 | 0.994 | 0.936 | 0.979 |
Ours + 3D filter | 0.991 | 0.963 | 0.990 | 0.990 | 0.987 | 0.977 | 0.994 | 0.936 | 0.979 |
LPIPS | |||||||||
chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. | |
NeRF w/o | 0.035 | 0.069 | 0.032 | 0.028 | 0.041 | 0.045 | 0.031 | 0.095 | 0.052 |
NeRF [23] | 0.028 | 0.059 | 0.026 | 0.024 | 0.035 | 0.033 | 0.025 | 0.085 | 0.044 |
MipNeRF [2] | 0.011 | 0.044 | 0.014 | 0.012 | 0.013 | 0.019 | 0.007 | 0.062 | 0.026 |
Plenoxels [8] | 0.040 | 0.070 | 0.032 | 0.037 | 0.038 | 0.055 | 0.036 | 0.104 | 0.051 |
TensoRF [5] | 0.042 | 0.070 | 0.032 | 0.037 | 0.038 | 0.055 | 0.036 | 0.104 | 0.051 |
Instant-NGP [24] | 0.035 | 0.066 | 0.029 | 0.028 | 0.040 | 0.051 | 0.032 | 0.095 | 0.047 |
Tri-MipRF [13] | 0.011 | 0.046 | 0.016 | 0.014 | 0.013 | 0.033 | 0.008 | 0.069 | 0.026 |
3DGS [16] | 0.025 | 0.056 | 0.030 | 0.022 | 0.038 | 0.040 | 0.023 | 0.086 | 0.040 |
3DGS-SS [16] | 0.013 | 0.036 | 0.014 | 0.014 | 0.017 | 0.023 | 0.008 | 0.068 | 0.024 |
Mip-Splatting [36] | 0.010 | 0.031 | 0.009 | 0.011 | 0.012 | 0.018 | 0.005 | 0.059 | 0.019 |
Ours | 0.008 | 0.029 | 0.007 | 0.011 | 0.011 | 0.018 | 0.005 | 0.058 | 0.018 |
Ours + 3D filter | 0.009 | 0.031 | 0.009 | 0.011 | 0.012 | 0.019 | 0.005 | 0.059 | 0.019 |

0.B.3 Multi-scale Training and Multi-scale Testing on the Mip-NeRF 360 Dataset
We evaluate our Analytic-Splatting against other methods on the Mip-NeRF 360 dataset under the multi-scale training and testing setting. As mentioned in Sec. 0.B.1, we use the officially provided downsampled images (2× downsampled for indoor scenes and 4× for outdoor scenes) as full-resolution images. Under the multi-scale training and testing setting, we convert each full-resolution image into a set of four images for training by bicubically downsampling it. Some qualitative results are shown in Fig. 11; our method exhibits better anti-aliasing capability and detail fidelity.

Outdoors (four columns per scene: full, 1/2, 1/4, and 1/8 resolution) | ||||||||||||||||||||
PSNR | bicycle | flowers | garden | stump | treehill | |||||||||||||||
Mip-NeRF 360 [3] | 24.51 | 26.93 | 28.53 | 29.24 | 21.64 | 23.90 | 26.01 | 27.35 | 26.71 | 29.59 | 31.35 | 32.52 | 26.27 | 27.68 | 28.82 | 29.27 | 22.93 | 24.63 | 26.06 | 27.12 |
Mip-NeRF 360 + iNGP | 24.61 | 26.98 | 26.69 | 24.50 | 21.93 | 24.14 | 24.90 | 23.19 | 26.48 | 29.06 | 27.54 | 24.85 | 26.41 | 27.63 | 27.62 | 25.64 | 23.19 | 24.86 | 25.55 | 24.81 |
Zip-NeRF [4] | 25.57 | 28.25 | 30.20 | 31.37 | 22.37 | 24.91 | 27.51 | 29.50 | 27.71 | 30.53 | 32.60 | 33.83 | 27.17 | 28.62 | 30.30 | 31.73 | 23.63 | 25.47 | 27.27 | 28.84 |
3DGS [16] | 24.19 | 26.23 | 26.46 | 25.83 | 20.96 | 23.07 | 24.54 | 23.93 | 26.16 | 28.54 | 29.13 | 28.66 | 25.84 | 27.24 | 27.96 | 27.64 | 22.50 | 24.13 | 25.31 | 25.35 |
3DGS-SS | 20.96 | 27.16 | 28.25 | 27.95 | 21.51 | 23.07 | 25.89 | 26.11 | 26.81 | 28.54 | 30.71 | 30.91 | 26.56 | 27.24 | 29.36 | 29.69 | 22.67 | 24.13 | 25.81 | 26.62 |
Mip-Splatting [36] | 24.90 | 27.24 | 28.81 | 29.10 | 21.42 | 23.75 | 26.13 | 28.19 | 26.69 | 29.37 | 30.92 | 31.66 | 26.49 | 27.94 | 29.58 | 31.17 | 22.52 | 24.36 | 26.22 | 27.84 |
Ours | 25.20 | 27.39 | 28.97 | 29.81 | 21.76 | 24.05 | 26.52 | 28.57 | 27.13 | 29.53 | 31.20 | 32.19 | 26.74 | 28.15 | 29.90 | 31.50 | 22.70 | 24.41 | 26.13 | 27.67 |
Ours + 3D filter | 25.32 | 27.50 | 29.04 | 29.88 | 21.79 | 24.04 | 26.49 | 28.52 | 27.19 | 29.52 | 31.14 | 32.08 | 26.80 | 28.15 | 29.87 | 31.48 | 22.72 | 24.33 | 25.97 | 27.49 |
SSIM | bicycle | flowers | garden | stump | treehill | |||||||||||||||
Mip-NeRF 360 [3] | 0.666 | 0.815 | 0.890 | 0.912 | 0.567 | 0.727 | 0.834 | 0.881 | 0.791 | 0.903 | 0.939 | 0.959 | 0.726 | 0.819 | 0.874 | 0.882 | 0.615 | 0.748 | 0.839 | 0.893 |
Mip-NeRF 360 + iNGP | 0.673 | 0.825 | 0.857 | 0.773 | 0.592 | 0.742 | 0.805 | 0.763 | 0.786 | 0.904 | 0.864 | 0.767 | 0.748 | 0.830 | 0.849 | 0.770 | 0.616 | 0.736 | 0.785 | 0.762 |
Zip-NeRF [4] | 0.758 | 0.872 | 0.926 | 0.948 | 0.635 | 0.774 | 0.864 | 0.914 | 0.850 | 0.929 | 0.960 | 0.974 | 0.791 | 0.865 | 0.914 | 0.939 | 0.671 | 0.780 | 0.865 | 0.922 |
3DGS [16] | 0.703 | 0.831 | 0.864 | 0.855 | 0.545 | 0.690 | 0.784 | 0.795 | 0.810 | 0.904 | 0.920 | 0.919 | 0.729 | 0.810 | 0.850 | 0.830 | 0.602 | 0.725 | 0.811 | 0.839 |
3DGS-SS | 0.736 | 0.849 | 0.902 | 0.911 | 0.585 | 0.724 | 0.824 | 0.864 | 0.834 | 0.920 | 0.947 | 0.956 | 0.763 | 0.837 | 0.887 | 0.891 | 0.620 | 0.738 | 0.830 | 0.878 |
Mip-Splatting [36] | 0.739 | 0.849 | 0.912 | 0.940 | 0.591 | 0.724 | 0.825 | 0.891 | 0.832 | 0.917 | 0.949 | 0.966 | 0.768 | 0.837 | 0.892 | 0.932 | 0.619 | 0.737 | 0.839 | 0.905 |
Ours | 0.750 | 0.855 | 0.913 | 0.940 | 0.601 | 0.732 | 0.834 | 0.898 | 0.847 | 0.921 | 0.951 | 0.966 | 0.772 | 0.842 | 0.899 | 0.933 | 0.627 | 0.739 | 0.835 | 0.899 |
Ours + 3D filter | 0.754 | 0.858 | 0.915 | 0.941 | 0.602 | 0.733 | 0.835 | 0.897 | 0.848 | 0.922 | 0.951 | 0.965 | 0.773 | 0.842 | 0.898 | 0.933 | 0.628 | 0.739 | 0.834 | 0.898 |
LPIPS | bicycle | flowers | garden | stump | treehill | |||||||||||||||
Mip-NeRF 360 [3] | 0.322 | 0.177 | 0.089 | 0.066 | 0.367 | 0.215 | 0.114 | 0.071 | 0.194 | 0.079 | 0.045 | 0.029 | 0.279 | 0.171 | 0.114 | 0.107 | 0.362 | 0.236 | 0.144 | 0.096 |
Mip-NeRF 360 + iNGP | 0.313 | 0.166 | 0.128 | 0.169 | 0.344 | 0.192 | 0.124 | 0.137 | 0.192 | 0.079 | 0.107 | 0.176 | 0.254 | 0.156 | 0.137 | 0.180 | 0.344 | 0.223 | 0.171 | 0.182 |
Zip-NeRF [4] | 0.222 | 0.112 | 0.061 | 0.041 | 0.287 | 0.156 | 0.083 | 0.050 | 0.129 | 0.055 | 0.030 | 0.020 | 0.206 | 0.122 | 0.077 | 0.057 | 0.263 | 0.163 | 0.103 | 0.068 |
3DGS [16] | 0.295 | 0.180 | 0.113 | 0.100 | 0.404 | 0.288 | 0.184 | 0.147 | 0.197 | 0.085 | 0.059 | 0.054 | 0.284 | 0.186 | 0.130 | 0.134 | 0.398 | 0.279 | 0.185 | 0.140 |
3DGS-SS | 0.257 | 0.144 | 0.078 | 0.067 | 0.365 | 0.251 | 0.154 | 0.107 | 0.165 | 0.065 | 0.038 | 0.032 | 0.243 | 0.150 | 0.097 | 0.088 | 0.364 | 0.250 | 0.160 | 0.110 |
Mip-Splatting [36] | 0.258 | 0.153 | 0.083 | 0.050 | 0.363 | 0.262 | 0.165 | 0.093 | 0.169 | 0.073 | 0.045 | 0.027 | 0.233 | 0.156 | 0.106 | 0.072 | 0.373 | 0.265 | 0.172 | 0.105 |
Ours | 0.239 | 0.134 | 0.070 | 0.047 | 0.344 | 0.239 | 0.148 | 0.084 | 0.141 | 0.061 | 0.037 | 0.027 | 0.224 | 0.138 | 0.087 | 0.062 | 0.349 | 0.244 | 0.160 | 0.098 |
Ours + 3D filter | 0.237 | 0.134 | 0.070 | 0.048 | 0.347 | 0.241 | 0.147 | 0.086 | 0.141 | 0.061 | 0.037 | 0.027 | 0.224 | 0.140 | 0.089 | 0.063 | 0.350 | 0.245 | 0.161 | 0.100 |
Indoors (four columns per scene: full, 1/2, 1/4, and 1/8 resolution) | ||||||||||||||||
PSNR | room | counter | kitchen | bonsai | ||||||||||||
Mip-NeRF 360 [3] | 31.44 | 32.53 | 33.17 | 32.96 | 29.30 | 30.12 | 30.81 | 30.52 | 31.90 | 33.39 | 34.69 | 34.92 | 32.85 | 33.97 | 34.63 | 33.80 |
Mip-NeRF 360 + iNGP | 30.93 | 31.83 | 31.66 | 29.52 | 24.30 | 24.66 | 24.81 | 24.06 | 30.13 | 31.25 | 29.85 | 26.14 | 30.20 | 30.90 | 30.39 | 27.49 |
Zip-NeRF [4] | 32.20 | 33.33 | 34.12 | 34.26 | 29.17 | 29.93 | 30.70 | 31.11 | 32.33 | 33.76 | 35.20 | 35.71 | 34.08 | 35.25 | 36.18 | 36.32 |
3DGS [16] | 30.53 | 31.42 | 31.46 | 29.81 | 28.25 | 28.91 | 29.21 | 27.66 | 29.90 | 31.04 | 31.50 | 29.57 | 30.63 | 31.42 | 31.02 | 31.58 |
3DGS-SS | 31.12 | 32.13 | 32.75 | 32.12 | 28.81 | 29.47 | 30.16 | 29.85 | 30.84 | 32.05 | 33.13 | 32.59 | 31.57 | 32.44 | 32.96 | 31.58 |
Mip-Splatting [36] | 31.32 | 32.26 | 32.79 | 32.88 | 28.91 | 29.50 | 30.03 | 30.32 | 31.11 | 32.05 | 32.33 | 32.79 | 31.48 | 32.21 | 32.26 | 31.97 |
Ours | 31.26 | 32.23 | 32.92 | 33.07 | 29.03 | 29.65 | 30.39 | 30.90 | 31.44 | 32.57 | 33.56 | 33.78 | 32.12 | 32.92 | 33.60 | 33.39 |
Ours + 3D filter | 31.32 | 32.28 | 32.98 | 33.13 | 29.03 | 29.64 | 30.38 | 30.88 | 31.07 | 32.15 | 33.09 | 33.25 | 31.97 | 32.73 | 33.42 | 33.26 |
SSIM | room | counter | kitchen | bonsai | ||||||||||||
Mip-NeRF 360 [3] | 0.906 | 0.944 | 0.963 | 0.967 | 0.887 | 0.916 | 0.936 | 0.942 | 0.916 | 0.949 | 0.968 | 0.975 | 0.935 | 0.959 | 0.969 | 0.968 |
Mip-NeRF 360 + iNGP | 0.904 | 0.941 | 0.950 | 0.932 | 0.816 | 0.837 | 0.843 | 0.819 | 0.903 | 0.938 | 0.904 | 0.773 | 0.920 | 0.941 | 0.937 | 0.874 |
Zip-NeRF [4] | 0.921 | 0.955 | 0.971 | 0.977 | 0.899 | 0.926 | 0.944 | 0.955 | 0.926 | 0.956 | 0.975 | 0.982 | 0.947 | 0.968 | 0.978 | 0.980 |
3DGS [16] | 0.903 | 0.936 | 0.952 | 0.947 | 0.886 | 0.912 | 0.928 | 0.920 | 0.907 | 0.941 | 0.956 | 0.950 | 0.924 | 0.948 | 0.954 | 0.938 |
3DGS-SS | 0.911 | 0.943 | 0.962 | 0.965 | 0.898 | 0.922 | 0.941 | 0.947 | 0.919 | 0.949 | 0.968 | 0.938 | 0.933 | 0.955 | 0.967 | 0.966 |
Mip-Splatting [36] | 0.913 | 0.944 | 0.962 | 0.969 | 0.899 | 0.921 | 0.936 | 0.949 | 0.920 | 0.948 | 0.960 | 0.974 | 0.935 | 0.954 | 0.960 | 0.966 |
Ours | 0.914 | 0.946 | 0.964 | 0.971 | 0.902 | 0.924 | 0.942 | 0.955 | 0.924 | 0.951 | 0.967 | 0.974 | 0.939 | 0.958 | 0.969 | 0.972 |
Ours + 3D filter | 0.915 | 0.946 | 0.964 | 0.971 | 0.903 | 0.924 | 0.943 | 0.955 | 0.924 | 0.951 | 0.967 | 0.973 | 0.939 | 0.958 | 0.969 | 0.972 |
LPIPS | room | counter | kitchen | bonsai | ||||||||||||
Mip-NeRF 360 [3] | 0.227 | 0.101 | 0.052 | 0.042 | 0.216 | 0.114 | 0.068 | 0.059 | 0.134 | 0.063 | 0.033 | 0.023 | 0.185 | 0.065 | 0.033 | 0.033 |
Mip-NeRF 360 + iNGP | 0.220 | 0.105 | 0.072 | 0.095 | 0.275 | 0.195 | 0.163 | 0.182 | 0.145 | 0.074 | 0.084 | 0.188 | 0.190 | 0.082 | 0.063 | 0.126 |
Zip-NeRF [4] | 0.199 | 0.084 | 0.041 | 0.028 | 0.189 | 0.095 | 0.055 | 0.039 | 0.117 | 0.055 | 0.028 | 0.018 | 0.173 | 0.052 | 0.023 | 0.017 |
3DGS [16] | 0.254 | 0.127 | 0.066 | 0.053 | 0.235 | 0.128 | 0.078 | 0.067 | 0.159 | 0.081 | 0.046 | 0.041 | 0.234 | 0.104 | 0.056 | 0.051 |
3DGS-SS | 0.241 | 0.114 | 0.054 | 0.039 | 0.217 | 0.112 | 0.066 | 0.050 | 0.142 | 0.068 | 0.034 | 0.024 | 0.220 | 0.092 | 0.044 | 0.032 |
Mip-Splatting [36] | 0.235 | 0.115 | 0.062 | 0.040 | 0.213 | 0.116 | 0.077 | 0.043 | 0.138 | 0.075 | 0.047 | 0.027 | 0.214 | 0.095 | 0.058 | 0.037 |
Ours | 0.234 | 0.111 | 0.052 | 0.035 | 0.208 | 0.109 | 0.065 | 0.045 | 0.134 | 0.065 | 0.037 | 0.028 | 0.210 | 0.088 | 0.042 | 0.029 |
Ours + 3D filter | 0.233 | 0.110 | 0.052 | 0.034 | 0.209 | 0.110 | 0.065 | 0.045 | 0.134 | 0.065 | 0.037 | 0.029 | 0.209 | 0.088 | 0.042 | 0.030 |
We further provide per-resolution and per-scene metrics in Tab. 9. The results of Mip-NeRF 360 [3] and Zip-NeRF [4] are taken from the official Zip-NeRF paper [4]. Please note that Mip-NeRF 360 and Zip-NeRF struggle to render in real time, whereas our Analytic-Splatting, like 3DGS and its variants, supports real-time rendering.
0.B.4 Approximation Error Analysis
In Sec. 5.1 of the main paper, we study the approximation error produced by different schemes. In this analysis, we concentrate on the standard deviation $\sigma$ and on samples within the $3\sigma$ confidence interval. We plot the curve of the approximation error under different conditions.
In detail, we sample standard deviations uniformly in logarithmic coordinates of $\sigma$ and plot the approximation error against the standard deviation, as in Fig. 5(a) of the main paper. For the approximation error caused by the rotation of the integral domain, we compute the integral over the original integral domain (Fig. 9(a)) as the reference: we perform Monte Carlo sampling in the original integral domain and take the average response of the samples as the integration reference.
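As a simplified one-dimensional illustration of this Monte Carlo reference (our own sketch; the paper's analysis integrates over the 2D domain), the analytic window response can be compared against an averaged-sample estimate of the same window integral:

```python
import math
import random

def S(x):
    # Conditioned logistic approximation of the standard normal CDF.
    return 1.0 / (1.0 + math.exp(-1.6 * x - 0.07 * x ** 3))

def analytic_1d(x, sigma):
    """Analytic approximation of the N(0, sigma^2) mass inside the
    unit window [x - 1/2, x + 1/2]."""
    return S((x + 0.5) / sigma) - S((x - 0.5) / sigma)

def monte_carlo_1d(x, sigma, n=200_000, seed=0):
    """Monte Carlo reference: average the normalized Gaussian density over
    uniform samples in the unit window. Since the window has length 1,
    this average equals the window integral."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = x - 0.5 + rng.random()
        total += math.exp(-t * t / (2.0 * sigma * sigma))
    return total / n / (sigma * math.sqrt(2.0 * math.pi))
```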