Yilin Qu^{3,4,5} · Leilei Chen^{2}

[1] Key Laboratory of In-situ Property-improving Mining of Ministry of Education, Taiyuan University of Technology, Taiyuan 100190, China
[2] Henan International Joint Laboratory of Structural Mechanics and Computational Simulation, College of Architecture and Civil Engineering, Huanghuai University, Zhumadian 463000, China
[3] School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
[4] Unmanned Vehicle Innovation Center, Ningbo Institute of Northwestern Polytechnical University, Ningbo 315103, China
[5] Key Laboratory of Unmanned Underwater Vehicle Technology of Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi’an 710072, China
[6] Center for Strategic Assessment and Consulting, Academy of Military Science, Beijing 100091, China
[7] Computer Engineering Department, Taiyuan Institute of Technology, Taiyuan 030008, China
Bayesian uncertainty analysis for underwater 3D reconstruction with neural radiance fields
Abstract
Neural radiance fields (NeRFs) are a deep learning technique that can generate novel views of 3D scenes from sparse 2D images taken from different viewing directions and camera poses. As an extension of conventional NeRFs to underwater environments, where light is absorbed and scattered by water, SeaThru-NeRF was proposed to separate the clean appearance and geometric structure of an underwater scene from the effects of the scattering medium. Since the quality of the appearance and structure of underwater scenes is crucial for downstream tasks such as underwater infrastructure inspection, the reliability of the 3D reconstruction model should be considered and evaluated. Nonetheless, owing to the lack of methods to quantify uncertainty in the 3D reconstruction of underwater scenes under natural ambient illumination, the practical deployment of NeRFs in unmanned autonomous underwater navigation is limited. To address this issue, we introduce a spatial perturbation field based on Bayes’ rays in SeaThru-NeRF and perform a Laplace approximation to obtain a Gaussian distribution $\mathcal{N}(\mathbf{0}, \Sigma)$ of the perturbation-field parameters, where the diagonal elements of $\Sigma$ correspond to the uncertainty at each spatial location. We also employ a simple thresholding method to remove artifacts from the rendered results of underwater scenes. Numerical experiments are provided to demonstrate the effectiveness of this approach.
keywords:
Neural radiance fields, Underwater scenes, Uncertainty quantification

1 Introduction
3D reconstruction of underwater scenes is an important and challenging research topic in marine environmental research [1, 2, 3]. Traditional 3D reconstruction methods use geometric constraints and certain prior assumptions to recover the structure of target objects in a discrete 3D space, for example multi-view stereo (MVS), structure from motion (SfM) and voxel grids. Owing to their reliance upon discrete representations, these methods have difficulty recovering detailed geometric information of complex structures. In addition, they place high demands on the quality and quantity of the input data, resulting in cumbersome data acquisition and pre-processing procedures.
In recent years, Neural Radiance Fields (NeRFs) [4] have emerged as a novel approach for reconstructing a 3D representation of a scene from 2D images. NeRFs fall into the category of neural rendering, which combines classical ray tracing from computer graphics with deep learning techniques. The key idea of NeRFs is to have a fully connected neural network learn a continuous function that maps 3D spatial coordinates and viewing directions to the colors and optical densities of a scene. After the network is trained on a sparse set of input 2D images collected from different viewing angles and camera poses, NeRFs are able to synthesize novel views of the scene from arbitrary viewpoints. Compared to traditional methods, NeRFs not only enhance the image quality of 3D reconstruction but, more importantly, improve efficiency considerably, because they are simple to use and require only a sparse set of input views.
However, the presence of complex optical effects such as backscatter and attenuation in underwater environments poses great challenges for the application of NeRFs. In shallow water with natural ambient illumination, the propagation of light follows a very different physical law from that in air, leading to a significant degradation in the reconstruction quality of NeRFs. To address this challenge, SeaThru-NeRF [5] extended NeRFs for the first time to scattering media such as water. It is able to synthesise realistic renderings of novel views while isolating clean scene appearance and geometry from the effects of the scattering medium. This not only extends the application scenarios of NeRFs, but also helps to recover hidden scene details from scattering-medium data, which is of great value in the fields of autonomous underwater vehicle (AUV) navigation and unmanned vehicles in severe weather [6].
Despite these advancements, the complexity of underwater environments introduces inherent uncertainty into the modeling of optical effects. However, most existing works treat NeRFs as a deterministic model, ignoring its inherent sources of uncertainty. In fact, under different environments and data distributions, NeRFs exhibit variability in reconstruction accuracy and in the restoration of geometric detail, which seriously impacts the model’s generalisation and robustness. This directly affects the visual quality available to AUVs during ocean exploration and navigation, thereby introducing unforeseen failure risks to risk-sensitive downstream tasks [7] such as underwater inspection and monitoring [8, 9], underwater navigation and localisation [10, 11], and underwater infrastructure inspection [12]. Fortunately, uncertainty quantification methods can enhance the reliability assessment of the model, thus improving the quality of AUV decision-making in various real-world tasks and providing important guarantees for the successful completion of these tasks.
Uncertainty quantification plays a crucial role in reducing uncertainty in optimization and decision-making processes. In recent years, with the continuous advancement of deep learning techniques, uncertainty quantification methods such as deep ensembles, variational inference, and MC-dropout have become hot research topics in both academia and industry [13]. To enhance the robustness and reliability of NeRFs in practical applications, some studies have begun to apply these methods to explore the uncertainty of NeRFs. However, to the authors’ best knowledge, there is no existing literature on uncertainty quantification for underwater NeRFs, especially under natural ambient illumination.
To address this research gap, we introduce a learnable spatial perturbation field and perform a Laplace approximation based on Bayes’ rays [14] to quantify the uncertainty of a pre-trained SeaThru-NeRF model (Figure 1). The perturbation field perturbs the input coordinates of the original SeaThru-NeRF network, thus indirectly reparameterizing the entire model. Using the Laplace approximation, the uncertainty of each spatial location is estimated based on the difference between the original reconstruction results and the perturbed results. In addition, we utilize a thresholding method to remove artifacts caused by occlusions or incomplete data.
In this work, we demonstrate that our model can explicitly infer spatial uncertainty in both synthetic and real-world underwater scenes. In summary, we make the following key contributions:
• This work introduces uncertainty quantification into NeRFs for underwater scenes for the first time, allowing for the analysis of model reliability and the enhancement of its robustness.

• By introducing an additional perturbation field and performing a Laplace approximation, we avoid additional training, costly sampling, or access to the training images.

• Furthermore, the model can be used for post-processing to remove artifacts.

2 Related works
2.1 NeRFs in underwater scenes
NeRFs represent a novel approach that leverages deep learning techniques to reconstruct 3D scenes. However, backscatter and attenuation make light propagation underwater complex, which makes accurate 3D scene reconstruction by traditional NeRFs difficult. To address this issue, researchers have proposed various improved methods. WaterNeRF [15] uses NeRFs to additionally learn water column parameters and combines them with a light transport model [16] for color correction. By matching the corrected color distribution to a reference image distribution and using the Sinkhorn loss [17] for training, it can produce consistent color-corrected results from varying viewpoints during the inference stage. U2NeRF [18] extends UPIFM [19]. It modifies the generalizable NeRF transformer (GNT) [20] to predict scene radiance, direct and backscatter transmission maps, and employs a variational autoencoder (VAE) to predict the global background light component. By integrating these four components with the image formation model [19], it reconstructs the original underwater image. WaterHE-NeRF [21] designs a novel water-ray tracing field based on the Retinex model [22] to learn color, density, and illuminance attenuation. By controlling the intensity of the illuminance attenuation, it can generate both degraded and restored multi-view images simultaneously. SeaThru-NeRF [5], based on SeaThru [23], replaces the traditional single opaque object density with the sum of an object density and a medium density. The final pixel color consists of the object radiance attenuated by the medium and the medium radiance accumulated along the ray. It can not only generate realistic images from new viewpoints but also utilize the learned object and medium parameters to remove the effects of the scattering medium, reconstructing the true appearance and color of the scene as if the image were taken in air.
2.2 Uncertainty in deep learning
In deep learning, uncertainty refers to the confidence level in the model’s prediction results. Effective estimation and management of uncertainty can not only improve the model’s predictive performance but also enhance its robustness to anomalies [24, 25]. Variational inference [26, 27, 28] approximates the intractable posterior distribution by introducing a prespecified family of variational distributions and maximizes the evidence lower bound (ELBO) to reduce the discrepancy between them. While this approach provides comprehensive uncertainty estimates, it has high computational complexity. Deep ensemble strategies [29, 30] use multiple independently trained models combined with diverse data subsets, model structures, or initialisation methods to reduce the variance of a single model, thus improving generalisation and robustness. However, training multiple models requires more computational resources and storage space, and may introduce additional prediction delays. MC-dropout-based methods [31, 32, 33, 34] simulate the effect of Bayesian neural networks by performing multiple forward passes in the inference stage, and statistical properties of the different predictions (such as mean and variance) are used to estimate the uncertainty of the model. However, multiple forward passes lead to high computational cost. Chen et al. [35, 36, 37, 38] and Qu et al. [39] combine model order reduction techniques with deep learning to expedite the sampling process for uncertainty quantification. Compared with the aforementioned methods, the Laplace approximation [40, 41] does not require retraining the model or modifying the training process. Instead, it approximates the posterior distribution with a Gaussian distribution around the mode of the posterior, thereby effectively reducing computational costs. Specifically, by performing a Taylor expansion of the log-posterior at the mode and retaining terms up to second order, a Gaussian distribution is obtained to approximate the posterior. The mean of this Gaussian is the mode, and the covariance is the inverse of the Hessian of the negative log-posterior at the mode.
2.3 Uncertainty in NeRFs
In practical applications, NeRFs need to address uncertainty to improve reliability and robustness, and researchers have explored various approaches. ActiveNeRF [42] models the radiance value at each location as a Gaussian distribution instead of a single value, enabling NeRFs to provide reasonable high-variance predictions in unobserved regions. S-NeRF [43] utilizes Bayesian variational inference to quantify output uncertainty in tasks such as novel view synthesis and depth estimation. CF-NeRF [44] employs conditional normalizing flows and latent variable modelling to flexibly learn the radiance field distribution without prior assumptions; it estimates uncertainty by evaluating the predicted mean and variance during inference. Sünderhauf et al. [45] propose to quantify prediction uncertainty by independently training multiple NeRFs with different initial parameters on the same dataset [46, 47]. In addition to considering the variance in RGB colors, they introduce an epistemic uncertainty term based on termination probability to capture uncertainty in unobserved regions. FG-NeRF [48] decouples NeRFs into deterministic and probabilistic branches, using Flow-GAN [49] to model the probabilistic branch and avoid independence assumptions; additionally, it employs a patch-based adversarial training strategy. These designs enable more accurate uncertainty estimation in complex scenes. ProbNeRF [50] employs Hamiltonian Monte Carlo (HMC) [51] during testing to perform posterior inference over NeRF parameters for given views. It accurately infers the 3D geometry and appearance of objects from a single view or a few views while quantifying the associated uncertainty. Recursive-NeRF [52] draws inspiration from level of detail (LOD): each network layer predicts the uncertainty of query points; points with low uncertainty are output directly without passing to deeper layers, while points with high uncertainty are passed to the next, more powerful layer for further processing. This strategy significantly improves rendering efficiency while preserving the quality of view synthesis. Unlike the above methods, we introduce an additional perturbation field, which avoids retraining the model or modifying the training process and significantly reduces computational cost.

3 Scientific background
3.1 SeaThru
Traditional underwater image generation models based on atmospheric models result in significant errors. To address the problem of underwater image color restoration, SeaThru [23] employs a revised optical model that accounts for the attenuation and scattering characteristics of light propagating in water, thereby producing images that are closer to the true colors.
The underwater image generation model is described as follows:
$$ I_c = D_c + B_c \qquad (1) $$
where $c$ is the color channel, $I_c$ is the image captured by the camera, $D_c$ is the direct signal, and $B_c$ is the backscatter.
Traditional methods assume a uniform attenuation coefficient for light throughout the entire scene, which is a coarse approximation. In reality, the direct signal and the backscatter are governed by different attenuation coefficients, which also depend on factors such as object distance and reflectance. Therefore, SeaThru extends Eq. (1) as follows:
$$ I_c = J_c\, e^{-\beta_c^{D}(\mathbf{v}_D)\, z} + B_c^{\infty}\left(1 - e^{-\beta_c^{B}(\mathbf{v}_B)\, z}\right) \qquad (2) $$
where $J_c$ is the clear scene that would ideally be captured, $B_c^{\infty}$ represents the veiling light, $z$ is the distance from the object to the camera, and $\beta_c^{D}$ and $\beta_c^{B}$ are the attenuation and backscatter coefficients, respectively. The vectors $\mathbf{v}_D$ and $\mathbf{v}_B$ represent the dependence of $\beta_c^{D}$ and $\beta_c^{B}$ on factors such as distance and reflectance.
SeaThru analyzes multi-view images to estimate the distance of each pixel, thereby inferring the effects of scattering and attenuation. It then corrects color distortions in the image by applying these estimated values.
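To make Eq. (2) concrete, the following NumPy sketch applies the revised image formation model to a clear-scene color; the coefficient values below are illustrative only, not those estimated by SeaThru:

```python
import numpy as np

def seathru_forward(J, z, beta_D, beta_B, B_inf):
    """Eq. (2): simulate an underwater observation from a clear scene.

    J      : (..., 3) clear scene color that would be captured in air
    z      : (...)    distance from object to camera in meters
    beta_D : (3,)     per-channel attenuation coefficients (direct signal)
    beta_B : (3,)     per-channel backscatter coefficients
    B_inf  : (3,)     veiling light (backscatter at infinite distance)
    """
    z = np.asarray(z)[..., None]                       # broadcast over channels
    direct = J * np.exp(-beta_D * z)                   # attenuated direct signal D_c
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))  # accumulated backscatter B_c
    return direct + backscatter                        # observed image I_c

# Illustrative values: red attenuates fastest in water.
J = np.array([0.8, 0.6, 0.5])
I = seathru_forward(J, z=5.0,
                    beta_D=np.array([0.40, 0.12, 0.08]),
                    beta_B=np.array([0.35, 0.15, 0.10]),
                    B_inf=np.array([0.05, 0.25, 0.35]))
print(I)  # red strongly attenuated, blue dominated by veiling light
```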
3.2 NeRFs
NeRFs are a novel 3D reconstruction technique that reconstructs high-quality 3D scenes from collections of 2D images and camera poses. They achieve high-resolution rendering and image synthesis by representing a scene as a continuous volumetric radiance field and leveraging volume rendering techniques.
Specifically, as shown in Fig. 2, NeRFs utilize an MLP to learn a mapping function that maps any given 3D spatial point $\mathbf{x}$ and viewing direction $\mathbf{d}$ to a color value $\mathbf{c}$ and a density value $\sigma$:
$$ F_{\Theta} : (\mathbf{x}, \mathbf{d}) \mapsto (\mathbf{c}, \sigma) \qquad (3) $$
where $\Theta$ are the learnable parameters of the MLP.
To render the continuous 5D scene representation of color and density into a 2D image, NeRFs use the volumetric rendering equation to integrate the color and density values along a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$, where $\mathbf{o}$ is the camera center and $t \in [t_n, t_f]$.
The expected color $\hat{C}(\mathbf{r})$ obtained along the ray is given by:
$$ \hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,\mathrm{d}t \qquad (4) $$
Here, the integration is bounded between the near bound $t_n$ and the far bound $t_f$, and the accumulated transmittance between them is defined as:
$$ T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,\mathrm{d}s\right) \qquad (5) $$
In practice, NeRFs utilize the quadrature rule [53] to discretize the integration range into $N$ intervals $[t_i, t_{i+1}]$ (where $t_1 = t_n$ and $t_{N+1} = t_f$), and assume that the density and color are constant within each interval. Thus:
$$ \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i \qquad (6) $$
and
$$ T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right) \qquad (7) $$
where $\delta_i = t_{i+1} - t_i$ is the distance between adjacent sample points.
NeRFs adjust the parameters $\Theta$ of the MLP during training by minimizing the squared distance between the expected color $\hat{C}(\mathbf{r})$ and the ground truth $C(\mathbf{r})$.
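A minimal NumPy sketch of the quadrature in Eqs. (6)–(7) for a single ray, assuming the density and color samples are already available:

```python
import numpy as np

def render_ray(sigmas, colors, t_vals):
    """Discrete volume rendering along one ray, Eqs. (6)-(7).

    sigmas : (N,)   densities at the N sample points
    colors : (N, 3) RGB radiance at the N sample points
    t_vals : (N+1,) interval boundaries along the ray
    """
    deltas = np.diff(t_vals)                        # delta_i = t_{i+1} - t_i
    alpha = 1.0 - np.exp(-sigmas * deltas)          # opacity of each interval
    # T_i = exp(-sum_{j<i} sigma_j delta_j): shifted cumulative sum
    T = np.exp(-np.concatenate([[0.0], np.cumsum(sigmas * deltas)[:-1]]))
    weights = T * alpha                             # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # expected color C_hat(r)

# Example: 64 samples through a density bump centered at t = 4.
t = np.linspace(2.0, 6.0, 65)
mid = 0.5 * (t[:-1] + t[1:])
print(render_ray(10.0 * np.exp(-4.0 * (mid - 4.0) ** 2),
                 np.tile([0.9, 0.3, 0.2], (64, 1)), t))
```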
3.3 SeaThru-NeRF

Inspired by SeaThru [23], SeaThru-NeRF not only considers the opaque objects of traditional NeRFs but also treats the medium as a semi-transparent entity, introducing separate color and density parameters for the medium and the objects. This extends the capability of NeRFs to handle scattering media such as water (Fig. 3) through an augmented mapping function that outputs both object and medium quantities.
Specifically, it replaces the traditional single opaque object density with the sum of an object density $\sigma^{\mathrm{obj}}$ and a medium density $\sigma^{\mathrm{med}}$:
$$ \hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t) \left( \sigma^{\mathrm{obj}}(\mathbf{r}(t))\, \mathbf{c}^{\mathrm{obj}}(\mathbf{r}(t), \mathbf{d}) + \sigma^{\mathrm{med}}(\mathbf{r}(t))\, \mathbf{c}^{\mathrm{med}}(\mathbf{r}(t), \mathbf{d}) \right) \mathrm{d}t \qquad (8) $$
$$ T(t) = \exp\!\left(-\int_{t_n}^{t} \left( \sigma^{\mathrm{obj}}(\mathbf{r}(s)) + \sigma^{\mathrm{med}}(\mathbf{r}(s)) \right) \mathrm{d}s\right) \qquad (9) $$
where $\mathbf{c}^{\mathrm{obj}}$ and $\mathbf{c}^{\mathrm{med}}$ are the colors of the object and the medium at $\mathbf{r}(t)$, respectively. When $\sigma^{\mathrm{med}} = 0$, this simplifies to the traditional NeRFs case.
Adopting the same discretization strategy as NeRFs:
$$ \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-(\sigma_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}})\delta_i}\right) \frac{\sigma_i^{\mathrm{obj}} \mathbf{c}_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}} \mathbf{c}_i^{\mathrm{med}}}{\sigma_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}}} \qquad (10) $$
$$ T_i = \exp\!\left(-\sum_{j=1}^{i-1} \left(\sigma_j^{\mathrm{obj}} + \sigma_j^{\mathrm{med}}\right)\delta_j\right) \qquad (11) $$
According to the rendering equation Eq. (10), the color can be divided into object and medium components, reflecting their different contributions to the final color:
$$ \hat{C}(\mathbf{r}) = \hat{C}^{\mathrm{obj}}(\mathbf{r}) + \hat{C}^{\mathrm{med}}(\mathbf{r}) \qquad (12) $$
where
$$ \hat{C}^{\mathrm{obj}}(\mathbf{r}) = \sum_{i=1}^{N} T_i\, \frac{\sigma_i^{\mathrm{obj}}}{\sigma_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}}} \left(1 - e^{-(\sigma_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}})\delta_i}\right) \mathbf{c}_i^{\mathrm{obj}} \qquad (13) $$
and
$$ \hat{C}^{\mathrm{med}}(\mathbf{r}) = \sum_{i=1}^{N} T_i\, \frac{\sigma_i^{\mathrm{med}}}{\sigma_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}}} \left(1 - e^{-(\sigma_i^{\mathrm{obj}} + \sigma_i^{\mathrm{med}})\delta_i}\right) \mathbf{c}_i^{\mathrm{med}} \qquad (14) $$
To simplify the model, SeaThru-NeRF assumes the color and density of the medium are constant along the ray $\mathbf{r}$. Additionally, because $\sigma^{\mathrm{obj}} = 0$ before the object and $\sigma^{\mathrm{med}} \ll \sigma^{\mathrm{obj}}$ at the object [5], we have
$$ \hat{C}^{\mathrm{obj}}(\mathbf{r}) = \sum_{i=1}^{N} \hat{T}_i^{\mathrm{obj}}\, e^{-\sigma^{\mathrm{med}} t_i} \left(1 - e^{-\sigma_i^{\mathrm{obj}} \delta_i}\right) \mathbf{c}_i^{\mathrm{obj}} \qquad (15) $$
$$ \hat{C}^{\mathrm{med}}(\mathbf{r}) = \sum_{i=1}^{N} \hat{T}_i^{\mathrm{obj}}\, e^{-\sigma^{\mathrm{med}} t_i} \left(1 - e^{-\sigma^{\mathrm{med}} \delta_i}\right) \mathbf{c}^{\mathrm{med}} \qquad (16) $$
$$ \hat{T}_i^{\mathrm{obj}} = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j^{\mathrm{obj}} \delta_j\right) \qquad (17) $$
In the aforementioned discussion, the object component and the backscatter component used the same medium density $\sigma^{\mathrm{med}}$. According to SeaThru, however, the effective attenuation for the object component and the backscatter component is different. Therefore, in the final model, different medium parameters are used for each component: $\sigma^{\mathrm{attn}}$ for the object component $\hat{C}^{\mathrm{obj}}$ and $\sigma^{\mathrm{bs}}$ for the backscatter component $\hat{C}^{\mathrm{med}}$:
$$ \hat{C}^{\mathrm{obj}}(\mathbf{r}) = \sum_{i=1}^{N} \hat{T}_i^{\mathrm{obj}}\, e^{-\sigma^{\mathrm{attn}} t_i} \left(1 - e^{-\sigma_i^{\mathrm{obj}} \delta_i}\right) \mathbf{c}_i^{\mathrm{obj}} \qquad (18) $$
$$ \hat{C}^{\mathrm{med}}(\mathbf{r}) = \sum_{i=1}^{N} \hat{T}_i^{\mathrm{obj}}\, e^{-\sigma^{\mathrm{bs}} t_i} \left(1 - e^{-\sigma^{\mathrm{bs}} \delta_i}\right) \mathbf{c}^{\mathrm{med}} \qquad (19) $$
$$ \hat{T}_i^{\mathrm{obj}} = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j^{\mathrm{obj}} \delta_j\right) \qquad (20) $$
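A minimal NumPy sketch of the final rendering model in Eqs. (18)–(20), under the simplifying assumption of per-channel scalar medium parameters along a single ray; this is illustrative, not the authors' implementation:

```python
import numpy as np

def render_seathru(sig_obj, c_obj, t_vals, sigma_attn, sigma_bs, c_med):
    """Object + medium rendering along one ray, Eqs. (18)-(20).

    sig_obj    : (N,)   object densities at the sample points
    c_obj      : (N, 3) object radiance at the sample points
    t_vals     : (N+1,) interval boundaries; t_i taken at the left edges
    sigma_attn : (3,)   per-channel attenuation density (object component)
    sigma_bs   : (3,)   per-channel backscatter density (medium component)
    c_med      : (3,)   constant medium color along the ray
    """
    deltas = np.diff(t_vals)
    t_i = t_vals[:-1][:, None]                 # (N, 1), broadcasts over channels
    # Eq. (20): transmittance accumulated from the object density only
    T_obj = np.exp(-np.concatenate([[0.0], np.cumsum(sig_obj * deltas)[:-1]]))[:, None]
    # Eq. (18): object radiance attenuated by the medium
    C_obj = T_obj * np.exp(-sigma_attn * t_i) \
          * (1.0 - np.exp(-sig_obj[:, None] * deltas[:, None])) * c_obj
    # Eq. (19): backscatter of the medium accumulated along the ray
    C_med = T_obj * np.exp(-sigma_bs * t_i) \
          * (1.0 - np.exp(-sigma_bs * deltas[:, None])) * c_med
    return C_obj.sum(axis=0) + C_med.sum(axis=0)   # final pixel color, Eq. (12)
```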
Like NeRFs, SeaThru-NeRF optimizes the network parameters $\theta$ by minimizing the squared distance between the expected color $\hat{C}(\mathbf{r})$ and the ground truth $C(\mathbf{r})$ for each ray $\mathbf{r}$ sampled from the training set images $\mathcal{I}$. From a Bayesian perspective, this is equivalent to assuming a Gaussian likelihood and inferring $\theta^*$, the mode of the posterior distribution
$$ p(\theta \mid \mathcal{I}) \propto p(\mathcal{I} \mid \theta)\, p(\theta) \qquad (21) $$
which, by Bayes’ rule, is the same as minimizing the negative log-likelihood
$$ \theta^* = \arg\min_{\theta} \sum_{\mathbf{r} \in \mathcal{I}} \left\| \hat{C}(\mathbf{r}; \theta) - C(\mathbf{r}) \right\|_2^2 \qquad (22) $$
4 Uncertainty estimation
4.1 Neural Laplace approximations
The Laplace approximation starts from the optimal network weights $\theta^*$ of a pre-trained model, then approximates the posterior distribution of the network parameters by a multivariate Gaussian distribution centered at $\theta^*$, i.e.,
$$ p(\theta \mid \mathcal{I}) \approx \mathcal{N}(\theta^*, \Sigma) \qquad (23) $$
where $\Sigma$ is the covariance matrix.
According to the Laplace approximation, we consider the second-order Taylor expansion of the objective function $\mathcal{L}(\theta) = -\log p(\theta \mid \mathcal{I})$ at $\theta^*$:
$$ \mathcal{L}(\theta) \approx \mathcal{L}(\theta^*) + \frac{1}{2}\left(\theta - \theta^*\right)^{\top} H(\theta^*) \left(\theta - \theta^*\right) \qquad (24) $$
where $H(\theta^*)$ is the Hessian matrix of $\mathcal{L}$ at $\theta^*$, and since $\theta^*$ is an extremum point of $\mathcal{L}$, the first-order derivative term is zero.
By comparing Eq. (24) with the log-density of a multivariate Gaussian distribution, we can derive the expression for the covariance matrix $\Sigma$:
$$ \Sigma = H(\theta^*)^{-1} \qquad (25) $$
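For intuition, the following toy example applies Eqs. (23)–(25) to a one-parameter least-squares fit, where the Hessian is a scalar and the Laplace covariance is simply its inverse:

```python
import numpy as np

# Toy problem: fit y = a * x; the posterior over the single parameter a is
# p(a | data) ∝ exp(-L(a)) with L(a) = sum_i (a x_i - y_i)^2.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.1 * rng.standard_normal(50)

a_star = (x @ y) / (x @ x)   # mode of the posterior (closed form here)
H = 2.0 * (x @ x)            # Hessian of L at a*: d^2L/da^2 = 2 * sum x_i^2
sigma2 = 1.0 / H             # Eq. (25): Laplace covariance = H^{-1}

print(f"a* = {a_star:.3f}, Laplace std = {np.sqrt(sigma2):.4f}")
```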
However, directly equating $\Sigma$ with $H(\theta^*)^{-1}$ for the MLP weights is impracticable. The high correlation between model parameters makes accurately estimating the covariance matrix of the parameter distribution challenging. Additionally, even with an accurate $\Sigma$, converting it into a geometrically meaningful distribution necessitates an expensive sampling process, resulting in significant computational overhead.
To address these issues, we introduce a reparameterization method based on a perturbation field in Section 4.2, which is equivalent to adding a differentiable spatial deformation module before the MLP. This reparameterization method makes the parameters more amenable to Laplace approximation.



4.2 Modeling perturbations
As illustrated in Fig. 4, a space (green area) exists where the green line segment in Fig. 4(a) can be perturbed to any shape within this region without impacting the reconstruction loss (Fig. 4(b)). During model training, different random seeds may cause convergence to various configurations within this space. Consequently, for a pre-trained reconstruction model, due to the limited training data, certain regions of the scene have perturbable spaces that do not affect the reconstruction loss. The degree of allowable perturbation indicates the model’s uncertainty level in that region. By systematically applying perturbations throughout the entire 3D scene and measuring the maximum tolerated perturbation at each spatial position, one can obtain the model’s uncertainty distribution across the entire space (Fig. 4(c)).

Inspired by the above, we introduce a parametrized perturbation field $\mathcal{D}_{\theta'}$, which can be interpreted as a perturbation transformation applied to the coordinates before inputting them into the MLP (Fig. 5). The parameters of the perturbation field are represented as $\theta' \in \mathbb{R}^{M \times M \times M \times 3}$, where $M$ denotes the size of the grid used to store displacement vectors and $3$ is the dimension of each displacement vector. For any spatial coordinate $\mathbf{x}$, its perturbation is computed by trilinear interpolation of the displacement vectors at the neighboring grid points:
$$ \mathcal{D}_{\theta'}(\mathbf{x}) = \operatorname{trilerp}(\mathbf{x}, \theta') \qquad (26) $$
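A sketch of Eq. (26), assuming the scene is normalized to the unit cube and $\theta'$ is stored as an $(M, M, M, 3)$ array; the helper below is illustrative, not the authors' implementation:

```python
import numpy as np

def trilerp(x, grid):
    """Eq. (26): trilinearly interpolate a displacement field at point x.

    x    : (3,) query point with coordinates in [0, 1]
    grid : (M, M, M, 3) displacement vectors at the grid vertices
    """
    M = grid.shape[0]
    g = np.clip(x * (M - 1), 0, M - 1 - 1e-9)   # continuous grid coordinates
    i0 = np.floor(g).astype(int)                # lower corner indices
    f = g - i0                                  # fractional offsets in [0, 1)
    out = np.zeros(3)
    for dx in (0, 1):                           # accumulate the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

theta = np.zeros((8, 8, 8, 3))                    # M = 8, all-zero perturbation
print(trilerp(np.array([0.3, 0.7, 0.5]), theta))  # -> [0. 0. 0.] at the mode
```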
Next, we reparameterize the optimized MLP by perturbing each input coordinate:
$$ \tilde{\sigma}^{\mathrm{obj}}(\mathbf{x}) = \sigma^{\mathrm{obj}}_{\theta^*}\!\left(\mathbf{x} + \mathcal{D}_{\theta'}(\mathbf{x})\right) \qquad (27) $$
$$ \tilde{\mathbf{c}}^{\mathrm{obj}}(\mathbf{x}, \mathbf{d}) = \mathbf{c}^{\mathrm{obj}}_{\theta^*}\!\left(\mathbf{x} + \mathcal{D}_{\theta'}(\mathbf{x}), \mathbf{d}\right) \qquad (28) $$
where $\sigma^{\mathrm{obj}}_{\theta^*}$ and $\mathbf{c}^{\mathrm{obj}}_{\theta^*}$ are the optimized density and radiance of the object, respectively. As mentioned in Section 3.3, the density and color associated with the medium are constant at each spatial coordinate, so they are not affected by the perturbations.
Based on the reparameterized $\tilde{\sigma}^{\mathrm{obj}}$ and $\tilde{\mathbf{c}}^{\mathrm{obj}}$, the perturbed predicted pixel color can be obtained as:
$$ \tilde{C}(\mathbf{r}; \theta') = \tilde{C}^{\mathrm{obj}}(\mathbf{r}; \theta') + \hat{C}^{\mathrm{med}}(\mathbf{r}) \qquad (29) $$
where
$$ \tilde{C}^{\mathrm{obj}}(\mathbf{r}; \theta') = \sum_{i=1}^{N} \tilde{T}_i^{\mathrm{obj}}\, e^{-\sigma^{\mathrm{attn}} t_i} \left(1 - e^{-\tilde{\sigma}_i^{\mathrm{obj}} \delta_i}\right) \tilde{\mathbf{c}}_i^{\mathrm{obj}} \qquad (30) $$
and
$$ \hat{C}^{\mathrm{med}}(\mathbf{r}) = \sum_{i=1}^{N} \tilde{T}_i^{\mathrm{obj}}\, e^{-\sigma^{\mathrm{bs}} t_i} \left(1 - e^{-\sigma^{\mathrm{bs}} \delta_i}\right) \mathbf{c}^{\mathrm{med}} \qquad (31) $$
and the accumulated transmittance of the object can be obtained as:
$$ \tilde{T}_i^{\mathrm{obj}} = \exp\!\left(-\sum_{j=1}^{i-1} \tilde{\sigma}_j^{\mathrm{obj}} \delta_j\right) \qquad (32) $$
Since we still want the predicted color $\tilde{C}(\mathbf{r}; \theta')$ to be as close to the ground truth $C(\mathbf{r})$ as possible after reparameterization, we assume a likelihood function of the same form as the original model, i.e., $p(C \mid \mathbf{r}, \theta') \propto \exp\!\left(-\|\tilde{C}(\mathbf{r}; \theta') - C(\mathbf{r})\|_2^2\right)$. Moreover, since $\theta^*$ are the optimal parameters obtained during the training of the MLP network, the model has achieved optimal performance at these parameters. Therefore, when perturbations are applied, we expect them to neither significantly improve nor degrade the model’s performance. Hence, we impose a zero-mean Gaussian regularization prior on the new parameters: $p(\theta') \propto \exp\!\left(-\frac{\lambda}{2}\|\theta'\|_2^2\right)$.
Under these assumptions, the negative log-likelihood function of the posterior distribution is given by:
$$ \mathcal{L}(\theta') = -\log p(\theta' \mid \mathcal{I}) = \sum_{\mathbf{r} \in \mathcal{I}} \left\| \tilde{C}(\mathbf{r}; \theta') - C(\mathbf{r}) \right\|_2^2 + \frac{\lambda}{2} \left\| \theta' \right\|_2^2 + \mathrm{const} \qquad (33) $$
Here, the first term represents the reconstruction error of the pixel colors, and the second term is the regularization term.
When $\theta' = \mathbf{0}$, we have $\tilde{\sigma}^{\mathrm{obj}} = \sigma^{\mathrm{obj}}_{\theta^*}$ and $\tilde{\mathbf{c}}^{\mathrm{obj}} = \mathbf{c}^{\mathrm{obj}}_{\theta^*}$, thus $\tilde{C}(\mathbf{r}; \mathbf{0}) = \hat{C}(\mathbf{r})$. Therefore, $\theta' = \mathbf{0}$ minimizes the reconstruction error and is the mode of the posterior distribution $p(\theta' \mid \mathcal{I})$.
According to the Laplace approximation, we perform a second-order Taylor expansion around the mode, which yields
$$ \mathcal{L}(\theta') \approx \mathcal{L}(\mathbf{0}) + \frac{1}{2}\, \theta'^{\top} H\, \theta' \qquad (34) $$
where $H$ is the Hessian matrix of $\mathcal{L}$ evaluated at $\theta' = \mathbf{0}$.
4.3 Approximating H
Because directly computing the second derivatives in the Hessian matrix is computationally expensive, we approximate it using the Fisher information.
For any parameterized family of probability distributions $p(x \mid \theta)$, the expected Hessian of the negative log-likelihood with respect to the parameters equals the Fisher information $\mathcal{F}(\theta)$:
$$ \mathcal{F}(\theta) = -\,\mathbb{E}_{x \sim p(x \mid \theta)}\!\left[\nabla_{\theta}^{2}\, \ell(\theta; x)\right] \qquad (35) $$
where $\ell(\theta; x) = \log p(x \mid \theta)$ is defined as the log-likelihood function.
Furthermore, under reasonable regularity conditions, the Fisher information can also be defined as:
$$ \mathcal{F}(\theta) = \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[\nabla_{\theta}\, \ell(\theta; x)\, \nabla_{\theta}\, \ell(\theta; x)^{\top}\right] \qquad (36) $$
We use the random variable $x = (\mathbf{r}, C(\mathbf{r}))$ to correspond to a camera ray and its corresponding ground truth color. Thus,
$$ \nabla_{\theta'}\, \ell(\theta'; x) = -2\, J_{\mathbf{r}}^{\top}\, \mathbf{e}_{\mathbf{r}} \qquad (37) $$
where $\mathbf{e}_{\mathbf{r}} = \tilde{C}(\mathbf{r}; \theta') - C(\mathbf{r})$ is the color residual. And
$$ J_{\mathbf{r}} = \nabla_{\theta'}\, \tilde{C}(\mathbf{r}; \theta') \qquad (38) $$
represents the Jacobian matrix of the predicted color with respect to the parameters $\theta'$, which can be computed through backpropagation.
Furthermore, based on the properties of conditional expectations:
$$ \mathcal{F}(\theta') = \mathbb{E}_{\mathbf{r}}\!\left[\, 4\, J_{\mathbf{r}}^{\top}\; \mathbb{E}_{C \mid \mathbf{r}}\!\left[\mathbf{e}_{\mathbf{r}} \mathbf{e}_{\mathbf{r}}^{\top}\right] J_{\mathbf{r}} \right] \qquad (39) $$
According to the likelihood $p(C \mid \mathbf{r}, \theta') \propto \exp\!\left(-\|\tilde{C} - C\|_2^2\right)$, the inner expectation $\mathbb{E}_{C \mid \mathbf{r}}\!\left[\mathbf{e}_{\mathbf{r}} \mathbf{e}_{\mathbf{r}}^{\top}\right]$ is nothing more than the covariance $\tfrac{1}{2} I$, so:
$$ \mathcal{F}(\theta') = 2\, \mathbb{E}_{\mathbf{r}}\!\left[J_{\mathbf{r}}^{\top} J_{\mathbf{r}}\right] \qquad (40) $$
Here, we approximate the expectation by sampling a set of rays $\mathcal{R}$ from the training views:
$$ \mathbb{E}_{\mathbf{r}}\!\left[J_{\mathbf{r}}^{\top} J_{\mathbf{r}}\right] \approx \frac{1}{|\mathcal{R}|} \sum_{\mathbf{r} \in \mathcal{R}} J_{\mathbf{r}}^{\top} J_{\mathbf{r}} \qquad (41) $$
We can obtain, with the Jacobians evaluated at the mode $\theta' = \mathbf{0}$ and the normalization absorbed because the loss in Eq. (33) sums over rays:
$$ \mathcal{F}(\mathbf{0}) \approx 2 \sum_{\mathbf{r} \in \mathcal{R}} J_{\mathbf{r}}^{\top} J_{\mathbf{r}} \qquad (42) $$
From Eq. (35), adding the Hessian $\lambda I$ contributed by the log-prior, we can finally obtain:
$$ H \approx 2 \sum_{\mathbf{r} \in \mathcal{R}} J_{\mathbf{r}}^{\top} J_{\mathbf{r}} + \lambda I \qquad (43) $$
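In practice $J_{\mathbf{r}}$ is sparse and only the diagonal of $H$ is retained (Section 4.4), so Eq. (43) can be accumulated ray-by-ray without ever forming the full matrix. A minimal sketch, where `jacobian_fn` is a hypothetical helper returning $J_{\mathbf{r}}$ via backpropagation:

```python
import numpy as np

def accumulate_diag_hessian(jacobian_fn, rays, n_params, lam):
    """Accumulate diag(H) of Eq. (43) over a set of sampled rays.

    jacobian_fn : hypothetical helper mapping a ray to J_r of shape (3, n_params)
    rays        : iterable of sampled camera rays
    lam         : regularization strength lambda of the Gaussian prior
    """
    h_diag = np.full(n_params, lam)            # Hessian of the log-prior: lambda * I
    for r in rays:
        J = jacobian_fn(r)                     # (3, n_params); sparse in practice
        h_diag += 2.0 * (J ** 2).sum(axis=0)   # diag(2 J^T J) per ray
    return h_diag
```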

4.4 Spatial uncertainty
Since each entry of the parameter vector $\theta'$ corresponds to a vertex of the grid, its influence is limited to the cells containing that vertex, making $J_{\mathbf{r}}$ inherently sparse and thereby minimizing the number of related parameters. Similar to the approach by Ritter et al. [54], we approximate $\Sigma$ by considering only the diagonal elements of $H$:
$$ \Sigma \approx \left(\operatorname{diag}(H)\right)^{-1} = \left(2 \sum_{\mathbf{r} \in \mathcal{R}} \operatorname{diag}\!\left(J_{\mathbf{r}}^{\top} J_{\mathbf{r}}\right) + \lambda I\right)^{-1} \qquad (44) $$
where $\Sigma$ encodes the spatial uncertainty of the radiance field. Intuitively, it represents the extent to which the geometry of the NeRF can be altered without compromising the quality of reconstruction.
By computing the diagonal entries of $\Sigma$, we obtain a marginal variance vector $\boldsymbol{\sigma}^2(\mathbf{v}) \in \mathbb{R}^3$ at each grid vertex $\mathbf{v}$, which defines a spatial ellipsoid representing the region within which deformations can occur with minimal reconstruction cost. The norm $\|\boldsymbol{\sigma}^2(\mathbf{v})\|$ is a positive scalar that measures the local spatial uncertainty of the radiance field at that vertex.
Through this method, we can define the spatial uncertainty field $\mathcal{U}(\mathbf{x})$, expressed as (Fig. 6):
$$ \mathcal{U}(\mathbf{x}) = \operatorname{trilerp}\!\left(\mathbf{x}, \left\{\|\boldsymbol{\sigma}^2(\mathbf{v})\|\right\}_{\mathbf{v}}\right) \qquad (45) $$
Strictly speaking, as described above, $\mathcal{U}(\mathbf{x})$ measures the uncertainty at $\mathbf{x} + \mathcal{D}_{\theta'}(\mathbf{x})$, not at $\mathbf{x}$; however, for a trained SeaThru-NeRF, where the mode is $\theta' = \mathbf{0}$ and hence $\mathcal{D}_{\theta'}(\mathbf{x}) = \mathbf{0}$, these points are effectively the same.
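Putting Eqs. (44)–(45) together, a sketch that converts the accumulated diagonal Hessian into a per-vertex uncertainty grid, which can then be queried with the trilinear interpolation sketch from Section 4.2:

```python
import numpy as np

def uncertainty_grid(h_diag, M):
    """Eqs. (44)-(45): per-vertex scalar uncertainty from diag(H).

    h_diag : (M*M*M*3,) diagonal Hessian entries accumulated in Section 4.3
    """
    var = (1.0 / h_diag).reshape(M, M, M, 3)   # marginal variances, Eq. (44)
    return np.linalg.norm(var, axis=-1)        # per-vertex norm of the variances

# The continuous field U(x) of Eq. (45) is then trilerp(x, u_grid) with this
# scalar-valued grid, exactly as for the displacement field in Section 4.2.
```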
5 Numerical experiments
5.1 Experimental setup
5.1.1 Dataset
The real dataset consists of four image sets (Curasao, IUI3, Panama and JapaneseGradens) taken in three different marine environments. Prior to training, linear images were white-balanced and extreme pixel values in each channel were clipped by 0.5% to reduce noise. The average image resolution was reduced to 900×1400, and camera poses were estimated using COLMAP [55]. The synthetic dataset (uwSimulation) is based on the fern scene from the LLFF dataset [56], with underwater simulation effects added.
5.1.2 Implementation
In our experiments, we use fixed default values of the grid size $M$ and the regularization parameter $\lambda$, and set the number of iterations to 1000 (these choices are varied in the ablation studies of Section 5.3). For the real-world and synthetic datasets, all cases except Panama included two evaluation images, while Panama included one. We render the diagonal elements of $\Sigma$ as a new volumetric channel $\mathcal{U}$, where regions with larger values indicate higher uncertainty.
Moreover, besides SeaThru-NeRF, we also conducted experiments using a smaller network architecture, SeaThru-NeRF-lite. The two primarily differ in model size and training time: SeaThru-NeRF is a larger model, using approximately 23 GB of VRAM, and provides optimal image quality; SeaThru-NeRF-lite requires only about 7 GB of VRAM and, although its image quality is slightly lower, still produces clear and usable results.
5.1.3 Metric
We use the area under sparsification error (AUSE) [57, 58] to evaluate the performance of uncertainty estimation. A lower AUSE indicates that the model’s uncertainty estimates are well-calibrated, meaning that higher uncertainty predictions correspond to higher actual errors. We experimented with the AUSE values related to three types of errors (MSE, MAE, RMSE).
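As a reference, a minimal sketch of a common AUSE computation (the area between the sparsification curve ordered by predicted uncertainty and the oracle curve ordered by true error); the exact protocol of [57, 58] may differ in binning details:

```python
import numpy as np

def ause(errors, uncertainties, n_steps=100):
    """Area Under the Sparsification Error curve (lower is better).

    errors        : (P,) per-pixel errors, e.g. squared errors for AUSE_MSE
    uncertainties : (P,) predicted per-pixel uncertainties
    """
    def sparsification(order):
        # Mean remaining error after removing the top fraction ranked by `order`.
        ranked = errors[np.argsort(order)[::-1]]
        fracs = np.linspace(0.0, 0.99, n_steps)
        return np.array([ranked[int(f * len(ranked)):].mean() for f in fracs])

    by_unc = sparsification(uncertainties)   # remove most-uncertain pixels first
    by_err = sparsification(errors)          # oracle: remove largest errors first
    gap = (by_unc - by_err) / by_err[0]      # normalized sparsification error
    return float(np.mean(gap))               # discrete approximation of the area
```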
Additionally, we use three metrics to evaluate image quality. The structural similarity index (SSIM) [59] quantifies the structural similarity between two images and is sensitive to local structural changes; it ranges from −1 to 1, with higher values indicating better image quality. Peak signal-to-noise ratio (PSNR) [60] evaluates image quality by calculating the MSE between the original and rendered images and converting it to a logarithmic scale; higher values indicate better image quality. Learned perceptual image patch similarity (LPIPS) [61] measures the perceptual difference between two images, prioritizing perceptual similarity; lower LPIPS values indicate higher similarity.
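For completeness, PSNR reduces to a one-line formula for images scaled to [0, 1]; SSIM and LPIPS are typically computed with existing library implementations (e.g., scikit-image and the lpips package):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between rendered and reference images."""
    mse = np.mean((np.asarray(img) - np.asarray(ref)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```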
Table 1: AUSE metrics and image-quality comparison between our reparameterized model (ours) and the original model (base) on the five scenes.

| Scene | Model | AUSE_MSE | AUSE_MAE | AUSE_RMSE | PSNR (ours) | PSNR (base) | SSIM (ours) | SSIM (base) | LPIPS (ours) | LPIPS (base) |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.30523 | 0.47558 | 0.3435 | 31.24606 | 31.40482 | 0.92763 | 0.93049 | 0.07061 | 0.06950 |
| Curasao | seathru-nerf-lite | 0.31727 | 0.51540 | 0.35525 | 31.60670 | 31.52468 | 0.93857 | 0.93716 | 0.07888 | 0.07985 |
| IUI3 | seathru-nerf | 0.18834 | 0.22894 | 0.25691 | 28.76620 | 28.84461 | 0.84195 | 0.84548 | 0.16801 | 0.18165 |
| IUI3 | seathru-nerf-lite | 0.15008 | 0.22401 | 0.22501 | 28.33715 | 28.06076 | 0.84584 | 0.83927 | 0.30336 | 0.31994 |
| Panama | seathru-nerf | 0.29347 | 0.469596 | 0.35063 | 34.05641 | 34.13037 | 0.95733 | 0.95994 | 0.04356 | 0.04148 |
| Panama | seathru-nerf-lite | 0.31662 | 0.50313 | 0.37197 | 34.31783 | 34.57177 | 0.96276 | 0.95927 | 0.04249 | 0.05089 |
| JapaneseGradens | seathru-nerf | 0.04164 | 0.42559 | 0.13122 | 24.53097 | 24.61138 | 0.91968 | 0.92158 | 0.08829 | 0.08754 |
| JapaneseGradens | seathru-nerf-lite | 0.08290 | 0.47345 | 0.17870 | 25.11808 | 25.34451 | 0.90783 | 0.90202 | 0.11012 | 0.12594 |
| uwSimulation | seathru-nerf | 0.75512 | 0.48952 | 0.725569 | 24.92481 | 25.03314 | 0.79364 | 0.79775 | 0.20144 | 0.19730 |
| uwSimulation | seathru-nerf-lite | 0.88559 | 0.44384 | 0.76970 | 24.45243 | 25.02669 | 0.79431 | 0.80737 | 0.23303 | 0.22473 |
5.2 Results
Fig. 7 illustrates the uncertainty quantification results across the five datasets for SeaThru-NeRF and SeaThru-NeRF-lite. The uncertainty estimates are conveyed using color, where the color encodes the level of uncertainty: bluer colors indicate higher uncertainty, and redder colors indicate lower uncertainty. In addition, as shown in Table 1, our uncertainty quantification results exhibit excellent performance across the AUSE metrics. At the same time, as observed in Table 1, the PSNR, SSIM, and LPIPS metrics of our model show negligible differences compared to the original model (base). This is in line with our emphasis in Section 4.2 on performing uncertainty quantification without significantly impacting the reconstruction loss. It indicates that we can provide additional confidence information while maintaining the original reconstruction performance, which provides important technical support for practical applications. For example, in AUV navigation, high-quality environmental reconstruction is crucial for path planning and obstacle detection, and the additional uncertainty information can assist in making more cautious decisions when faced with uncertain or hazardous situations.
5.3 Ablation study
5.3.1 Influence of parameter M
In Fig. 8, Fig. 9 and Table 2, we present the uncertainty estimates at different values of the parameter $M$ (i.e., the grid size). $M$ determines the granularity of the spatial segmentation, thereby influencing the model’s ability to capture scene details and the accuracy of uncertainty estimation. Analyzing the uncertainty estimates at different grid sizes allows for a deeper understanding of the influence of $M$.
Table 2: Ablation on the grid size $M$: AUSE metrics for six $M$ settings (each group of three columns corresponds to one setting).

| Scene | Model | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.31296 | 0.46118 | 0.34732 | 0.30963 | 0.46208 | 0.34561 | 0.31188 | 0.47111 | 0.34674 |
| Curasao | seathru-nerf-lite | 0.31954 | 0.47982 | 0.35705 | 0.32095 | 0.4935 | 0.3573 | 0.32056 | 0.50304 | 0.35708 |
| IUI3 | seathru-nerf | 0.188496 | 0.23426 | 0.25706 | 0.18846 | 0.23242 | 0.25702 | 0.18822 | 0.23021 | 0.25684 |
| IUI3 | seathru-nerf-lite | 0.15069 | 0.22814 | 0.22551 | 0.15035 | 0.22653 | 0.22525 | 0.15008 | 0.22488 | 0.22504 |
| Panama | seathru-nerf | 0.29554 | 0.48057 | 0.35222 | 0.29417 | 0.47178 | 0.35118 | 0.29385 | 0.47117 | 0.35092 |
| Panama | seathru-nerf-lite | 0.31977 | 0.53474 | 0.37435 | 0.3186 | 0.51457 | 0.37348 | 0.31747 | 0.51243 | 0.37262 |
| JapaneseGradens | seathru-nerf | 0.04112 | 0.41962 | 0.13046 | 0.04146 | 0.40946 | 0.13097 | 0.04148 | 0.40756 | 0.13099 |
| JapaneseGradens | seathru-nerf-lite | 0.08387 | 0.46099 | 0.17996 | 0.08352 | 0.4519 | 0.17947 | 0.08331 | 0.4529 | 0.17921 |
| uwSimulation | seathru-nerf | 0.74244 | 0.46726 | 0.71863 | 0.75712 | 0.46848 | 0.72628 | 0.75442 | 0.48468 | 0.72511 |
| uwSimulation | seathru-nerf-lite | 0.85303 | 0.43378 | 0.75572 | 0.86718 | 0.43215 | 0.76192 | 0.88133 | 0.43807 | 0.76797 |

| Scene | Model | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.30743 | 0.47222 | 0.3446 | 0.30523 | 0.47558 | 0.3435 | 0.30819 | 0.50488 | 0.34504 |
| Curasao | seathru-nerf-lite | 0.31838 | 0.51017 | 0.35592 | 0.31727 | 0.515399 | 0.35525 | 0.31773 | 0.514298 | 0.35596 |
| IUI3 | seathru-nerf | 0.18826 | 0.22971 | 0.25686 | 0.18834 | 0.22894 | 0.25691 | 0.18834 | 0.23043 | 0.25693 |
| IUI3 | seathru-nerf-lite | 0.15006 | 0.22467 | 0.22501 | 0.15008 | 0.22401 | 0.22501 | 0.14999 | 0.22435 | 0.22496 |
| Panama | seathru-nerf | 0.29369 | 0.47242 | 0.35079 | 0.29347 | 0.469596 | 0.35063 | 0.29354 | 0.46585 | 0.35068 |
| Panama | seathru-nerf-lite | 0.31695 | 0.50764 | 0.37222 | 0.31662 | 0.50313 | 0.37197 | 0.31623 | 0.49484 | 0.37166 |
| JapaneseGradens | seathru-nerf | 0.04182 | 0.42031 | 0.13151 | 0.04164 | 0.42559 | 0.13122 | 0.042 | 0.4463 | 0.13178 |
| JapaneseGradens | seathru-nerf-lite | 0.08336 | 0.4627 | 0.17928 | 0.0829 | 0.47345 | 0.1787 | 0.08328 | 0.48051 | 0.17919 |
| uwSimulation | seathru-nerf | 0.75756 | 0.48492 | 0.72675 | 0.75512 | 0.48952 | 0.725569 | 0.75617 | 0.50209 | 0.72618 |
| uwSimulation | seathru-nerf-lite | 0.88521 | 0.44128 | 0.76963 | 0.88559 | 0.44384 | 0.7697 | 0.88561 | 0.45851 | 0.76972 |
Through Table 2, it can be observed that using extremely low grid sizes leads to insufficient uncertainty estimates. A low grid size means the spatial segmentation is coarser, resulting in a loss of detailed information in the image. This loss of information prevents the model from adequately capturing the complex features of the scene, especially in regions with high-frequency variations and complex geometry. In this case, the model may exhibit overconfidence, thereby underestimating the actual uncertainty. Therefore, although the computational cost is lower at low grid sizes, the uncertainty estimates are also less accurate and cannot provide reliable confidence information.
As the grid size increases, the effectiveness of uncertainty estimation gradually improves, and once the grid size reaches a certain level, the uncertainty estimation results achieve optimal performance. A higher grid size implies finer spatial segmentation, which helps capture more image details and scene features. In this case, the model can more accurately identify and quantify uncertainties, particularly in complex and variable regions. Therefore, uncertainty estimation at higher grid sizes is more reliable and better reflects the uncertainties in the model’s predictions.
Specifically, it is experimentally found that uncertainty estimation reaches an optimal equilibrium point at a moderate grid size. With this grid size, the model is able to make full use of the spatial segmentation accuracy to accurately capture image details and scene features, thus providing high-quality uncertainty estimation. At the same time, the computational resources and time costs remain within an acceptable range, achieving an optimal balance between performance and efficiency. In other words, a moderate value of $M$ can provide high-quality uncertainty estimation while ensuring computational efficiency.
However, when the grid size continues to increase beyond this point, the benefits of uncertainty estimation start to diminish. This phenomenon can be attributed to several factors. First, a significantly higher grid size brings substantial increases in computational resources and time. Although the increased spatial segmentation precision can capture more detailed information, these additional details do not significantly improve the accuracy of uncertainty estimation. In other words, beyond a certain grid size, the increased computational complexity and time cost do not yield corresponding performance gains, potentially leading to resource wastage. Additionally, excessively high grid sizes may introduce extra noise and uncertainty: despite the finer spatial segmentation, the extra partitions may introduce more noise in complex scenes, thereby affecting the model’s uncertainty estimation.
In summary, the parameter $M$ plays a crucial role in our method. An appropriate grid size not only improves the accuracy of uncertainty estimation, but also strikes an optimal balance between computational resources and time cost. This observation provides an important reference for parameter selection in practical applications.
5.3.2 Influence of parameter $\lambda$
In uncertainty quantification tasks, the choice of the regularization parameter $\lambda$ is often a critical factor. To validate the sensitivity and robustness of this parameter selection, we conducted ablation experiments, selecting multiple different $\lambda$ values and observing the changes in the AUSE metrics.
Table 3: Ablation on the regularization parameter $\lambda$: AUSE metrics for nine $\lambda$ settings (each group of three columns corresponds to one setting).

| Scene | Model | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.305097 | 0.475915 | 0.343446 | 0.305877 | 0.475641 | 0.343836 | 0.305227 | 0.475575 | 0.343502 |
| Curasao | seathru-nerf-lite | 0.317300 | 0.514976 | 0.355272 | 0.317226 | 0.515023 | 0.355213 | 0.317270 | 0.515399 | 0.355254 |
| IUI3 | seathru-nerf | 0.188341 | 0.228918 | 0.256907 | 0.188336 | 0.228961 | 0.256903 | 0.188344 | 0.228937 | 0.256910 |
| IUI3 | seathru-nerf-lite | 0.150081 | 0.224004 | 0.225014 | 0.150080 | 0.223975 | 0.225012 | 0.150083 | 0.224009 | 0.225015 |
| Panama | seathru-nerf | 0.293607 | 0.469783 | 0.350732 | 0.293460 | 0.469740 | 0.350620 | 0.293474 | 0.469596 | 0.350631 |
| Panama | seathru-nerf-lite | 0.316573 | 0.503292 | 0.371936 | 0.316534 | 0.503354 | 0.371907 | 0.316621 | 0.503130 | 0.371973 |
| JapaneseGradens | seathru-nerf | 0.041607 | 0.422099 | 0.131177 | 0.041624 | 0.423969 | 0.131204 | 0.041635 | 0.425591 | 0.131222 |
| JapaneseGradens | seathru-nerf-lite | 0.082522 | 0.470332 | 0.178238 | 0.082702 | 0.472465 | 0.178463 | 0.082895 | 0.473451 | 0.178703 |
| uwSimulation | seathru-nerf | 0.758189 | 0.490118 | 0.727131 | 0.756315 | 0.489870 | 0.726176 | 0.755125 | 0.489520 | 0.725569 |
| uwSimulation | seathru-nerf-lite | 0.885232 | 0.443938 | 0.769525 | 0.885482 | 0.443795 | 0.769653 | 0.885589 | 0.443840 | 0.769704 |

| Scene | Model | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.305245 | 0.475608 | 0.343517 | 0.304875 | 0.475516 | 0.343322 | 0.305049 | 0.475232 | 0.343407 |
| Curasao | seathru-nerf-lite | 0.317319 | 0.514914 | 0.355267 | 0.317206 | 0.515218 | 0.355213 | 0.317342 | 0.514882 | 0.355284 |
| IUI3 | seathru-nerf | 0.188341 | 0.228972 | 0.256908 | 0.188351 | 0.228999 | 0.256916 | 0.188357 | 0.229069 | 0.256920 |
| IUI3 | seathru-nerf-lite | 0.150083 | 0.223981 | 0.225015 | 0.150082 | 0.224050 | 0.225014 | 0.150082 | 0.224060 | 0.225014 |
| Panama | seathru-nerf | 0.293440 | 0.469665 | 0.350604 | 0.293421 | 0.469665 | 0.350590 | 0.293327 | 0.469757 | 0.350516 |
| Panama | seathru-nerf-lite | 0.316556 | 0.503609 | 0.371923 | 0.316557 | 0.503048 | 0.371922 | 0.316608 | 0.502941 | 0.371962 |
| JapaneseGradens | seathru-nerf | 0.041650 | 0.427574 | 0.131244 | 0.041664 | 0.429283 | 0.131266 | 0.041687 | 0.431087 | 0.131301 |
| JapaneseGradens | seathru-nerf-lite | 0.083091 | 0.473875 | 0.178947 | 0.083290 | 0.474727 | 0.179195 | 0.083499 | 0.475687 | 0.179453 |
| uwSimulation | seathru-nerf | 0.753546 | 0.489448 | 0.724759 | 0.752280 | 0.489259 | 0.724111 | 0.751624 | 0.489032 | 0.723774 |
| uwSimulation | seathru-nerf-lite | 0.885756 | 0.443877 | 0.769787 | 0.885542 | 0.443982 | 0.769699 | 0.884586 | 0.443875 | 0.769307 |

| Scene | Model | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE | AUSE_MSE | AUSE_MAE | AUSE_RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.306121 | 0.475468 | 0.343931 | 0.305626 | 0.475344 | 0.343692 | 0.306133 | 0.475322 | 0.343947 |
| Curasao | seathru-nerf-lite | 0.317286 | 0.515510 | 0.355253 | 0.317258 | 0.515088 | 0.355234 | 0.317231 | 0.515209 | 0.355195 |
| IUI3 | seathru-nerf | 0.188358 | 0.229056 | 0.256921 | 0.188373 | 0.229157 | 0.256932 | 0.188397 | 0.229244 | 0.256949 |
| IUI3 | seathru-nerf-lite | 0.150068 | 0.224163 | 0.225000 | 0.150075 | 0.224310 | 0.224998 | 0.150094 | 0.224451 | 0.224999 |
| Panama | seathru-nerf | 0.293328 | 0.469565 | 0.350517 | 0.293288 | 0.469674 | 0.350486 | 0.293303 | 0.469529 | 0.350498 |
| Panama | seathru-nerf-lite | 0.316509 | 0.503183 | 0.371886 | 0.316584 | 0.502911 | 0.371944 | 0.316537 | 0.503446 | 0.371908 |
| JapaneseGradens | seathru-nerf | 0.041713 | 0.433563 | 0.131342 | 0.041735 | 0.436043 | 0.131374 | 0.041759 | 0.437183 | 0.131410 |
| JapaneseGradens | seathru-nerf-lite | 0.083736 | 0.476267 | 0.179745 | 0.083949 | 0.476716 | 0.180006 | 0.084140 | 0.476693 | 0.180240 |
| uwSimulation | seathru-nerf | 0.750872 | 0.488936 | 0.723385 | 0.749251 | 0.488403 | 0.722554 | 0.746942 | 0.488111 | 0.721370 |
| uwSimulation | seathru-nerf-lite | 0.884037 | 0.444256 | 0.769081 | 0.877413 | 0.444275 | 0.766256 | 0.873298 | 0.443020 | 0.764476 |
From Fig. 10 and Fig. 11, we can observe that the change in uncertainty is not significant. At the same time, the experimental results in Table 3 show that, despite $\lambda$ values spanning several orders of magnitude, the range of changes in the AUSE metrics is very limited. This indicates that our method maintains good performance over a wide range of $\lambda$ values, and the minimal changes in the AUSE metrics further demonstrate the robustness and stability of the method. The method shows stable performance when handling different levels of regularization, ensuring that the model can still provide reliable results under different settings.
The insensitivity to the choice of $\lambda$ provides significant convenience for practical applications. Since the model maintains good performance across a relatively wide range of $\lambda$ values, users do not need to fine-tune $\lambda$ excessively during actual operation. This characteristic greatly simplifies the use of the model and reduces the cost of parameter optimization in different application scenarios. Meanwhile, the robustness of the model is enhanced, enabling it to operate reliably in various complex and dynamic environments. This is particularly important for applications that require rapid deployment and real-time response, as it reduces the difficulty of adapting the model to different datasets and task requirements, enhancing the method’s versatility and operability. Consequently, it provides a solid technical foundation for broad application.
5.3.3 Influence of iterations
In experiments, the number of iterations determines the duration of the optimization process. More iterations typically mean more opportunities for the model to update parameters, which may lead to finding better solutions. Therefore, higher iteration counts may improve model accuracy and stability. However, excessively high iteration counts can also significantly increase training time, affecting computational efficiency. To analyze the impact of iteration counts in depth, we conducted experiments with different iteration counts.
Table 4: Ablation on the number of iterations: AUSE metrics at 100, 200, 300 (first sub-table) and 1000, 1500, 2000 (second sub-table) iterations.

| Scene | Model | AUSE_MSE (100) | AUSE_MAE (100) | AUSE_RMSE (100) | AUSE_MSE (200) | AUSE_MAE (200) | AUSE_RMSE (200) | AUSE_MSE (300) | AUSE_MAE (300) | AUSE_RMSE (300) |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.305659 | 0.473639 | 0.343713 | 0.305385 | 0.475084 | 0.343568 | 0.304905 | 0.475541 | 0.343346 |
| Curasao | seathru-nerf-lite | 0.317328 | 0.514631 | 0.355282 | 0.317231 | 0.514994 | 0.355215 | 0.317237 | 0.514179 | 0.355210 |
| IUI3 | seathru-nerf | 0.188374 | 0.228354 | 0.256929 | 0.188378 | 0.228885 | 0.256934 | 0.188370 | 0.228906 | 0.256928 |
| IUI3 | seathru-nerf-lite | 0.150142 | 0.223699 | 0.225054 | 0.150125 | 0.223928 | 0.225044 | 0.150100 | 0.223972 | 0.225026 |
| Panama | seathru-nerf | 0.293459 | 0.470040 | 0.350619 | 0.293425 | 0.469367 | 0.350593 | 0.293382 | 0.469639 | 0.350559 |
| Panama | seathru-nerf-lite | 0.316639 | 0.502895 | 0.371987 | 0.316584 | 0.503721 | 0.371944 | 0.316566 | 0.502956 | 0.371930 |
| JapaneseGradens | seathru-nerf | 0.041648 | 0.425957 | 0.131242 | 0.041638 | 0.425930 | 0.131226 | 0.041625 | 0.426061 | 0.131207 |
| JapaneseGradens | seathru-nerf-lite | 0.083123 | 0.473766 | 0.178990 | 0.083001 | 0.473642 | 0.178834 | 0.082956 | 0.473534 | 0.178781 |
| uwSimulation | seathru-nerf | 0.752581 | 0.486382 | 0.724259 | 0.752934 | 0.488215 | 0.724438 | 0.755410 | 0.488788 | 0.725712 |
| uwSimulation | seathru-nerf-lite | 0.885621 | 0.436375 | 0.769709 | 0.885757 | 0.440941 | 0.769786 | 0.885783 | 0.442063 | 0.769803 |

| Scene | Model | AUSE_MSE (1000) | AUSE_MAE (1000) | AUSE_RMSE (1000) | AUSE_MSE (1500) | AUSE_MAE (1500) | AUSE_RMSE (1500) | AUSE_MSE (2000) | AUSE_MAE (2000) | AUSE_RMSE (2000) |
|---|---|---|---|---|---|---|---|---|---|---|
| Curasao | seathru-nerf | 0.305227 | 0.475575 | 0.343502 | 0.305797 | 0.475731 | 0.343802 | 0.304581 | 0.475949 | 0.343189 |
| Curasao | seathru-nerf-lite | 0.317270 | 0.515399 | 0.355254 | 0.317271 | 0.515034 | 0.355237 | 0.317286 | 0.515187 | 0.355257 |
| IUI3 | seathru-nerf | 0.188344 | 0.228937 | 0.256910 | 0.188335 | 0.229022 | 0.256904 | 0.188324 | 0.229073 | 0.256896 |
| IUI3 | seathru-nerf-lite | 0.150083 | 0.224009 | 0.225015 | 0.150066 | 0.224009 | 0.225003 | 0.150056 | 0.224019 | 0.224996 |
| Panama | seathru-nerf | 0.293474 | 0.469596 | 0.350631 | 0.293464 | 0.469693 | 0.350622 | 0.293543 | 0.469767 | 0.350681 |
| Panama | seathru-nerf-lite | 0.316621 | 0.503130 | 0.371973 | 0.316590 | 0.503392 | 0.371949 | 0.316549 | 0.503300 | 0.371918 |
| JapaneseGradens | seathru-nerf | 0.041635 | 0.425591 | 0.131222 | 0.041635 | 0.425580 | 0.131222 | 0.041636 | 0.425358 | 0.131223 |
| JapaneseGradens | seathru-nerf-lite | 0.082895 | 0.473451 | 0.178703 | 0.082863 | 0.473325 | 0.178665 | 0.082821 | 0.473129 | 0.178611 |
| uwSimulation | seathru-nerf | 0.755125 | 0.489520 | 0.725569 | 0.755997 | 0.490056 | 0.726017 | 0.756580 | 0.490418 | 0.726316 |
| uwSimulation | seathru-nerf-lite | 0.885589 | 0.443840 | 0.769704 | 0.885517 | 0.444950 | 0.769669 | 0.885512 | 0.445619 | 0.769669 |
From the experimental results in Fig. 12, Fig. 13 and Table 4, we observed certain regularities in the influence of iteration counts on uncertainty quantification metrics for both real and synthetic datasets. Generally speaking, increasing the number of iterations improves performance to some extent, but fluctuations in performance were also observed under certain conditions.
In real datasets, the model’s performance across various metrics gradually improves with increasing iteration counts until optimal performance is achieved, indicating effective capture of dataset complexity and thorough training. However, in synthetic datasets, the model often performs better in initial iterations than later ones, suggesting that early iterations adequately capture the dataset’s features, while excessive iterations may lead to overfitting.
In the synthetic dataset, the SeaThru-NeRF model has the smallest AUSE_MSE, AUSE_MAE, and AUSE_RMSE metrics at an iteration number of 100, indicating that the model is able to converge quickly and achieve the best performance at a smaller number of iterations. Also, the SeaThru-NeRF-lite model has the smallest AUSE_MAE metrics at an iteration number of 100, indicating that the model is able to perform well in some metrics during the initial iteration phase. However, as the number of iterations increases, the performance metrics do not continue to improve and even deteriorate. This phenomenon may be caused by several factors.
Firstly, the rapid initial convergence of the model may be due to the fact that within a small number of iterations, the model is able to efficiently tune the parameters to quickly find a better locally optimal solution, which results in excellent performance metrics. For synthetic datasets, the initial fast tuning may be sufficient to capture the main features of the data and achieve better performance. However, as the number of iterations increases, the model may begin to overfit the training data, resulting in performance metrics that no longer improve or even deteriorate. This is particularly evident in the SeaThru-NeRF model, suggesting that after a certain threshold number of iterations, the model overfits the details of the training data and instead reduces its ability to generalise over the test data.
Moreover, synthetic datasets often possess specific patterns and characteristics captured by the model in early iterations. For the SeaThru-NeRF-lite, minimal AUSE_MSE and AUSE_RMSE were observed at 2000 iterations, indicating improved adaptation to dataset characteristics over extended training, despite its overall performance being inferior to SeaThru-NeRF. This reflects the impact of model scale differences during training, where the larger SeaThru-NeRF model benefits from rapid convergence due to its complex structure, while the smaller SeaThru-NeRF-lite model may require more iterations to achieve comparable results.
Overall, the impact of iteration counts on model performance exhibits certain regularities, with longer iterations generally enhancing performance and robustness. However, optimal iteration counts vary depending on dataset and model architecture, underscoring the importance of selecting suitable iteration counts based on specific application scenarios and data characteristics. Rational iteration count selection can enhance model performance and stability, thereby providing more reliable results in practical applications.
5.4 Discussion
5.4.1 Influence of model architecture
From Figs. 7-13 and Tables 1-4, it can be noted that SeaThru-NeRF-lite is slightly inferior to SeaThru-NeRF in terms of uncertainty estimates. This discrepancy is due to a combination of factors.
Firstly, SeaThru-NeRF is a larger model, providing higher capacity and representational power. Consequently, it captures more details and complex features, leading to higher quality images and more accurate uncertainty estimates. In contrast, SeaThru-NeRF-lite, being smaller in size with limited parameters, has weaker representational power and detail capture, resulting in slightly inferior uncertainty estimates.
Secondly, SeaThru-NeRF benefits from longer training times and deeper neural network layers during training, better fitting the data distribution and reducing model uncertainty. SeaThru-NeRF-lite is simplified in terms of iterations and network architecture, leading to its comparatively lower performance in uncertainty quantification.
Thirdly, the larger model capacity allows SeaThru-NeRF to better fit the training data, demonstrating more stable and superior performance across different parameter settings. Specifically, SeaThru-NeRF effectively utilizes its higher parameter capacity to optimize the model when facing various $M$, $\lambda$, and iteration settings, excelling in the AUSE_MSE, AUSE_MAE, and AUSE_RMSE metrics. In contrast, SeaThru-NeRF-lite, with its smaller parameter capacity, struggles with complex data or tasks requiring high precision, resulting in inferior performance across all metrics.
Furthermore, SeaThru-NeRF exhibits excellent performance with both synthetic and real datasets. This indicates that the larger model generalizes better to different data distributions, effectively modeling and predicting both low-noise synthetic data and more uncertain real data. This enhanced generalization ability also enables SeaThru-NeRF to maintain low error metrics under various experimental conditions.
It is worth noting that SeaThru-NeRF-lite was designed as a lightweight model to be used in resource-constrained environments. Therefore, despite its inferior performance compared to SeaThru-NeRF, it still offers unique advantages and application value in scenarios where computational resources and model performance must be balanced.
In summary, the superior performance of the SeaThru-NeRF model under various experimental conditions can be attributed to its larger model capacity, stronger feature representation ability, and better generalization performance. In practical applications, selecting the appropriate model architecture based on specific needs and resource availability is crucial for achieving optimal performance.
5.4.2 Influence of datasets
In the synthetic datasets, we found that the three AUSE metrics are generally higher than those in the real datasets. This can be analyzed in detail from the perspectives of data complexity, noise and data quality.
On the one hand, synthetic datasets are usually generated based on specific rules and models. Although they exhibit consistency and predictability, the simplified assumptions and lack of real-world complexity during the generation process may cause the model to overfit when dealing with synthetic datasets. This overfitting phenomenon can lead to increased prediction errors for the model on synthetic datasets, thereby raising the values of uncertainty estimation. In other words, the synthetic datasets may be overly idealized, failing to fully reflect the complexity and diversity of the real world, resulting in insufficient generalization ability of the model on these data, thus performing poorly in uncertainty estimation.
On the other hand, real datasets typically contain more noise and unpredictable factors, such as lighting changes, occlusions, and measurement errors. These factors can lead to larger errors for the model on real datasets but also force the model to learn more generalization capabilities during the training process, thereby improving the accuracy of uncertainty estimation. Although the data complexity and diversity in real datasets are higher, the model can exhibit lower AUSE values on these complex data through better generalization ability. The various uncertainties and randomness in real datasets, on the contrary, make the model more adaptable and robust when facing unknown data.
In short, the higher AUSE values in synthetic datasets compared to real datasets indicate that the model’s performance on synthetic datasets is inferior to that on real datasets. This is mainly due to the simplified assumptions and lack of real-world complexity in synthetic datasets, leading to overfitting of the model on synthetic datasets, thus performing poorly in prediction error and uncertainty estimation. In contrast, the diversity and complexity of real datasets compel the model to possess better generalization capabilities, allowing it to more accurately estimate uncertainties and reduce prediction errors in complex data environments.
5.4.3 Influence of errors
Tables 1-4 document the results of our main experiments as well as the results of the ablation experiments. In these experiments, we calculated the AUSE values related to three types of errors (MSE, MAE, RMSE). The experimental results indicate that the AUSE_MAE values are generally higher than the corresponding AUSE_MSE and AUSE_RMSE values under the same conditions.
Firstly, it is essential to understand the calculation methods and differences among AUSE_MSE, AUSE_MAE, and AUSE_RMSE. AUSE_MSE and AUSE_RMSE measure the overall performance of model uncertainty using the MSE and RMSE, respectively. Since MSE and RMSE are more sensitive to larger errors, especially outliers, even a few large errors can significantly increase the overall value in these metrics. In contrast, AUSE_MAE is based on MAE, treating each error value equally without excessive sensitivity to outliers.
The observation that AUSE_MAE values are generally higher than AUSE_MSE and AUSE_RMSE may be due to the presence of some large errors in the actual error distribution. These large errors are amplified in the calculation of AUSE_MSE and AUSE_RMSE, resulting in lower overall values. However, AUSE_MAE, being less sensitive to these large errors, shows relatively higher values. Thus, AUSE_MAE provides a smoother measurement method when evaluating uncertainty quantification performance, avoiding the overemphasis on individual large errors.
Second, different datasets and models produce different error distributions during training. In synthetic datasets, the model may capture the overall structure of the data more easily, yielding a more concentrated error distribution and consequently higher AUSE_MAE values. In real datasets, the error distribution tends to be more dispersed with more outliers; these outliers dominate MSE and RMSE and, once flagged by the uncertainty, are removed early during sparsification, which lowers AUSE_MSE and AUSE_RMSE and leaves AUSE_MAE relatively higher.
Finally, the impact of the experimental parameter settings must be considered. Variations in the hyperparameters and in the number of training iterations affect how well the model is trained and how closely it fits the data. These changes directly influence the distribution and magnitude of the errors and thereby the value of each metric. When evaluating the influence of these parameters, the characteristics of the different metrics and of the data distributions should be considered together to obtain a complete picture of the model's performance.
In conclusion, the fact that AUSE_MAE values are generally higher than AUSE_MSE and AUSE_RMSE reflects both the varying quality of the model's fit under different settings and the differing sensitivity of these metrics to errors. In practical applications, the evaluation metric should be chosen according to the specific application scenario and data characteristics so that model performance and uncertainty are measured appropriately.
5.5 Applications: clean up
In this work, we utilize the Laplace approximation to compute the covariance matrix of the perturbation-field parameters. The diagonal elements of this covariance matrix represent the uncertainty along each dimension. We render these diagonal elements as a new uncertainty volume, in which regions with larger values indicate higher uncertainty. These high-uncertainty areas often correspond to artifacts in the rendering results. By applying a threshold to this volume, we can identify and remove the artifacts, yielding cleaner reconstructions (Fig. 14). Consequently, the method not only quantifies the uncertainty along each dimension but also serves to post-process the reconstruction results.
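As a simple illustration of this clean-up step, the sketch below thresholds a sampled uncertainty volume and suppresses the density of high-uncertainty voxels before rendering. The array names, grid resolution, and quantile-based threshold are hypothetical choices for the example, not the exact values used in our experiments.

import numpy as np

def clean_density(density, uncertainty, tau):
    # Zero out voxels whose estimated uncertainty exceeds the threshold
    # tau; such voxels typically render as floaters or artifacts.
    return density * (uncertainty <= tau)

# `sigma` plays the role of a sampled density grid and `u` that of the
# uncertainty volume built from the covariance-matrix diagonal.
sigma = np.random.rand(128, 128, 128).astype(np.float32)
u = np.random.rand(128, 128, 128).astype(np.float32)
tau = np.quantile(u, 0.95)        # e.g. keep the 95% most certain voxels
sigma_clean = clean_density(sigma, u, tau)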
6 Conclusions
In this paper, we introduce a spatial perturbation field based on Bayes' rays to quantify the spatial uncertainty of an underwater 3D reconstruction represented by neural radiance fields in a scattering medium. The perturbation field applies a small perturbation to the input coordinates and feeds the perturbed coordinates into SeaThru-NeRF to recompute the colours and densities of the object component, producing the reconstruction after perturbation. The uncertainty of each spatial location is then modelled with the Laplace approximation, based on the difference between the original and perturbed reconstructions. Moreover, by rendering the estimated spatial uncertainty field as an additional colour channel, we can visualise which regions of the scene have higher uncertainty. Furthermore, using the uncertainty field, we can remove artifacts from the rendered underwater scene by simple thresholding. Numerical experiments show that our method explicitly infers the spatial uncertainty of the model on both synthetic and real scenes and exploits this uncertainty to improve reconstruction quality. The method will benefit downstream tasks in ocean exploration and navigation, such as underwater reconnaissance and security surveillance, underwater navigation and localisation, and underwater infrastructure inspection. The current work has limitations: we assume that the only light source is natural ambient light. In deep-sea regions, however, the effects of artificial lighting and multiple scattering must be considered owing to poor visibility. We will investigate more diverse scenarios and medium-parameter estimation methods in future work.
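For completeness, the following PyTorch sketch outlines the core computation summarized above: a trilinearly interpolated perturbation grid displaces the input coordinates, and the diagonal of the Hessian of the photometric loss is approximated by squared gradients plus a prior term, then inverted to obtain per-parameter variances. The renderer, grid resolution, and prior weight lam are placeholder assumptions; this is an illustrative sketch, not our full SeaThru-NeRF pipeline.

import torch
import torch.nn.functional as F

def sample_grid(grid, coords):
    # grid: (1, 3, D, H, W) perturbation vectors; coords: (N, 3) in [-1, 1].
    g = coords.view(1, -1, 1, 1, 3)
    out = F.grid_sample(grid, g, align_corners=True)   # (1, 3, N, 1, 1)
    return out.view(3, -1).t()                         # (N, 3)

def laplace_diag_variance(render_fn, coords, target_rgb, pert_grid, lam=1e-4):
    # Diagonal Laplace approximation: squared gradients of the photometric
    # loss approximate the Hessian diagonal; its inverse gives variances.
    pert_grid.requires_grad_(True)
    rgb = render_fn(coords + sample_grid(pert_grid, coords))
    loss = ((rgb - target_rgb) ** 2).sum()
    (g,) = torch.autograd.grad(loss, pert_grid)
    return 1.0 / (g ** 2 + lam)

# Toy stand-ins so the sketch runs end to end:
W = torch.randn(3, 3)
render_fn = lambda x: torch.sigmoid(x @ W)             # placeholder renderer
coords = torch.rand(1024, 3) * 2.0 - 1.0
target = (render_fn(coords) + 0.05 * torch.randn(1024, 3)).detach()
pert = torch.zeros(1, 3, 16, 16, 16)                   # perturbation grid
var = laplace_diag_variance(render_fn, coords, target, pert)  # same shape as pert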
CRediT authorship contribution statement
H.L.: Conceptualization, Methodology, Software, Writing – original draft, Writing – review & editing, Funding acquisition. X.L.: Methodology, Investigation, Software, Writing – original draft, Writing – review & editing, Data curation. Y.Q.: Conceptualization, Data curation, Supervision, Validation, Writing – original draft. J.D.: Investigation, Formal analysis, Resources. Z.M.: Formal analysis, Writing – review & editing, Validation, Visualization. J.L.: Investigation, Validation, Visualization. L.C.: Conceptualization, Investigation, Project administration, Supervision.
Declaration of competing interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Data availability
The data that support the findings of this study are available from the authors upon reasonable request.
Acknowledgments
This study was funded by the National Natural Science Foundation of China (No. 52274222).
References
- Bazilevs et al. [2008] Bazilevs, Y., Calo, V.M., Hughes, T.J., Zhang, Y.: Isogeometric fluid-structure interaction: theory, algorithms, and computations. Comput. Mech. 43, 3–37 (2008) https://doi.org/10.1007/s00466-008-0315-x
- Zhang and Bajaj [2006] Zhang, Y., Bajaj, C.: Adaptive and quality quadrilateral/hexahedral meshing from volumetric data. Comput. Methods Appl. Mech. Eng. 195(9), 942–960 (2006) https://doi.org/10.1016/j.cma.2005.02.016
- Zhang et al. [2005] Zhang, Y., Bajaj, C., Sohn, B.-S.: 3d finite element meshing from imaging data. Comput. Methods Appl. Mech. Eng. 194(48), 5083–5106 (2005) https://doi.org/10.1016/j.cma.2004.11.026
- Mildenhall et al. [2021] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: Nerf: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 65(1), 99–106 (2021) https://doi.org/10.1145/3503250
- Levy et al. [2023] Levy, D., Peleg, A., Pearl, N., Rosenbaum, D., Akkaynak, D., Korman, S., Treibitz, T.: Seathru-nerf: Neural radiance fields in scattering media. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 56–65 (2023). https://doi.org/10.1109/cvpr52729.2023.00014
- Lian et al. [2023] Lian, H., Sun, P., Meng, Z., Li, S., Wang, P., Qu, Y.: Lidar point cloud augmentation for dusty weather based on a physical simulation. Mathematics 12(1), 141 (2023) https://doi.org/10.3390/math12010141
- Shen et al. [2023] Shen, J., Ren, R., Ruiz, A., Moreno-Noguer, F.: Estimating 3d uncertainty field: Quantifying uncertainty for neural radiance fields. arXiv:2311.01815 (2023)
- Terracciano et al. [2020] Terracciano, D.S., Bazzarello, L., Caiti, A., Costanzi, R., Manzari, V.: Marine robots for underwater surveillance. Curr. Robot. Rep. 1(4), 159–167 (2020) https://doi.org/10.1007/s43154-020-00028-z
- Ioannou et al. [2024] Ioannou, G., Forti, N., Millefiori, L.M., Carniel, S., Renga, A., Tomasicchio, G., Binda, S., Braca, P.: Underwater inspection and monitoring: Technologies for autonomous operations. IEEE Aero. El. Sys. Mag. 39(5), 4–16 (2024) https://doi.org/10.1109/maes.2024.3366144
- Maurelli et al. [2022] Maurelli, F., Krupiński, S., Xiang, X., Petillot, Y.: Auv localisation: A review of passive and active techniques. Int. J. Intell. Robot. Appl. 6(2), 246–269 (2022) https://doi.org/10.1007/s41315-021-00215-x
- Martz et al. [2020] Martz, J., Al-Sabban, W., Smith, R.N.: Survey of unmanned subterranean exploration, navigation, and localisation. IET Cyber-Syst. Robot. 2(1), 1–13 (2020) https://doi.org/10.1049/iet-csr.2019.0043
- Halder and Afsari [2023] Halder, S., Afsari, K.: Robots in inspection and monitoring of buildings and infrastructure: A systematic review. Appl. Sci. 13(4), 2304 (2023) https://doi.org/10.3390/app13042304
- Gawlikowski et al. [2023] Gawlikowski, J., Tassi, C.R.N., Ali, M., Lee, J., Humt, M., Feng, J., Kruspe, A., Triebel, R., Jung, P., Roscher, R., et al.: A survey of uncertainty in deep neural networks. Artif. Intell. Rev. 56(Suppl 1), 1513–1589 (2023) https://doi.org/10.1007/s10462-023-10562-9
- Goli et al. [2024] Goli, L., Reading, C., Sellán, S., Jacobson, A., Tagliasacchi, A.: Bayes’ rays: Uncertainty quantification for neural radiance fields. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 20061–20070 (2024)
- Sethuraman et al. [2023] Sethuraman, A.V., Ramanagopal, M.S., Skinner, K.A.: Waternerf: Neural radiance fields for underwater scenes. In: OCEANS 2023 - MTS/IEEE U.S. Gulf Coast, pp. 1–7 (2023). https://doi.org/10.23919/oceans52994.2023.10336972
- Schechner and Karpel [2005] Schechner, Y.Y., Karpel, N.: Recovery of underwater visibility and structure by polarization analysis. IEEE J. Ocean. Eng. 30(3), 570–587 (2005) https://doi.org/10.1109/joe.2005.850871
- Knight [2008] Knight, P.A.: The sinkhorn–knopp algorithm: convergence and applications. SIAM J. Matrix Anal. Appl. 30(1), 261–275 (2008) https://doi.org/10.1137/060659624
- Gupta et al. [2024] Gupta, V., Varma, M., Manoj, S., Mitra, K.: U2nerf: Unsupervised underwater image restoration and neural radiance fields. In: The Second Tiny Papers Track at ICLR 2024 (2024)
- Chai et al. [2022] Chai, S., Fu, Z., Huang, Y., Tu, X., Ding, X.: Unsupervised and untrained underwater image restoration based on physical image formation model. In: IEEE Int. Conf. Acoust. Speech Signal Process., pp. 2774–2778 (2022). https://doi.org/10.1109/icassp43922.2022.9746292
- Varma T et al. [2022] Varma T, M., Wang, P., Chen, X., Chen, T., Venugopalan, S., Wang, Z.: Is attention all that nerf needs? arXiv:2207.13298 (2022)
- Zhou et al. [2023] Zhou, J., Liang, T., He, Z., Zhang, D., Zhang, W., Fu, X., Li, C.: Waterhe-nerf: Water-ray tracing neural radiance fields for underwater scene reconstruction. arXiv:2312.06946 (2023)
- Zhang et al. [2023] Zhang, D., Zhou, J., Zhang, W., Lin, Z., Yao, J., Polat, K., Alenezi, F., Alhudhaif, A.: Rex-net: A reflectance-guided underwater image enhancement network for extreme scenarios. Expert Syst. Appl. 231, 120842 (2023) https://doi.org/10.1016/j.eswa.2023.120842
- Akkaynak and Treibitz [2019] Akkaynak, D., Treibitz, T.: Sea-thru: A method for removing water from underwater images. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 1682–1691 (2019). https://doi.org/10.1109/cvpr.2019.00178
- Guo et al. [2017] Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. In: Int. Conf. Mach. Learn., pp. 1321–1330 (2017)
- Hernández-Lobato and Adams [2015] Hernández-Lobato, J.M., Adams, R.: Probabilistic backpropagation for scalable learning of bayesian neural networks. In: Int. Conf. Mach. Learn., pp. 1861–1869 (2015)
- Hinton and Van Camp [1993] Hinton, G.E., Van Camp, D.: Keeping the neural networks simple by minimizing the description length of the weights. In: Proc. 6th Annu. Conf. Comput. Learn. Theory, pp. 5–13 (1993). https://doi.org/10.1145/168304.168306
- Graves [2011] Graves, A.: Practical variational inference for neural networks. Adv. Neural Inf. Process. Syst. 24 (2011)
- Rezende and Mohamed [2015] Rezende, D., Mohamed, S.: Variational inference with normalizing flows. In: Int. Conf. Mach. Learn., pp. 1530–1538 (2015)
- Jain et al. [2020] Jain, S., Liu, G., Mueller, J., Gifford, D.: Maximizing overall diversity for improved uncertainty estimates in deep ensembles. In: Proc. AAAI Conf. Artif. Intell., vol. 34, pp. 4264–4271 (2020). https://doi.org/10.1609/aaai.v34i04.5849
- Lakshminarayanan et al. [2017] Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. Adv. Neural Inf. Process. Syst. 30 (2017)
- Aralikatti et al. [2018] Aralikatti, R., Margam, D., Sharma, T., Abhinav, T., Venkatesan, S.M.: Global snr estimation of speech signals using entropy and uncertainty estimates from dropout networks. In: Interspeech 2018 (2018). https://doi.org/10.21437/interspeech.2018-1884
- Hernández et al. [2020] Hernández, S., Vergara, D., Valdenegro-Toro, M., Jorquera, F.: Improving predictive uncertainty estimation using dropout–hamiltonian monte carlo. Soft Comput. 24(6), 4307–4322 (2020) https://doi.org/10.1007/s00500-019-04195-w
- Kingma et al. [2015] Kingma, D.P., Salimans, T., Welling, M.: Variational dropout and the local reparameterization trick. Adv. Neural Inf. Process. Syst. 28 (2015)
- Gal and Ghahramani [2016] Gal, Y., Ghahramani, Z.: Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In: Int. Conf. Mach. Learn., pp. 1050–1059 (2016)
- Chen et al. [2022] Chen, L., Cheng, R., Li, S., Lian, H., Zheng, C., Bordas, S.P.A.: A sample-efficient deep learning method for multivariate uncertainty qualification of acoustic-vibration interaction problems. Comput. Methods Appl. M. 393, 114784 (2022) https://doi.org/10.1016/j.cma.2022.114784
- Chen et al. [2023] Chen, L., Lian, H., Xu, Y., Li, S., Liu, Z., Atroshchenko, E., Kerfriden, P.: Generalized isogeometric boundary element method for uncertainty analysis of time-harmonic wave propagation in infinite domains. Appl. Math. Model. 114, 360–378 (2023) https://doi.org/10.1016/j.apm.2022.09.030
- Chen et al. [2024a] Chen, L.L., Lian, H., Pei, Q., Meng, Z., Jiang, S., Dong, H., Yu, P.: Fem-bem analysis of acoustic interaction with submerged thin-shell structures under seabed reflection conditions. Ocean Eng. 309, 118554 (2024) https://doi.org/10.1016/j.oceaneng.2024.118554
- Chen et al. [2024b] Chen, L., Wang, Z., Lian, H., Ma, Y., Meng, Z., Li, P., Ding, C., Bordas, S.P.A.: Reduced order isogeometric boundary element methods for cad-integrated shape optimization in electromagnetic scattering. Comput. Methods Appl. M. 419, 116654 (2024) https://doi.org/10.1016/j.cma.2023.116654
- Qu et al. [2024] Qu, Y.L., Zhou, Z.B., Chen, L.L., Lian, H.J., Li, X.D., Hu, Z.M., Cao, Y.H., Pan, G.: Uncertainty quantification of vibro-acoustic coupling problems for robotic manta ray models based on deep learning. Ocean Eng. 299, 117388 (2024) https://doi.org/10.1016/j.oceaneng.2024.117388
- Denker and LeCun [1990] Denker, J., LeCun, Y.: Transforming neural-net output levels to probability distributions. Adv. Neural Inf. Process. Syst. 3 (1990)
- MacKay [1992] MacKay, D.J.: A practical bayesian framework for backpropagation networks. Neural Comput. 4(3), 448–472 (1992) https://doi.org/10.1162/neco.1992.4.3.448
- Pan et al. [2022] Pan, X., Lai, Z., Song, S., Huang, G.: Activenerf: Learning where to see with uncertainty estimation. In: Eur. Conf. Comput. Vis., pp. 230–246 (2022). https://doi.org/10.1007/978-3-031-19827-4_14
- Shen et al. [2021] Shen, J., Ruiz, A., Agudo, A., Moreno-Noguer, F.: Stochastic neural radiance fields: Quantifying uncertainty in implicit 3d representations. In: Int. Conf. 3D Vis., pp. 972–981 (2021). https://doi.org/10.1109/3dv53792.2021.00105
- Shen et al. [2022] Shen, J., Agudo, A., Moreno-Noguer, F., Ruiz, A.: Conditional-flow nerf: Accurate 3d modelling with reliable uncertainty quantification. In: Eur. Conf. Comput. Vis., pp. 540–557 (2022). https://doi.org/10.1007/978-3-031-20062-5_31
- Sünderhauf et al. [2023] Sünderhauf, N., Abou-Chakra, J., Miller, D.: Density-aware nerf ensembles: Quantifying predictive uncertainty in neural radiance fields. In: IEEE Int. Conf. Robot. Autom., pp. 9370–9376 (2023). https://doi.org/10.1109/icra48891.2023.10161012
- Lian et al. [2024a] Lian, H., Li, X., Chen, L., Wen, X., Zhang, M., Zhang, J., Qu, Y.: Uncertainty quantification of neural reflectance fields for underwater scenes. J. Mar. Sci. Eng. 12(2), 349 (2024) https://doi.org/10.3390/jmse12020349
- Lian et al. [2024b] Lian, H., Wang, J., Chen, L., Li, S., Cao, R., Hu, Q., Zhao, P.: Uncertainty-aware physical simulation of neural radiance fields for fluids. Comput. Model. Eng. Sci. 140(1), 1143–1163 (2024) https://doi.org/10.32604/cmes.2024.048549
- Wei et al. [2023] Wei, S., Zhang, J., Wang, Y., Xiang, F., Su, H., Wang, H.: Fg-nerf: Flow-gan based probabilistic neural radiance field for independence-assumption-free uncertainty estimation. arXiv:2309.16364 (2023)
- Grover et al. [2018] Grover, A., Dhar, M., Ermon, S.: Flow-gan: Combining maximum likelihood and adversarial learning in generative models. AAAI Conf. Artif. Intell. 32(1) (2018) https://doi.org/10.1609/aaai.v32i1.11829
- Hoffman et al. [2023] Hoffman, M.D., Le, T.A., Sountsov, P., Suter, C., Lee, B., Mansinghka, V.K., Saurous, R.A.: Probnerf: Uncertainty-aware inference of 3d shapes from 2d images. In: Int. Conf. Artif. Intell. Stat., pp. 10425–10444 (2023)
- Neal [2012] Neal, R.M.: Mcmc using hamiltonian dynamics. arXiv:1206.1901 (2012)
- Yang et al. [2022] Yang, G.-W., Zhou, W.-Y., Peng, H.-Y., Liang, D., Mu, T.-J., Hu, S.-M.: Recursive-nerf: An efficient and dynamically growing nerf. IEEE Trans. Vis. Comput. Graph. 29(12), 5124–5136 (2022) https://doi.org/10.1109/tvcg.2022.3204608
- Max [1995] Max, N.: Optical models for direct volume rendering. IEEE Trans. Vis. Comput. Graph. 1(2), 99–108 (1995) https://doi.org/10.1109/2945.468400
- Ritter et al. [2018] Ritter, H., Botev, A., Barber, D.: A scalable laplace approximation for neural networks. In: Int. Conf. Learn. Represent. (2018)
- Schonberger and Frahm [2016] Schonberger, J.L., Frahm, J.-M.: Structure-from-motion revisited. In: IEEE Conf. Comput. Vis. Pattern Recognit., pp. 4104–4113 (2016). https://doi.org/10.1109/cvpr.2016.445
- Mildenhall et al. [2019] Mildenhall, B., Srinivasan, P.P., Ortiz-Cayon, R., Kalantari, N.K., Ramamoorthi, R., Ng, R., Kar, A.: Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Trans. Graph. 38(4), 1–14 (2019) https://doi.org/10.1145/3306346.3322980
- Bae et al. [2021] Bae, G., Budvytis, I., Cipolla, R.: Estimating and exploiting the aleatoric uncertainty in surface normal estimation. In: Proc. IEEE/CVF Int. Conf. Comput. Vis., pp. 13137–13146 (2021). https://doi.org/10.1109/iccv48922.2021.01289
- Ilg et al. [2018] Ilg, E., Çiçek, Ö., Galesso, S., Klein, A., Makansi, O., Hutter, F., Brox, T.: Uncertainty estimates for optical flow with multi-hypotheses networks. arXiv:1802.07095 (2018)
- Wang et al. [2004] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004) https://doi.org/10.1109/tip.2003.819861
- Korhonen and You [2012] Korhonen, J., You, J.: Peak signal-to-noise ratio revisited: Is simple beautiful? In: 2012 4th Int. Workshop Qual. Multimedia Exp., pp. 37–38 (2012). https://doi.org/10.1109/qomex.2012.6263880
- Zhang et al. [2018] Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068