
GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency

Min-Seop Kwak    Jiuhn Song    Seungryong Kim
Abstract

We present a novel framework to regularize Neural Radiance Fields (NeRF) in a few-shot setting with a geometric consistency regularization. The proposed approach leverages a depth map rendered at an unobserved viewpoint to warp sparse input images to that viewpoint and imposes them as pseudo ground truths to facilitate learning of NeRF. By encouraging such geometric consistency at a feature level instead of using a pixel-level reconstruction loss, we regularize NeRF at semantic and structural levels while allowing for modeling view-dependent radiance to account for color variations across viewpoints. We also propose an effective method to filter out erroneous warped solutions, along with training strategies to stabilize training during optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.



1 Introduction

Recently, representing a 3D scene as a Neural Radiance Field (NeRF) (Mildenhall et al., 2020) has proven to be a powerful approach for novel view synthesis and 3D reconstruction (Barron et al., 2021; Jain et al., 2021; Chen et al., 2021). However, despite its impressive performance, NeRF requires a large number of dense, well-distributed calibrated images for optimization, which limits its applicability. When limited to sparse observations, NeRF easily overfits to the input view images and is unable to reconstruct correct geometry (Zhang et al., 2020).

The task that directly addresses this problem, also called few-shot NeRF, aims to optimize a high-fidelity neural radiance field in such sparse scenarios (Jain et al., 2021; Kim et al., 2022; Niemeyer et al., 2022), countering the underconstrained nature of the problem by introducing additional priors. Specifically, previous works attempted to solve this by utilizing semantic features (Jain et al., 2021), entropy minimization (Kim et al., 2022), SfM depth priors (Deng et al., 2022) or normalizing flow (Niemeyer et al., 2022), but their reliance on handcrafted methods or their inability to capture local and fine structures limits their performance.

To alleviate these issues, we propose a novel regularization technique that enforces geometric consistency across different views with depth-guided warping and geometry-aware consistency modeling. Based on these, we propose a novel framework, called Neural Radiance Fields with Geometric Consistency (GeCoNeRF), for training neural radiance fields in a few-shot setting. Our key insight is that we can leverage a depth map rendered by NeRF to warp sparse input images to novel viewpoints, and use them as pseudo ground truths to facilitate learning of fine details and high-frequency features by NeRF. By encouraging images rendered at novel views to model warped images with a consistency loss, we can successfully constrain both geometry and appearance to boost the fidelity of neural radiance fields even in a highly under-constrained few-shot setting. Taking into consideration the non-Lambertian nature of the given datasets, we propose a feature-level regularization loss that captures contextual and structural information while largely ignoring individual color differences. We also present a method to generate a consistency mask to prevent inconsistently warped information from harming the network. Finally, we provide coarse-to-fine training strategies for sampling and pose generation to stabilize optimization of the model.

We demonstrate the effectiveness of our method on synthetic and real datasets (Mildenhall et al., 2020; Jensen et al., 2014). Experimental results prove the effectiveness of the proposed model over the latest methods for few-shot novel view synthesis.

Figure 1: Illustration of our consistency modeling pipeline for few-shot NeRF. Given an image $I_i$ and an estimated depth map $D_j$ of the $j$-th unobserved viewpoint, we warp the image $I_i$ to that novel viewpoint as $I_{i\rightarrow j}$ by establishing a geometric correspondence between the two viewpoints. Using the warped image as a pseudo ground truth, we encourage the rendered image of the unseen viewpoint, $I_j$, to be consistent in structure with the warped image, with occlusions taken into consideration.

2 Related Work

Neural radiance fields.

Among the most notable approaches to novel view synthesis and 3D reconstruction is the Neural Radiance Field (NeRF) (Mildenhall et al., 2020), which renders photo-realistic images with a simple MLP architecture. Sparked by its impressive performance, a variety of follow-up studies based on its continuous neural volumetric representation have emerged, including dynamic and deformable scenes (Park et al., 2021; Tretschk et al., 2021; Pumarola et al., 2021; Attal et al., 2021), real-time rendering (Yu et al., 2021a; Hedman et al., 2021; Reiser et al., 2021; Müller et al., 2022), self-calibration (Jeong et al., 2021) and generative modeling (Schwarz et al., 2020; Niemeyer & Geiger, 2021; Xu et al., 2021; Deng et al., 2021). Mip-NeRF (Barron et al., 2021) eliminates aliasing artifacts by adopting cone tracing with a single multi-scale MLP. In general, most of these works have difficulty in optimizing a single scene with a small number of images.

Few-shot NeRF.

One key limitation of NeRF is its need for a large number of calibrated views when optimizing neural radiance fields. Some recent works attempted to address the case where only a few observed views of the scene are available. PixelNeRF (Yu et al., 2021b) conditions a NeRF on image inputs using local CNN features. This conditional model allows the network to learn scene priors across multiple scenes. Stereo radiance fields (Chibane et al., 2021) use local CNN features from input views for scene geometry reasoning, and MVSNeRF (Chen et al., 2021) combines a cost volume with a neural radiance field for improved performance. However, these methods require pre-training on multi-view images of numerous scenes to learn reconstruction priors.

Other works attempt a different approach, optimizing NeRF from scratch in few-shot settings: DSNeRF (Deng et al., 2022) supervises the network with sparse depth to optimize a scene from few images. (Roessle et al., 2021) also utilizes a sparse depth prior, extending it into a dense depth map with a depth completion module to guide network optimization. On the other hand, there are models that tackle depth prior-free few-shot optimization: DietNeRF (Jain et al., 2021) enforces semantic consistency between images rendered from unseen views and seen images. RegNeRF (Niemeyer et al., 2022) regularizes the geometry and appearance of patches rendered from unobserved viewpoints. InfoNeRF (Kim et al., 2022) constrains the density's entropy in each ray and ensures consistency across neighboring rays. While these methods constrain NeRF toward learning more realistic geometry, their regularizations are limited in that they require extensive dataset-specific fine-tuning and only provide regularization at a global level in a generalized manner.

Self-supervised photometric consistency.

In the field of multi-view stereo depth estimation, consistency modeling between stereo images and their warped counterparts has been widely used for self-supervised training (Godard et al., 2017; Garg et al., 2016; Zhou et al., 2017). In weakly supervised or unsupervised settings (Huang et al., 2021; Khot et al., 2019) where there is a lack of ground truth depth information, consistency modeling between images with geometry-based warping is used as a supervisory signal (Zhou et al., 2017; Huang et al., 2021; Khot et al., 2019), formulating depth learning as a form of reconstruction task between viewpoints.

Recently, methods utilizing self-supervised photometric consistency have been introduced to NeRF: concurrent works such as NeuralWarp (Darmon et al., 2022), StructNeRF (Chen et al., 2022) and Geo-NeuS (Fu et al., 2022) model photometric consistency between source images and their warped counterparts from other source viewpoints to improve reconstruction quality. However, these methods only discuss dense-view input scenarios where pose differences between source viewpoints are small, and do not address their behavior in few-shot settings, where a sharp performance drop is expected due to the scarcity of input viewpoints and the increased difficulty of the warping procedure owing to large viewpoint differences and heavy self-occlusions. RapNeRF (Zhang et al., 2022) uses a geometry-based reprojection method to enhance view extrapolation performance, and (Bortolon et al., 2022) uses depth rendered by NeRF as correspondence information for a view-morphing module to synthesize images between input viewpoints. However, these methods do not take occlusions into account, and their pixel-level photometric consistency modeling comes with the downside of suppressing view-dependent specular effects.

3 Preliminaries

Neural Radiance Field (NeRF) (Mildenhall et al., 2020) represents a scene as a continuous function $f_{\theta}$ parameterized by a neural network with parameters $\theta$, where points are sampled along rays $\mathbf{r}$ for evaluation by the network. Typically, the sampled coordinates $\mathbf{x}\in\mathbb{R}^{3}$ and view direction $\mathbf{d}\in\mathbb{R}^{2}$ are transformed by a positional encoding $\gamma$ into Fourier features (Tancik et al., 2020) that facilitate learning of high-frequency details. The network $f_{\theta}$ takes as input the transformed coordinate $\gamma(\mathbf{x})$ and viewing direction $\gamma(\mathbf{d})$, and outputs a view-invariant density value $\sigma\in\mathbb{R}$ and a view-dependent color value $\mathbf{c}\in\mathbb{R}^{3}$ such that

$\{\mathbf{c},\sigma\} = f_{\theta}\left(\gamma(\mathbf{x}),\gamma(\mathbf{d})\right).$  (1)

With a ray parameterized as $\mathbf{r}_{p}(t)=\mathbf{o}+t\mathbf{d}_{p}$ from the camera center $\mathbf{o}$ through the pixel $p$ along direction $\mathbf{d}_{p}$, the color is rendered as follows:

$C(\mathbf{r}_{p})=\int_{t_{n}}^{t_{f}}T(t)\,\sigma(\mathbf{r}_{p}(t))\,\mathbf{c}(\mathbf{r}_{p}(t),\mathbf{d}_{p})\,dt,$  (2)

where $C(\mathbf{r}_{p})$ is the predicted color value at pixel $p$ along the ray $\mathbf{r}_{p}(t)$ from $t_{n}$ to $t_{f}$, and $T(t)$ denotes the accumulated transmittance along the ray from $t_{n}$ to $t$, defined such that

$T(t)=\exp\!\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}_{p}(s))\,ds\right).$  (3)

To optimize the network $f_{\theta}$, the observation loss $\mathcal{L}_{\mathrm{obs}}$ enforces the rendered color values to be consistent with the ground truth color value $C^{\prime}(\mathbf{r})$:

$\mathcal{L}_{\mathrm{obs}}=\sum_{\mathbf{r}_{p}\in\mathcal{R}}\lVert C^{\prime}(\mathbf{r}_{p})-C(\mathbf{r}_{p})\rVert_{2}^{2},$  (4)

where $\mathcal{R}$ represents a batch of training rays.
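To make the rendering model concrete, the following is a minimal NumPy sketch of the standard quadrature that discretizes Eqs. (2)-(3) for a single ray; the function name and array shapes are illustrative rather than taken from any particular implementation.

```python
import numpy as np

def volume_render(sigmas, colors, t_vals):
    """Discretized version of Eqs. (2)-(3): composite per-sample densities and
    colors along a single ray into a pixel color (standard NeRF quadrature).

    sigmas: (S,) densities, colors: (S, 3) RGB values, t_vals: (S,) sample distances.
    """
    # Distances between adjacent samples; the last interval is padded to be large.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alphas = 1.0 - np.exp(-sigmas * deltas)              # per-sample opacity
    # Accumulated transmittance T: probability that the ray reaches each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = trans * alphas                             # contribution of each sample
    color = (weights[:, None] * colors).sum(axis=0)      # Eq. (2), discretized
    expected_depth = (weights * t_vals).sum()            # reused later for depth maps
    return color, expected_depth
```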

4 Methodology

4.1 Motivation and Overview

Let us denote an image at the $i$-th viewpoint as $I_i$. In few-shot novel view synthesis, NeRF is given only a few images $\{I_i\}$ for $i\in\{1,\dots,N\}$ with small $N$, e.g., $N=3$ or $N=5$. The objective of novel view synthesis is to train the mapping function $f_{\theta}$ so that it can recover an image $I_j$ at the $j$-th unseen or novel viewpoint. As described above, in the few-shot setting, directly optimizing $f_{\theta}$ on $\{I_i\}$ solely with the pixel-wise reconstruction loss $\mathcal{L}_{\mathrm{obs}}$ is insufficient to recover correct geometry, and thus an additional regularization that encourages the network $f_{\theta}$ to generate consistent appearance and geometry is required.

To achieve this, we propose a novel regularization technique to enforce geometric consistency across different views with depth-guided warping and consistency modeling. We exploit the fact that NeRF (Mildenhall et al., 2020) inherently renders not only a color image but a depth image as well. Combined with the known viewpoint difference, the rendered depths can be used to define a geometric correspondence between two arbitrary views.

Specifically, we consider a depth map rendered by the NeRF model, $D_j$, at an unseen viewpoint $j$. By formulating a warping function $\psi(I_i; D_j, R_{i\rightarrow j})$ that warps an image $I_i$ according to the depth $D_j$ and the viewpoint difference $R_{i\rightarrow j}$, we can encourage consistency between the warped image $I_{i\rightarrow j}=\psi(I_i; D_j, R_{i\rightarrow j})$ and the rendered image $I_j$ at the $j$-th unseen viewpoint, which in turn improves few-shot novel view synthesis performance. This framework can overcome the limitations of previous approaches (Mildenhall et al., 2020; Chen et al., 2021; Barron et al., 2021), improving not only global geometry but also high-frequency details and appearance.

In the following, we first explain how input images can be warped to unseen viewpoints in our framework. Then, we demonstrate how we impose consistency upon the pair of warped and rendered images for regularization, followed by an explanation of our occlusion handling method and several training strategies that prove crucial for stabilizing NeRF optimization in the few-shot scenario.

4.2 Rendered Depth-Guided Warping

Figure 2: Illustration of the proposed framework. GeCoNeRF regularizes the network with consistency modeling. The consistency loss $\mathcal{L}^{M}_{\mathrm{cons}}$ is applied between the unobserved-viewpoint image and the warped observed-viewpoint image, while the disparity regularization loss $\mathcal{L}_{\mathrm{reg}}$ regularizes depth at seen viewpoints.

To render an image at a novel viewpoint, we first sample a random camera pose, from which corresponding ray vectors are generated in a patch-wise manner. As NeRF outputs density and color values of sampled points along the novel rays, we use the recovered density values to render a consistent depth map. Following (Mildenhall et al., 2020), we formulate per-ray depth values as the weighted composition of distances traveled from the origin. Since the ray $\mathbf{r}_p$ corresponding to pixel $p$ is parameterized as $\mathbf{r}_p(t)=\mathbf{o}+t\mathbf{d}_p$, depth rendering is defined similarly to color rendering:

$D(\mathbf{r}_{p})=\int_{t_{n}}^{t_{f}}T(t)\,\sigma(\mathbf{r}_{p}(t))\,t\,dt,$  (5)

where $D(\mathbf{r}_{p})$ is the predicted depth along the ray $\mathbf{r}_{p}$. As described in Figure 1, we use the rendered depth map $D_j$ to warp the input ground truth image $I_i$ to the $j$-th unseen viewpoint and acquire a warped image $I_{i\rightarrow j}$, defined by the process $I_{i\rightarrow j}=\psi(I_i; D_j, R_{i\rightarrow j})$. More specifically, a pixel location $p_j$ in the target unseen-viewpoint image is transformed to $p_{j\rightarrow i}$ in the source-viewpoint image by the viewpoint difference $R_{j\rightarrow i}$ and the camera intrinsic matrix $K$ such that

$p_{j\rightarrow i}\sim K R_{j\rightarrow i} D_{j}(p_{j}) K^{-1} p_{j},$  (6)

where $\sim$ indicates approximate equality and the projected coordinate $p_{j\rightarrow i}$ is a continuous value. With a differentiable sampler, we extract the color values at $p_{j\rightarrow i}$ on $I_i$. More formally, this sampling process can be written as follows:

$I_{i\rightarrow j}(p_{j})=\mathrm{sampler}(I_{i};\,p_{j\rightarrow i}),$  (7)

where $\mathrm{sampler}(\cdot)$ is a bilinear sampling operator (Jaderberg et al., 2015).
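Below is a sketch of the depth-guided inverse warping of Eqs. (6)-(7) in PyTorch, using grid_sample as the differentiable bilinear sampler; the function and argument names (inverse_warp, T_tgt_to_src) are hypothetical, and the occlusion handling of Section 4.3 is omitted here.

```python
import torch
import torch.nn.functional as F

def inverse_warp(src_img, tgt_depth, K, T_tgt_to_src):
    """Warp a source image I_i into a target (unseen) view j using the depth D_j
    rendered at the target view, following Eqs. (6)-(7).

    src_img: (1, 3, H, W), tgt_depth: (1, 1, H, W),
    K: (3, 3) intrinsics, T_tgt_to_src: (4, 4) relative pose (target -> source).
    """
    _, _, H, W = src_img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().reshape(3, -1)

    # Back-project target pixels to 3D camera coordinates using the rendered depth.
    cam_pts = torch.linalg.inv(K) @ pix * tgt_depth.reshape(1, -1)
    cam_pts = torch.cat([cam_pts, torch.ones(1, H * W)], dim=0)

    # Transform into the source camera frame and project with the intrinsics.
    src_pts = (T_tgt_to_src @ cam_pts)[:3]
    src_pix = K @ src_pts
    src_pix = src_pix[:2] / src_pix[2:].clamp(min=1e-6)

    # Normalize to [-1, 1] for grid_sample (the differentiable bilinear sampler).
    grid_x = 2.0 * src_pix[0] / (W - 1) - 1.0
    grid_y = 2.0 * src_pix[1] / (H - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src_img, grid, mode="bilinear", align_corners=True)
```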

Acceleration.

Rendering a full image is computationally heavy and extremely time-consuming, requiring tens of seconds for a single iteration. To overcome the computational bottleneck of full-image rendering and warping, rays are sampled on a strided grid to form the patch with stride $s$, which we set to 2. After the rays undergo volumetric rendering, we upsample the low-resolution depth map back to the original resolution with bilinear interpolation. This full-resolution depth map is used for the inverse warping. In this way, detailed warped patches at full resolution can be generated with only a fraction of the computational cost that would be required when rendering the original-sized ray batch.
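A possible form of this strided rendering step is sketched below; render_depth_fn is a hypothetical stand-in for the NeRF depth renderer, and the ray layout is assumed to be one flattened ray per pixel.

```python
import torch.nn.functional as F

def render_strided_depth(render_depth_fn, rays, H, W, stride=2):
    """Render depth only on a strided pixel grid, then bilinearly upsample back
    to full resolution before inverse warping (sketch; names are illustrative).

    rays: (H * W, R) flattened per-pixel ray parameters.
    """
    # Keep every `stride`-th ray in both image dimensions.
    rays_lr = rays.reshape(H, W, -1)[::stride, ::stride].reshape(-1, rays.shape[-1])
    depth_lr = render_depth_fn(rays_lr).reshape(1, 1, H // stride, W // stride)
    # Upsample the low-resolution depth map to the original resolution.
    return F.interpolate(depth_lr, size=(H, W), mode="bilinear", align_corners=True)
```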

Figure 3: Visualization of the consistency modeling process. (a) ground truth patch, (b) rendered patch at the novel viewpoint, (c) warped patch, from input viewpoint to novel viewpoint, (d) occlusion mask with threshold masking, and (e) final warped patch with occlusion masking at the novel viewpoint.

4.3 Consistency Modeling

Given the rendered patch $I_j$ at the $j$-th viewpoint and the patch $I_{i\rightarrow j}$ warped with depth $D_j$ and viewpoint difference $R_{i\rightarrow j}$, we define a consistency between the two to provide additional regularization for globally consistent rendering. One viable option is to naïvely apply the pixel-wise image reconstruction loss $\mathcal{L}_{\mathrm{pix}}$ such that

$\mathcal{L}_{\mathrm{pix}}=\left\lVert I_{i\rightarrow j}-I_{j}\right\rVert.$  (8)

However, we observe that this simple strategy is prone to failure on reflective, non-Lambertian surfaces whose appearance changes greatly across viewpoints (Zhan et al., 2018). In addition, geometry-related problems such as self-occlusions and artifacts prohibit naïve usage of the pixel-wise image reconstruction loss for regularization at unseen viewpoints.

Feature-level consistency modeling.

To overcome these issues, we propose a masked feature-level regularization loss that encourages structural consistency while ignoring view-dependent radiance effects, as illustrated in Figure 2.

Given an image $I$ as input, we use a convolutional network to extract multi-level feature maps $f_{\phi,l}(I)\in\mathbb{R}^{H_{l}\times W_{l}\times C_{l}}$, with channel depth $C_l$ for the $l$-th layer. To measure feature-level consistency between the warped image $I_{i\rightarrow j}$ and the rendered image $I_j$, we extract their feature maps from $L$ layers and compute the difference within each feature map pair extracted from the same layer.

In accordance with the idea of using the warped image $I_{i\rightarrow j}$ as a pseudo ground truth, we allow gradient backpropagation to pass only through the rendered image and block it for the warped image. By applying the consistency loss at multiple levels of feature maps, we cause $I_j$ to model after $I_{i\rightarrow j}$ on both the semantic and structural level.

Formally, the consistency loss $\mathcal{L}_{\mathrm{cons}}$ is defined as

$\mathcal{L}_{\mathrm{cons}}=\sum^{L}_{l=1}\frac{1}{C_{l}}\left\lVert f_{\phi}^{l}(I_{i\rightarrow j})-f_{\phi}^{l}(I_{j})\right\rVert.$  (9)

For this loss function $\mathcal{L}_{\mathrm{cons}}$, we find the $\ell_1$ distance most suited to our task and utilize it to measure consistency across feature difference maps. Empirically, we find that the VGG-19 network (Simonyan & Zisserman, 2014) yields the best performance in modeling consistency, likely due to the absence of normalization layers (Johnson et al., 2016) that scale down the absolute values of feature differences. Therefore, we employ the VGG-19 network as our feature extractor $f_{\phi}$ throughout all of our models.
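A sketch of this multi-level feature consistency loss (Eq. (9)) is given below, assuming a recent torchvision for the pretrained VGG-19; the chosen layer indices are illustrative rather than the paper's exact selection, and ImageNet input normalization is omitted for brevity.

```python
import torch
import torchvision

class FeatureConsistencyLoss(torch.nn.Module):
    """Multi-level L1 feature consistency with a frozen VGG-19 (sketch of Eq. (9))."""

    def __init__(self, layer_ids=(3, 8, 17, 26)):
        super().__init__()
        self.vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)  # the extractor is kept frozen
        self.layer_ids = set(layer_ids)

    def extract(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, warped, rendered):
        # The warped patch acts as a pseudo ground truth: block its gradients.
        f_warp = self.extract(warped.detach())
        f_rend = self.extract(rendered)
        loss = 0.0
        for fw, fr in zip(f_warp, f_rend):
            # L1 norm of the feature difference, scaled by 1 / C_l as in Eq. (9).
            loss = loss + (fw - fr).abs().sum() / fw.shape[1]
        return loss
```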

It should be noted that our loss function differs from that of DietNeRF (Jain et al., 2021): while DietNeRF's consistency loss is limited to regularizing the radiance field at a globally semantic level, our loss, combined with the warping module, also provides the network with rich information at a local, structural level. In other words, contrary to DietNeRF enforcing only high-level feature consistency, our use of multiple levels of a convolutional network for computing feature differences can be interpreted as enforcing a mixture of all levels, from high-level semantic consistency to low-level structural consistency.

Occlusion handling.

Figure 4: Occlusion-aware mask generation. The mask is generated by comparing geometry between the novel view $j$ and the source view $i$, with $I_{i\rightarrow j}$ being the warped patch generated for view $j$. For (a) and (b), warping does not occur correctly due to artifacts and self-occlusion, respectively. Such pixels are masked out by $M_l$, allowing only (c), with accurate warping, as a training signal for the rendered image $I_j$.
Figure 5: Qualitative comparison on NeRF-Synthetic (Mildenhall et al., 2020). Panels: (a) GT, (b) DietNeRF, (c) InfoNeRF, (d) RegNeRF, (e) Ours, (f) Ours (D). In the 3-view setting, our method captures fine details more robustly (such as the wire in the mic scene) and produces fewer artifacts (background in the materials scene) compared to previous methods. We show GeCoNeRF's results (e) with its rendered depth (f).

In order to prevent imperfect and distorted warpings caused by erroneous geometry from influencing the model and degrading overall reconstruction quality, we construct a consistency mask $M_l$ that lets NeRF ignore regions with geometric inconsistencies, as demonstrated in Figure 3. Instead of applying masks to the images before inputting them into the feature extractor network, we apply resized masks $M_l$ directly to the feature maps, after nearest-neighbor down-sampling to match the dimensions of the $l$-th layer outputs.

We generate $M$ by measuring consistency between depth values rendered from the target viewpoint and the source viewpoint such that

$M(p_{j})=\big[\,\lVert D_{j}(p_{j})-D_{i}(p_{j\rightarrow i})\rVert<\tau\,\big],$  (10)

where $[\cdot]$ is the Iverson bracket, and $p_{j\rightarrow i}$ refers to the corresponding pixel in the source viewpoint $i$ for the reprojected target pixel $p_j$ of the $j$-th viewpoint. We measure the Euclidean distance between depth points rendered from the target and source viewpoints as the criterion for threshold masking. As illustrated in Figure 4, if the distance between the two points is greater than the given threshold value $\tau$, we determine that the two rays render depths of separate surfaces and mask out the corresponding pixel in viewpoint $I_j$. This process takes place over every pixel in viewpoint $I_j$, generating a mask $M$ the same size as the rendered patch. Through this technique, we filter out problematic solutions at the feature level and regularize NeRF with only high-confidence image features.

Based on this, the consistency loss $\mathcal{L}_{\mathrm{cons}}$ is extended as

$\mathcal{L}^{M}_{\mathrm{cons}}=\sum^{L}_{l=1}\frac{1}{C_{l}m_{l}}\left\lVert M_{l}\odot\big(f_{\phi}^{l}(I_{i\rightarrow j})-f_{\phi}^{l}(I_{j})\big)\right\rVert,$  (11)

where $m_l$ is the sum of non-zero values in $M_l$.
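The two steps above could be sketched as follows; the function names, tensor shapes, and threshold handling are assumptions for illustration rather than the paper's exact implementation, and the feature lists are assumed to come from a multi-level extractor like the one sketched earlier.

```python
import torch
import torch.nn.functional as F

def consistency_mask(depth_tgt, depth_src_at_reproj, tau):
    """Sketch of Eq. (10): keep a target pixel only if its rendered depth agrees,
    within threshold tau, with the source-view depth sampled at its reprojected
    location. Both depth maps are (1, 1, H, W); tau is a hypothetical scalar.
    """
    return (torch.abs(depth_tgt - depth_src_at_reproj) < tau).float()

def masked_feature_consistency(feats_warp, feats_rend, mask):
    """Sketch of Eq. (11): resize the mask per layer with nearest-neighbor
    down-sampling and apply it to the feature differences before the L1 reduction.
    Gradients to the warped branch are assumed to be blocked upstream.
    """
    loss = 0.0
    for fw, fr in zip(feats_warp, feats_rend):
        m = F.interpolate(mask, size=fw.shape[-2:], mode="nearest")
        diff = m * (fw - fr).abs()
        # Normalize by channel depth C_l and the number of unmasked entries m_l.
        loss = loss + diff.sum() / (fw.shape[1] * m.sum().clamp(min=1.0))
    return loss
```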

Edge-aware disparity regularization.

Since our method depends on the quality of the depth rendered by NeRF, we directly impose additional regularization on the rendered depth to facilitate optimization. We further encourage local depth smoothness on rendered scenes by imposing an $\ell_1$ penalty on the disparity gradient within randomly sampled patches of the input views. In addition, inspired by (Godard et al., 2017), we take into account the fact that discontinuities in depth maps are likely to be aligned with the gradients of the color image, and introduce an edge-aware term with image gradients $\partial I$ to weight the disparity values. Specifically, following (Godard et al., 2017), we regularize for edge-aware depth smoothness such that

$\mathcal{L}_{\mathrm{reg}}=\left|\partial_{x}D^{*}_{i}\right|e^{-\left|\partial_{x}I_{i}\right|}+\left|\partial_{y}D^{*}_{i}\right|e^{-\left|\partial_{y}I_{i}\right|},$  (12)

where $D^{*}_{i}=D_{i}/\overline{D_{i}}$ is the mean-normalized inverse depth from (Godard et al., 2017), used to discourage shrinking of the estimated depth.
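A sketch of this edge-aware smoothness term (Eq. (12)), in the spirit of (Godard et al., 2017), is shown below; the tensor shapes and the small epsilon constant are illustrative assumptions.

```python
import torch

def edge_aware_smoothness(disp, img):
    """L1 penalty on disparity gradients, down-weighted across image edges.

    disp: (1, 1, H, W) inverse depth (disparity), img: (1, 3, H, W) color patch.
    """
    # Mean-normalize the disparity to discourage shrinking of the estimate.
    disp = disp / (disp.mean() + 1e-7)

    grad_disp_x = torch.abs(disp[:, :, :, 1:] - disp[:, :, :, :-1])
    grad_disp_y = torch.abs(disp[:, :, 1:, :] - disp[:, :, :-1, :])

    grad_img_x = torch.abs(img[:, :, :, 1:] - img[:, :, :, :-1]).mean(1, keepdim=True)
    grad_img_y = torch.abs(img[:, :, 1:, :] - img[:, :, :-1, :]).mean(1, keepdim=True)

    # exp(-|dI|) suppresses the smoothness penalty where the image has strong edges.
    loss_x = (grad_disp_x * torch.exp(-grad_img_x)).mean()
    loss_y = (grad_disp_y * torch.exp(-grad_img_y)).mean()
    return loss_x + loss_y
```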

4.4 Training Strategy

In this section, we present novel training strategies to learn the model with the proposed losses.

Total losses.

We optimize our model with a final loss combining the original NeRF pixel-wise reconstruction loss $\mathcal{L}_{\mathrm{obs}}$ and the two regularization losses: $\mathcal{L}^{M}_{\mathrm{cons}}$ for unobserved-view consistency modeling and $\mathcal{L}_{\mathrm{reg}}$ for disparity regularization.
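As a sketch, the combination might look as follows; the weighting coefficients w_cons and w_reg are hypothetical placeholders, since their values are not specified in this section.

```python
def total_loss(l_obs, l_cons_masked, l_reg, w_cons=1.0, w_reg=1.0):
    """Combine the observation loss with the two regularizers (sketch only;
    the weights are illustrative placeholders, not the paper's settings)."""
    return l_obs + w_cons * l_cons_masked + w_reg * l_reg
```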

Progressive camera pose generation.

The difficulty of accurate warping increases the further the target view is from the source view, which means that sampling distant camera poses from the very beginning of training may negatively affect our model. Therefore, we first generate camera poses near the source views, then progressively further away as training proceeds. We sample a noise value uniformly from the interval $[-\beta, +\beta]$ and add it to the original Euler rotation angles of the input view poses, with the parameter $\beta$ growing linearly from 3 to 9 degrees over the course of optimization. This design choice can be intuitively understood as stabilizing locations near observed viewpoints at the start and propagating this regularization to further locations, where warping becomes progressively more difficult.
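A minimal sketch of this progressive pose perturbation is given below; the linear schedule and function name are assumptions consistent with the description above.

```python
import numpy as np

def sample_unseen_pose(base_euler_deg, step, max_steps, beta_min=3.0, beta_max=9.0):
    """Perturb an input view's Euler angles with uniform noise whose range grows
    linearly from +-3 to +-9 degrees over training (sketch; names are illustrative).

    base_euler_deg: (3,) rotation angles of an input view, in degrees.
    """
    beta = beta_min + (beta_max - beta_min) * min(step / max_steps, 1.0)
    noise = np.random.uniform(-beta, beta, size=3)  # uniform noise in [-beta, +beta]
    return base_euler_deg + noise
```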

Positional encoding frequency annealing.

We find that most of the artifacts are high-frequency occlusions that fill the space between the scene and the camera. This behavior can be effectively suppressed by constraining the order of the Fourier positional encoding (Tancik et al., 2020) to low dimensions. For this reason, we adopt the coarse-to-fine frequency annealing strategy previously used by (Park et al., 2021) to regularize our optimization. This strategy forces our network to first optimize coarse, low-frequency details, where self-occlusions and fine features are minimized, easing the difficulty of the warping process in the early stages of training. Following (Park et al., 2021), the annealing schedule is $\alpha(t)=mt/K$, with $m$ the number of encoding frequencies and $t$ the iteration step, and we set the hyper-parameter $K$ to $15k$.
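The annealing can be sketched as a frequency window applied to the positional encoding, following the cosine window of (Park et al., 2021); the encoding convention (sin/cos of $2^k \pi x$) is one common choice and may differ in detail from the actual implementation.

```python
import numpy as np

def annealed_positional_encoding(x, num_freqs, step, K=15000):
    """Coarse-to-fine positional encoding: each frequency band k is faded in with a
    cosine window as alpha(t) = m * t / K sweeps past it (sketch).

    x: (..., D) input coordinates, num_freqs: number of encoding frequencies m.
    """
    alpha = num_freqs * step / K
    feats = [x]
    for k in range(num_freqs):
        # Window weight in [0, 1]: 0 before band k is enabled, 1 once fully active.
        w = 0.5 * (1.0 - np.cos(np.pi * np.clip(alpha - k, 0.0, 1.0)))
        for fn in (np.sin, np.cos):
            feats.append(w * fn((2.0 ** k) * np.pi * x))
    return np.concatenate(feats, axis=-1)
```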

5 Experiments

5.1 Experimental Settings

Table 1: Quantitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019) datasets. Each block reports PSNR↑ / SSIM↑ / LPIPS↓ / Avg.↓.

Methods                          | NeRF-Synthetic                  | LLFF
NeRF (Mildenhall et al., 2020)   | 14.73 / 0.734 / 0.451 / 0.199   | 13.34 / 0.373 / 0.451 / 0.255
mip-NeRF (Barron et al., 2021)   | 17.71 / 0.798 / 0.745 / 0.178   | 14.62 / 0.351 / 0.495 / 0.246
DietNeRF (Jain et al., 2021)     | 16.06 / 0.793 / 0.306 / 0.151   | 14.94 / 0.370 / 0.496 / 0.232
InfoNeRF (Kim et al., 2022)      | 18.65 / 0.811 / 0.230 / 0.111   | 14.37 / 0.349 / 0.457 / 0.238
RegNeRF (Niemeyer et al., 2022)  | 18.01 / 0.842 / 0.352 / 0.132   | 19.08 / 0.587 / 0.336 / 0.146
GeCoNeRF (Ours)                  | 19.23 / 0.866 / 0.201 / 0.096   | 18.77 / 0.596 / 0.338 / 0.145
Figure 6: Qualitative results on LLFF (Mildenhall et al., 2019). Panels: (a) Ground-truth, (b) mip-NeRF, (c) mip-NeRF (D), (d) GeCoNeRF, (e) GeCoNeRF (D). Comparison with the baseline mip-NeRF shows that our model learns coherent depth and geometry in the extremely sparse 3-view setting.

Baselines.

We use mip-NeRF (Barron et al., 2021) as our backbone. We compare against the baseline and several state-of-the-art few-shot NeRF models: InfoNeRF (Kim et al., 2022), DietNeRF (Jain et al., 2021), and RegNeRF (Niemeyer et al., 2022). We provide implementation details in the appendix.

Datasets and metrics.

We evaluate our model on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019). NeRF-Synthetic is a realistically rendered 360° synthetic dataset comprising 8 scenes. We randomly sample 3 viewpoints out of the 100 training images in each scene and use 200 testing images for evaluation. We also conduct experiments on the LLFF benchmark dataset, which consists of real-life forward-facing scenes. Following RegNeRF (Niemeyer et al., 2022), we follow the standard protocol, selecting the test set from every 8th image and selecting 3 reference views from the remaining images. We quantify novel view synthesis quality using PSNR, the Structural Similarity Index Measure (SSIM) (Wang et al., 2004), the LPIPS perceptual metric (Zhang et al., 2018), and the average error metric introduced in (Barron et al., 2021), reporting the mean value of each metric over all scenes in each dataset.

5.2 Comparisons

Qualitative comparisons.

Qualitative comparison results in Figures 5 and 6 demonstrate that our model shows superior performance to the baseline mip-NeRF (Barron et al., 2021) and the previous state-of-the-art model, RegNeRF (Niemeyer et al., 2022), in 3-view settings. We observe that our warping-based consistency enables GeCoNeRF to capture fine details that mip-NeRF and RegNeRF struggle to capture in the same sparse-view scenarios, as demonstrated in the mic scene. Our method also displays higher stability in rendering smooth surfaces and reducing background artifacts in comparison to previous models, as shown in the results of the materials scene. We argue that these results demonstrate how our method, through the generation of warped pseudo-ground-truth patches, provides the model with local, scene-specific regularization that aids recovery of fine details, which previous few-shot NeRF models with their global, generalized priors were unable to accomplish.

Quantitative comparisons.

Comparisons in Table 1 show that our model achieves competitive results on the LLFF dataset, with a large PSNR increase over the mip-NeRF baseline and performance competitive with RegNeRF. We see that our warping-based consistency modeling successfully prevents overfitting and artifacts, which allows our model to perform better quantitatively.

Figure 7: Qualitative ablation. Panels: (a) Baseline, (b) (a) + $\mathcal{L}_{\mathrm{cons}}$, (c) (b) + $M$ (O. mask), (d) (c) + Progressive, (e) (d) + $\mathcal{L}_{\mathrm{reg}}$ (Ours). Our qualitative ablation results on the Horns scene show the contribution of each module to the performance of our model in the 3-view scenario.

5.3 Ablation Study

We validate our design choices by performing an ablation study on LLFF (Mildenhall et al., 2019) dataset.

Table 2: Ablation study.

Components                                    | PSNR↑ | SSIM↑ | LPIPS↓ | Avg.↓
(a) Baseline                                  | 14.62 | 0.351 | 0.495  | 0.246
(b) (a) + $\mathcal{L}_{\mathrm{cons}}$       | 18.10 | 0.529 | 0.408  | 0.164
(c) (b) + $M$ (O. mask)                       | 18.24 | 0.535 | 0.379  | 0.159
(d) (c) + Progressive                         | 18.46 | 0.552 | 0.349  | 0.151
(e) (d) + $\mathcal{L}_{\mathrm{reg}}$ (Ours) | 18.55 | 0.592 | 0.340  | 0.150

Feature-level consistency loss.

We observe that without the consistency loss $\mathcal{L}_{\mathrm{cons}}$, our model suffers both a quantitative and qualitative decrease in reconstruction fidelity, as verified by the incoherent geometry in image (a) of Figure 7. The absence of unseen-view consistency modeling destabilizes the model, resulting in divergent behavior.

Occlusion mask.

We observe that the addition of the occlusion mask $M$ improves overall appearance as well as geometry, as shown in image (c) of Figure 7. Its absence results in broken geometry throughout the scene, as demonstrated in (b). Erroneous artifacts pertaining to projections from different viewpoints were detected in multiple scenes, resulting in lower quantitative values.

Progressive training strategies.

In Table 3, we justify our progressive training strategies with additional experiments on the NeRF-Synthetic dataset, while in the main ablation we ablate only progressive annealing. For pose generation, we sample pose angles from the large interval from the beginning, instead of slowly growing the interval. For positional encoding, we replace progressive annealing with the naïve positional encoding used in NeRF. We observe that their absence causes destabilization of the model and degradation in appearance, respectively.

Table 3: Progressive training ablation.

Components        | PSNR↑ | SSIM↑ | LPIPS↓ | Avg.↓
w/o prog. anneal  | 18.50 | 0.852 | 0.781  | 0.161
w/o prog. pose    | 16.96 | 0.799 | 0.811  | 0.194
w/o both          | 17.04 | 0.788 | 0.823  | 0.197
GeCoNeRF (Ours)   | 19.23 | 0.866 | 0.723  | 0.148

Edge-aware disparity regularization.

We observe that the inclusion of the edge-aware disparity regularization $\mathcal{L}_{\mathrm{reg}}$ refines the given geometry, as shown in image (e) of Figure 7. By applying $\mathcal{L}_{\mathrm{reg}}$, we see increased smoothness in geometry throughout the scene. This loss contributes to the removal of erroneous artifacts, achieving better results both qualitatively and quantitatively, as shown in Table 2.

Feature-level loss vs. pixel-level loss.

In Table 4, we conduct a quantitative comparison between the feature-level consistency loss $\mathcal{L}^{M}_{\mathrm{cons}}$ and a pixel-level photometric consistency loss $\mathcal{L}^{M}_{\mathrm{pix}}$, both with occlusion masking. As shown in Figure 8, naïvely applying the pixel-level loss for consistency modeling leads to broken geometry. This phenomenon can be attributed to $\mathcal{L}_{\mathrm{pix}}$ being agnostic to view-dependent specular effects, which the network tries to model by altering or altogether erasing non-Lambertian surfaces.

Table 4: Pixel-level consistency ablation.

Components                                   | PSNR↑ | SSIM↑ | LPIPS↓ | Avg.↓
w/ $\mathcal{L}^{M}_{\mathrm{pix}}$          | 17.98 | 0.528 | 0.431  | 0.165
w/ $\mathcal{L}^{M}_{\mathrm{cons}}$ (Ours)  | 18.55 | 0.592 | 0.340  | 0.150
Figure 8: $\mathcal{L}^{M}_{\mathrm{pix}}$ vs. $\mathcal{L}^{M}_{\mathrm{cons}}$ comparison.

6 Conclusion

We present GeCoNeRF, a novel approach for optimizing Neural Radiance Fields (NeRF) for few-shot novel view synthesis. Inspired by self-supervised monocular depth estimation methods, we regularize geometry by enforcing semantic and structural consistency between rendered and warped images. This approach overcomes the limitations of NeRF with sparse inputs, which otherwise suffers performance degradation from depth ambiguity and numerous artifacts. With the feature consistency loss, we are able to regularize NeRF at unobserved viewpoints and give it a beneficial geometric constraint. The further techniques and training strategies we propose prove to have a stabilizing effect and facilitate optimization of our network. Our experimental evaluation demonstrates our method's competitive results compared to state-of-the-art baselines.

References

  • Attal et al. (2021) Attal, B., Laidlaw, E., Gokaslan, A., Kim, C., Richardt, C., Tompkin, J., and O’Toole, M. Törf: Time-of-flight radiance fields for dynamic scene view synthesis. Advances in neural information processing systems, 34, 2021.
  • Barron et al. (2021) Barron, J. T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., and Srinivasan, P. P. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
  • Bortolon et al. (2022) Bortolon, M., Del Bue, A., and Poiesi, F. Data augmentation for nerf: a geometric consistent solution based on view morphing, 2022. URL https://arxiv.org/abs/2210.04214.
  • Chen et al. (2021) Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., and Su, H. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  14124–14133, 2021.
  • Chen et al. (2022) Chen, Z., Wang, C., Guo, Y., and Zhang, S.-H. Structnerf: Neural radiance fields for indoor scenes with structural hints. ArXiv, abs/2209.05277, 2022.
  • Chibane et al. (2021) Chibane, J., Bansal, A., Lazova, V., and Pons-Moll, G. Stereo radiance fields (srf): Learning view synthesis for sparse views of novel scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.  7911–7920, 2021.
  • Darmon et al. (2022) Darmon, F., Bascle, B., Devaux, J., Monasse, P., and Aubry, M. Improving neural implicit surfaces geometry with patch warping. 2022.
  • Deng et al. (2022) Deng, K., Liu, A., Zhu, J.-Y., and Ramanan, D. Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022.
  • Deng et al. (2021) Deng, Y., Yang, J., Xiang, J., and Tong, X. Gram: Generative radiance manifolds for 3d-aware image generation. arXiv preprint arXiv:2112.08867, 2021.
  • Fu et al. (2022) Fu, Q., Xu, Q., Ong, Y.-S., and Tao, W. Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction, 2022. URL https://arxiv.org/abs/2205.15848.
  • Garg et al. (2016) Garg, R., Bg, V. K., Carneiro, G., and Reid, I. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European conference on computer vision, pp.  740–756. Springer, 2016.
  • Godard et al. (2017) Godard, C., Mac Aodha, O., and Brostow, G. J. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017.
  • Hedman et al. (2021) Hedman, P., Srinivasan, P. P., Mildenhall, B., Barron, J. T., and Debevec, P. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  5875–5884, 2021.
  • Huang et al. (2021) Huang, B., Yi, H., Huang, C., He, Y., Liu, J., and Liu, X. M3vsnet: Unsupervised multi-metric multi-view stereo network. In 2021 IEEE International Conference on Image Processing (ICIP), pp.  3163–3167, 2021. doi: 10.1109/ICIP42928.2021.9506469.
  • Jaderberg et al. (2015) Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. Advances in neural information processing systems, 28, 2015.
  • Jain et al. (2021) Jain, A., Tancik, M., and Abbeel, P. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  5885–5894, 2021.
  • Jensen et al. (2014) Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., and Aanæs, H. Large scale multi-view stereopsis evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  406–413, 2014.
  • Jeong et al. (2021) Jeong, Y., Ahn, S., Choy, C., Anandkumar, A., Cho, M., and Park, J. Self-calibrating neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  5846–5854, 2021.
  • Johnson et al. (2016) Johnson, J., Alahi, A., and Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
  • Khot et al. (2019) Khot, T., Agrawal, S., Tulsiani, S., Mertz, C., Lucey, S., and Hebert, M. Learning unsupervised multi-view stereopsis via robust photometric consistency. arXiv preprint arXiv:1905.02706, 2019.
  • Kim et al. (2022) Kim, M., Seo, S., and Han, B. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
  • Mildenhall et al. (2019) Mildenhall, B., Srinivasan, P. P., Ortiz-Cayon, R., Kalantari, N. K., Ramamoorthi, R., Ng, R., and Kar, A. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 2019.
  • Mildenhall et al. (2020) Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
  • Müller et al. (2022) Müller, T., Evans, A., Schied, C., and Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. arXiv preprint arXiv:2201.05989, 2022.
  • Niemeyer & Geiger (2021) Niemeyer, M. and Geiger, A. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.  11453–11464, 2021.
  • Niemeyer et al. (2022) Niemeyer, M., Barron, J. T., Mildenhall, B., Sajjadi, M. S. M., Geiger, A., and Radwan, N. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
  • Park et al. (2021) Park, K., Sinha, U., Barron, J. T., Bouaziz, S., Goldman, D. B., Seitz, S. M., and Martin-Brualla, R. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  5865–5874, 2021.
  • Pumarola et al. (2021) Pumarola, A., Corona, E., Pons-Moll, G., and Moreno-Noguer, F. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.  10318–10327, 2021.
  • Reiser et al. (2021) Reiser, C., Peng, S., Liao, Y., and Geiger, A. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  14335–14345, 2021.
  • Roessle et al. (2021) Roessle, B., Barron, J. T., Mildenhall, B., Srinivasan, P. P., and Nießner, M. Dense depth priors for neural radiance fields from sparse input views. arXiv preprint arXiv:2112.03288, 2021.
  • Schwarz et al. (2020) Schwarz, K., Liao, Y., Niemeyer, M., and Geiger, A. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems, 33:20154–20166, 2020.
  • Simonyan & Zisserman (2014) Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.
  • Tancik et al. (2020) Tancik, M., Srinivasan, P. P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J. T., and Ng, R. Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS, 2020.
  • Tretschk et al. (2021) Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Lassner, C., and Theobalt, C. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  12959–12970, 2021.
  • Wang et al. (2004) Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004. doi: 10.1109/TIP.2003.819861.
  • Xu et al. (2021) Xu, X., Pan, X., Lin, D., and Dai, B. Generative occupancy fields for 3d surface-aware image synthesis. Advances in Neural Information Processing Systems, 34, 2021.
  • Yu et al. (2021a) Yu, A., Li, R., Tancik, M., Li, H., Ng, R., and Kanazawa, A. PlenOctrees for real-time rendering of neural radiance fields. In ICCV, 2021a.
  • Yu et al. (2021b) Yu, A., Ye, V., Tancik, M., and Kanazawa, A. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.  4578–4587, 2021b.
  • Zhan et al. (2018) Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., and Reid, I. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  340–349, 2018.
  • Zhang et al. (2022) Zhang, J., Zhang, Y., Fu, H., Zhou, X., Cai, B., Huang, J., Jia, R., Zhao, B., and Tang, X. Ray priors through reprojection: Improving neural radiance fields for novel view extrapolation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.  18355–18365, 2022. doi: 10.1109/CVPR52688.2022.01783.
  • Zhang et al. (2020) Zhang, K., Riegler, G., Snavely, N., and Koltun, V. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
  • Zhang et al. (2018) Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
  • Zhou et al. (2017) Zhou, T., Brown, M., Snavely, N., and Lowe, D. G. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  1851–1858, 2017.