
1 Hong Kong University of Science and Technology
  {lzhouai,zluoag,mzhen,tshenaa,zhuangbr,quan}@cse.ust.hk
2 Everest Innovation Technology
  {sli,fangtian}@altizure.com

Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction

Lei Zhou1, Zixin Luo1, Mingmin Zhen1, Tianwei Shen1, Shiwei Li2, Zhuofei Huang1, Tian Fang2, Long Quan1
Abstract

Current bundle adjustment solvers such as the Levenberg-Marquardt (LM) algorithm are limited by the bottleneck in solving the Reduced Camera System (RCS), whose dimension is proportional to the camera number. When the problem is scaled up, this step is neither efficient in computation nor manageable for a single compute node. In this work, we propose a stochastic bundle adjustment algorithm which seeks to decompose the RCS approximately inside the LM iterations to improve efficiency and scalability. It first reformulates the quadratic programming problem of an LM iteration based on the clustering of the visibility graph by introducing equality constraints across clusters. Then, we propose to relax it into a chance constrained problem and solve it through a sampled convex program. The relaxation is intended to eliminate the interdependence between clusters embodied by the constraints, so that a large RCS can be decomposed into independent linear sub-problems. Numerical experiments on unordered Internet image sets and sequential SLAM image sets, as well as distributed experiments on large-scale datasets, have demonstrated the high efficiency and scalability of the proposed approach. Code is released at https://github.com/zlthinker/STBA.

Keywords:
Stochastic bundle adjustment, Clustering, 3D reconstruction

1 Introduction

Bundle Adjustment (BA) is typically formulated as a nonlinear least squares problem to refine the parameters of cameras and 3D points. It is usually addressed by the Levenberg-Marquardt (LM) algorithm, where a linear equation system called the Reduced Camera System (RCS) [26, 16] must be solved in each iteration. However, when the problem is scaled up, solving the RCS becomes a bottleneck that takes a major portion of the computation time (see the first bar of Fig. 1). The dimension of the RCS is proportional to the camera number, and thus an increase in cameras ramps up the computation and memory consumption, although methods have been proposed to use efficient linear solvers [3, 24, 14, 22, 38] and to economize on matrix manipulations [3, 23, 38]. Furthermore, unlike other operations such as Jacobian or gradient evaluations, this step is indivisible, making it hard to fit BA for parallel and distributed computing.

Figure 1: Per-iteration time of bundle adjustment w.r.t. the compute node number. The Levenberg-Marquardt (LM) algorithm is limited by the bottleneck when solving the reduced camera system (RCS). Our STBA splits the RCS into independent sub-problems, which achieves a speedup on a single-threaded compute node. Besides, STBA allows parallel and distributed computing with multiple compute nodes which further improves the efficiency and scalability.

In order to accomplish efficient and scalable reconstructions, clustering has been adopted as a useful practice to decompose a large problem into smaller, more manageable ones. For example, a number of SfM approaches have been developed in a divide-and-conquer fashion, which first reconstruct the partitioned sub-maps independently and then merge the partial reconstructions together [32, 43, 44, 18]. Although these methods are able to produce the initial sparse reconstructions in an efficient and scalable way, a full bundle adjustment is still indispensable to optimize the camera and point parameters globally. Therefore, in the context of BA, the methods of [17, 40] distribute the objectives of BA to the split sub-models and optimize the sum of the objectives under distributed optimization frameworks [11, 6], which, however, involves extra costly inner iterations and thus over-complicates the optimization.

In this work, we follow the direction of exploiting clustering methods and push forward the investigation of how to integrate a clustering scheme into the BA problem systematically. Instead of applying a fixed, one-time partition at the pre-processing step, we derive a stochastic clustering-based strategy within each LM iteration so as to decompose the RCS for efficiency and scalability.

  • First, we reformulate the quadratic programming problem of an LM iteration based on the clustering of the visibility graph. Such a formulation splits the problem into the most elementary structures, but meanwhile introduces additional equality constraints and raises the computational cost.

  • Second, in order to make the above problem efficiently solvable, we propose to relax the constraints into chance constraints [25] and then solve the problem with a sampled convex program [8]. The approach helps to eliminate the interdependence between different clusters embodied by the constraints through randomized constraint reduction, which hence decomposes the RCS into independent linear sub-problems related to the clusters. In this way, an approximate step can be achieved efficiently.

  • Third, we present an add-on technique which helps to correct the approximate steps towards the steepest descent direction within a small trust region to improve the convergence.

Due to the stochastic process induced by the sampled convex program, we term our algorithm STochastic Bundle Adjustment (STBA), which brings the following tangible advantages. First, solving the split RCS in place of the original one has achieved a great speedup thanks to the reduced complexity. Second, the solving process can be made parallel and scalable to accommodate the growth of camera numbers, since all the sub-steps of a BA iteration can be decomposed. In Fig. 1, we visualize how the running time is reduced by distributing STBA over multiple compute nodes.

2 Related Works

Bundle adjustment (BA) is an indispensable step of 3D reconstruction [42, 29, 28, 41, 44, 43]. It is typically solved by the Levenberg-Marquardt (LM) algorithm [27], which approximately linearizes the error functions inside a local trust region and then solves a linear normal equation for an update step. SBA [26] first simplified the normal equation into a reduced camera system (RCS) through the Schur complement by taking advantage of the special problem structure. After this, efforts were dedicated to solving the RCS faster in either exact or inexact ways. The exact solvers apply Cholesky factorization to the reduced camera matrix $\mathbf{S}$, while exploiting variable ordering [4, 13] and supernodal methods [33, 13] for acceleration. The inexact solvers are based on the Conjugate Gradient (CG) method [20] coupled with various preconditioners [3, 21, 24], which attain inexact solutions with better efficiency. Apart from the algorithmic improvements, [23, 38] presented well-optimized implementations of the LM solver that save memory usage and exploit CPU and GPU parallelism. However, despite the efforts above, solving a large and indivisible RCS will increasingly become the bottleneck of a BA solver when the problem is scaled up.

In order to make large-scale reconstruction tractable, clustering methods were initially introduced into the structure from motion (SfM) domain. Basically, a divide-and-conquer strategy is applied, which first partitions a large scene into multiple sub-maps and then merges the partial reconstructions globally [32, 43, 44, 18]. In the formulation of these approaches, a reduced optimization problem, rather than the original BA problem, is addressed, thus leading to a sub-optimal result. For example, [32, 44] factored out the internal variables inside the sub-maps and [43] registered all the cameras with motion averaging [10] without the involvement of points. In the realm of BA, [24] derived a block-diagonal preconditioner for the RCS from the clustering of cameras, but the clustering did not help to decompose the problem as it does in the SfM algorithms [32, 43, 44, 18]. Instead, [17, 40] proposed to apply distributed optimization frameworks like the Douglas-Rachford method [11] and ADMM [6] to the empirically clustered BA problems towards large scales. Although built upon a theoretical foundation, these methods require costly inner iterations and introduce a plethora of latent parameters during optimization.

3 Bundle Adjustment Revisited

In this section, we first revisit the bundle adjustment problem and its LM solution to give the necessary preliminaries and terminologies. Henceforth, vectors and matrices appear in boldface and $\|\cdot\|$ denotes the L2 norm.

A bundle adjustment problem is built upon a bipartite visibility graph $\mathcal{G}=(\mathcal{C}\cup\mathcal{P},\mathcal{E})$. Here, $\mathcal{C}=\{\mathbf{c}_{i}\in\mathbb{R}^{d}\}_{i=1}^{m}$ denotes the set of $m$ cameras parameterized by $d$-dimensional vectors, $\mathcal{P}=\{\mathbf{p}_{i}\in\mathbb{R}^{3}\}_{i=1}^{n}$ denotes the set of $n$ 3D points, and $\mathcal{E}=\{\mathbf{q}_{i}\in\mathbb{R}^{2}\}_{i=1}^{q}$ denotes the set of $q$ projections. The objective is to minimize $F(\mathbf{x})=\|\mathbf{f}(\mathbf{x})\|^{2}$, where $\mathbf{f}$ denotes a $2q$-dimensional vector of reprojection errors and $\mathbf{x}$ concatenates camera parameters $\mathbf{c}\in\mathbb{R}^{dm}$ and point parameters $\mathbf{p}\in\mathbb{R}^{3n}$, i.e., $\mathbf{x}=[\mathbf{c}^{T},\mathbf{p}^{T}]^{T}$.

The LM algorithm achieves an update step $\Delta\mathbf{x}$ at each iteration by linearizing $\mathbf{f}(\mathbf{x})$ as $\mathbf{J}(\mathbf{x})\Delta\mathbf{x}+\mathbf{f}(\mathbf{x})$ in a trust region around $\mathbf{x}$, where $\mathbf{J}(\mathbf{x})=\nabla_{\mathbf{x}}\mathbf{f}(\mathbf{x})=[\mathbf{J}(\mathbf{c}),\mathbf{J}(\mathbf{p})]$ is the Jacobian matrix. Then the minimization of $F(\mathbf{x})$ is turned into

\min_{\Delta\mathbf{x}}\|\mathbf{J}(\mathbf{x})\Delta\mathbf{x}+\mathbf{f}(\mathbf{x})\|^{2}+\lambda\|\mathbf{D}\Delta\mathbf{x}\|^{2},   (1)

whose solution comes from the normal equation below

\left[\begin{matrix}\mathbf{J}(\mathbf{x})\\ \sqrt{\lambda}\mathbf{D}\end{matrix}\right]\Delta\mathbf{x}=\left[\begin{matrix}-\mathbf{f}(\mathbf{x})\\ \mathbf{0}\end{matrix}\right],   (2)

where $\lambda>0$ is the damping parameter and typically $\mathbf{D}=\textrm{diag}(\mathbf{J}^{T}(\mathbf{x})\mathbf{J}(\mathbf{x}))^{\frac{1}{2}}$. For notational simplicity, we write $\mathbf{J}\triangleq\mathbf{J}(\mathbf{x})$, $\mathbf{J_{c}}\triangleq\mathbf{J}(\mathbf{c})$, $\mathbf{J_{p}}\triangleq\mathbf{J}(\mathbf{p})$ and $\mathbf{f}\triangleq\mathbf{f}(\mathbf{x})$. After multiplying $[\mathbf{J}^{T},\sqrt{\lambda}\mathbf{D}^{T}]$ at both sides of Eq. 2, we have

(\mathbf{J}^{T}\mathbf{J}+\lambda\mathbf{D}^{T}\mathbf{D})\Delta\mathbf{x}=-\mathbf{J}^{T}\mathbf{f},   (3)

which can be re-written in the form

\left[\begin{matrix}\mathbf{B}&\mathbf{E}\\ \mathbf{E}^{T}&\mathbf{C}\end{matrix}\right]\left[\begin{matrix}\Delta\mathbf{c}\\ \Delta\mathbf{p}\end{matrix}\right]=\left[\begin{matrix}\mathbf{v}\\ \mathbf{w}\end{matrix}\right],   (4)

where $\mathbf{B}=\mathbf{J}_{\mathbf{c}}^{T}\mathbf{J_{c}}+\lambda\textrm{diag}(\mathbf{J}_{\mathbf{c}}^{T}\mathbf{J_{c}})$, $\mathbf{C}=\mathbf{J}_{\mathbf{p}}^{T}\mathbf{J_{p}}+\lambda\textrm{diag}(\mathbf{J}_{\mathbf{p}}^{T}\mathbf{J_{p}})$, $\mathbf{E}=\mathbf{J}_{\mathbf{c}}^{T}\mathbf{J_{p}}$, $\mathbf{v}=-\mathbf{J}_{\mathbf{c}}^{T}\mathbf{f}$, and $\mathbf{w}=-\mathbf{J}_{\mathbf{p}}^{T}\mathbf{f}$. Eq. 4 can be simplified by the Schur complement [26], which leads to

\mathbf{S}\Delta\mathbf{c}=\mathbf{v}-\mathbf{E}\mathbf{C}^{-1}\mathbf{w},   (5)
\Delta\mathbf{p}=\mathbf{C}^{-1}(\mathbf{w}-\mathbf{E}^{T}\Delta\mathbf{c}),   (6)

where $\mathbf{S}=\mathbf{B}-\mathbf{E}\mathbf{C}^{-1}\mathbf{E}^{T}$ is the Schur complement of $\mathbf{C}$. Here, $\mathbf{S}$, known as the reduced camera matrix, is a block-structured symmetric positive definite matrix. The block $\mathbf{S}_{ij}\in\mathbb{R}^{d\times d}$ is nonzero iff cameras $\mathbf{c}_{i}$ and $\mathbf{c}_{j}$ observe at least one common point. Although a variety of sparse Cholesky factorization techniques [4, 13, 33] and preconditioned conjugate gradient methods [3, 21, 24] have been developed to solve the reduced camera system (RCS) of Eq. 5, it can still be prohibitive when the camera number $m$ grows large.
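For concreteness, the following is a minimal dense sketch (in NumPy, for illustration only) of one damped step computed via the Schur complement as in Eqs. 3-6; practical solvers exploit the block sparsity of $\mathbf{B}$, $\mathbf{C}$ and $\mathbf{S}$ instead of forming dense matrices and inverses.

import numpy as np

def lm_step_schur(J_c, J_p, f, lam):
    # Damped normal equations of Eq. 3 written in block form (Eq. 4)
    B = J_c.T @ J_c + lam * np.diag(np.diag(J_c.T @ J_c))
    C = J_p.T @ J_p + lam * np.diag(np.diag(J_p.T @ J_p))
    E = J_c.T @ J_p
    v = -J_c.T @ f
    w = -J_p.T @ f
    # C is block-diagonal (3x3 blocks) in practice and cheap to invert
    C_inv = np.linalg.inv(C)
    S = B - E @ C_inv @ E.T                      # reduced camera matrix
    dc = np.linalg.solve(S, v - E @ C_inv @ w)   # reduced camera system, Eq. 5
    dp = C_inv @ (w - E.T @ dc)                  # point back-substitution, Eq. 6
    return dc, dp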

4 Stochastic Bundle Adjustment

In this section, we present our stochastic bundle adjustment (STBA) method, which decomposes the RCS into clusters inside the LM iterations. In Sec. 4.1, we first reformulate problem (1) based on the clustering of the visibility graph $\mathcal{G}$, yet subject to additional equality constraints. Next, in Sec. 4.2, we apply chance constrained relaxation to the reformulation and solve it by a sampled convex program [8, 9], which decomposes the RCS into cluster-related linear sub-problems and yields an approximate STBA step efficiently. Third, a steepest descent correction step is proposed in Sec. 4.3 to remedy the approximation error of the STBA steps. Finally, in Sec. 4.4, we present a practical implementation of the random constraint sampler required by the chance constrained relaxation.

Figure 2: Illustration of point splitting/binding over the visibility graph (top) and the corresponding structure of the reduced camera matrix (bottom). Point splitting helps to reshape the reduced camera matrix into the block-diagonal structure, while point binding does the inverse.

4.1 Clustering Based Reformulation

In contrast to the previous methods [32, 43, 44, 18, 17, 40] that partition the problem in the pre-processing stage, we present a reformulation of problem (1) that decomposes the RCS into clusters inside the LM iterations.

Particularly, we consider the most general case where every single camera forms a cluster. In order to preserve all the projections $\mathcal{E}$, we apply point splitting to the physical points, as shown in Fig. 2. For a physical point $\mathbf{p}_{i}$ viewed by $v_{i}$ cameras, we split it into $v_{i}$ virtual points $\{\mathbf{p}_{i}^{j}\}_{j=1}^{v_{i}}$, each assigned to one cluster. Such a clustering reformulates problem (1) equivalently as the new constrained quadratic programming (QP) problem below:

\min_{\Delta\mathbf{x}^{\prime}}\;\|\mathbf{J}^{\prime}\Delta\mathbf{x}^{\prime}+\mathbf{f}\|^{2}+\lambda\|\mathbf{D}^{\prime}\Delta\mathbf{x}^{\prime}\|^{2},   (7)
\textrm{s.t.}\;\;\mathbf{A}\Delta\mathbf{x}^{\prime}=\mathbf{0}.   (8)

Here, $\Delta\mathbf{x}^{\prime}=[\Delta\mathbf{c}^{T},\Delta\mathbf{p}^{\prime T}]^{T}$ is an expansion of $\Delta\mathbf{x}$ which considers the update steps for all the virtual points, and so is the Jacobian $\mathbf{J}^{\prime}=[\mathbf{J}_{\mathbf{c}},\mathbf{J}_{\mathbf{p}}^{\prime}]$. Accordingly, $\mathbf{D}^{\prime}=\textrm{diag}(\mathbf{J}^{\prime T}\mathbf{J}^{\prime})^{\frac{1}{2}}$. The noteworthy distinction between problems (7) and (1) is that the new equality constraints of Eq. 8 are imposed to enforce that the steps of the same point in different clusters are identical. For example, $\Delta\mathbf{p}_{i}^{s}=\Delta\mathbf{p}_{i}^{t}$ $(\forall s,t\in\{1,...,v_{i}\})$ for point $\mathbf{p}_{i}$, and the corresponding $j$-th row of $\mathbf{A}$ has the form $\mathbf{a}_{j}=[0,...,1,...,-1,...,0]$. Since a point $\mathbf{p}_{i}$ introduces $v_{i}-1$ equations, $\mathbf{A}$ has $r=\sum_{i=1}^{n}(v_{i}-1)$ rows. Besides, $\mathbf{A}$ has full row rank, because its rows, each of which defines a unique equality constraint, are linearly independent.
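As an illustration (the data layout and the chaining of consecutive virtual points are our own choices, not prescribed above), the sketch below assembles such a constraint matrix at virtual-point granularity, with one column per virtual point step and one row per equality constraint, so that the row count equals $r=\sum_{i=1}^{n}(v_{i}-1)$.

import numpy as np

def build_constraint_matrix(visibility):
    # visibility[i] lists the cameras observing physical point i; every
    # observation spawns one virtual point. Consecutive virtual points of
    # the same physical point are chained, which is one valid choice of
    # the v_i - 1 independent constraints; each row is [0,...,1,...,-1,...,0].
    n_virtual = sum(len(cams) for cams in visibility)
    r = sum(len(cams) - 1 for cams in visibility)
    A = np.zeros((r, n_virtual))
    row, col = 0, 0
    for cams in visibility:
        v = len(cams)
        for j in range(v - 1):
            A[row, col + j] = 1.0
            A[row, col + j + 1] = -1.0
            row += 1
        col += v
    return A

# e.g. two points seen by 3 and 2 cameras -> 5 virtual points, r = 3 constraints
A = build_constraint_matrix([[0, 1, 2], [1, 2]])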

The constrained QP problem above can be easily solved by Lagrangian duality, which turns the problem into

\mathbf{H}_{\lambda}\Delta\mathbf{x}^{\prime}=-(\mathbf{J}^{\prime T}\mathbf{f}+\mathbf{A}^{T}\boldsymbol{\nu}),   (9)

where $\mathbf{H}_{\lambda}=\mathbf{J}^{\prime T}\mathbf{J}^{\prime}+\lambda\mathbf{D}^{\prime T}\mathbf{D}^{\prime}$ and $\boldsymbol{\nu}=-(\mathbf{A}\mathbf{H}_{\lambda}^{-1}\mathbf{A}^{T})^{-1}\mathbf{A}\mathbf{H}_{\lambda}^{-1}\mathbf{J}^{\prime T}\mathbf{f}$ are the Lagrange multipliers. Eq. 9 has a similar form to Eq. 3, but carries an additional term $\mathbf{A}^{T}\boldsymbol{\nu}$ on the right-hand side. While $\mathbf{J}^{\prime T}\mathbf{f}$ includes the gradients w.r.t. the independent virtual points, $\mathbf{A}^{T}\boldsymbol{\nu}$ acts as a correction term to ensure that the solution complies with the constraints of Eq. 8.

The ultimate benefit of the clustering-based reformulation is revealed below. By likewise applying the Schur complement to Eq. 9, we have

\mathbf{S}^{\prime}\Delta\mathbf{c}=\mathbf{v}^{\prime}-\mathbf{E}^{\prime}\mathbf{C}^{\prime-1}\mathbf{w}^{\prime},   (10)

where $\mathbf{E}^{\prime}=\mathbf{J}_{\mathbf{c}}^{T}\mathbf{J}_{\mathbf{p}}^{\prime}$, $\mathbf{C}^{\prime}=\mathbf{J}_{\mathbf{p}}^{\prime T}\mathbf{J}_{\mathbf{p}}^{\prime}+\lambda\textrm{diag}(\mathbf{J}_{\mathbf{p}}^{\prime T}\mathbf{J}_{\mathbf{p}}^{\prime})$, $\mathbf{S}^{\prime}=\mathbf{B}-\mathbf{E}^{\prime}\mathbf{C}^{\prime-1}\mathbf{E}^{\prime T}$, and $[\mathbf{v}^{\prime T},\mathbf{w}^{\prime T}]^{T}=-(\mathbf{J}^{\prime T}\mathbf{f}+\mathbf{A}^{T}\boldsymbol{\nu})$. Because no two cameras share any common virtual points, $\mathbf{S}^{\prime}$ now becomes a block-diagonal matrix, i.e., $\mathbf{S}^{\prime}_{ij}=\mathbf{0},\forall i\neq j$. Then Eq. 10 can be equivalently decomposed into $m$ elementary linear systems, each corresponding to one camera.

4.2 Chance Constrained Relaxation

The major problem with the clustering-based reformulation above is the excessive cost of evaluating the Lagrange multipliers $\boldsymbol{\nu}$, because it requires the evaluation of $\mathbf{H}_{\lambda}^{-1}$. In order to make the problem practically solvable, we would like to eliminate the need to evaluate the correction term $\mathbf{A}^{T}\boldsymbol{\nu}$ of Eq. 9 by relaxing problem (7).

We multiply a random binary variable $\theta_{i}$ with the $i$-th equality constraint of Eq. 8, which results in $\theta_{i}\mathbf{a}_{i}\Delta\mathbf{x}^{\prime}=0$. This can be interpreted as follows: if $\theta_{i}=1$, the constraint $\mathbf{a}_{i}\Delta\mathbf{x}^{\prime}=0$ must be satisfied; otherwise, the constraint is allowed to be violated. In this way, Eq. 8 is relaxed into chance constraints [25], which leads to

\min_{\Delta\mathbf{x}^{\prime}}\;\|\mathbf{J}^{\prime}\Delta\mathbf{x}^{\prime}+\mathbf{f}\|^{2}+\lambda\|\mathbf{D}^{\prime}\Delta\mathbf{x}^{\prime}\|^{2},   (11)
\textrm{s.t.}\;\;\textrm{Prob}(\mathbf{a}_{i}\Delta\mathbf{x}^{\prime}=0)=\textrm{Prob}(\theta_{i}=1)\geq\alpha,\;\;i=1,...,r,   (12)

where $\alpha\in(0,1]$ is a predefined confidence level. It means that, instead of enforcing the hard constraints, we allow them to be satisfied with a probability of at least $\alpha$. The advantage of the chance constrained relaxation is that we can determine the reliability level of the approximation by controlling $\alpha$: the larger $\alpha$ is, the closer the chance constrained problem (11) will be to the original deterministic problem (7). One approach to problem (11) is the sampled convex program [8, 9]. It extracts $N$ independent samples $\Theta(\alpha)=\{\theta_{i}^{(n)}\,|\,i=1,...,r,\;n=1,...,N\}$ with a minimum sampling probability of $\alpha$ for each variable $\theta_{i}^{(n)}$ and replaces the chance constraints (12) with the sampled ones. Below we elaborate on how problem (11) can be solved given the samples $\Theta(\alpha)$.

For a sample $\theta_{i}^{(n)}\in\Theta(\alpha)$, if $\theta_{i}^{(n)}=0$, the equality constraint $\mathbf{a}_{i}\Delta\mathbf{x}^{\prime}=0$ is dropped; if $\theta_{i}^{(n)}=1$, it enforces the equality of the steps of two virtual points, e.g., $\Delta\mathbf{p}_{j}^{s}=\Delta\mathbf{p}_{j}^{t}$. Here the virtual point $\mathbf{p}_{j}^{s}$ belongs to the single-camera cluster of camera $\mathbf{c}_{s}$, and similarly $\mathbf{p}_{j}^{t}$ to $\mathbf{c}_{t}$. Then we merge $\mathbf{p}_{j}^{s}$ and $\mathbf{p}_{j}^{t}$ into one point as shown in Fig. 2, an operation we call point binding, as opposed to the point splitting introduced in Sec. 4.1. On the one hand, point binding makes the Lagrange multiplier $\nu_{i}=0$, because the equality $\Delta\mathbf{p}_{j}^{s}=\Delta\mathbf{p}_{j}^{t}$ always holds. On the other hand, the block $\mathbf{S}^{\prime}_{st}$ of $\mathbf{S}^{\prime}$ (cf. Eq. 10) becomes nonzero, since the merged point is now shared by cameras $\mathbf{c}_{s}$ and $\mathbf{c}_{t}$. After applying point binding to all the virtual points involved in the sampled constraints, i.e., $\{\theta_{i}^{(n)}\in\Theta(\alpha)\,|\,\theta_{i}^{(n)}=1\}$, all the constraints are eliminated and there is no need to evaluate $\boldsymbol{\nu}$ any more. Meanwhile, point binding brings the cameras sharing common points into the same clusters.

Since the cameras in different clusters have no points in common after the point binding, the matrix $\mathbf{S}^{\prime}$ takes a block-diagonal structure which we call cluster-diagonal, as illustrated in Fig. 2: each diagonal block of $\mathbf{S}^{\prime}$ corresponds to a camera cluster. In particular, we can intentionally design the sampler $\Theta(\alpha)$ in order to shape $\mathbf{S}^{\prime}$ into the desired cluster-diagonal structure, as we will present in Sec. 4.4. As a result, this structure of $\mathbf{S}^{\prime}$ still enables the decomposition of Eq. 10 into smaller independent linear systems, each relating to one camera cluster. Provided that there are $l$ clusters, Eq. 10 can be equivalently re-written as

\begin{cases}\mathbf{S}^{\prime}_{1}\Delta\mathbf{c}_{1}=\mathbf{b}_{1},\\ \;\;\;\;\;\;...\\ \mathbf{S}^{\prime}_{l}\Delta\mathbf{c}_{l}=\mathbf{b}_{l},\end{cases}   (13)

where $[\Delta\mathbf{c}_{1}^{T},...,\Delta\mathbf{c}_{l}^{T}]^{T}=\Delta\mathbf{c}$ and $[\mathbf{b}_{1}^{T},...,\mathbf{b}_{l}^{T}]^{T}=\mathbf{v}^{\prime}-\mathbf{E}^{\prime}\mathbf{C}^{\prime-1}\mathbf{w}^{\prime}$. After Eqs. 13 are evaluated, we substitute $\Delta\mathbf{c}$ into Eq. 9 to give

\mathbf{J}_{\mathbf{p}}^{\prime}\Delta\mathbf{p}^{\prime}=\sum_{i=1}^{l}\mathbf{J}_{\mathbf{p}^{i}}\Delta\mathbf{p}^{i}=-\mathbf{f}-\mathbf{J}_{\mathbf{c}}\Delta\mathbf{c},   (14)

where $\mathbf{J}_{\mathbf{p}}^{\prime}=[\mathbf{J}_{\mathbf{p}^{1}},...,\mathbf{J}_{\mathbf{p}^{l}}]$ and $\Delta\mathbf{p}^{\prime}=[\Delta\mathbf{p}^{1},...,\Delta\mathbf{p}^{l}]$ include the Jacobians and virtual point steps w.r.t. the $l$ clusters, respectively, and we omit $\mathbf{D}^{\prime}$ for ease of notation. To give a uniform step for a physical point, we equalize the steps of its virtual points in different clusters by solving the linear system below in place of Eq. 14:

\sum_{i=1}^{l}\mathbf{J}_{\mathbf{p}^{i}}\Delta\mathbf{p}=-\mathbf{f}-\mathbf{J}_{\mathbf{c}}\Delta\mathbf{c}.   (15)

Since $\sum_{i=1}^{l}\mathbf{J}_{\mathbf{p}^{i}}=\mathbf{J}_{\mathbf{p}}$, Eq. 15 gives the same solution as the point steps in Eq. 6: $\Delta\mathbf{p}=\mathbf{C}^{-1}(\mathbf{w}-\mathbf{E}^{T}\Delta\mathbf{c})$.
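For illustration, a minimal sketch of Eqs. 13 is given below: the cluster-diagonal reduced camera matrix is represented as a list of independent blocks, one per camera cluster, so the camera steps can be solved block by block and, if desired, in parallel across threads or compute nodes.

import numpy as np

def solve_split_rcs(S_blocks, b_blocks):
    # Each (S_i, b_i) pair is an independent linear system of Eqs. 13;
    # the loop below can be parallelized or distributed without communication.
    return [np.linalg.solve(S_i, b_i) for S_i, b_i in zip(S_blocks, b_blocks)]

# Delta c is the concatenation of the per-cluster solutions:
# dc = np.concatenate(solve_split_rcs(S_blocks, b_blocks))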

So far we have presented how an update step of the camera and point parameters is determined approximately by STBA. Besides this, we keep the other components of the LM algorithm unchanged [30]. For reference, we detail the full algorithm in the supplementary material.

4.3 Steepest Descent Correction

The chance constrained relaxation in the last section effectively decomposes the RCS, but leads to approximate solutions with decreased feasibility due to the random constraint sampling. Below, we provide an empirical analysis on the effect of the approximation and present a conditional correction step to remedy the approximation error.

The LM algorithm is known to interpolate between the Gauss-Newton and gradient descent methods, depending on the trust region radius controlled by the damping parameter $\lambda$. When $\lambda$ is small and the LM algorithm behaves more like the Gauss-Newton method, the approximation induced by STBA in Sec. 4.2 is admissible, in that problem (1) itself is derived from the first-order approximation of the error function $\mathbf{f}(\mathbf{x})$. Moreover, the LM algorithm can automatically contract the trust region when the approximation leads to an increase of the objective.

When $\lambda$ is large, i.e., the trust region is small, the LM algorithm is closer to the gradient descent method, which takes a step along the steepest descent direction defined by the right-hand side of Eq. 9, i.e., $-(\mathbf{J}^{\prime T}\mathbf{f}+\mathbf{A}^{T}\boldsymbol{\nu})$. However, a problem with STBA is that the correction term $\mathbf{A}^{T}\boldsymbol{\nu}$ is eliminated approximately by the chance constrained relaxation. As a consequence, the derived step would deviate from the steepest descent direction and thus hamper the convergence. Therefore, we propose to recover $\mathbf{A}^{T}\boldsymbol{\nu}$ to remedy the deviation in such a case. In particular, when $\lambda$ is large enough, the matrix $\mathbf{H}_{\lambda}$ in Eq. 9 is dominated by its diagonal terms, so that we can approximate $\mathbf{H}_{\lambda}$ by $\textrm{diag}(\mathbf{H}_{\lambda})$. After that, $\mathbf{A}^{T}\boldsymbol{\nu}=-\mathbf{A}^{T}(\mathbf{A}\mathbf{H}_{\lambda}^{-1}\mathbf{A}^{T})^{-1}\mathbf{A}\mathbf{H}_{\lambda}^{-1}\mathbf{J}^{\prime T}\mathbf{f}$ can be evaluated efficiently thanks to the sparsity of $\mathbf{A}$. Since the approximation $\mathbf{H}_{\lambda}\approx\textrm{diag}(\mathbf{H}_{\lambda})$ is not accurate unless $\lambda$ is large, in practice we enable the steepest descent correction only when $\lambda\geq 0.1$. Fig. 3 visualizes the effect of the correction.
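A hedged sketch of this correction is given below: with $\mathbf{H}_{\lambda}$ replaced by its diagonal, the multipliers $\boldsymbol{\nu}$ and the corrected right-hand side are computed from the gradient $\mathbf{g}=-\mathbf{J}^{\prime T}\mathbf{f}$; dense arrays are used only for brevity, whereas $\mathbf{A}$ is sparse in practice.

import numpy as np

def steepest_descent_correction(A, H_diag, g):
    # A: (r x n) constraint matrix, H_diag: diagonal of H_lambda, g = -J'^T f.
    # With H_lambda ~ diag(H_lambda), nu = (A H^-1 A^T)^-1 A H^-1 g, and the
    # corrected right-hand side g - A^T nu equals -(J'^T f + A^T nu) of Eq. 9.
    H_inv = 1.0 / H_diag
    M = A @ (A * H_inv).T                         # A H^{-1} A^T
    nu = np.linalg.solve(M, A @ (H_inv * g))      # Lagrange multipliers
    return g - A.T @ nu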

Figure 3: Visualization of the steepest descent correction steps of sample camera parameters when $\lambda=0.1$. It shows that the deviations of approximate STBA steps from the LM steps are effectively corrected.

4.4 Stochastic Graph Clustering

The chance constrained relaxation in Sec. 4.2 necessitates an effective random constraint sampler $\Theta(\alpha)$. Among many possible designs, we propose a viable implementation named stochastic graph clustering in this section.

The design of the clustering method considers the following requirements. First, the sampler should be randomized with respect to the chance constraints (12). Since (12) requires the expectation $\mathrm{E}(\theta_{i})$ to satisfy $\mathrm{E}(\theta_{i})\geq\alpha$ $(i=1,...,r)$, the upper bound of the confidence level $\alpha$ is $\min_{i=1,...,r}\mathrm{E}(\theta_{i})$. Therefore, the sampler should sample as many constraints as possible on average to raise this upper bound. Second, the random sampler is intended to partition the cameras into small independent clusters so that Eqs. 13 can be solved efficiently.

Concretely, the stochastic graph clustering operates over a camera graph $\mathcal{G}_{c}=(\mathcal{C},\mathcal{E}_{c})$, where the weight $w_{ij}$ of an edge $e_{ij}\in\mathcal{E}_{c}$ between cameras $\mathbf{c}_{i}$ and $\mathbf{c}_{j}$ equals the number of points covisible by the two cameras. At the beginning, each camera forms an individual cluster as formulated in Sec. 4.1. Next, if $\mathbf{c}_{i}$ and $\mathbf{c}_{j}$ are joined, $w_{ij}$ pairs of virtual points viewed by $\mathbf{c}_{i}$ and $\mathbf{c}_{j}$ are merged, and hence $w_{ij}$ equality constraints are satisfied. In order to join as many virtual points as possible while yielding a cluster structure of $\mathcal{G}_{c}$, we aim at finding a clustering that maximizes the modularity, inspired by [7]: $Q=\frac{1}{2s}\sum_{e_{ij}\in\mathcal{E}_{c}}\delta(\nu_{i},\nu_{j})\left(w_{ij}-\frac{k_{i}k_{j}}{2s}\right)$, where $s=\sum_{e_{ij}\in\mathcal{E}_{c}}w_{ij}$ is the total sum of edge weights, $k_{i}=\sum_{j}w_{ij}$ is the sum of weights of edges incident to camera $\mathbf{c}_{i}$, and $\nu_{i}$ denotes the cluster of $\mathbf{c}_{i}$; $\delta(\nu_{i},\nu_{j})=1$ if $\nu_{i}=\nu_{j}$ and 0 otherwise. The modularity $Q\in[-1,1]$ measures the density of connections inside clusters as opposed to those across clusters [7]. Therefore, a larger modularity generally indicates that more virtual points are merged inside clusters. Maximizing the modularity is NP-hard [34], but Louvain's algorithm [7] provides a greedy strategy which joins, in a bottom-up manner, the two clusters giving the maximum increase in modularity. It can be efficiently applied to large graphs since its complexity is linear in the node number on sparse data [7].
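For reference, the modularity $Q$ of a given clustering can be computed directly from the covisibility weights; the short sketch below follows the formula above (the dictionary-based data layout is ours, for illustration only).

def modularity(edges, clusters):
    # edges: {(i, j): w_ij} covisibility weights of the camera graph;
    # clusters[i]: cluster label of camera i.
    s = float(sum(edges.values()))                # total edge weight
    k = {}                                        # weighted degree of each camera
    for (i, j), w in edges.items():
        k[i] = k.get(i, 0.0) + w
        k[j] = k.get(j, 0.0) + w
    Q = 0.0
    for (i, j), w in edges.items():
        if clusters[i] == clusters[j]:            # delta(nu_i, nu_j)
            Q += w - k[i] * k[j] / (2.0 * s)
    return Q / (2.0 * s)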

Figure 4: Visualization of the random clustering results produced by stochastic graph clustering. Cameras of different clusters are in different colors.

However, Louvain’s algorithm [7] is deterministic due to its greedy nature. To ensure that every pair of virtual points is likely to be merged, we instead join clusters randomly according to a probability distribution defined based on the modularity increments [12], which is

\textrm{Prob}(N_{x}\cup N_{y})=\frac{\exp(\beta\Delta Q(N_{x},N_{y}))}{\sum_{i}\sum_{j}\exp(\beta\Delta Q(N_{i},N_{j}))},   (16)

where $\beta>0$ is a scaling parameter. Two neighboring clusters $N_{x}$ and $N_{y}$ are more likely to be joined if the merge leads to a larger modularity increment $\Delta Q(N_{x},N_{y})$. In order to limit the sizes of the sub-problems of STBA, we stop joining clusters once their sizes exceed $\Gamma$. In Fig. 4, we visualize the stochastic clustering results.
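A minimal sketch of this randomized joining step is shown below: among the candidate neighboring cluster pairs (assumed to be pre-filtered so that a merge would not exceed $\Gamma$), one pair is drawn with probability proportional to $\exp(\beta\Delta Q)$ as in Eq. 16.

import numpy as np

def sample_cluster_merge(candidate_pairs, delta_Q, beta=10.0, rng=None):
    # candidate_pairs: list of neighboring cluster pairs that may be merged;
    # delta_Q[pair]: modularity increment of merging that pair (Eq. 16).
    rng = np.random.default_rng() if rng is None else rng
    scores = beta * np.array([delta_Q[p] for p in candidate_pairs])
    scores -= scores.max()                        # numerical stability of softmax
    probs = np.exp(scores) / np.exp(scores).sum()
    return candidate_pairs[rng.choice(len(candidate_pairs), p=probs)]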

5 Experiments

5.1 Experiment Settings

Datasets. We run experiments on three different types of datasets: 1) 1DSfM dataset [37] which is composed of 14 sets of unordered Internet images; 2) KITTI dataset [19] containing 11 street-view image sequences; and 3) Large-Scale dataset which is collected by ourselves due to the absence of publicly available large-scale 3D datasets. It includes 4 image sets each comprising more than 30,000 images. The problem sizes all exceed the memory of a single compute node and thus we use them particularly for distributed experiments.

Comparisons. On 1DSfM and KITTI datasets, we compare our method with two standard trust region algorithms, Levenberg-Marquardt (LM) [26] and Dogleg (DL) [27]. For the LM algorithm, we use two variants: LM-sparse and LM-iterative, which exploit the exact sparse method and inexact iterative method [24] to solve the RCS (Eq. 5), respectively. For the distributed experiments on the Large-Scale dataset, we compare our distributed implementation of STBA against the state-of-the-art distributed solver DBACC [40]. The ablation studies on steepest descent correction and stochastic graph clustering are presented in Sec. 5.4 and the supplementary material, respectively.

Implementations. We implement LM, DL and STBA in C++, using Eigen for linear algebra computations. All the algorithms are implemented from the same code base, which means that they share the same elementary operations and thus can be compared equitably. For robustness, we use the Huber loss with a scale factor of 0.5 for the errors [39]. LM-sparse exploits the supernodal $LL^{T}$ Cholesky factorization with COLAMD ordering [13] based on CHOLMOD, which is well suited for handling sparse data like KITTI [19]. LM-iterative uses the conjugate gradient method with the advanced cluster-Jacobi preconditioner [24]. DL uses the same exact sparse solver as LM-sparse, since it requires a reasonably good estimation of the Gauss-Newton step [27]. Dense $LL^{T}$ factorization is used to solve the decomposed RCS (Eqs. 13) for STBA due to the dense connectivity inside camera clusters. Multi-threading is applied to the operations including the reprojection error and Jacobian computation, the preconditioner construction and the matrix-vector multiplications for all the methods, as in [38].

Parameters. In the experiments, we assume that the camera intrinsics have been calibrated as in [27, 26]. Camera extrinsics are parameterized with 6-d vectors, using axis-angle representations for rotations. We set the initial damping parameter $\lambda$ to 1e-4 and the max iteration number to 100 for all the methods. The iterations could terminate early if the cost, gradient or parameter tolerance [2] drops below 1e-6. For STBA, we empirically set the scaling parameter $\beta$ to 10 (Eq. 16) and the max cluster size $\Gamma$ to 100.

Hardware. We use a compute node with an 8-core Intel i7-4790K CPU and 32 GB of RAM. The distributed experiments are deployed on a cluster with 6 compute nodes.

Table 1: The statistics of the (a) 1DSfM and (b) KITTI datasets.
(a) 1DSfM:
Data #images #tracks #projections #clusters
M. Metropolis 472 73,965 528,881 6
M. N. Dame 569 145,647 1,388,635 7
NYC Library 577 107,867 834,298 7
T. of London 729 166,473 1,354,242 9
Ellis Island 866 167,212 1,044,644 11
Alamo 890 160,652 1,937,554 11
P. del Popolo 1023 140,763 1,067,134 12
Gen.markt 1040 213,292 1,359,867 13
Yorkminster 1057 286,372 2,083,434 13
V. Cathedral 1209 303,926 2,970,504 15
Roman Forum 1799 844,159 5,815,427 22
Piccadilly 3213 378,329 3,330,024 39
Trafalgar 4813 337,638 3,608,454 55
ArtsQuad 5710 1,289,493 8,676,806 65
(b) KITTI:
Data #images #tracks #projections #clusters
00 1400 119,268 475,790 20
01 1046 96,542 577,977 13
02 1732 161,196 591,086 25
03 227 21,307 86,621 4
04 153 13,081 50,762 3
05 722 63,606 246,907 10
06 496 34,339 147,142 8
07 254 26,160 96,607 4
08 1205 112,435 407,985 19
09 589 55,832 204,381 9
10 329 29,422 103,573 5

5.2 Performance Profiles

Following previous works [15, 24], we evaluate the solvers with performance profiles over the total of 25 problems of 1DSfM [37] and KITTI [19]. We obtain the SfM results for 1DSfM by COLMAP [35] (since one of the image sets, Union Square, has only 10 reconstructed images, we replace it with another public image set, ArtsQuad) and the SLAM results for KITTI by stereo ORB-SLAM2 [31], while disabling the final bundle adjustment (BA). Since the SfM/SLAM results are generally accurate because the pipelines use repeated BA for robust reconstruction, we make the problems more challenging by adding Gaussian noise to the points and camera centers following [17, 21, 32]. We report the number of images, tracks and projections of the datasets, as well as the typical cluster number of STBA, in Table 1.

First of all, we give a brief introduction to performance profiles [15]. Given a problem $p\in\mathcal{P}$ and a solver $s\in\mathcal{S}$, let $F(p,s)$ denote the final objective that solver $s$ attains when solving problem $p$. Then, for a number of solvers in $\mathcal{S}$, let $F^{*}(p)=\min_{s\in\mathcal{S}}F(p,s)$ denote the minimum objective the solvers $\mathcal{S}$ attain on problem $p$. Next, we define an objective threshold for problem $p$ as $F_{\tau}(p)=F^{*}(p)+\tau(F_{0}(p)-F^{*}(p))$, where $F_{0}(p)$ is the initial objective and $\tau\in(0,1)$ is a pre-defined tolerance determining how close the threshold is to the minimum objective. After this, we measure the efficiency of a solver $s$ by the time it takes to reduce the objective to $F_{\tau}(p)$, denoted by $T_{\tau}(p,s)$. The most efficient solver is the one that takes the minimum time, i.e., achieves $\min_{s\in\mathcal{S}}T_{\tau}(p,s)$.

Performance profiles regard that a solver $s$ solves problem $p$ if $T_{\tau}(p,s)\leq\alpha\min_{s\in\mathcal{S}}T_{\tau}(p,s)$, where $\alpha\in[1,\infty)$. Therefore, if $\alpha=1$, only the most efficient solver is considered to solve the problem, while as $\alpha\rightarrow\infty$, all the solvers can be seen to solve the problem. Finally, the performance profile of solver $s$ is defined w.r.t. $\alpha$ over the whole problem set $\mathcal{P}$ as $\rho(s,\alpha)=100\cdot\frac{|\{p\in\mathcal{P}\,|\,T_{\tau}(p,s)\leq\alpha\min_{s\in\mathcal{S}}T_{\tau}(p,s)\}|}{|\mathcal{P}|}$. It is essentially the percentage of problems solved by $s$ and is non-decreasing w.r.t. $\alpha$.
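Computing the profile from the recorded times is straightforward; a small sketch with our own data layout is given below.

import numpy as np

def performance_profile(T, alphas):
    # T[p][s]: time solver s needs to reach F_tau(p) on problem p (np.inf if never).
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)           # min_s T_tau(p, s)
    # rho(s, alpha): percentage of problems with T_tau(p, s) <= alpha * best
    return np.array([100.0 * (T <= a * best).mean(axis=0) for a in alphas])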

Figure 5: Performance profiles [15] of different solvers when solving the total of 25 problems of 1DSfM [37] and KITTI [19].
Figure 6: The convergence curves of 4 scenes from 1DSfM and KITTI.

We plot the performance profiles of the solvers in Fig. 5. To verify the benefits of using stochastic clustering, we also compare STBA with its variant STBA-fixed, which uses a fixed clustering as in previous methods [24, 17, 40]. When $\tau$ is equal to 0.1 and 0.01, our STBA is able to solve nearly 100% of the problems for any $\alpha$, because it always reaches the objective threshold $F_{\tau}(p)$ in less time than the LM and DL methods by a factor of more than 5. This is mainly attributed to the reduced per-iteration cost, as we can see from the convergence curves in Fig. 6. When $\tau$ becomes 0.001 and the threshold $F_{\tau}(p)$ is harder to achieve, STBA is less efficient but still performs on par with LM-iterative and LM-sparse when $\alpha<3$ and better than DL for any $\alpha$. On the contrary, the performance of STBA-fixed drops drastically when $\tau$ decreases to 0.001. The performance change of STBA and STBA-fixed as $\tau$ decreases is mainly caused by the fact that clustering methods come at the price of slower convergence near the stationary points [17, 40]. Compared with full second-order solvers such as LM and DL, clustering methods only utilize the second-order information within clusters. However, as opposed to STBA-fixed, which uses a fixed clustering, STBA mitigates the negative effect of clustering by introducing stochasticity, so that different second-order information can be utilized to boost convergence as the clustering changes. Despite the slower convergence rate, the ability of STBA to reduce most of the loss with the lowest time cost (e.g., 99% loss reduction with only 1/5 of the time of the counterparts when $\tau=0.01$ in Fig. 5(b)) should still be highlighted, especially for real-time SLAM applications where bundle adjustment is called repeatedly to correct the drift.

5.3 Results on Large-Scale Dataset

Figure 7: Visualizations of SfM results (top) and convergence curves (bottom) of the Large-Scale dataset. Cameras are drawn as blue pyramids.
Table 2: Statistics of the distributed bundle adjustment solvers on the Large-Scale dataset. DBACC [40] consumes many more Jacobian and RCS evaluations than STBA.
Data | #images | #clusters (DBACC, STBA) | RPE in pixels (DBACC, STBA) | #Jacobian/#RCS evaluations (DBACC, STBA) | Mean iteration time in s (DBACC, STBA)
LS-1 | 29975 | 300, 340 | 0.823, 0.818 | 1011/1080, 49/100 | 912.5, 71.0
LS-2 | 33634 | 336, 386 | 0.766, 0.783 | 854/860, 48/100 | 934.8, 79.3
LS-3 | 33809 | 339, 391 | 1.083, 1.056 | 1025/1100, 49/100 | 1107.0, 89.9
LS-4 | 44276 | 444, 505 | 0.909, 0.882 | 877/900, 49/100 | 988.1, 71.2

To evaluate the scalability, we conduct distributed experiments on the Large-Scale (LS) dataset, which includes the urban scenes of four cities named LS-1, LS-2, LS-3 and LS-4. We run a distributed SfM program of our own to produce initial sparse reconstructions of the four scenes and add Gaussian noise with a standard deviation of 3 meters to the camera centers and points. Then we compare our distributed STBA against the state-of-the-art distributed bundle adjustment framework DBACC [40].

We visualize the sparse reconstructions and the convergence curves of the four scenes in Fig. 7 and report the statistics in Table 2. As we can see from Fig. 7, STBA achieves faster convergence rates than DBACC by an order of magnitude. The main cause of the gap is that DBACC, which is based on the ADMM formulation [5], has to take inner iterations to solve a new minimization problem in every ADMM iteration. Although we have set the maximum inner iteration number to merely 10, DBACC still takes many more Jacobian and RCS evaluations and thus has much longer iterations than STBA by an order of magnitude, as shown in Table 2. Besides, as opposed to DBACC, our STBA does not involve as many hyper-parameters.

5.4 Ablation Study on Steepest Descent Correction

Figure 8: Convergence curves of LM-sparse, STBA and STBA* which does not use steepest descent correction (Sec. 4.3). Steepest descent correction helps to correct the deviations between the STBA and LM steps.

Here we perform an ablation study on the steepest descent correction proposed in Sec. 4.3 to validate its efficacy. We compare our STBA with a variant called STBA* which does not use the correction. We run STBA, STBA* and LM-sparse on 1DSfM and KITTI. Since the steepest descent correction is designed particularly for a small trust region, we set the lower bound of the damping parameter $\lambda$ to 0.1 in the experiments. We observe that, by using the correction, STBA consistently achieves faster convergence than STBA* and performs on par with LM-sparse on all the scenes. Sample convergence curves w.r.t. the iterations are shown in Fig. 8, where STBA and LM-sparse have very close convergence curves. This manifests that the steepest descent correction indeed corrects the approximation errors of the STBA steps and hence boosts the convergence.

6 Conclusion

In this paper, we rethink the proper way of integrating the clustering scheme into bundle adjustment by proposing STBA. First, STBA reformulates an LM iteration based on the clustering of the visibility graph, but meanwhile introduces additional equality constraints across the clusters. Second, we relax the constraints into chance constraints and solve the problem by a sampled convex program, which randomly samples the chance constraints with the intention of splitting the large reduced camera system into small clusters. Not only does this reduce the per-iteration cost, but it also allows parallel and distributed computing to accommodate the growth of the problem size. Moreover, we present a steepest descent correction technique to remedy the approximation errors of the STBA steps for a small trust region, and provide a practical implementation of stochastic graph clustering for constraint sampling. Extensive experiments on Internet SfM data, SLAM data and large-scale data demonstrate the efficiency and scalability of our approach.

Appendices

Appendix 0.A Complexity Analysis

In this section, we will analyze how our STBA reduces the time and space complexity by solving the split reduced camera system (RCS) (Eqs. 13) in place of the original RCS (Eq. 5), whether the exact or inexact linear solver is used, as shown in Table 3. Please note that the analysis considers the most general case and does not presuppose any special structures, e.g., the extreme sparsity, of the RCS.

Table 3: The time and space complexity of LM and our STBA when solving the reduced camera system. $m$ denotes the camera number. $\Gamma$ is the maximum cluster size. $\kappa$ and $\kappa^{\prime}$ are the condition numbers of the Schur complement and the split Schur complements after preconditioning. $r$ and $r^{\prime}$ denote the edge number and the sampled edge number of the camera graph, respectively.
                 | Cholesky factorization           | Conjugate gradient
                 | LM         | STBA                | LM                  | STBA
Time complexity  | $O(m^{3})$ | $O(m\Gamma^{2})$    | $O(r\sqrt{\kappa})$ | $O(r^{\prime}\sqrt{\kappa^{\prime}})$
Space complexity | $O(m^{2})$ | $O(m\Gamma)$        | $O(r)$              | $O(r^{\prime})$

Cholesky factorization is known to have cubic time complexity and quadratic space complexity in the camera number $m$ when solving the RCS [3]. If Cholesky factorization is used to solve the split RCS exactly, the time and space complexity of each sub-problem of STBA are $O(\Gamma^{3})$ and $O(\Gamma^{2})$, respectively, where $\Gamma$ is the maximum cluster size. Since there are $O(m/\Gamma)$ sub-problems, the time and space complexity of STBA are $O(m\Gamma^{2})$ and $O(m\Gamma)$, respectively. With $\Gamma$ being a constant, the time and space complexity of STBA are linear in $m$.
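As a back-of-the-envelope illustration with hypothetical numbers (not taken from the experiments), for $m=10{,}000$ cameras and $\Gamma=100$, exact factorization of the full RCS is on the order of $m^{3}=10^{12}$ operations with $m^{2}=10^{8}$ stored entries, whereas the split RCS is on the order of $m\Gamma^{2}=10^{8}$ operations with $m\Gamma=10^{6}$ entries, i.e., reductions of roughly four and two orders of magnitude, respectively.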

Besides the exact solvers, conjugate gradient is an inexact approach that solves the linear equations iteratively. It is known to have $O(r\sqrt{\kappa})$ time complexity and $O(r)$ space complexity when solving the RCS [36], where $r$ is the camera connection number and $\kappa$ is the condition number of the Schur complement $\mathbf{S}$ in Eq. 5. However, $\mathbf{S}$ is generally ill-conditioned, which necessitates preconditioning to reduce the condition number $\kappa$ [3, 24, 14, 22]. The amount of decrease in $\kappa$ depends on how accurately the preconditioning can be performed. If conjugate gradient is used to solve the split RCS inexactly, STBA reduces the time complexity to $O(r^{\prime}\sqrt{\kappa^{\prime}})$ and the space complexity to $O(r^{\prime})$. Here, $r^{\prime}$ is the sampled camera connection number, and $\kappa^{\prime}$ is the maximum condition number of the split Schur complements $\{\mathbf{S}_{i}\}_{i=1}^{l}$ after preconditioning. Due to the sampling of the camera connections, $r^{\prime}$ is smaller than $r$. In our experiments, $r^{\prime}$ is less than one fifth of $r$ when we set the maximum cluster size $\Gamma$ to 100. The condition number $\kappa^{\prime}$ should also be smaller than $\kappa$, because preconditioning the low-dimensional $\mathbf{S}_{i}$ can be performed more accurately and efficiently than the high-dimensional $\mathbf{S}$.

Appendix 0.B Ablation Studies on Stochastic Graph Clustering

In Sec. 4.4, we have proposed a stochastic graph clustering (SGC) algorithm to sample the chance constraints in each iteration. In this section, we conduct ablation studies on the clustering strategies and the maximum cluster size $\Gamma$. First, we make comparisons with the 3 clustering methods below.

  • KMeans which partitions the camera centers into k clusters by using the K-Means algorithm. In order to introduce randomness, we randomly choose k camera centers as the initial means in the first step.

  • NCut which uses normalized cut for graph clustering as in the previous works [32, 43, 40]. We turn the camera graph into a random one by keeping its edges with the probability proportional to the edge weights and then run normalized cut on it.

  • NSGC is the abbreviation for non-stochastic graph clustering. It is a variant of SGC which uses the classic greedy Louvain's algorithm [7] rather than joining clusters randomly as SGC does.

Apart from the clustering strategies, we run all the algorithms with 6 different maximum cluster sizes, namely $\Gamma=$ 1, 25, 50, 100, 200 and $\infty$. Here, "$\Gamma=1$" means that each camera forms a cluster, and "$\Gamma=\infty$" means that all the cameras are grouped into a single cluster, in which case STBA is equivalent to the classic LM algorithm without clustering. We run all the methods on each problem of 1DSfM [37] and KITTI [19] in the same way as in Sec. 5.2. We record the final losses of all the clustering algorithms and normalize them by dividing by the minimum loss that the algorithms attained. Therefore, the smaller the normalized loss, the better the convergence. We show the average normalized losses of different methods in Fig. 9(a).

Figure 9: (a) Normalized final losses w.r.t. the maximum cluster size produced by different clustering methods. The smaller the loss is, the better convergence is attained. (b) Reconstructions of Gerrard Hall from the COLMAP dataset [1]. All the methods except our SGC lead to layered facade reconstruction results, as marked by the dashed circles. (Zoom in for best view.)

First of all, the proposed SGC reaches the minimum losses at all the cluster sizes, showing its efficacy compared with KMeans and NCut. The disadvantage of KMeans is that it does not utilize the camera connectivity for clustering, as opposed to NCut and SGC. In comparison with SGC, NCut partitions a graph into clusters in a top-down manner. The downside of this strategy is that it does not explicitly decide whether an edge at the bottom level will be selected or discarded with a defined probability, as SGC does (see Eq. 16). Since NCut always stops once the cluster sizes are smaller than $\Gamma$, some nodes may constantly stay in the same clusters without being exposed to the cuts. Instead, the bottom-up strategy of SGC considers the selection of every edge from the very beginning and contributes to the better convergence than NCut in the end. Besides, the outperformance of SGC over NSGC indicates the necessity of randomizing the graph clustering for better convergence.

Second, all of KMeans, NCut and SGC achieve better convergence as $\Gamma$ increases. This is reasonable because the larger $\Gamma$ is, the more chance constraints can be sampled, leading to a more accurate approximation by the chance constrained relaxation. In the extreme case when $\Gamma=1$, all the chance constraints are neglected (i.e., the confidence level $\alpha=0$ in Eq. 12). This induces poor approximations for the STBA iterations and hence very bad convergence. However, the final loss can be reduced by an order of magnitude by just increasing $\Gamma$ to 25. Besides, it is noteworthy that SGC is the least sensitive to $\Gamma$ compared against NCut and KMeans, as the loss does not vary a lot when $\Gamma$ changes from 25 to 200. Different from the other methods, NSGC attains a larger loss when $\Gamma$ increases from 25 to 200. We found that this is because NSGC uses fixed clusters and neglects the geometric constraints between the clusters all the time, which causes inconsistency between the geometries of different clusters. The problem is more severe when the cluster size increases, as it is less flexible to align large clusters seamlessly than small clusters, and the reduced flexibility of large clusters is more likely to cause layered geometries at the cluster boundaries (see Fig. 9(b)). In Fig. 9(b), we show the reconstruction results of Gerrard Hall from the COLMAP dataset [1] produced by different graph clustering methods with $\Gamma=50$. All the methods except our SGC lead to layered facade reconstruction results.

Appendix 0.C Full Algorithm

Below we lay out the full algorithm of STBA.

Algorithm 1: Stochastic Bundle Adjustment (STBA)

Input: Visibility graph $\mathcal{G}=(\mathcal{C}\cup\mathcal{P},\mathcal{E})$; initial pose and point parameters $\mathbf{x}=[\mathbf{c}^{T},\mathbf{p}^{T}]^{T}=\mathbf{x}_{0}$
Output: $\mathbf{x}^{*}$ minimizing $F(\mathbf{x})$

$t=0$, $t_{max}=100$, $\lambda=$ 1e-4, $\Gamma=100$, $\epsilon=$ 1e-6, stop = False
Build the camera graph $\mathcal{G}_{c}=(\mathcal{C},\mathcal{E}_{c})$ from $\mathcal{G}$
while (not stop) and $t{+}{+}<t_{max}$ do
    /* Stochastic graph clustering */
    $\{\Phi_{i}\}_{i=1}^{l}=\textrm{StochasticGraphClustering}(\mathcal{G}_{c},\Gamma)$   // $\Gamma$ is the maximum cluster size
    Build the equality constraint matrix $\mathbf{A}$ according to $\{\Phi_{i}\}_{i=1}^{l}$   // see Eq. 8
    /* Evaluations */
    Evaluate the reprojection errors $\mathbf{f}$ and the Jacobians $\mathbf{J_{c}}$, $\mathbf{J_{p}}$, $\mathbf{J_{p}^{\prime}}$, $\mathbf{J^{\prime}}=[\mathbf{J_{c}},\mathbf{J_{p}^{\prime}}]$
    $\mathbf{C}=\mathbf{J_{p}}^{T}\mathbf{J_{p}}+\lambda\,\textrm{diag}(\mathbf{J_{p}}^{T}\mathbf{J_{p}})$,  $\mathbf{E}=\mathbf{J_{c}}^{T}\mathbf{J_{p}}$,  $\mathbf{w}=-\mathbf{J_{p}}^{T}\mathbf{f}$
    $\mathbf{B}=\mathbf{J_{c}}^{T}\mathbf{J_{c}}+\lambda\,\textrm{diag}(\mathbf{J_{c}}^{T}\mathbf{J_{c}})$
    $\mathbf{C^{\prime}}=\mathbf{J_{p}^{\prime}}^{T}\mathbf{J_{p}^{\prime}}+\lambda\,\textrm{diag}(\mathbf{J_{p}^{\prime}}^{T}\mathbf{J_{p}^{\prime}})$
    $\mathbf{E}^{\prime}=\mathbf{J_{c}}^{T}\mathbf{J_{p}^{\prime}}$
    $\mathbf{g}=-\mathbf{J^{\prime}}^{T}\mathbf{f}$
    /* Steepest descent correction */
    if $\lambda\geq 0.1$ then
        $\mathbf{H}_{\lambda}=\mathbf{J^{\prime}}^{T}\mathbf{J^{\prime}}+\lambda\mathbf{D}^{\prime T}\mathbf{D}^{\prime}$
        $\mathbf{\tilde{H}}_{\lambda}=\textrm{diag}(\mathbf{H}_{\lambda})$
        $\boldsymbol{\nu}=(\mathbf{A}\mathbf{\tilde{H}}_{\lambda}^{-1}\mathbf{A}^{T})^{-1}\mathbf{A}\mathbf{\tilde{H}}_{\lambda}^{-1}\mathbf{g}$
        $\mathbf{g}=\mathbf{g}-\mathbf{A}^{T}\boldsymbol{\nu}$
    end if
    $\mathbf{g}\triangleq[\mathbf{v}^{\prime T},\mathbf{w}^{\prime T}]^{T}$
    $\mathbf{S^{\prime}}=\mathbf{B}-\mathbf{E^{\prime}}\mathbf{C^{\prime}}^{-1}\mathbf{E^{\prime}}^{T}\triangleq\{\mathbf{S}_{i}\}_{i=1}^{l}$
    $\mathbf{b^{\prime}}=\mathbf{v^{\prime}}-\mathbf{E^{\prime}}\mathbf{C^{\prime}}^{-1}\mathbf{w^{\prime}}\triangleq\{\mathbf{b}_{i}\}_{i=1}^{l}$
    /* Solve pose steps in parallel */
    for $i=1$ to $l$ do
        Solve $\mathbf{S}_{i}\Delta\mathbf{c}_{i}=\mathbf{b}_{i}$
    end for
    $\Delta\mathbf{c}=[\Delta\mathbf{c}_{1}^{T},...,\Delta\mathbf{c}_{l}^{T}]^{T}$
    $\Delta\mathbf{p}=\mathbf{C}^{-1}(\mathbf{w}-\mathbf{E}^{T}\Delta\mathbf{c})$
    $\mathbf{x}=[\mathbf{c}^{T},\mathbf{p}^{T}]^{T}$,  $\Delta\mathbf{x}=[\Delta\mathbf{c}^{T},\Delta\mathbf{p}^{T}]^{T}$
    if (cost tolerance $<\epsilon$) or (gradient tolerance $<\epsilon$) or (parameter tolerance $<\epsilon$) (see [2]) then
        stop = True
    end if
    if $F(\mathbf{x})>F(\mathbf{x}+\Delta\mathbf{x})$ then
        $\lambda=\lambda/3$,  $\mathbf{c}=\mathbf{c}+\Delta\mathbf{c}$,  $\mathbf{p}=\mathbf{p}+\Delta\mathbf{p}$
    else
        $\lambda=3\lambda$
    end if
end while
$\mathbf{x}^{*}=[\mathbf{c}^{T},\mathbf{p}^{T}]^{T}$

References

  • [1] https://colmap.github.io/datasets.html
  • [2] Agarwal, S., Mierle, K., Others: Ceres solver. http://ceres-solver.org
  • [3] Agarwal, S., Snavely, N., Seitz, S.M., Szeliski, R.: Bundle adjustment in the large. In: ECCV (2010)
  • [4] Amestoy, P.R., Davis, T.A., Duff, I.S.: An approximate minimum degree ordering algorithm. SIAM Journal on Matrix Analysis and Applications 17(4), 886–905 (1996)
  • [5] Bertsekas, D.P.: Parallel and distributed computation: numerical methods, vol. 23 (1989)
  • [6] Bertsekas, D.P.: Constrained optimization and Lagrange multiplier methods. Academic press (2014)
  • [7] Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment 2008(10) (2008)
  • [8] Calafiore, G., Campi, M.C.: Uncertain convex programs: randomized solutions and confidence levels. Mathematical Programming 102(1), 25–46 (2005)
  • [9] Campi, M.C., Garatti, S.: A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality. Journal of Optimization Theory and Applications 148(2) (2011)
  • [10] Chatterjee, A., Madhav Govindu, V.: Efficient and robust large-scale rotation averaging. In: ICCV (2013)
  • [11] Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Fixed-point algorithms for inverse problems in science and engineering (2011)
  • [12] Darmaillac, Y., Loustau, S.: Mcmc louvain for online community detection. arXiv preprint arXiv:1612.01489 (2016)
  • [13] Davis, T.A., Gilbert, J.R., Larimore, S.I., Ng, E.G.: Algorithm 836: Colamd, a column approximate minimum degree ordering algorithm. TOMS 30(3), 377–380 (2004)
  • [14] Dellaert, F., Carlson, J., Ila, V., Ni, K., Thorpe, C.E.: Subgraph-preconditioned conjugate gradients for large scale slam. In: IROS (2010)
  • [15] Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Mathematical programming 91(2), 201–213 (2002)
  • [16] Engels, C., Stewénius, H., Nistér, D.: Bundle adjustment rules
  • [17] Eriksson, A., Bastian, J., Chin, T.J., Isaksson, M.: A consensus-based framework for distributed bundle adjustment. In: CVPR (2016)
  • [18] Fang, M., Pollok, T., Qu, C.: Merge-sfm: Merging partial reconstructions. In: BMVC (2019)
  • [19] Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: CVPR (2012)
  • [20] Hestenes, M.R., et al.: Methods of conjugate gradients for solving linear systems. Journal of research of the National Bureau of Standards 49(6), 409–436 (1952)
  • [21] Jeong, Y., Nister, D., Steedly, D., Szeliski, R., Kweon, I.S.: Pushing the envelope of modern methods for bundle adjustment. PAMI 34(8), 1605–1617 (2011)
  • [22] Jian, Y.D., Balcan, D.C., Dellaert, F.: Generalized subgraph preconditioners for large-scale bundle adjustment. In: Outdoor and Large-Scale Real-World Scene Analysis (2012)
  • [23] Konolige, K., Garage, W.: Sparse sparse bundle adjustment. In: BMVC (2010)
  • [24] Kushal, A., Agarwal, S.: Visibility based preconditioning for bundle adjustment. In: CVPR (2012)
  • [25] Li, P., Arellano-Garcia, H., Wozny, G.: Chance constrained programming approach to process optimization under uncertainty. Computers & chemical engineering 32(1-2), 25–45 (2008)
  • [26] Lourakis, M.I., Argyros, A.A.: Sba: A software package for generic sparse bundle adjustment. TOMS 36(1),  2 (2009)
  • [27] Lourakis, M., Argyros, A.A.: Is levenberg-marquardt the most efficient optimization algorithm for implementing bundle adjustment? In: ICCV (2005)
  • [28] Luo, Z., Shen, T., Zhou, L., Zhang, J., Yao, Y., Li, S., Fang, T., Quan, L.: Contextdesc: Local descriptor augmentation with cross-modality context. In: CVPR (2019)
  • [29] Luo, Z., Shen, T., Zhou, L., Zhu, S., Zhang, R., Yao, Y., Fang, T., Quan, L.: Geodesc: Learning local descriptors by integrating geometry constraints. In: ECCV (2018)
  • [30] Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. Journal of the society for Industrial and Applied Mathematics 11(2), 431–441 (1963)
  • [31] Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Transactions on Robotics 33(5), 1255–1262 (2017)
  • [32] Ni, K., Steedly, D., Dellaert, F.: Out-of-core bundle adjustment for large-scale 3d reconstruction. In: ICCV (2007)
  • [33] Rotkin, V., Toledo, S.: The design and implementation of a new out-of-core sparse cholesky factorization method. TOMS 30(1), 19–46 (2004)
  • [34] Schaeffer, S.E.: Survey: Graph clustering. Computer Science Review 1(1), 27–64 (2007)
  • [35] Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR (2016)
  • [36] Shewchuk, J.R., et al.: An introduction to the conjugate gradient method without the agonizing pain (1994)
  • [37] Wilson, K., Snavely, N.: Robust global translations with 1dsfm. In: ECCV (2014)
  • [38] Wu, C., Agarwal, S., Curless, B., Seitz, S.M.: Multicore bundle adjustment. In: CVPR (2011)
  • [39] Zach, C.: Robust bundle adjustment revisited. In: ECCV (2014)
  • [40] Zhang, R., Zhu, S., Fang, T., Quan, L.: Distributed very large scale bundle adjustment by global camera consensus. In: ICCV (2017)
  • [41] Zhou, L., Zhu, S., Luo, Z., Shen, T., Zhang, R., Zhen, M., Fang, T., Quan, L.: Learning and matching multi-view descriptors for registration of point clouds. In: ECCV (2018)
  • [42] Zhou, L., Zhu, S., Shen, T., Wang, J., Fang, T., Quan, L.: Progressive large scale-invariant image matching in scale space. In: ICCV (2017)
  • [43] Zhu, S., Shen, T., Zhou, L., Zhang, R., Wang, J., Fang, T., Quan, L.: Parallel structure from motion from local increment to global averaging. arXiv preprint arXiv:1702.08601 (2017)
  • [44] Zhu, S., Zhang, R., Zhou, L., Shen, T., Fang, T., Tan, P., Quan, L.: Very large-scale global sfm by distributed motion averaging. In: CVPR (2018)