Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction

1 The Hong Kong University of Science and Technology
{lzhouai,zluoag,mzhen,tshenaa,zhuangbr,quan}@cse.ust.hk
2 Everest Innovation Technology
{sli,fangtian}@altizure.com
Abstract
Current bundle adjustment solvers such as the Levenberg-Marquardt (LM) algorithm are limited by the bottleneck of solving the Reduced Camera System (RCS), whose dimension is proportional to the number of cameras. When the problem is scaled up, this step is neither efficient in computation nor manageable for a single compute node. In this work, we propose a stochastic bundle adjustment algorithm which seeks to decompose the RCS approximately inside the LM iterations to improve efficiency and scalability. It first reformulates the quadratic programming problem of an LM iteration based on the clustering of the visibility graph by introducing equality constraints across clusters. Then, we propose to relax it into a chance constrained problem and solve it through sampled convex program. The relaxation is intended to eliminate the interdependence between clusters embodied by the constraints, so that a large RCS can be decomposed into independent linear sub-problems. Numerical experiments on unordered Internet image sets and sequential SLAM image sets, as well as distributed experiments on large-scale datasets, have demonstrated the high efficiency and scalability of the proposed approach. Code is released at https://github.com/zlthinker/STBA.
Keywords: Stochastic bundle adjustment, Clustering, 3D reconstruction

1 Introduction
Bundle Adjustment (BA) is typically formulated as a nonlinear least squares problem to refine the parameters of cameras and 3D points. It is usually addressed by the Levenberg-Marquardt (LM) algorithm, where a linear equation system called the Reduced Camera System (RCS) [26, 16] must be solved in each iteration. However, when the problem is scaled up, solving the RCS becomes a bottleneck that takes a major portion of the computation time (see the first bar of Fig. 1). The dimension of the RCS is proportional to the number of cameras, so an increase in cameras ramps up the computation and memory consumption, although methods have been proposed to use efficient linear solvers [3, 24, 14, 22, 38] and economize on matrix manipulations [3, 23, 38]. Furthermore, unlike other operations such as Jacobian or gradient evaluations, this step is indivisible, making it hard to adapt BA to parallel and distributed computing.

In order to accomplish efficient and scalable reconstructions, clustering has been adopted as a useful practice to decompose a large problem into smaller, more manageable ones. For example, a number of SfM approaches have been developed in a divide and conquer fashion, which first reconstruct the partitioned sub-maps independently and then merge the partial reconstructions together [32, 43, 44, 18]. Although these methods are able to produce the initial sparse reconstructions in an efficient and scalable way, a full bundle adjustment is still indispensable to optimize the camera and point parameters globally. Therefore, in the context of BA, the methods [17, 40] proposed to distribute the objectives of BA to the split sub-models and optimize the sum of the objectives under the distributed optimization frameworks [11, 6], which, however, involves extra costly inner iterations and thus makes the optimization over-complicated.
In this work, we follow the direction of exploiting clustering methods and push forward the investigation of how to integrate a clustering scheme into the BA problem systematically. Instead of applying a fixed, one-time partition at the pre-processing step, we derive a stochastic clustering-based strategy within each LM iteration so as to decompose the RCS for efficiency and scalability.
- First, we reformulate the quadratic programming problem of an LM iteration based on the clustering of the visibility graph. Such a formulation splits the problem into the most elementary structures, but meanwhile introduces additional equality constraints and raises the computational cost.
- Second, in order to make the above problem efficiently solvable, we propose to relax the constraints into chance constraints [25] and then solve it with sampled convex program [8]. The approach helps to eliminate the interdependence between different clusters by randomized constraint reduction, which hence decomposes the RCS into independent linear sub-problems related to the clusters. In this way, an approximate step can be achieved efficiently.
- Third, we present an add-on technique which helps to correct the approximate steps towards the steepest descent direction within a small trust region to improve the convergence.
Due to the stochastic process induced by the sampled convex program, we term our algorithm STochastic Bundle Adjustment (STBA), which brings the following tangible advantages. First, solving the split RCS in place of the original one has achieved a great speedup thanks to the reduced complexity. Second, the solving process can be made parallel and scalable to accommodate the growth of camera numbers, since all the sub-steps of a BA iteration can be decomposed. In Fig. 1, we visualize how the running time is reduced by distributing STBA over multiple compute nodes.
2 Related Works
Bundle adjustment (BA) is an indispensable step of 3D reconstruction [42, 29, 28, 41, 44, 43]. It is typically solved by the Levenberg-Marquardt (LM) algorithm [27], which approximately linearizes the error functions inside a local trust region and then solves a linear normal equation for an update step. SBA [26] first simplified the normal equation into a reduced camera system (RCS) through the Schur complement by taking advantage of the special problem structure. After this, efforts were dedicated to solving the RCS faster in either exact or inexact ways. The exact solvers apply Cholesky factorization to the reduced camera matrix, while exploiting variable ordering [4, 13] and supernodal methods [33, 13] for acceleration. The inexact solvers are based on the Conjugate Gradient (CG) method [20] coupled with various preconditioners [3, 21, 24], which attain inexact solutions with better efficiency. Apart from the algorithmic improvements, [23, 38] presented well-optimized implementations of the LM solver to save memory and exploit CPU and GPU parallelism. However, despite the efforts above, solving a large and indivisible RCS increasingly becomes the bottleneck of a BA solver when the problem is scaled up.
In order to make large-scale reconstruction tractable, clustering methods were first introduced in the structure from motion (SfM) domain. Basically, a divide-and-conquer strategy is applied, which first partitions a large scene into multiple sub-maps and then merges the partial reconstructions globally [32, 43, 44, 18]. In the formulation of these approaches, a reduced optimization problem, rather than the original BA problem, is addressed, thus leading to a sub-optimal result. For example, [32, 44] factored out the internal variables inside the sub-maps and [43] registered all the cameras with motion averaging [10] without the involvement of points. In the realm of BA, [24] derived a block-diagonal preconditioner for the RCS from the clustering of cameras, but the clustering did not help to decompose the problem as is done in the SfM algorithms [32, 43, 44, 18]. Instead, [17, 40] proposed to apply distributed optimization frameworks like the Douglas-Rachford method [11] and ADMM [6] to empirically clustered BA problems to reach large scales. Although built upon a theoretical foundation, these methods require costly inner iterations and introduce a plethora of latent parameters during optimization.
3 Bundle Adjustment Revisited
In this section, we first revisit the bundle adjustment problem and its LM solution to give the necessary preliminaries and terminologies. Henceforth, vectors and matrices appear in boldface and $\|\cdot\|$ denotes the L2 norm.
A bundle adjustment problem is built upon a bipartite visibility graph $\mathcal{G} = (\mathcal{C} \cup \mathcal{P}, \mathcal{E})$. Here, $\mathcal{C} = \{\mathbf{c}_i\}$ denotes the set of cameras parameterized by $d$-dimensional vectors, $\mathcal{P} = \{\mathbf{p}_j\}$ denotes the set of 3D points, and $\mathcal{E}$ denotes the set of projections. The objective is to minimize $F(\mathbf{x}) = \|\mathbf{f}(\mathbf{x})\|^2$, where $\mathbf{f}(\mathbf{x})$ denotes a $2|\mathcal{E}|$-dimensional vector of reprojection errors and $\mathbf{x}$ concatenates the camera parameters $\mathbf{x}_c$ and point parameters $\mathbf{x}_p$, i.e., $\mathbf{x} = [\mathbf{x}_c^T, \mathbf{x}_p^T]^T$.
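To make the objective concrete, it can be written as a sum of squared reprojection errors over the visibility graph; the projection function $\pi$ and the observed keypoint $\mathbf{q}_{ij}$ below are generic illustrative notation rather than symbols used elsewhere in this paper:

```latex
% Reprojection-error form of the BA objective (illustrative notation):
% pi(c_i, p_j) projects point p_j into camera c_i, and q_ij is the observed keypoint.
F(\mathbf{x}) \;=\; \|\mathbf{f}(\mathbf{x})\|^2
           \;=\; \sum_{(i,j)\in\mathcal{E}} \bigl\| \pi(\mathbf{c}_i, \mathbf{p}_j) - \mathbf{q}_{ij} \bigr\|^2 .
```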
The LM algorithm achieves an update step $\Delta\mathbf{x}$ at each iteration by linearizing $\mathbf{f}(\mathbf{x} + \Delta\mathbf{x})$ as $\mathbf{f}(\mathbf{x}) + \mathbf{J}\Delta\mathbf{x}$ in a trust region around $\mathbf{x}$, where $\mathbf{J} = [\mathbf{J}_c, \mathbf{J}_p]$ is the Jacobian matrix. Then the minimization of $F(\mathbf{x} + \Delta\mathbf{x})$ is turned into

$\min_{\Delta\mathbf{x}} \|\mathbf{f} + \mathbf{J}\Delta\mathbf{x}\|^2 + \lambda\|\mathbf{D}\Delta\mathbf{x}\|^2$,   (1)

whose solution comes from the normal equation below

$(\mathbf{J}^T\mathbf{J} + \lambda\mathbf{D}^T\mathbf{D})\,\Delta\mathbf{x} = -\mathbf{J}^T\mathbf{f}$,   (2)

where $\lambda$ is the damping parameter and $\mathbf{D}^T\mathbf{D}$ typically takes $\mathrm{diag}(\mathbf{J}^T\mathbf{J})$. For notational simplicity, we write $\mathbf{H} = \mathbf{J}^T\mathbf{J}$ and $\mathbf{b} = -\mathbf{J}^T\mathbf{f}$. With these notations, Eq. 2 becomes

$(\mathbf{H} + \lambda\,\mathrm{diag}(\mathbf{H}))\,\Delta\mathbf{x} = \mathbf{b}$,   (3)

which can be re-written in the block form

$\begin{bmatrix}\mathbf{B} & \mathbf{E}\\ \mathbf{E}^T & \mathbf{C}\end{bmatrix}\begin{bmatrix}\Delta\mathbf{x}_c\\ \Delta\mathbf{x}_p\end{bmatrix} = \begin{bmatrix}\mathbf{v}\\ \mathbf{w}\end{bmatrix}$,   (4)

where $\mathbf{B}$ and $\mathbf{C}$ are the camera and point diagonal blocks of $\mathbf{H} + \lambda\,\mathrm{diag}(\mathbf{H})$, $\mathbf{E} = \mathbf{J}_c^T\mathbf{J}_p$, $\mathbf{v} = -\mathbf{J}_c^T\mathbf{f}$, and $\mathbf{w} = -\mathbf{J}_p^T\mathbf{f}$. Eq. 4 can be simplified by the Schur complement [26], which leads to

$\mathbf{S}\,\Delta\mathbf{x}_c = \mathbf{v} - \mathbf{E}\mathbf{C}^{-1}\mathbf{w}$,   (5)
$\Delta\mathbf{x}_p = \mathbf{C}^{-1}(\mathbf{w} - \mathbf{E}^T\Delta\mathbf{x}_c)$,   (6)

where $\mathbf{S} = \mathbf{B} - \mathbf{E}\mathbf{C}^{-1}\mathbf{E}^T$ is the Schur complement of $\mathbf{C}$. Here, $\mathbf{S}$, known as the reduced camera matrix, is a block structured symmetric positive definite matrix. The block $\mathbf{S}_{ij}$ is nonzero iff cameras $i$ and $j$ observe at least one common point. Although a variety of sparse Cholesky factorization techniques [4, 13, 33] and preconditioned conjugate gradient methods [3, 21, 24] have been developed to solve the reduced camera system (RCS) of Eq. 5, it still can be prohibitive when the camera number grows large.
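As a concrete illustration of Eqs. 4-6, the following C++/Eigen sketch forms the reduced camera system by the Schur complement on small dense random matrices and back-substitutes the point step. It is a minimal numerical toy, assuming dense blocks and random data; the actual solvers discussed above exploit the sparse block structure of $\mathbf{S}$ and $\mathbf{C}$.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Toy sizes: 2 cameras x 6 parameters, 5 points x 3 parameters, 60 residuals.
  const int nc = 2 * 6, np = 5 * 3, m = nc + np;
  Eigen::MatrixXd J = Eigen::MatrixXd::Random(60, m);   // stand-in Jacobian
  Eigen::VectorXd f = Eigen::VectorXd::Random(60);      // stand-in residuals
  const double lambda = 1e-4;

  // Damped Hessian H + lambda * diag(H) and gradient b = -J^T f (Eq. 3).
  Eigen::MatrixXd H = J.transpose() * J;
  H.diagonal() *= (1.0 + lambda);
  Eigen::VectorXd b = -J.transpose() * f;

  // Blocks of Eq. 4: [B E; E^T C] [dxc; dxp] = [v; w].
  Eigen::MatrixXd B = H.topLeftCorner(nc, nc);
  Eigen::MatrixXd E = H.topRightCorner(nc, np);
  Eigen::MatrixXd C = H.bottomRightCorner(np, np);
  Eigen::VectorXd v = b.head(nc), w = b.tail(np);

  // Reduced camera system (Eq. 5): S dxc = v - E C^{-1} w, S = B - E C^{-1} E^T.
  Eigen::LDLT<Eigen::MatrixXd> C_ldlt(C);
  Eigen::MatrixXd S = B - E * C_ldlt.solve(E.transpose());
  Eigen::VectorXd dxc = S.ldlt().solve(v - E * C_ldlt.solve(w));

  // Point back-substitution (Eq. 6): dxp = C^{-1} (w - E^T dxc).
  Eigen::VectorXd dxp = C_ldlt.solve(w - E.transpose() * dxc);

  std::cout << "|camera step| = " << dxc.norm()
            << ", |point step| = " << dxp.norm() << std::endl;
  return 0;
}
```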
4 Stochastic Bundle Adjustment
In this section, we present our stochastic bundle adjustment (STBA) method that decomposes the RCS into clusters inside the LM iterations. In Sec. 4.1, we first reformulate problem (1) based on the clustering of the visibility graph $\mathcal{G}$, yet subject to additional equality constraints. Next, in Sec. 4.2, we apply chance constrained relaxation to the reformulation and solve it by sampled convex program [8, 9], which manages to decompose the RCS into cluster-related linear sub-problems and yields an approximate STBA step efficiently. Third, a steepest descent correction step is proposed to remedy the approximation error of the STBA steps in Sec. 4.3. Finally, in Sec. 4.4, we present a practical implementation of the random constraint sampler required by the chance constrained relaxation.

4.1 Clustering Based Reformulation
In contrast to the previous methods [32, 43, 44, 18, 17, 40] that partition the problem at the pre-processing stage, we present a reformulation of problem (1) to decompose the RCS into clusters inside the LM iterations.
Particularly, we consider the most general case where every single camera forms a cluster. In order to preserve all the projections $\mathcal{E}$, we apply point splitting to the physical points, as shown in Fig. 2. For a physical point $\mathbf{p}_j$ viewed by $v_j$ cameras, we split it into $v_j$ virtual points $\mathbf{p}_j^1, \dots, \mathbf{p}_j^{v_j}$, each assigned to one cluster. Such a clustering reformulates problem (1) equivalently as a new constrained quadratic programming (QP) problem as below

$\min_{\Delta\tilde{\mathbf{x}}} \|\mathbf{f} + \tilde{\mathbf{J}}\Delta\tilde{\mathbf{x}}\|^2 + \lambda\|\tilde{\mathbf{D}}\Delta\tilde{\mathbf{x}}\|^2$   (7)
s.t. $\mathbf{A}\Delta\tilde{\mathbf{x}} = \mathbf{0}$.   (8)

Here, $\Delta\tilde{\mathbf{x}}$ is an expansion of $\Delta\mathbf{x}$ which considers the update steps for all the virtual points, and so is the Jacobian $\tilde{\mathbf{J}}$. Accordingly, $\tilde{\mathbf{D}}^T\tilde{\mathbf{D}} = \mathrm{diag}(\tilde{\mathbf{J}}^T\tilde{\mathbf{J}})$. The noteworthy distinction between problems (7) and (1) is that the new equality constraints of Eq. 8 are imposed to enforce that the steps of the same points in different clusters are identical. For example, for a point $\mathbf{p}_j$ split into virtual points $\mathbf{p}_j^k$ and $\mathbf{p}_j^l$, the corresponding row of $\mathbf{A}$ appears in a form such as $[\cdots, 1, \cdots, -1, \cdots]$, which encodes $\Delta\mathbf{p}_j^k - \Delta\mathbf{p}_j^l = \mathbf{0}$. Since a point $\mathbf{p}_j$ introduces $v_j - 1$ such equations, $\mathbf{A}$ has a row number of $m = \sum_j (v_j - 1)$. Besides, $\mathbf{A}$ is full row rank, because the rows, each of which defines a unique equality constraint, are linearly independent.
The constrained QP problem above can be solved by Lagrangian duality, which turns the problem into

$(\tilde{\mathbf{J}}^T\tilde{\mathbf{J}} + \lambda\tilde{\mathbf{D}}^T\tilde{\mathbf{D}})\,\Delta\tilde{\mathbf{x}} = \tilde{\mathbf{b}} + \mathbf{A}^T\boldsymbol{\mu}$,   (9)

where $\tilde{\mathbf{b}} = -\tilde{\mathbf{J}}^T\mathbf{f}$ and $\boldsymbol{\mu} = [\mu_1, \dots, \mu_m]^T$ collects the Lagrangian multipliers. Eq. 9 is in a similar format to Eq. 3, but has an additional term $\mathbf{A}^T\boldsymbol{\mu}$ on the right hand side compared with Eq. 3. While $\tilde{\mathbf{b}}$ includes the gradients w.r.t. the independent virtual points, $\mathbf{A}^T\boldsymbol{\mu}$ acts as a correction term to ensure that the solution complies with the constraints of Eq. 8.
The ultimate benefit of the clustering-based reformulation is revealed below. By likewise applying the Schur complement to Eq. 9, we have

$\tilde{\mathbf{S}}\,\Delta\tilde{\mathbf{x}}_c = \tilde{\mathbf{v}} - \tilde{\mathbf{E}}\tilde{\mathbf{C}}^{-1}\tilde{\mathbf{w}}$,   (10)

where $\tilde{\mathbf{B}}$, $\tilde{\mathbf{C}}$, $\tilde{\mathbf{E}}$, $\tilde{\mathbf{v}}$ and $\tilde{\mathbf{w}}$ are the counterparts of the blocks of Eq. 4 built from $\tilde{\mathbf{J}}$ and the right hand side of Eq. 9, and $\tilde{\mathbf{S}} = \tilde{\mathbf{B}} - \tilde{\mathbf{E}}\tilde{\mathbf{C}}^{-1}\tilde{\mathbf{E}}^T$. Due to the fact that any two cameras do not share any common virtual points, $\tilde{\mathbf{S}}$ now becomes a block-diagonal matrix, i.e., $\tilde{\mathbf{S}} = \mathrm{diag}(\tilde{\mathbf{S}}_1, \dots, \tilde{\mathbf{S}}_{|\mathcal{C}|})$. Then Eq. 10 can be equivalently decomposed into $|\mathcal{C}|$ most elementary linear systems, each corresponding to one camera.
4.2 Chance Constrained Relaxation
The major problem with the clustering based reformulation above is the excessive cost of evaluating the Lagrangian multipliers $\boldsymbol{\mu}$, because doing so requires solving a linear system that couples all the single-camera clusters. In order to make the problem practically solvable, we would like to eliminate the need to evaluate the correction term $\mathbf{A}^T\boldsymbol{\mu}$ of Eq. 9 by means of relaxation of problem (7).
We multiply a random binary variable $s_i \in \{0, 1\}$ with the $i$-th equality constraint of Eq. 8, which results in $s_i\,\mathbf{a}_i^T\Delta\tilde{\mathbf{x}} = 0$, where $\mathbf{a}_i^T$ denotes the $i$-th row of $\mathbf{A}$. It could be interpreted that, if $s_i = 1$, the constraint must be satisfied; otherwise, the constraint is allowed to be violated. In this way, Eq. 8 will be relaxed into chance constraints [25], which leads to

$\min_{\Delta\tilde{\mathbf{x}}} \|\mathbf{f} + \tilde{\mathbf{J}}\Delta\tilde{\mathbf{x}}\|^2 + \lambda\|\tilde{\mathbf{D}}\Delta\tilde{\mathbf{x}}\|^2$   (11)
s.t. $\Pr(\mathbf{a}_i^T\Delta\tilde{\mathbf{x}} = 0) \geq \alpha, \quad i = 1, \dots, m$,   (12)

where $\alpha \in [0, 1]$ is a predefined confidence level. It means that, instead of enforcing the hard constraints, we allow them to be satisfied with a probability above $\alpha$. The advantage of the chance constrained relaxation is that we can determine the reliability level of the approximation by controlling $\alpha$. The larger $\alpha$ is, the closer the chance constrained problem (11) will be to the original deterministic problem (7). One approach to problem (11) is called sampled convex program [8, 9]. It extracts independent samples $\{s_i\}_{i=1}^m$ with a minimum sampling probability of $\alpha$ for each variable $s_i$ and replaces the chance constraints (12) with the sampled ones. Below we will elaborate on how problem (11) can be solved given the samples $\{s_i\}_{i=1}^m$.
For a sample $s_i$, if $s_i = 0$, the equality constraint is dropped; if $s_i = 1$, it enforces the equality of the steps of two virtual points, e.g., $\Delta\mathbf{p}_j^k = \Delta\mathbf{p}_j^l$. Here the virtual point $\mathbf{p}_j^k$ belongs to the single-camera cluster of camera $\mathbf{c}_k$ and similarly $\mathbf{p}_j^l$ to $\mathbf{c}_l$. Then we merge $\mathbf{p}_j^k$ and $\mathbf{p}_j^l$ into one point as shown in Fig. 2, and we call the operation point binding as opposed to the point splitting introduced in Sec. 4.1. On the one hand, the point binding leads to the consequence that the Lagrangian multiplier $\mu_i = 0$, because the equality always holds. On the other hand, the block $\tilde{\mathbf{S}}_{kl}$ of $\tilde{\mathbf{S}}$ (c.f. Eq. 10) becomes nonzero, since the merged point is now shared by cameras $\mathbf{c}_k$ and $\mathbf{c}_l$. After applying point binding to all the virtual points involved in the sampled constraints, i.e., those with $s_i = 1$, all the constraints will be eliminated and there is no need to evaluate $\boldsymbol{\mu}$ any more. Meanwhile, it will bring the cameras sharing common points into the same clusters.
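A minimal sketch of how the sampled constraints can drive point binding in practice: each retained constraint merges two virtual points via a union-find structure, and the cameras owning the merged virtual points fall into the same cluster. The data layout below (a constraint as a pair of virtual point indices, each owned by one camera) is a hypothetical illustration, not the data structures of the released implementation.

```cpp
#include <cstdlib>
#include <iostream>
#include <numeric>
#include <vector>

// Union-find with path compression.
struct UnionFind {
  std::vector<int> parent;
  explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
  int Find(int x) { return parent[x] == x ? x : parent[x] = Find(parent[x]); }
  void Union(int a, int b) { parent[Find(a)] = Find(b); }
};

// A sampled equality constraint ties two virtual points of the same physical point.
struct Constraint { int vp_a, vp_b; };

int main() {
  const int num_virtual_points = 8, num_cameras = 4;
  // Hypothetical ownership: virtual point -> single-camera cluster that observes it.
  std::vector<int> owner_camera = {0, 1, 2, 0, 1, 3, 2, 3};
  std::vector<Constraint> constraints = {{0, 1}, {1, 2}, {3, 4}, {5, 6}, {6, 7}};
  const double alpha = 0.7;  // minimum sampling probability of each constraint

  UnionFind points(num_virtual_points);  // point binding
  UnionFind cameras(num_cameras);        // induced camera clusters

  for (const Constraint& c : constraints) {
    const double u = static_cast<double>(std::rand()) / RAND_MAX;
    if (u < alpha) {  // s_i = 1: keep the constraint
      points.Union(c.vp_a, c.vp_b);
      cameras.Union(owner_camera[c.vp_a], owner_camera[c.vp_b]);
    }               // s_i = 0: the constraint is dropped for this iteration
  }

  for (int cam = 0; cam < num_cameras; ++cam)
    std::cout << "camera " << cam << " -> cluster " << cameras.Find(cam) << "\n";
  return 0;
}
```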
Since the cameras in different clusters have no points in common after the point binding, the matrix $\tilde{\mathbf{S}}$ appears in a block-diagonal structure which we call cluster-diagonal, as illustrated in Fig. 2. It means that each diagonal block of $\tilde{\mathbf{S}}$ corresponds to a camera cluster. In particular, we can intentionally design the sampler in order to shape $\tilde{\mathbf{S}}$ into the desired cluster-diagonal structures, as we will present in Sec. 4.4. As a result, this structure of $\tilde{\mathbf{S}}$ still enables the decomposition of Eq. 10 into smaller independent linear systems, each relating to one camera cluster. Provided that there are $n$ clusters, Eq. 10 can be equivalently re-written as

$\tilde{\mathbf{S}}_i\,\Delta\tilde{\mathbf{x}}_{c_i} = \tilde{\mathbf{v}}_i - \tilde{\mathbf{E}}_i\tilde{\mathbf{C}}_i^{-1}\tilde{\mathbf{w}}_i, \quad i = 1, \dots, n$,   (13)

where $\tilde{\mathbf{S}}_i$ denotes the $i$-th diagonal block of $\tilde{\mathbf{S}}$ and the subscript $i$ selects the rows and columns related to the $i$-th cluster. After Eqs. 13 are evaluated, we substitute the camera steps into Eq. 9 to give

$\tilde{\mathbf{J}}_{p_i}^T\tilde{\mathbf{J}}_{p_i}\,\Delta\tilde{\mathbf{x}}_{p_i} = -\tilde{\mathbf{J}}_{p_i}^T(\mathbf{f}_i + \tilde{\mathbf{J}}_{c_i}\Delta\tilde{\mathbf{x}}_{c_i}), \quad i = 1, \dots, n$,   (14)

where $\tilde{\mathbf{J}}_{p_i}$ and $\Delta\tilde{\mathbf{x}}_{p_i}$ include the Jacobians and virtual point steps w.r.t. the clusters respectively, and we omit the damping terms for ease of notation. To give a uniform step for a physical point, we equalize the steps of its virtual points in different clusters by solving the linear system below in place of Eq. 14:

$\mathbf{J}_p^T\mathbf{J}_p\,\Delta\mathbf{x}_p = -\mathbf{J}_p^T(\mathbf{f} + \tilde{\mathbf{J}}_c\Delta\tilde{\mathbf{x}}_c)$.   (15)

In this way, Eq. 15 gives the point steps in the same form as Eq. 6, i.e., $\Delta\mathbf{x}_p = \mathbf{C}^{-1}(\mathbf{w} - \mathbf{E}^T\Delta\mathbf{x}_c)$.
So far we have presented how an update step of the camera and point parameters is determined approximately by STBA. Besides this, we keep the other components of the LM algorithm unchanged [30]. For reference, we detail the full algorithm in the supplementary material.
4.3 Steepest Descent Correction
The chance constrained relaxation in the last section effectively decomposes the RCS, but leads to approximate solutions with decreased feasibility due to the random constraint sampling. Below, we provide an empirical analysis on the effect of the approximation and present a conditional correction step to remedy the approximation error.
The LM algorithm is known to be an interpolation of the Gauss-Newton and gradient descent methods, depending on the trust region radius controlled by the damping parameter $\lambda$. When $\lambda$ is small and the LM algorithm behaves more like the Gauss-Newton method, the approximation induced by STBA in Sec. 4.2 is admissible, in that problem (1) itself is derived from the first order approximation of the error function $\mathbf{f}(\mathbf{x})$. And the LM algorithm can automatically contract the trust region when the approximation leads to an increase of the objective.
When $\lambda$ is large, i.e., the trust region is small, the LM algorithm is closer to the gradient descent method, which gives a step towards the steepest descent direction defined by the right hand side of Eq. 9, i.e., $\tilde{\mathbf{b}} + \mathbf{A}^T\boldsymbol{\mu}$. However, a problem with STBA is that the correction term $\mathbf{A}^T\boldsymbol{\mu}$ is eliminated approximately by the chance constrained relaxation. As a consequence, the derived step would deviate from the steepest descent direction and thus hamper the convergence. Therefore, we propose to recover $\mathbf{A}^T\boldsymbol{\mu}$ to remedy the deviation in such a case. Especially, when $\lambda$ is large enough, the matrix $\tilde{\mathbf{J}}^T\tilde{\mathbf{J}} + \lambda\tilde{\mathbf{D}}^T\tilde{\mathbf{D}}$ in Eq. 9 will be dominated by the diagonal terms, so that we can approximate it by its diagonal. After that, $\boldsymbol{\mu}$ can be evaluated efficiently because of the sparsity of $\mathbf{A}$. Since the approximation is not accurate unless $\lambda$ is large, in practice, we enable the steepest descent correction particularly when $\lambda$ is large (c.f. Sec. 5.4). Fig. 3 visualizes the effect of the correction.
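One way to carry out this correction, under the diagonal approximation just described and the notation reconstructed above (writing $\tilde{\boldsymbol{\Lambda}} = \mathrm{diag}(\tilde{\mathbf{J}}^T\tilde{\mathbf{J}} + \lambda\tilde{\mathbf{D}}^T\tilde{\mathbf{D}})$), is sketched below; it is a hedged illustration of the reasoning, not necessarily the exact formula used in the implementation:

```latex
% Approximate step from Eq. 9 with the diagonal matrix Lambda in place of the full matrix:
%   \Delta\tilde{\mathbf{x}} \approx \tilde{\boldsymbol{\Lambda}}^{-1}(\tilde{\mathbf{b}} + \mathbf{A}^T\boldsymbol{\mu}).
% Enforcing the constraints A * dx = 0 on this approximate step determines the multipliers:
\mathbf{A}\tilde{\boldsymbol{\Lambda}}^{-1}\bigl(\tilde{\mathbf{b}} + \mathbf{A}^{T}\boldsymbol{\mu}\bigr) = \mathbf{0}
\;\;\Longrightarrow\;\;
\boldsymbol{\mu} \approx -\bigl(\mathbf{A}\tilde{\boldsymbol{\Lambda}}^{-1}\mathbf{A}^{T}\bigr)^{-1}
\mathbf{A}\tilde{\boldsymbol{\Lambda}}^{-1}\tilde{\mathbf{b}},
% which is cheap to evaluate since each row of A has only two nonzero entries and Lambda is diagonal.
```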

4.4 Stochastic Graph Clustering
The chance constrained relaxation in Sec. 4.2 necessitates an effective random constraint sampler for $\{s_i\}$. Among the many possible designs, we propose a viable implementation named stochastic graph clustering in this section.
The design of the clustering method considers the following requirements. First, the sampler should be randomized with respect to the chance constraints (12). Since (12) indicates that the expectation of each $s_i$ should satisfy $\mathbb{E}[s_i] \geq \alpha$ ($i = 1, \dots, m$), the upper bound of the confidence level is defined as $\min_i \mathbb{E}[s_i]$. Therefore, the sampler should sample as many constraints as possible on average to increase the upper bound of $\alpha$. Second, the random sampler is intended to partition the cameras into small independent clusters so that Eqs. 13 can be solved efficiently.
Concretely, the stochastic graph clustering operates over a camera graph $\mathcal{G}_c = (\mathcal{C}, \mathcal{E}_c)$, where the weight $w_{ij}$ of an edge between cameras $\mathbf{c}_i$ and $\mathbf{c}_j$ is equal to the number of points covisible by the two cameras. At the beginning, each camera forms an individual cluster as formulated in Sec. 4.1. Next, if $\mathbf{c}_i$ and $\mathbf{c}_j$ are joined, $w_{ij}$ pairs of virtual points viewed by $\mathbf{c}_i$ and $\mathbf{c}_j$ will be merged. Therefore, $w_{ij}$ equality constraints will be satisfied. In order to join as many virtual points as possible while yielding a cluster structure of $\mathcal{G}_c$, we aim at finding a clustering that maximizes the modularity below, inspired by [7]:

$Q = \frac{1}{2W}\sum_{i,j}\Bigl[w_{ij} - \frac{w_i w_j}{2W}\Bigr]\,\delta(g_i, g_j)$,

where $W$ is the total sum of edge weights, $w_i$ is the sum of weights of edges incident to camera $\mathbf{c}_i$, and $g_i$ denotes the cluster of $\mathbf{c}_i$. $\delta(g_i, g_j) = 1$ if $g_i = g_j$ and 0 otherwise. The modularity measures the density of connections inside clusters as opposed to those across clusters [7]. Therefore, a larger modularity generally indicates that more virtual points are merged inside clusters. Maximizing the modularity is NP-hard [34], but Louvain's algorithm [7] provides a greedy strategy which joins the two clusters giving the maximum increase in modularity in a bottom-up manner. It can be efficiently applied to large graphs, since its complexity is shown to be linear in the node number on sparse data [7].

However, Louvain's algorithm [7] is deterministic due to its greedy nature. To ensure that every pair of virtual points is likely to be merged, we instead join clusters randomly according to a probability distribution defined based on the modularity increments [12], which is

$p(\text{join } g_i, g_j) \propto \exp(\Delta Q_{ij}/t)$,   (16)

where $\Delta Q_{ij}$ is the modularity increment of joining clusters $g_i$ and $g_j$, and $t$ is a scaling parameter. Two neighboring clusters $g_i$ and $g_j$ are more likely to join together if it leads to a larger modularity increment $\Delta Q_{ij}$. In order to limit the sizes of the sub-problems of STBA, we stop joining clusters if their sizes exceed the maximum cluster size $\Gamma$. In Fig. 4, we visualize the stochastic clustering results.
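The following C++ sketch illustrates this randomized joining rule on a tiny made-up camera graph: candidate cluster pairs are weighted by $\exp(\Delta Q/t)$ and one pair is drawn at a time, subject to the maximum cluster size. It is a simplified single-join-per-round illustration under the assumptions above, not the released implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <map>
#include <random>
#include <utility>
#include <vector>

struct Edge { int a, b; double w; };  // w = number of covisible points

int main() {
  // Hypothetical camera graph with 5 cameras.
  std::vector<Edge> edges = {{0, 1, 30}, {1, 2, 25}, {2, 3, 5}, {3, 4, 40}, {0, 2, 10}};
  const int num_cameras = 5;
  const double t = 10.0;           // scaling parameter of Eq. 16
  const int max_cluster_size = 3;  // maximum cluster size

  std::vector<int> cluster(num_cameras), size(num_cameras, 1);
  std::vector<double> degree(num_cameras, 0.0);
  double W = 0.0;  // total edge weight
  for (const auto& e : edges) { degree[e.a] += e.w; degree[e.b] += e.w; W += e.w; }
  for (int i = 0; i < num_cameras; ++i) cluster[i] = i;

  std::mt19937 rng(42);
  while (true) {
    // Inter-cluster edge weights and per-cluster total degrees.
    std::map<std::pair<int, int>, double> between;
    std::vector<double> tot(num_cameras, 0.0);
    for (int i = 0; i < num_cameras; ++i) tot[cluster[i]] += degree[i];
    for (const auto& e : edges) {
      int ca = cluster[e.a], cb = cluster[e.b];
      if (ca != cb) between[{std::min(ca, cb), std::max(ca, cb)}] += e.w;
    }
    // Sampling weights exp(dQ / t) for all joinable cluster pairs (Eq. 16).
    std::vector<std::pair<int, int>> pairs;
    std::vector<double> weights;
    for (const auto& kv : between) {
      int ca = kv.first.first, cb = kv.first.second;
      if (size[ca] + size[cb] > max_cluster_size) continue;
      double dQ = kv.second / W - tot[ca] * tot[cb] / (2.0 * W * W);  // modularity increment
      pairs.push_back(kv.first);
      weights.push_back(std::exp(dQ / t));
    }
    if (pairs.empty()) break;  // no pair can be joined without exceeding the size limit
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    auto [ca, cb] = pairs[pick(rng)];
    for (int i = 0; i < num_cameras; ++i) if (cluster[i] == cb) cluster[i] = ca;
    size[ca] += size[cb];
  }
  for (int i = 0; i < num_cameras; ++i)
    std::cout << "camera " << i << " -> cluster " << cluster[i] << "\n";
  return 0;
}
```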
5 Experiments
5.1 Experiment Settings
Datasets. We run experiments on three different types of datasets: 1) 1DSfM dataset [37] which is composed of 14 sets of unordered Internet images; 2) KITTI dataset [19] containing 11 street-view image sequences; and 3) Large-Scale dataset which is collected by ourselves due to the absence of publicly available large-scale 3D datasets. It includes 4 image sets each comprising more than 30,000 images. The problem sizes all exceed the memory of a single compute node and thus we use them particularly for distributed experiments.
Comparisons. On 1DSfM and KITTI datasets, we compare our method with two standard trust region algorithms, Levenberg-Marquardt (LM) [26] and Dogleg (DL) [27]. For the LM algorithm, we use two variants: LM-sparse and LM-iterative, which exploit the exact sparse method and inexact iterative method [24] to solve the RCS (Eq. 5), respectively. For the distributed experiments on the Large-Scale dataset, we compare our distributed implementation of STBA against the state-of-the-art distributed solver DBACC [40]. The ablation studies on steepest descent correction and stochastic graph clustering are presented in Sec. 5.4 and the supplementary material, respectively.
Implementations. We implement LM, DL and STBA in C++, using Eigen for linear algebra computations. All the algorithms are implemented from the same code base, which means that they share the same elementary operations so that they can be compared equitably. For robustness, we use the Huber loss with a scale factor of 0.5 for the errors [39]. LM-sparse exploits the supernodal Cholesky factorization with COLAMD ordering [13] based on CHOLMOD, which is well suited for handling sparse data like KITTI [19]. LM-iterative uses the conjugate gradient method with the advanced cluster-Jacobi preconditioner [24]. DL uses the same exact sparse solver as LM-sparse, since it requires a reasonably good estimation of the Gauss-Newton step [27]. Dense factorization is used to solve the decomposed RCS (Eqs. 13) for STBA due to the dense connectivity inside camera clusters. Multi-threading is applied to the operations including the reprojection error and Jacobian computation, the preconditioner construction and the matrix-vector multiplications for all the methods, as in [38].
Parameters. In the experiments, we assume that the camera intrinsics have been calibrated as in [27, 26]. Camera extrinsics are parameterized with 6-d vectors, using the axis-angle representation for rotations. We set the initial damping parameter $\lambda$ to 1e-4 and the max iteration number to 100 for all the methods. The iterations can terminate early if the cost, gradient or parameter tolerance [2] drops below 1e-6. For STBA, we empirically set the scaling parameter $t$ to 10 (Eq. 16) and the max cluster size $\Gamma$ to 100.
Hardware. We use a compute node with an 8-core Intel i7-4790K CPU and a 32G RAM. The distributed experiments are deployed on a cluster with 6 compute nodes.
Data | #images | #tracks | #projections | #clusters |
---|---|---|---|---|
M. Metropolis | 472 | 73,965 | 528,881 | 6 |
M. N. Dame | 569 | 145,647 | 1,388,635 | 7 |
NYC Library | 577 | 107,867 | 834,298 | 7 |
T. of London | 729 | 166,473 | 1,354,242 | 9 |
Ellis Island | 866 | 167,212 | 1,044,644 | 11 |
Alamo | 890 | 160,652 | 1,937,554 | 11 |
P. del Popolo | 1023 | 140,763 | 1,067,134 | 12 |
Gen.markt | 1040 | 213,292 | 1,359,867 | 13 |
Yorkminster | 1057 | 286,372 | 2,083,434 | 13 |
V. Cathedral | 1209 | 303,926 | 2,970,504 | 15 |
Roman Forum | 1799 | 844,159 | 5,815,427 | 22 |
Piccadilly | 3213 | 378,329 | 3,330,024 | 39 |
Trafalgar | 4813 | 337,638 | 3,608,454 | 55 |
ArtsQuad | 5710 | 1,289,493 | 8,676,806 | 65 |
Data | #images | #tracks | #projections | #clusters |
---|---|---|---|---|
00 | 1400 | 119,268 | 475,790 | 20 |
01 | 1046 | 96,542 | 577,977 | 13 |
02 | 1732 | 161,196 | 591,086 | 25 |
03 | 227 | 21,307 | 86,621 | 4 |
04 | 153 | 13,081 | 50,762 | 3 |
05 | 722 | 63,606 | 246,907 | 10 |
06 | 496 | 34,339 | 147,142 | 8 |
07 | 254 | 26,160 | 96,607 | 4 |
08 | 1205 | 112,435 | 407,985 | 19 |
09 | 589 | 55,832 | 204,381 | 9 |
10 | 329 | 29,422 | 103,573 | 5 |
5.2 Performance Profiles
Following previous works [15, 24], we evaluate the solvers with Performance Profiles over the total of 25 problems of 1DSfM [37] and KITTI [19]. We obtain the SfM results for 1DSfM by COLMAP [35] (since one of the image sets, Union Square, has only 10 reconstructed images, we replace it with another public image set, ArtsQuad) and the SLAM results for KITTI by stereo ORB-SLAM2 [31], while disabling the final bundle adjustment (BA). Since the SfM/SLAM results are generally accurate because the pipelines use repeated BA for robust reconstruction, we make the problems more challenging by adding Gaussian noise to the points and camera centers following [17, 21, 32]. We report the number of images, tracks and projections of the datasets, as well as the typical cluster number of STBA, in Table 1.
First of all, we give a brief introduction of performance profiles [15]. Given a problem $p$ and a solver $s$, let $F(p, s)$ denote the final objective the solver $s$ attained when solving problem $p$. Then, for a number of solvers in $\mathcal{S}$, let $F^*(p) = \min_{s \in \mathcal{S}} F(p, s)$ denote the minimum objective the solvers attained when solving problem $p$. Next, we define an objective threshold for problem $p$ which is

$F_\tau(p) = F^*(p) + \tau\,(F_0(p) - F^*(p))$,

where $F_0(p)$ is the initial objective and $\tau \in (0, 1)$ is the pre-defined tolerance determining how close the threshold is to the minimum objective. After this, we measure the efficiency of a solver $s$ by computing the time it takes to reduce the objective to $F_\tau(p)$, which is denoted by $T_\tau(p, s)$. And the most efficient solver is the one that takes the minimum time, i.e., $\min_{s \in \mathcal{S}} T_\tau(p, s)$.
The method Performance Profiles regards that the solver $s$ solves the problem $p$ if $T_\tau(p, s) \leq \beta\,\min_{s' \in \mathcal{S}} T_\tau(p, s')$, where $\beta \geq 1$. Therefore, if $\beta = 1$, only the most efficient solver is thought to solve the problem, while if $\beta \rightarrow \infty$, all the solvers can be seen to solve the problem. Finally, the performance profile of the solver $s$ is defined w.r.t. $\beta$ over the whole problem set $\mathcal{P}$ as

$\rho(s, \beta) = \dfrac{\bigl|\{p \in \mathcal{P} : T_\tau(p, s) \leq \beta\,\min_{s' \in \mathcal{S}} T_\tau(p, s')\}\bigr|}{|\mathcal{P}|}$.

It is basically the percentage of problems solved by $s$ and is non-decreasing w.r.t. $\beta$.
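For clarity, here is a small C++ sketch of how these profiles can be computed once the times $T_\tau(p, s)$ are available; the timings below are made up, and a solver that never reaches the threshold is marked with infinity.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

int main() {
  // times[p][s]: time for solver s to reach the objective threshold on problem p
  // (made-up numbers; infinity means the threshold was never reached).
  const double inf = std::numeric_limits<double>::infinity();
  std::vector<std::vector<double>> times = {
      {1.2, 5.0, 3.1},   // problem 0
      {0.8, 0.9, inf},   // problem 1
      {10.0, 2.5, 4.0},  // problem 2
  };
  const int num_solvers = 3;
  std::vector<double> betas = {1.0, 2.0, 4.0, 8.0};

  for (int s = 0; s < num_solvers; ++s) {
    std::cout << "solver " << s << ":";
    for (double beta : betas) {
      int solved = 0;
      for (const auto& row : times) {
        double best = *std::min_element(row.begin(), row.end());
        if (row[s] <= beta * best) ++solved;  // solver s "solves" p within factor beta
      }
      std::cout << "  rho(" << beta << ")=" << double(solved) / times.size();
    }
    std::cout << "\n";
  }
  return 0;
}
```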


We plot the performance profiles of the solvers in Fig. 5. To verify the benefits of using stochastic clustering, herein we also compare STBA with its variant STBA-fixed, which uses a fixed clustering as in previous methods [24, 17, 40]. When $\tau$ is equal to 0.1 and 0.01, our STBA is able to solve nearly 100% of the problems for any $\beta$, because it always reaches the objective threshold in less time than the LM and DL methods by a factor of more than 5. This is mainly attributed to the reduced per-iteration cost, as we can see from the convergence curves in Fig. 6. When $\tau$ becomes 0.001 and the threshold is harder to achieve, STBA is less efficient but still performs on par with LM-iterative and LM-sparse as $\beta$ grows, and better than DL for any $\beta$. On the contrary, the performance of STBA-fixed drops drastically when $\tau$ decreases to 0.001. The performance change for STBA and STBA-fixed when $\tau$ decreases is mainly caused by the fact that the clustering methods come with the price of slower convergence near the stationary points [17, 40]. Compared with the full second-order solvers such as LM and DL, clustering methods only utilize the second order information within clusters. However, as opposed to STBA-fixed which uses a fixed clustering, STBA has mitigated the negative effect of clustering by introducing stochasticity, so that different second-order information can be utilized to boost convergence as the clustering changes. Despite the slower convergence rate, the benefit of STBA that it can reduce most of the loss with the lowest time cost (e.g., 99% loss reduction with only 1/5 of the time of the counterparts in Fig. 5(b)) should still be highlighted, especially for real-time SLAM applications where bundle adjustment is called repeatedly to correct the drift.
5.3 Results on Large-Scale Dataset

Data | #images | #clusters (DBACC) | #clusters (STBA) | RPE (pixel) (DBACC) | RPE (pixel) (STBA) | #Jacobian/RCS evaluations (DBACC) | #Jacobian/RCS evaluations (STBA) | Mean iteration time (s) (DBACC) | Mean iteration time (s) (STBA)
---|---|---|---|---|---|---|---|---|---
LS-1 | 29975 | 300 | 340 | 0.823 | 0.818 | 1011/1080 | 49/100 | 912.5 | 71.0 |
LS-2 | 33634 | 336 | 386 | 0.766 | 0.783 | 854/860 | 48/100 | 934.8 | 79.3 |
LS-3 | 33809 | 339 | 391 | 1.083 | 1.056 | 1025/1100 | 49/100 | 1107.0 | 89.9 |
LS-4 | 44276 | 444 | 505 | 0.909 | 0.882 | 877/900 | 49/100 | 988.1 | 71.2 |
To evaluate the scalability, we conduct distributed experiments on the Large-Scale (LS) dataset, which includes the urban scenes of four cities named LS-1, LS-2, LS-3 and LS-4. We run a distributed SfM program of our own to produce initial sparse reconstructions of the four scenes and add Gaussian noise with a standard deviation of 3 meters to the camera centers and points. Then we compare our distributed STBA against the state-of-the-art distributed bundle adjustment framework DBACC [40].
We visualize the sparse reconstructions and the convergence curves of the four scenes in Fig. 7 and report the statistics in Table 2. As we can see from Fig. 7, STBA achieves convergence rates faster than DBACC by an order of magnitude. The main cause of the gap is that DBACC, which is based on the ADMM formulation [5], has to take inner iterations to solve a new minimization problem in every ADMM iteration. Although we have set the maximum inner iteration number to merely 10, DBACC still takes many more Jacobian and RCS evaluations and thus has iterations longer than STBA's by an order of magnitude, as shown in Table 2. Besides, as opposed to DBACC, our STBA does not introduce a plethora of extra hyper-parameters.
5.4 Ablation Study on Steepest Descent Correction

Here we perform an ablation study on the steepest descent correction proposed in Sec. 4.3 to validate its efficacy. We compare our STBA with a variant called STBA* which does not use the correction. We run STBA, STBA* and LM-sparse on 1DSfM and KITTI. Since the steepest descent correction is designed particularly for a small trust region, we set the lower bound of the damping parameter to 0.1 in the experiments. We observe that, by using the correction, STBA consistently achieves faster convergence than STBA* and performs on par with LM-sparse on all the scenes. Sample convergence curves w.r.t. the iterations are shown in Fig. 8, where STBA and LM-sparse have very close convergence curves. This manifests that the steepest descent correction indeed corrects the approximation errors of the STBA steps and hence boosts the convergence.
6 Conclusion
In this paper, we rethink the proper way of integrating the clustering scheme into solving bundle adjustment by proposing STBA. First, STBA reformulates an LM iteration based on the clustering of the visibility graph, but meanwhile introduces additional equality constraints across the clusters. Second, we relax the constraints into chance constraints and solve the problem by sampled convex program, which randomly samples the chance constraints with the intention of splitting the large reduced camera system into small clusters. Not only does this reduce the per-iteration cost, but it also allows parallel and distributed computing to accommodate the increase of the problem size. Moreover, we present a steepest descent correction technique to remedy the approximation errors of the STBA steps for a small trust region, and provide a practical implementation of stochastic graph clustering for constraint sampling. Extensive experiments on Internet SfM data, SLAM data and large-scale data demonstrate the efficiency and scalability of our approach.
Appendices
Appendix 0.A Complexity Analysis
In this section, we will analyze how our STBA reduces the time and space complexity by solving the split reduced camera system (RCS) (Eqs. 13) in place of the original RCS (Eq. 5), whether the exact or inexact linear solver is used, as shown in Table 3. Please note that the analysis considers the most general case and does not presuppose any special structures, e.g., the extreme sparsity, of the RCS.
 | Cholesky factorization (LM) | Cholesky factorization (STBA) | Conjugate gradient (LM) | Conjugate gradient (STBA)
---|---|---|---|---
Time complexity | $O(n^3)$ | $O(\Gamma^2 n)$ | $O(q\sqrt{\kappa})$ | $O(q'\sqrt{\kappa'})$
Space complexity | $O(n^2)$ | $O(\Gamma n)$ | $O(q)$ | $O(q')$
Cholesky factorization is known to have a cubic time complexity and a quadratic space complexity in the camera number $n$ when solving the RCS [3]. If using Cholesky factorization to solve the split RCS exactly, the time and space complexity of each sub-problem of STBA are $O(\Gamma^3)$ and $O(\Gamma^2)$, respectively, where $\Gamma$ is the maximum cluster size. Since there are about $n/\Gamma$ sub-problems, the time and space complexity of STBA are $O(\Gamma^2 n)$ and $O(\Gamma n)$, respectively. With $\Gamma$ being a constant, the time and space complexity of STBA are linear in $n$.
Besides the exact solvers, conjugate gradient is an inexact approach to solving the linear equations iteratively. It is known to have a time complexity of $O(q\sqrt{\kappa})$ and a space complexity of $O(q)$ when solving the RCS [36], where $q$ is the camera connection number and $\kappa$ is the condition number of the Schur complement $\mathbf{S}$ in Eq. 5. However, $\mathbf{S}$ is generally ill-conditioned, which necessitates preconditioning to reduce the condition number [3, 24, 14, 22]. The amount of decrease in $\kappa$ depends on how accurately the preconditioning can be performed. If using conjugate gradient to solve the split RCS inexactly, STBA reduces the time complexity to $O(q'\sqrt{\kappa'})$ and the space complexity to $O(q')$. Here, $q'$ is the sampled camera connection number, and $\kappa'$ is the maximum condition number of the split Schur complements after preconditioning. Due to the sampling of the camera connections, $q'$ is smaller than $q$. In our experiments, $q'$ is less than one fifth of $q$ when we set the maximum cluster size $\Gamma$ to 100. The condition number $\kappa'$ also should be smaller than $\kappa$, because preconditioning the low-dimensional $\tilde{\mathbf{S}}_i$ can be performed more accurately and efficiently than the high-dimensional $\mathbf{S}$.
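To make the orders of magnitude concrete, a back-of-the-envelope comparison of the exact solvers under the assumptions above (camera number $n$, maximum cluster size $\Gamma$, roughly $n/\Gamma$ clusters); the numbers are purely illustrative:

```latex
% Exact (Cholesky) solve of the full RCS vs. the split RCS, e.g. n = 10^4 cameras, Gamma = 100:
\underbrace{O(n^3)}_{\text{LM}} = O(10^{12})
\qquad\text{vs.}\qquad
\underbrace{\tfrac{n}{\Gamma}\cdot O(\Gamma^3) = O(\Gamma^2 n)}_{\text{STBA}} = O(10^{8}),
% i.e. about a 10^4-fold reduction in arithmetic work, while the memory footprint
% drops from O(n^2) = O(10^8) to O(\Gamma n) = O(10^6).
```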
Appendix 0.B Ablation Studies on Stochastic Graph Clustering
In Sec. 4.4, we proposed a stochastic graph clustering (SGC) algorithm to sample the chance constraints in each iteration. In this section, we conduct ablation studies on the clustering strategies and the maximum cluster size $\Gamma$. First, we make comparisons with the 3 clustering methods below.
- KMeans, which partitions the camera centers into $k$ clusters by using the K-Means algorithm. In order to introduce randomness, we randomly choose $k$ camera centers as the initial means in the first step.
- NCut, which recursively partitions the camera graph in a top-down manner with normalized cuts until the cluster sizes drop below the maximum cluster size $\Gamma$.
- NSGC, the abbreviation for non-stochastic graph clustering. It is a variant of SGC which uses the classic greedy Louvain's algorithm [7] rather than joining clusters randomly as SGC does.
Apart from the clustering strategies, we run all the algorithms with 6 different maximum cluster sizes $\Gamma$, which are 1, 25, 50, 100, 200 and $+\infty$. Here, "$\Gamma = 1$" means that each camera forms a cluster, and "$\Gamma = +\infty$" means all the cameras are grouped into a single cluster, in which case STBA is equivalent to the classic LM algorithm without using clustering. We run all the methods on each problem of 1DSfM [37] and KITTI [19] in the same way as in Sec. 5.2. We record the final losses of all the clustering algorithms and normalize them by dividing by the minimum loss that the algorithms attained. Therefore, the smaller the normalized loss is, the better the convergence. We show the average normalized losses of different methods in Fig. 9(a).

First of all, the proposed SGC reaches the minimum losses at all the cluster sizes, showing its efficacy compared with KMeans and NCut. The disadvantage of KMeans is that it does not utilize the camera connectivity for clustering, as opposed to NCut and SGC. In comparison with SGC, NCut partitions a graph into clusters in a top-down manner. The downside of this strategy is that it does not explicitly decide whether an edge at the bottom level will be selected or discarded with a defined probability as SGC does (see Eq. 16). Since NCut always stops once the cluster sizes are smaller than $\Gamma$, some nodes may constantly stay in the same clusters without being exposed to the cuts. Instead, the bottom-up strategy of SGC considers the selection of every edge from the very beginning and contributes to better convergence than NCut in the end. Besides, the outperformance of SGC over NSGC indicates the necessity of making the graph clustering randomized for better convergence.
Second, all of KMeans, NCut and SGC have better convergence as $\Gamma$ increases. It is reasonable because the larger $\Gamma$ is, the more chance constraints can be sampled, leading to a more accurate approximation by the chance constrained relaxation. In the extreme case when $\Gamma = 1$, all the chance constraints are neglected (i.e., the confidence level $\alpha = 0$ in Eq. 12). It induces poor approximations for the STBA iterations and hence leads to very bad convergence. However, the final loss can be reduced by an order of magnitude by just increasing $\Gamma$ to 25. Besides, it is noteworthy that SGC is the least sensitive to $\Gamma$ compared with NCut and KMeans, as the loss does not vary a lot when $\Gamma$ changes from 25 to 200. Different from the other methods, NSGC gets a larger loss when $\Gamma$ increases from 25 to 200. We found that this is because NSGC uses fixed clusters and neglects the geometric constraints between the clusters all the time, which would cause inconsistency between the geometries of different clusters. The problem is more severe when the cluster size increases, as it is less flexible to align large clusters seamlessly than small clusters. And the reduced flexibility of large clusters is more likely to cause layered geometries at the cluster boundaries (see Fig. 9(b)). In Fig. 9(b), we show the reconstruction results of Gerrard Hall from the COLMAP dataset [1] produced by different graph clustering methods with the same maximum cluster size. All the methods except our SGC lead to layered facade reconstruction results.
Appendix 0.C Full Algorithm
Below we lay out the full algorithm of STBA.
References
- [1] https://colmap.github.io/datasets.html
- [2] Agarwal, S., Mierle, K., Others: Ceres solver. http://ceres-solver.org
- [3] Agarwal, S., Snavely, N., Seitz, S.M., Szeliski, R.: Bundle adjustment in the large. In: ECCV (2010)
- [4] Amestoy, P.R., Davis, T.A., Duff, I.S.: An approximate minimum degree ordering algorithm. SIAM Journal on Matrix Analysis and Applications 17(4), 886–905 (1996)
- [5] Bertsekas, D.P.: Parallel and distributed computation: numerical methods, vol. 23 (1989)
- [6] Bertsekas, D.P.: Constrained optimization and Lagrange multiplier methods. Academic press (2014)
- [7] Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment 2008(10) (2008)
- [8] Calafiore, G., Campi, M.C.: Uncertain convex programs: randomized solutions and confidence levels. Mathematical Programming 102(1), 25–46 (2005)
- [9] Campi, M.C., Garatti, S.: A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality. Journal of Optimization Theory and Applications 148(2) (2011)
- [10] Chatterjee, A., Madhav Govindu, V.: Efficient and robust large-scale rotation averaging. In: ICCV (2013)
- [11] Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Fixed-point algorithms for inverse problems in science and engineering (2011)
- [12] Darmaillac, Y., Loustau, S.: Mcmc louvain for online community detection. arXiv preprint arXiv:1612.01489 (2016)
- [13] Davis, T.A., Gilbert, J.R., Larimore, S.I., Ng, E.G.: Algorithm 836: Colamd, a column approximate minimum degree ordering algorithm. TOMS 30(3), 377–380 (2004)
- [14] Dellaert, F., Carlson, J., Ila, V., Ni, K., Thorpe, C.E.: Subgraph-preconditioned conjugate gradients for large scale slam. In: IROS (2010)
- [15] Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Mathematical programming 91(2), 201–213 (2002)
- [16] Engels, C., Stewénius, H., Nistér, D.: Bundle adjustment rules
- [17] Eriksson, A., Bastian, J., Chin, T.J., Isaksson, M.: A consensus-based framework for distributed bundle adjustment. In: CVPR (2016)
- [18] Fang, M., Pollok, T., Qu, C.: Merge-sfm: Merging partial reconstructions. In: BMVC (2019)
- [19] Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: CVPR (2012)
- [20] Hestenes, M.R., et al.: Methods of conjugate gradients for solving linear systems. Journal of research of the National Bureau of Standards 49(6), 409–436 (1952)
- [21] Jeong, Y., Nister, D., Steedly, D., Szeliski, R., Kweon, I.S.: Pushing the envelope of modern methods for bundle adjustment. PAMI 34(8), 1605–1617 (2011)
- [22] Jian, Y.D., Balcan, D.C., Dellaert, F.: Generalized subgraph preconditioners for large-scale bundle adjustment. In: Outdoor and Large-Scale Real-World Scene Analysis (2012)
- [23] Konolige, K., Garage, W.: Sparse sparse bundle adjustment. In: BMVC (2010)
- [24] Kushal, A., Agarwal, S.: Visibility based preconditioning for bundle adjustment. In: CVPR (2012)
- [25] Li, P., Arellano-Garcia, H., Wozny, G.: Chance constrained programming approach to process optimization under uncertainty. Computers & chemical engineering 32(1-2), 25–45 (2008)
- [26] Lourakis, M.I., Argyros, A.A.: Sba: A software package for generic sparse bundle adjustment. TOMS 36(1), 2 (2009)
- [27] Lourakis, M., Argyros, A.A.: Is levenberg-marquardt the most efficient optimization algorithm for implementing bundle adjustment? In: ICCV (2005)
- [28] Luo, Z., Shen, T., Zhou, L., Zhang, J., Yao, Y., Li, S., Fang, T., Quan, L.: Contextdesc: Local descriptor augmentation with cross-modality context. In: CVPR (2019)
- [29] Luo, Z., Shen, T., Zhou, L., Zhu, S., Zhang, R., Yao, Y., Fang, T., Quan, L.: Geodesc: Learning local descriptors by integrating geometry constraints. In: ECCV (2018)
- [30] Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. Journal of the society for Industrial and Applied Mathematics 11(2), 431–441 (1963)
- [31] Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Transactions on Robotics 33(5), 1255–1262 (2017)
- [32] Ni, K., Steedly, D., Dellaert, F.: Out-of-core bundle adjustment for large-scale 3d reconstruction. In: ICCV (2007)
- [33] Rotkin, V., Toledo, S.: The design and implementation of a new out-of-core sparse cholesky factorization method. TOMS 30(1), 19–46 (2004)
- [34] Schaeffer, S.E.: Survey: Graph clustering. Computer Science Review 1(1), 27–64 (2007)
- [35] Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR (2016)
- [36] Shewchuk, J.R., et al.: An introduction to the conjugate gradient method without the agonizing pain (1994)
- [37] Wilson, K., Snavely, N.: Robust global translations with 1dsfm. In: ECCV (2014)
- [38] Wu, C., Agarwal, S., Curless, B., Seitz, S.M.: Multicore bundle adjustment. In: CVPR (2011)
- [39] Zach, C.: Robust bundle adjustment revisited. In: ECCV (2014)
- [40] Zhang, R., Zhu, S., Fang, T., Quan, L.: Distributed very large scale bundle adjustment by global camera consensus. In: ICCV (2017)
- [41] Zhou, L., Zhu, S., Luo, Z., Shen, T., Zhang, R., Zhen, M., Fang, T., Quan, L.: Learning and matching multi-view descriptors for registration of point clouds. In: ECCV (2018)
- [42] Zhou, L., Zhu, S., Shen, T., Wang, J., Fang, T., Quan, L.: Progressive large scale-invariant image matching in scale space. In: ICCV (2017)
- [43] Zhu, S., Shen, T., Zhou, L., Zhang, R., Wang, J., Fang, T., Quan, L.: Parallel structure from motion from local increment to global averaging. arXiv preprint arXiv:1702.08601 (2017)
- [44] Zhu, S., Zhang, R., Zhou, L., Shen, T., Fang, T., Tan, P., Quan, L.: Very large-scale global sfm by distributed motion averaging. In: CVPR (2018)