
UV-SLAM: Unconstrained Line-based SLAM Using Vanishing Points for Structural Mapping

Hyunjun Lim1, Jinwoo Jeon1, and Hyun Myung2*
1H. Lim and J. Jeon are with the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea {tp02134, zinuok}@kaist.ac.kr
2H. Myung is with the School of Electrical Engineering, KI-AI, and KI-R, KAIST, Daejeon, Republic of Korea [email protected]
This work has been supported by the Unmanned Swarm CPS Research Laboratory program of the Defense Acquisition Program Administration and the Agency for Defense Development (UD190029ED).
Abstract

In feature-based simultaneous localization and mapping (SLAM), line features complement the sparsity of point features and make it possible to map the structure of the surrounding environment. Existing approaches that utilize line features have primarily employed a measurement model based on line re-projection. However, because this line measurement model uses only each line's normal vector in the Plücker coordinate, the direction vectors used in the 3D line mapping process cannot be corrected. As a result, problems such as degeneracy that occur during 3D line mapping cannot be resolved. To tackle this problem, this paper presents UV-SLAM, an unconstrained line-based SLAM that uses vanishing points for structural mapping. This paper focuses on exploiting structural regularities without any constraints, such as the Manhattan world assumption. To this end, we use the vanishing points that can be obtained from the line features. The difference between the vanishing point observation computed from the line features in the image and the vanishing point estimate computed from the direction vector is defined as a residual and added to the cost function of optimization-based SLAM. Furthermore, through a Fisher information matrix rank analysis, we prove that vanishing point measurements guarantee a unique mapping solution. Finally, we demonstrate on public datasets that the localization accuracy and mapping quality are improved compared with state-of-the-art algorithms.

I Introduction

Feature-based visual simultaneous localization and mapping (SLAM) has been developed primarily around point features, because a point is the smallest unit that can express the characteristics of an image and is advantageous in low-computation environments. In addition, point features are easy to track, which makes them well suited for localization. However, point features have several drawbacks[1]. First, they are not robust in low-texture environments such as hallways. Second, they are sensitive to illumination changes. Finally, they are sparse, which makes it difficult to visualize the surrounding environment as a 3D map.

Figure 1: Top views of 3D line mapping results with LiDAR point clouds overlaid in a real corridor environment. (a) ALVIO[2, 3]. (b) UV-SLAM.

To supplement point features, line-based methods have been proposed. Line features can additionally be used in low-texture environments such as corridors. In addition, because a line consists of many points, its characteristics are likely to be preserved even when an illumination change occurs[4, 5]. Finally, because line features exhibit structural regularities, the surrounding environment can be easily identified through 3D mapping[6].

With the above advantages, many studies have applied line features to visual SLAM. First, a method using the Plücker coordinate and the orthonormal representation was proposed to represent 3D lines[7], since a 3D line has a higher degree of freedom (DoF) than a 3D point. Based on this representation, a line measurement model was defined analogously to the point measurement model: a 3D line is re-projected, and the difference from the newly observed line is computed. Most line-based algorithms adopt similar extensions of existing point-based methods. Filtering-based approaches[8, 9] were developed from MSCKF[10]. Some optimization-based methods[11, 12, 13] extended ORB-SLAM[14], and other approaches[1, 15, 2, 3] built on VINS-Mono[16].

However, algorithms that apply only the line measurement model do not solve the problems that arise in the 3D mapping of lines. Among these difficulties, a degeneracy problem occurs during the 3D mapping of line features. Degeneracy refers to a phenomenon in which 3D features cannot be uniquely determined through triangulation[17]. When an observed 2D point is close to the epipole, the point feature cannot be determined as a single 3D point because the 3D point can lie anywhere on the baseline. Similarly, when an observed 2D line passes close to the epipole, the line feature cannot be determined as a single 3D line because the 3D line lies on the epipolar plane. The mapping of line features is inaccurate because degeneracy occurs more frequently for lines than for points. However, because the line measurement model used in existing algorithms employs only each line's normal vector in the Plücker coordinate, the direction vectors cannot be corrected. Fig. 1 compares the top views of the 3D line mapping results of ALVIO[2, 3] and the proposed algorithm, with LiDAR point clouds overlaid, in a real corridor environment. ALVIO produces a poor mapping result despite the lines having structural regularities, as shown in Fig. 1(a).

In this paper, we propose UV-SLAM, an algorithm that solves the above problems using the lines' structural regularities, as shown in Fig. 1(b). The main contributions of this paper are as follows:

  • To the best of our knowledge, the proposed UV-SLAM is the first optimization-based monocular SLAM that uses vanishing point measurements for structural mapping without any restrictions on camera motion or environment. In particular, our algorithm does not rely on the Manhattan world assumption when extracting vanishing points and using them as measurements.

  • We define a novel residual term and Jacobian for the vanishing point measurements based on the most common representations of 3D line features: the Plücker coordinate and the orthonormal representation.

  • We prove that the proposed method guarantees the observability of 3D lines through a Fisher information matrix (FIM) rank analysis. In doing so, the problems that occur in the 3D mapping of line features are shown to be solved.

The rest of this paper is organized as follows. Section II gives a review of related works. Section III explains the proposed method in depth. Section IV analyzes the Fisher information matrix rank to prove the validity of the proposed method. Section V provides the experimental results. Finally, Section VI concludes by summarizing our contributions and discussing future work.

II Related works

II-A Degeneracy of Line Features

Some works have investigated the degeneracy of line features[18, 19]. In addition, Yang et al. investigated degenerate motions using two distinct line triangulation methods[9]. Subsequently, they analyzed the observability of various features and degenerate camera motions in an inertial measurement unit (IMU) aided navigation system[20]. However, in these studies, all degenerate lines are simply eliminated when line degeneracy occurs. As a result, the removed lines cause information loss.

To handle degenerate lines, Ok et al. reconstructed 3D line segments using imaginary points when the lines are close to the epipolar line[21]. However, this approach works only when degenerate lines intersect with other lines, which restricts its applicability. Our previous work solved the degeneracy by using structural constraints under parallel conditions, exploiting the fact that degenerate lines frequently occur during pure translational motion[3]. However, this method is limited to pure translational motion. Therefore, to improve the quality of line mapping, a method that can be used independently of camera motion is required.

II-B Line-based SLAM with Manhattan or Atlanta World Assumption

In [22, 23], the rotation matrix was estimated using the Manhattan world assumption. Based on this, decoupled methods have been proposed that estimate the translation after computing the rotation from vanishing points[24, 25, 26]. In addition, some approaches applied line features to SLAM using the Manhattan or Atlanta world assumption[27, 28]. These methods use a novel 2-DoF line representation to exploit only lines with structural regularities. Furthermore, one study uses the 2-DoF line representation to classify structural and non-structural lines[29]. However, because these approaches use only structural lines with dominant directions, they are practical only in indoor environments where the assumptions mostly hold. Therefore, a novel algorithm that is not restricted by such assumptions is needed.

II-C Vanishing Point Measurements

Some approaches use vanishing point measurements without such assumptions. In [30], parallel lines were clustered based on vanishing points. Residuals were then constructed from the conditions that parallel lines must lie in one plane and that their cross product must be zero. However, accurate mapping results could not be obtained when the initial estimate was inaccurate due to degeneracy.

In addition, vanishing points have been used as an observation model. In [31], residuals are defined to apply unbounded vanishing point measurements to line-based SLAM. Unfortunately, no proof is given that the vanishing point measurement improves the localization accuracy and line mapping results of line-based SLAM.

III Proposed method

Figure 2: Block diagram illustrating the framework of UV-SLAM. The dashed box represents the newly added blocks in this paper. When an RGB image is received, detection and matching of line features are carried out. Afterward, the vanishing points are detected, and each line is clustered. After creating 3D lines through the triangulation process, the residuals of both the lines and the vanishing points are defined. Finally, localization and mapping results can be obtained through sliding window optimization.

III-A Framework of Algorithm

The overall framework of UV-SLAM is shown in Fig. 2. The proposed method is based on VINS-Mono[16], and the IMU and point measurements are used in a similar way. Point features are extracted with the Shi-Tomasi detector[32] and tracked by the KLT tracker[33]. In addition, the IMU measurement model is defined with the pre-integration method[34]. Finally, our optimization-based method employs two-way marginalization with the Schur complement[35].

To add line features to the monocular visual-inertial odometry (VIO) system, line features are extracted with the line segment detector (LSD)[36] and tracked with the line binary descriptor (LBD)[37]. Whereas a 3D point can be intuitively expressed as (x, y, z), a 3D line requires a more elaborate representation. Therefore, the proposed algorithm employs the Plücker coordinate and the orthonormal representation used in [7]. The Plücker coordinate is an intuitive way to represent 3D lines, and a 3D line is represented as follows:

\mathbf{L}=(\mathbf{n},\mathbf{d})^{\top}\in\mathbb{R}^{6}, (1)

where \mathbf{n} and \mathbf{d} represent the normal and direction vectors, respectively. The Plücker coordinate is used in the triangulation and re-projection processes. Whereas a 3D line actually has 4 DoF, a line in the Plücker coordinate has 6 DoF. Therefore, an over-parameterization problem occurs in the optimization process of VIO or visual SLAM. To solve this, the orthonormal representation, a 4-DoF representation of lines, is employed. It is used in the optimization process and can be expressed as follows:

\mathbf{o}=[\bm{\psi},\phi], (2)

where \bm{\psi} represents the rotation of the 3D line, expressed in Euler angles, with respect to the camera coordinate system, and \phi is the parameter representing the minimal distance from the camera center to the line. The conversion between the Plücker coordinate and the orthonormal representation is given in [7].
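As a concrete illustration, the following Python snippet is a minimal sketch of this conversion, assuming \mathbf{n}\perp\mathbf{d} (which holds for Plücker lines) and that numpy and scipy are available; the function names are ours and are not taken from the released UV-SLAM code.

import numpy as np
from scipy.spatial.transform import Rotation

def plucker_to_orthonormal(n, d):
    # U collects the directions of n, d, and n x d (cf. Eq. (10) below).
    u1 = n / np.linalg.norm(n)
    u2 = d / np.linalg.norm(d)
    u3 = np.cross(n, d) / np.linalg.norm(np.cross(n, d))
    U = np.column_stack([u1, u2, u3])
    psi = Rotation.from_matrix(U).as_euler('xyz')            # 3-DoF rotation part
    phi = np.arctan2(np.linalg.norm(d), np.linalg.norm(n))   # 1-DoF distance part
    return psi, phi

def orthonormal_to_plucker(psi, phi):
    U = Rotation.from_euler('xyz', psi).as_matrix()
    w1, w2 = np.cos(phi), np.sin(phi)                        # (w1, w2) as in Eq. (10)
    n = w1 * U[:, 0]
    d = w2 * U[:, 1]
    return n, d   # defined up to a common scale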

In addition, the UV-SLAM can determine whether the extracted lines have structural regularities or not. For vanishing point detection, the proposed algorithm use J-linkage[38], which can find multiple instances in the presence of noise and outliers. The overall process is as follows: First, vanishing point hypotheses are created through random sampling for all lines extracted from the image. Subsequently, after merging similar ones through comparison between the hypotheses, vanishing points are calculated. Because the J-linkage can find all possible vanishing points through the hypotheses, it can find more vanishing points than other algorithms with the Manhattan world assumption.
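To make this step concrete, the following is a minimal sketch (in Python with numpy; it is our illustration, not the J-linkage implementation) of how a single vanishing point can be estimated from one cluster of line segments: each segment defines a homogeneous image line, and the vanishing point is the least-squares solution of the stacked line equations.

import numpy as np

def vanishing_point_from_segments(segments):
    """segments: list of (p_start, p_end) pixel coordinates, length >= 2."""
    lines = []
    for ps, pe in segments:
        ps_h = np.array([ps[0], ps[1], 1.0])
        pe_h = np.array([pe[0], pe[1], 1.0])
        l = np.cross(ps_h, pe_h)            # homogeneous line through the segment
        lines.append(l / np.linalg.norm(l))
    A = np.vstack(lines)
    _, _, Vt = np.linalg.svd(A)             # v minimizes sum_i (l_i^T v)^2
    return Vt[-1]                           # homogeneous vanishing point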

III-B State Definition

In this paper, (\cdot)^{w}, (\cdot)^{c}, and (\cdot)^{b} represent the world, camera, and body coordinates, respectively. In addition, (\cdot)^{w}_{b} denotes the coordinate transformation of a rotation matrix, quaternion, or translation from the body coordinate to the world coordinate. The state vector used in our system is as follows:

\mathcal{X}=[\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{I-1},\lambda_{0},\lambda_{1},\cdots,\lambda_{J-1},\mathbf{o}_{0},\mathbf{o}_{1},\cdots,\mathbf{o}_{K-1}],
\mathbf{x}_{i}=[\mathbf{p}^{w}_{b_{i}},\mathbf{q}^{w}_{b_{i}},\mathbf{v}^{w}_{b_{i}},\mathbf{b}_{a},\mathbf{b}_{g}],\quad i\in[0,I-1],
\mathbf{o}_{k}=[\bm{\psi}_{k},\phi_{k}],\quad k\in[0,K-1], (3)

where \mathcal{X} represents the entire state, and \mathbf{x}_{i} represents the body state in the i-th sliding window frame, which consists of the position, quaternion, velocity, and accelerometer and gyroscope biases. In addition, the entire state includes the inverse depths of point features, denoted \lambda_{j}, j\in[0,J-1]. In this paper, lines expressed in the orthonormal representation are newly added as \mathbf{o}. I, J, and K denote the sliding window size and the numbers of point features and line features, respectively.
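For illustration, the sliding-window state of (3) can be laid out as in the following sketch (our illustrative Python layout, not the actual implementation):

from dataclasses import dataclass
import numpy as np

@dataclass
class BodyState:
    p_wb: np.ndarray  # position p^w_b, shape (3,)
    q_wb: np.ndarray  # orientation quaternion q^w_b, shape (4,)
    v_wb: np.ndarray  # velocity v^w_b, shape (3,)
    b_a: np.ndarray   # accelerometer bias, shape (3,)
    b_g: np.ndarray   # gyroscope bias, shape (3,)

@dataclass
class SlidingWindowState:
    bodies: list            # I body states x_0 ... x_{I-1}
    inv_depths: np.ndarray  # J inverse depths lambda_0 ... lambda_{J-1}
    lines: np.ndarray       # K x 4 orthonormal line parameters (psi, phi)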

Figure 3: An example of a factor graph for UV-SLAM. \mathbf{o}_{0} is a non-structural line, so only the line feature factor is used. On the other hand, \mathbf{o}_{1} is a line with structural regularity, so both the line feature factor and the vanishing point factor are used.

III-C UV-SLAM

Employing the states defined in (3), the overall objective function for optimization is as follows:

\min_{\mathcal{X}}\Big\{\|\mathbf{r}_{0}-\mathbf{J}_{0}\mathcal{X}\|^{2}+\sum_{i\in\mathcal{B}}\|\mathbf{r}_{I}(\mathbf{z}^{b_{i}}_{b_{i+1}},\mathcal{X})\|^{2}_{\Sigma^{b_{i}}_{b_{i+1}}}+\sum_{(i,j)\in\mathcal{P}}\rho_{p}\|\mathbf{r}_{p}(\mathbf{z}^{c_{i}}_{p_{j}},\mathcal{X})\|^{2}_{\Sigma^{c_{i}}_{p_{j}}}+\sum_{(i,k)\in\mathcal{L}}\rho_{l}\|\mathbf{r}_{l}(\mathbf{z}^{c_{i}}_{l_{k}},\mathcal{X})\|^{2}_{\Sigma^{c_{i}}_{l_{k}}}+\sum_{(i,k)\in\mathcal{V}}\rho_{v}\|\mathbf{r}_{v}(\mathbf{z}^{c_{i}}_{v_{k}},\mathcal{X})\|^{2}_{\Sigma^{c_{i}}_{v_{k}}}\Big\}, (4)

where \mathbf{r}_{0}, \mathbf{r}_{I}, \mathbf{r}_{p}, \mathbf{r}_{l}, and \mathbf{r}_{v} represent the marginalization, IMU, point, line, and vanishing point measurement residuals, respectively. In addition, \mathbf{z}^{b_{i}}_{b_{i+1}}, \mathbf{z}^{c_{i}}_{p_{j}}, \mathbf{z}^{c_{i}}_{l_{k}}, and \mathbf{z}^{c_{i}}_{v_{k}} denote the IMU, point, line, and vanishing point observations, respectively; \mathcal{B} is the set of all pre-integrated IMU measurements in the sliding window; \mathcal{P}, \mathcal{L}, and \mathcal{V} are the sets of point, line, and vanishing point measurements in the observed frames; and \Sigma^{b_{i}}_{b_{i+1}}, \Sigma^{c_{i}}_{p_{j}}, \Sigma^{c_{i}}_{l_{k}}, and \Sigma^{c_{i}}_{v_{k}} represent the IMU, point, line, and vanishing point measurement covariance matrices, respectively. \rho_{p}, \rho_{l}, and \rho_{v} denote the loss functions of the point, line, and vanishing point measurements, respectively. \rho_{p} and \rho_{l} are set to the Huber norm[39], and \rho_{v} is set to the inverse tangent function because of the unbounded nature of the vanishing point measurement model. An example of the factor graph for the defined cost function is shown in Fig. 3. If a line has no corresponding vanishing point measurement, only the line feature factor is used, as in the case of \mathbf{o}_{0}. If a line has a corresponding vanishing point measurement, both the line feature and vanishing point factors are employed, as in the case of \mathbf{o}_{1}. The Ceres Solver[40] is used for the optimization process.
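As a rough illustration of the two robust losses, the following sketch (our notation; Ceres provides equivalent HuberLoss and ArctanLoss classes) shows how each loss reshapes a squared residual s=\|\mathbf{r}\|^{2}: the Huber loss grows linearly for large residuals, whereas the inverse tangent loss is bounded, which keeps unbounded vanishing point residuals from dominating the cost.

import numpy as np

def huber_loss(s, delta=1.0):
    # rho_p, rho_l: quadratic near zero, linear growth for large residuals
    return s if s <= delta ** 2 else 2.0 * delta * np.sqrt(s) - delta ** 2

def arctan_loss(s, a=1.0):
    # rho_v: bounded loss (an assumed form for illustration, not the exact
    # Ceres implementation), so a very large residual cannot dominate the cost
    return a * np.arctan(s / a)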

III-D Line Measurement Model

First, the re-projection of the 3D line 𝐋\mathbf{L} in the Plücker coordinate is as follows:

\mathbf{l}^{c}=\begin{bmatrix}l_{1}\\ l_{2}\\ l_{3}\end{bmatrix}=\mathbf{K}^{\prime}\mathbf{n}^{c}=f_{x}f_{y}(\mathbf{K}^{-1})^{\top}\mathbf{n}^{c}=\begin{bmatrix}f_{y}&0&0\\ 0&f_{x}&0\\ -f_{y}c_{x}&-f_{x}c_{y}&f_{x}f_{y}\end{bmatrix}\mathbf{n}^{c}=\mathbf{n}^{c}, (5)

where \mathbf{l}^{c}, \mathbf{K}^{\prime}, and \mathbf{K} represent the re-projected line, the projection matrix of a line feature, and the camera intrinsic matrix, respectively. (f_{x}, f_{y}) and (c_{x}, c_{y}) denote the focal lengths and the principal point of the image, respectively. Because the proposed algorithm operates on the normalized image plane, \mathbf{K} and \mathbf{K}^{\prime} are identity matrices. As a result, the re-projected line is equal to the normal vector in the proposed method.

Figure 4: Illustration of the line residual. The red solid and blue dashed lines on the image represent the observation and the re-projected estimation, respectively. The 3D line obtained from triangulation is re-projected into a new frame. Afterward, the distances between both endpoints of the observed line and the re-projected line define the residual \mathbf{r}_{l} of the line.

As shown in Fig. 4, the residual of the line measurement model is defined as the following re-projection error:

\mathbf{r}_{l}=\begin{bmatrix}d(\mathbf{p}_{s},\mathbf{l}^{c})\\ d(\mathbf{p}_{e},\mathbf{l}^{c})\end{bmatrix}, (6)

where

d(\mathbf{p},\mathbf{l}^{c})=\cfrac{\mathbf{p}^{\top}\mathbf{l}^{c}}{l_{d}},\quad l_{d}=\sqrt{l_{1}^{2}+l_{2}^{2}},\quad \mathbf{p}_{s}=(u_{s},v_{s},1),\quad \mathbf{p}_{e}=(u_{e},v_{e},1), (7)

and \mathbf{r}_{l} denotes the line residual, and d denotes the distance from an endpoint of the observed line to the re-projected line. \mathbf{p}_{s} and \mathbf{p}_{e} are the endpoints of the observed line in the image. The corresponding Jacobian matrix with respect to the 3D line can be represented in terms of the body state change \delta\mathbf{x} and the orthonormal representation change \delta\mathbf{o} as follows:

\mathbf{J}_{l}=\cfrac{\partial\mathbf{r}_{l}}{\partial\mathbf{l}^{c}}\cfrac{\partial\mathbf{l}^{c}}{\partial\mathbf{L}^{c}}\begin{bmatrix}\cfrac{\partial\mathbf{L}^{c}}{\partial\delta\mathbf{x}}&\cfrac{\partial\mathbf{L}^{c}}{\partial\mathbf{L}^{w}}\cfrac{\partial\mathbf{L}^{w}}{\partial\delta\mathbf{o}}\end{bmatrix}, (8)

with

\cfrac{\partial\mathbf{r}_{l}}{\partial\mathbf{l}^{c}}=\begin{bmatrix}\cfrac{-l_{1}(\mathbf{p}^{\top}_{s}\mathbf{l}^{c})}{l_{d}^{3}}+\cfrac{u_{s}}{l_{d}}&\cfrac{-l_{2}(\mathbf{p}^{\top}_{s}\mathbf{l}^{c})}{l_{d}^{3}}+\cfrac{v_{s}}{l_{d}}&\cfrac{1}{l_{d}}\\ \cfrac{-l_{1}(\mathbf{p}^{\top}_{e}\mathbf{l}^{c})}{l_{d}^{3}}+\cfrac{u_{e}}{l_{d}}&\cfrac{-l_{2}(\mathbf{p}^{\top}_{e}\mathbf{l}^{c})}{l_{d}^{3}}+\cfrac{v_{e}}{l_{d}}&\cfrac{1}{l_{d}}\end{bmatrix}_{2\times 3},
\cfrac{\partial\mathbf{l}^{c}}{\partial\mathbf{L}^{c}}=\begin{bmatrix}\mathbf{K}^{\prime}&\mathbf{0}_{3\times 3}\end{bmatrix}_{3\times 6},
\cfrac{\partial\mathbf{L}^{c}}{\partial\delta\mathbf{x}}=(\mathcal{T}^{b}_{c})^{-1}\begin{bmatrix}\begin{matrix}(\mathbf{R}^{w}_{b})^{\top}[\mathbf{d}^{w}]_{\times}\\ \mathbf{0}_{3\times 3}\end{matrix}&\begin{matrix}[(\mathbf{R}^{w}_{b})^{\top}(\mathbf{n}^{w}+[\mathbf{d}^{w}]_{\times}\mathbf{p}^{w}_{b})]_{\times}\\ \mathbf{0}_{3\times 3}\end{matrix}&\mathbf{0}_{6\times 3}&\mathbf{0}_{6\times 3}&\mathbf{0}_{6\times 3}\end{bmatrix}_{6\times 15},
\cfrac{\partial\mathbf{L}^{c}}{\partial\mathbf{L}^{w}}\cfrac{\partial\mathbf{L}^{w}}{\partial\delta\mathbf{o}}=(\mathcal{T}^{b}_{c})^{-1}\begin{bmatrix}\mathbf{0}_{3\times 1}&-w_{1}\mathbf{u}_{3}&w_{1}\mathbf{u}_{2}&-w_{2}\mathbf{u}_{1}\\ w_{2}\mathbf{u}_{3}&\mathbf{0}_{3\times 1}&-w_{2}\mathbf{u}_{1}&w_{1}\mathbf{u}_{2}\end{bmatrix}_{6\times 4}, (9)

where

\mathbf{U}=\begin{bmatrix}\mathbf{u}_{1}&\mathbf{u}_{2}&\mathbf{u}_{3}\end{bmatrix}=\begin{bmatrix}\cfrac{\mathbf{n}}{\|\mathbf{n}\|}&\cfrac{\mathbf{d}}{\|\mathbf{d}\|}&\cfrac{\mathbf{n}\times\mathbf{d}}{\|\mathbf{n}\times\mathbf{d}\|}\end{bmatrix},\quad \mathbf{w}=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix}=\cfrac{1}{\sqrt{\|\mathbf{n}\|^{2}+\|\mathbf{d}\|^{2}}}\begin{bmatrix}\|\mathbf{n}\|\\ \|\mathbf{d}\|\end{bmatrix}, (10)

and \mathcal{T}^{b}_{c} is the transformation matrix from the camera coordinate to the body coordinate in the Plücker coordinate.
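Putting (5)-(7) together, the line measurement model can be sketched as follows (a minimal Python illustration with hypothetical function names; in UV-SLAM the normalized plane is used, so \mathbf{K}^{\prime} reduces to the identity):

import numpy as np

def line_projection_matrix(K):
    # K' in Eq. (5), built from the camera intrinsics
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([[fy, 0.0, 0.0],
                     [0.0, fx, 0.0],
                     [-fy * cx, -fx * cy, fx * fy]])

def line_residual(n_c, p_s, p_e, K=np.eye(3)):
    # n_c: normal vector of the 3D line in the camera frame (Plücker part)
    # p_s, p_e: endpoints of the observed 2D line
    l = line_projection_matrix(K) @ n_c            # re-projected line l^c
    ld = np.hypot(l[0], l[1])
    p_s_h = np.array([p_s[0], p_s[1], 1.0])
    p_e_h = np.array([p_e[0], p_e[1], 1.0])
    return np.array([p_s_h @ l, p_e_h @ l]) / ld   # r_l in Eq. (6)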

III-E Vanishing Point Measurement Model

After the vanishing points are calculated, the observed line features use the corresponding vanishing points as new observations. An example of clustering lines through vanishing points is shown in Fig. 5. The lines with the same vanishing point are expressed in the same color.

Figure 5: Image of clustered lines using vanishing points. The lines with the same vanishing point are expressed in the same color. Usually, three or more vanishing points can be extracted using J-linkage.

To estimate the vanishing points, a point on the 3D line is expressed in homogeneous coordinates as follows[17]:

\mathbf{V}(t)=\mathbf{V}_{0}+t\mathbf{D}=\begin{bmatrix}x_{0}+td_{1}\\ y_{0}+td_{2}\\ z_{0}+td_{3}\\ 1\end{bmatrix},\quad t\in(0,\infty), (11)

where

\mathbf{V}_{0}=\begin{bmatrix}x_{0}&y_{0}&z_{0}&1\end{bmatrix}^{\top},\quad \mathbf{D}=\begin{bmatrix}{\mathbf{d}^{c}}^{\top}&0\end{bmatrix}^{\top}=\begin{bmatrix}d_{1}&d_{2}&d_{3}&0\end{bmatrix}^{\top}, (12)

and \mathbf{V}_{0} represents a point on the 3D line. The vanishing point then equals the projection of the point at infinity on the 3D line as follows:

\mathbf{v}^{c}=\begin{bmatrix}v_{1}\\ v_{2}\\ v_{3}\end{bmatrix}=\lim_{t\to\infty}\mathbf{P}(\mathbf{V}_{0}+t\mathbf{D})=\mathbf{K}\mathbf{d}^{c}=\mathbf{d}^{c}, (13)

where \mathbf{P}=\mathbf{K}\begin{bmatrix}\mathbf{I}&\mathbf{0}\end{bmatrix} is the camera projection matrix. In UV-SLAM, the vanishing point of a line is therefore equal to the direction vector of the line. The vanishing point estimate is obtained as the intersection of \mathbf{v}^{c} with the image plane, as shown in Fig. 6. Finally, the vanishing point residual is as follows:

\mathbf{r}_{v}=\mathbf{p}_{v}-\cfrac{1}{v_{3}}\begin{bmatrix}v_{1}\\ v_{2}\end{bmatrix}, (14)

where \mathbf{r}_{v} and \mathbf{p}_{v} represent the vanishing point residual and the vanishing point observation, respectively. The corresponding Jacobian matrix with respect to the vanishing point can be obtained in terms of \delta\mathbf{x} and \delta\mathbf{o} as follows:

\mathbf{J}_{v}=\cfrac{\partial\mathbf{r}_{v}}{\partial\mathbf{v}^{c}}\cfrac{\partial\mathbf{v}^{c}}{\partial\mathbf{L}^{c}}\begin{bmatrix}\cfrac{\partial\mathbf{L}^{c}}{\partial\delta\mathbf{x}}&\cfrac{\partial\mathbf{L}^{c}}{\partial\mathbf{L}^{w}}\cfrac{\partial\mathbf{L}^{w}}{\partial\delta\mathbf{o}}\end{bmatrix}, (15)

where

\cfrac{\partial\mathbf{r}_{v}}{\partial\mathbf{v}^{c}}=\begin{bmatrix}-\cfrac{1}{v_{3}}&0&\cfrac{v_{1}}{v_{3}^{2}}\\ 0&-\cfrac{1}{v_{3}}&\cfrac{v_{2}}{v_{3}^{2}}\end{bmatrix}_{2\times 3},\quad \cfrac{\partial\mathbf{v}^{c}}{\partial\mathbf{L}^{c}}=\begin{bmatrix}\mathbf{0}_{3\times 3}&\mathbf{K}\end{bmatrix}_{3\times 6}. (16)
Figure 6: Illustration of the vanishing point residual. The points where the red solid and blue dashed lines intersect with the image represent observation and estimation, respectively. In vanishing point estimation, the direction vector of the line is used. The vanishing point difference between the observation from the observed line and the estimation from the 3D line is defined as the residual.
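The vanishing point measurement model of (13) and (14) can likewise be sketched in a few lines (a minimal Python illustration with hypothetical names; on the normalized plane \mathbf{K} is the identity):

import numpy as np

def vanishing_point_residual(d_c, vp_obs, K=np.eye(3)):
    # d_c: direction vector of the 3D line in the camera frame
    # vp_obs: observed 2D vanishing point computed from the clustered lines
    v = K @ d_c                           # v^c in Eq. (13)
    vp_est = v[:2] / v[2]                 # intersection with the image plane
    return np.asarray(vp_obs) - vp_est    # r_v in Eq. (14)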

IV Fisher Information Matrix Rank Analysis

We rigorously analyze the observability of line features through a Fisher information matrix (FIM) rank analysis. If the FIM is singular, the system is unobservable[41]. Wang et al. proved that the Jacobian matrix used in the FIM calculation must have full column rank for the FIM to be nonsingular[42]. We use the same approach to analyze the observability of the proposed method.

First, the FIM of the line measurement in the orthonormal representation is as follows:

\mathbf{H}_{l}^{\delta\mathbf{o}}=\mathbf{J}_{l}^{\delta\mathbf{o}\top}\bm{\Omega}_{l}^{\delta\mathbf{o}}\mathbf{J}_{l}^{\delta\mathbf{o}}, (17)

where

\mathbf{J}_{l}^{\delta\mathbf{o}}=\cfrac{\partial\mathbf{r}_{l}}{\partial\mathbf{l}^{c}}\cfrac{\partial\mathbf{l}^{c}}{\partial\mathbf{L}^{c}}\cfrac{\partial\mathbf{L}^{c}}{\partial\mathbf{L}^{w}}\cfrac{\partial\mathbf{L}^{w}}{\partial\delta\mathbf{o}}=\begin{bmatrix}0&j_{l_{12}}&j_{l_{13}}&j_{l_{14}}\\ 0&j_{l_{22}}&j_{l_{23}}&j_{l_{24}}\end{bmatrix}, (18)

and j_{l_{pq}} is the non-zero element in the p-th row and q-th column of \mathbf{J}_{l}^{\delta\mathbf{o}}, which is obtained by substituting (9) into (18). \bm{\Omega}_{l}^{\delta\mathbf{o}} represents the inverse of the covariance matrix of the line observation. Because \frac{\partial\mathbf{l}^{c}}{\partial\mathbf{L}^{c}}, \frac{\partial\mathbf{L}^{c}}{\partial\mathbf{L}^{w}}, and \frac{\partial\mathbf{L}^{w}}{\partial\delta\mathbf{o}} are full-rank matrices from (9), the rank of \mathbf{J}_{l}^{\delta\mathbf{o}} is determined by \frac{\partial\mathbf{r}_{l}}{\partial\mathbf{l}^{c}} in (18). From (9), the maximum rank of \frac{\partial\mathbf{r}_{l}}{\partial\mathbf{l}^{c}} is 2, and the rank becomes 1 only in the following case:

\cfrac{l_{2}}{l_{1}}=\cfrac{v_{s}-v_{e}}{u_{s}-u_{e}}. (19)

However, because a line in the orthonormal representation has four parameters, the observability of the line cannot be guaranteed with the line measurement model alone. To solve this problem, a new observation on the line other than both endpoints is introduced as follows:

\mathbf{p}_{l}=\alpha\mathbf{p}_{s}+(1-\alpha)\mathbf{p}_{e},\quad\alpha\in(0,1), (20)

where \mathbf{p}_{l} represents the new observation. However, even with this additional observation, the rank of \mathbf{J}_{l}^{\delta\mathbf{o}} is at most 2. Therefore, line features are still not observable when only the line measurement model is used.

From a new perspective, we propose to calculate the FIM with the vanishing point measurements. The FIM of the vanishing point measurement in the orthonormal representation is as follows:

\mathbf{H}_{v}^{\delta\mathbf{o}}=\mathbf{J}_{v}^{\delta\mathbf{o}\top}\bm{\Omega}_{v}^{\delta\mathbf{o}}\mathbf{J}_{v}^{\delta\mathbf{o}}, (21)

where

\mathbf{J}_{v}^{\delta\mathbf{o}}=\cfrac{\partial\mathbf{r}_{v}}{\partial\mathbf{v}^{c}}\cfrac{\partial\mathbf{v}^{c}}{\partial\mathbf{L}^{c}}\cfrac{\partial\mathbf{L}^{c}}{\partial\mathbf{L}^{w}}\cfrac{\partial\mathbf{L}^{w}}{\partial\delta\mathbf{o}}=\begin{bmatrix}j_{v_{11}}&0&j_{v_{13}}&j_{v_{14}}\\ j_{v_{21}}&0&j_{v_{23}}&j_{v_{24}}\end{bmatrix}, (22)

and j_{v_{pq}} is the non-zero element in the p-th row and q-th column of \mathbf{J}_{v}^{\delta\mathbf{o}}, which is obtained by substituting (16) into (22). \bm{\Omega}_{v}^{\delta\mathbf{o}} represents the inverse of the covariance matrix of the vanishing point observation. Similarly, all matrices are full rank except \frac{\partial\mathbf{r}_{v}}{\partial\mathbf{v}^{c}}. Therefore, the rank of \mathbf{J}_{v}^{\delta\mathbf{o}} is 2, which follows from the rank of \frac{\partial\mathbf{r}_{v}}{\partial\mathbf{v}^{c}} in (16).

By eigenvalue decomposition, the FIM considering both the line measurement and the vanishing point measurement can be obtained as follows:

\mathbf{H}^{\delta\mathbf{o}}=\mathbf{H}_{l}^{\delta\mathbf{o}}+\mathbf{H}_{v}^{\delta\mathbf{o}}=\mathbf{J}_{l}^{\delta\mathbf{o}\top}\bm{\Omega}_{l}^{\delta\mathbf{o}}\mathbf{J}_{l}^{\delta\mathbf{o}}+\mathbf{J}_{v}^{\delta\mathbf{o}\top}\bm{\Omega}_{v}^{\delta\mathbf{o}}\mathbf{J}_{v}^{\delta\mathbf{o}}=\mathbf{J}^{\delta\mathbf{o}\top}\bm{\Omega}^{\delta\mathbf{o}}\mathbf{J}^{\delta\mathbf{o}}, (23)

where

\mathbf{J}^{\delta\mathbf{o}}=\begin{bmatrix}0&j_{l_{12}}&j_{l_{13}}&j_{l_{14}}\\ 0&j_{l_{22}}&j_{l_{23}}&j_{l_{24}}\\ j_{v_{11}}&0&j_{v_{13}}&j_{v_{14}}\\ j_{v_{21}}&0&j_{v_{23}}&j_{v_{24}}\end{bmatrix},\quad \bm{\Omega}^{\delta\mathbf{o}}=\begin{bmatrix}\bm{\Omega}_{l}^{\delta\mathbf{o}}&0\\ 0&\bm{\Omega}_{v}^{\delta\mathbf{o}}\end{bmatrix}. (24)

Here, the rows corresponding to the line measurement and the vanishing point measurement are linearly independent in \mathbf{J}^{\delta\mathbf{o}}. Therefore, the rank of \mathbf{J}^{\delta\mathbf{o}} is 4, except for the case of (19). This confirms that line features become fully observable when the vanishing point measurement model is additionally used.
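This rank argument can also be checked numerically. The following sketch fills the zero patterns of (18) and (22) with random placeholder values (not actual Jacobian entries) and verifies that the line Jacobian alone has rank 2 while the stacked Jacobian of (24) generically has rank 4:

import numpy as np

rng = np.random.default_rng(0)
J_l = np.hstack([np.zeros((2, 1)), rng.standard_normal((2, 3))])   # pattern [0 * * *]
J_v = np.hstack([rng.standard_normal((2, 1)), np.zeros((2, 1)),
                 rng.standard_normal((2, 2))])                     # pattern [* 0 * *]
J = np.vstack([J_l, J_v])

print(np.linalg.matrix_rank(J_l))  # 2: line measurement alone is insufficient
print(np.linalg.matrix_rank(J_v))  # 2
print(np.linalg.matrix_rank(J))    # 4: all orthonormal parameters observable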

V Experimental Results

The experiments were carried out on an Intel Core i7-9700K processor with 32 GB of RAM. Using the EuRoC micro aerial vehicle (MAV) datasets[43], we tested the state-of-the-art algorithms and UV-SLAM. Each dataset offers a different level of complexity depending on factors such as lighting, texture, and MAV speed. Therefore, the datasets are appropriate for validating the performance of the proposed method.

We compared the localization accuracy of the proposed method with that of VINS-Mono, our base algorithm. We also compared against PL-VINS[15], ALVIO, and our previous work, all of which use line features on top of VINS-Mono. The parameters of the compared algorithms are set to the default values in their open-source codes. We employed the rpg trajectory evaluation tool[44]. Table I shows the translational root mean square error (RMSE) for the EuRoC datasets. The proposed method achieves the lowest error on most datasets. In particular, the proposed algorithm shows 32.3%, 23.8%, 26.4%, and 17.6% smaller average error than VINS-Mono, PL-VINS, ALVIO, and our previous work, respectively. These more accurate results are obtained because the line features become fully observable using the vanishing point measurements. The trajectory, rotation error, and translation error for V2_02_medium in the EuRoC datasets are shown in Fig. 7.

TABLE I: Translational RMSE without loop closing for the EuRoC datasets (unit: m)

Dataset           VINS-Mono  PL-VINS  ALVIO  Previous work [3]  UV-SLAM
MH_01_easy        0.159      0.164    0.148  0.142              0.139
MH_02_easy        0.140      0.174    0.136  0.126              0.094
MH_03_medium      0.225      0.187    0.209  0.198              0.189
MH_04_difficult   0.408      0.335    0.389  0.301              0.261
MH_05_difficult   0.312      0.347    0.317  0.293              0.188
V1_01_easy        0.094      0.071    0.085  0.087              0.067
V1_02_medium      0.115      0.086    0.075  0.072              0.070
V1_03_difficult   0.203      0.152    0.200  0.156              0.109
V2_01_easy        0.099      0.090    0.094  0.098              0.085
V2_02_medium      0.161      0.120    0.133  0.103              0.112
V2_03_difficult   0.341      0.278    0.288  0.277              0.213
Figure 7: (a) Top view of the trajectories and (b) boxplots of the RMSE according to distance traveled for VINS-Mono, PL-VINS, ALVIO, our previous work[3], and UV-SLAM for V2_02_medium in the EuRoC datasets.
Figure 8: Top views of the line mapping results of (a) ALVIO, (b) our previous work[3], and (c) UV-SLAM for MH_05_difficult (top) and V2_01_easy (bottom) in the EuRoC datasets.

In addition, the mapping results for MH_05_difficult and V2_01_easy in the EuRoC datasets are shown in Fig. 8. In the case of ALVIO, the mapping quality is low due to degenerate lines. In our previous work, degenerate lines were corrected only under pure translational camera motion. Notably, the lines' direction vectors are well aligned in UV-SLAM thanks to the vanishing point measurements. All top views of the line mapping results for the EuRoC datasets are available at: https://github.com/url-kaist/UV-SLAM/blob/main/mapping_result.pdf.

The average runtime on the EuRoC datasets is about 53.528 ms for the frontend and 47.086 ms for the backend. The frontend of UV-SLAM takes only about 3 ms longer than those of the other algorithms because it additionally extracts vanishing points. Moreover, the backend runtime, which corresponds to the optimization, is similar to those of the other algorithms because the proposed vanishing point measurement model does not introduce new parameters.

VI Conclusion

In summary, we proposed UV-SLAM, an unconstrained line-based SLAM using vanishing point measurements. The proposed method can be used without any assumptions such as the Manhattan world. We derived the residual and Jacobian matrices of the vanishing point measurements. Through an FIM rank analysis, we verified that the observability of lines is guaranteed by introducing the vanishing point measurements into the existing method. In addition, we showed through quantitative and qualitative comparisons with state-of-the-art algorithms that the localization accuracy and mapping quality are improved. For future work, we will implement mesh or pixel-wise mapping based on the sparse line maps produced by the proposed algorithm.

References

  • [1] Y. He, J. Zhao, Y. Guo, W. He, and K. Yuan, “PL-VIO: Tightly-coupled monocular visual–inertial odometry using point and line features,” Sensors, vol. 18, no. 4, p. 1159, 2018.
  • [2] K. Jung, Y. Kim, H. Lim, and H. Myung, “ALVIO: Adaptive line and point feature-based visual inertial odometry for robust localization in indoor environments,” in Proc. International Conference on Robot Intelligence Technology and Applications (RiTA), 2020, https://arxiv.org/abs/2012.15008.
  • [3] H. Lim, Y. Kim, K. Jung, S. Hu, and H. Myung, “Avoiding degeneracy for monocular visual SLAM with point and line features,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 11 675–11 681.
  • [4] D. G. Kottas and S. I. Roumeliotis, “Efficient and consistent vision-aided inertial navigation using line observations,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2013, pp. 1540–1547.
  • [5] X. Kong, W. Wu, L. Zhang, and Y. Wang, “Tightly-coupled stereo visual-inertial navigation using point and line features,” Sensors, vol. 15, no. 6, pp. 12 816–12 833, 2015.
  • [6] G. Zhang, J. H. Lee, J. Lim, and I. H. Suh, “Building a 3-D line-based map using stereo SLAM,” IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1364–1377, 2015.
  • [7] A. Bartoli and P. Sturm, “Structure-from-motion using lines: Representation, triangulation, and bundle adjustment,” Computer Vision and Image Understanding, vol. 100, no. 3, pp. 416–441, 2005.
  • [8] F. Zheng, G. Tsai, Z. Zhang, S. Liu, C.-C. Chu, and H. Hu, “Trifo-VIO: Robust and efficient stereo visual inertial odometry using points and lines,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 3686–3693.
  • [9] Y. Yang, P. Geneva, K. Eckenhoff, and G. Huang, “Visual-inertial odometry with point and line features,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 2447–2454.
  • [10] A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2007, pp. 3565–3572.
  • [11] A. Pumarola, A. Vakhitov, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer, “PL-SLAM: Real-time monocular visual SLAM with points and lines,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 4503–4508.
  • [12] X. Zuo, X. Xie, Y. Liu, and G. Huang, “Robust visual SLAM with point and line features,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 1775–1782.
  • [13] S. J. Lee and S. S. Hwang, “Elaborate monocular point and line SLAM with robust initialization,” in Proc. of the IEEE International Conference on Computer Vision, 2019, pp. 1121–1129.
  • [14] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
  • [15] Q. Fu, J. Wang, H. Yu, I. Ali, F. Guo, and H. Zhang, “PL-VINS: Real-time monocular visual-inertial SLAM with point and line,” arXiv preprint arXiv:2009.07462, 2020.
  • [16] T. Qin, P. Li, and S. Shen, “VINS-mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
  • [17] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision.   Cambridge University Press, 2003.
  • [18] T. Sugiura, A. Torii, and M. Okutomi, “3D surface reconstruction from point-and-line cloud,” in Proc. International Conference on 3D Vision, 2015, pp. 264–272.
  • [19] H. Zhou, D. Zhou, K. Peng, W. Fan, and Y. Liu, “SLAM-based 3D line reconstruction,” in Proc. World Congress on Intelligent Control and Automation (WCICA), 2018, pp. 1148–1153.
  • [20] Y. Yang and G. Huang, “Observability analysis of aided INS with heterogeneous features of points, lines, and planes,” IEEE Transactions on Robotics, vol. 35, no. 6, pp. 1399–1418, 2019.
  • [21] A. Ö. Ok, J. D. Wegner, C. Heipke, F. Rottensteiner, U. Sörgel, and V. Toprak, “Accurate reconstruction of near-epipolar line segments from stereo aerial images,” Photogrammetrie-Fernerkundung-Geoinformation, no. 4, pp. 345–358, 2012.
  • [22] P. Kim, B. Coltin, and H. J. Kim, “Low-drift visual odometry in structured environments by decoupling rotational and translational motion,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 7247–7253.
  • [23] ——, “Indoor RGB-D compass from a single line and plane,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4673–4680.
  • [24] Y. Li, N. Brasch, Y. Wang, N. Navab, and F. Tombari, “Structure-SLAM: Low-drift monocular SLAM in indoor environments,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 6583–6590, 2020.
  • [25] Y. Li, R. Yunus, N. Brasch, N. Navab, and F. Tombari, “RGB-D SLAM with structural regularities,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 11 581–11 587.
  • [26] R. Yunus, Y. Li, and F. Tombari, “ManhattanSLAM: Robust planar tracking and mapping leveraging mixture of Manhattan frames,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 6687–6693.
  • [27] H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1364–1375, 2015.
  • [28] D. Zou, Y. Wu, L. Pei, H. Ling, and W. Yu, “StructVIO: Visual-inertial odometry with structural regularity of man-made environments,” IEEE Transactions on Robotics, vol. 35, no. 4, pp. 999–1013, 2019.
  • [29] B. Xu, P. Wang, Y. He, Y. Chen, Y. Chen, and M. Zhou, “Leveraging structural information to improve point line visual-inertial odometry,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
  • [30] J. Lee and S.-Y. Park, “PLF-VINS: Real-time monocular visual-inertial SLAM with point-line fusion and parallel-line fusion,” IEEE Robotics and Automation Letters, 2021.
  • [31] J. Ma, X. Wang, Y. He, X. Mei, and J. Zhao, “Line-based stereo SLAM by junction matching and vanishing point alignment,” IEEE Access, vol. 7, pp. 181 800–181 811, 2019.
  • [32] J. Shi et al., “Good features to track,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1994, pp. 593–600.
  • [33] C. Tomasi and T. Kanade, “Detection and tracking of point features,” International Journal of Computer Vision, vol. 9, pp. 137–154, 1991.
  • [34] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, “On-manifold preintegration for real-time visual–inertial odometry,” IEEE Transactions on Robotics, vol. 33, no. 1, pp. 1–21, 2016.
  • [35] G. Sibley, L. Matthies, and G. Sukhatme, “Sliding window filter with application to planetary landing,” Journal of Field Robotics, vol. 27, no. 5, pp. 587–608, 2010.
  • [36] R. G. Von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, “LSD: A fast line segment detector with a false detection control,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 722–732, 2008.
  • [37] L. Zhang and R. Koch, “An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency,” Journal of Visual Communication and Image Representation, vol. 24, no. 7, pp. 794–805, 2013.
  • [38] R. Toldo and A. Fusiello, “Robust multiple structures estimation with J-linkage,” in Proc. European Conference on Computer Vision (ECCV).   Springer, 2008, pp. 537–547.
  • [39] P. J. Huber, “Robust estimation of a location parameter,” in Breakthroughs in Statistics.   Springer, 1992, pp. 492–518.
  • [40] S. Agarwal, K. Mierle, and Others, “Ceres solver,” Accessed on: Sep. 09, 2021. [Online], Available: http://ceres-solver.org.
  • [41] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms, and Software.   John Wiley & Sons, 2004.
  • [42] Z. Wang and G. Dissanayake, “Observability analysis of SLAM using Fisher information matrix,” in Proc. IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV), 2008, pp. 1242–1247.
  • [43] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, “The EuRoC micro aerial vehicle datasets,” The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
  • [44] Z. Zhang and D. Scaramuzza, “A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 7244–7251.