
1 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
2 Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China

Enhanced Scale-aware Depth Estimation
for Monocular Endoscopic Scenes
with Geometric Modeling

Ruofeng Wei¹, Bin Li², Kai Chen¹, Yiyao Ma¹, Yunhui Liu², Qi Dou¹ (✉)
Abstract

Scale-aware monocular depth estimation poses a significant challenge in computer-aided endoscopic navigation. However, existing depth estimation methods that do not consider geometric priors struggle to learn the absolute scale when trained on monocular endoscopic sequences. Additionally, conventional methods have difficulty accurately estimating details on tissue and instrument boundaries. In this paper, we tackle these problems by proposing a novel enhanced scale-aware framework that only uses monocular images with geometric modeling for depth estimation. Specifically, we first propose a multi-resolution depth fusion strategy to enhance the quality of monocular depth estimation. To recover the precise scale between relative depth and real-world values, we further calculate the 3D poses of instruments in the endoscopic scenes by algebraic geometry based on image-only geometric primitives (i.e., boundaries and tip of instruments). Afterwards, the 3D poses of surgical instruments enable the scale recovery of relative depth maps. By coupling scale factors and relative depth estimation, the scale-aware depth of monocular endoscopic scenes can be estimated. We evaluate the pipeline on in-house endoscopic surgery videos and simulated data. The results demonstrate that our method can learn the absolute scale with geometric modeling and accurately estimate scale-aware depth for monocular scenes. Code is available at: https://github.com/med-air/MonoEndoDepth

Keywords: Scale-aware Monocular Depth Estimation · Geometric Modeling · Endoscopic Robotic Surgery

1 Introduction

Estimating scale-aware depth from a monocular endoscopic image is a key yet challenging topic in computer-assisted surgery [23], especially for next-generation flexible surgical robots that cannot acquire stereo images. It is a prerequisite for downstream tasks such as 2D-3D image registration, surgical navigation, and autonomous tissue manipulation based on modeling of the surgical field. Despite recent progress on the topic [7, 22], fundamental problems remain unsolved that may prevent its use in real-world practice. First, previous methods struggle to estimate the absolute scale from monocular images. Although the evaluation of these methods typically involves re-scaling each estimate using the median ratio between the ground-truth depth and the prediction [9, 13], such median ratios are difficult to acquire during navigation. Second, existing learning-based approaches are often trained on relatively small input resolutions with lightweight networks to cater to real-time applications on embedded platforms [3], which leads to the loss of detailed information that could be leveraged for more accurate estimation.

Several scale-aware depth estimation approaches [18, 24] incorporate kinematics data to estimate the real scale from monocular images. For instance, MetricDepthS-Net [17] utilizes kinematics and camera poses from an ego-motion network [12] to recover the scale of endoscopic depth estimation. DynaDepth [24] integrates IMU measurements during training to learn an accurate scale for monocular depth estimation. However, sensors for acquiring kinematics information can be costly, and their applicability is often limited to specific environments. In addition, some image-based methods [10, 20] employ the geometric relationship between the camera and the ground in the image to calculate the scale of relative depth, but these methods are mainly applicable to autonomous driving scenarios. For endoscopic scenes, a promising geometric constraint is the 3D modeling of surgical instruments with cylindrical shafts from the image, which can help estimate scale-aware depth.

To further achieve depth estimates with high boundary accuracy, various techniques have been developed, such as network design improvements [11] and the integration of high-level constraints [8, 21]. For example, DPT [11] improves depth estimation accuracy by using vision transformers as the backbone. SemHint-MD [8] leverages semantic segmentation to guide the training of the depth network, leading to better depth estimation. Depth Anything [21], a depth foundation model, refines its depth estimation with an auxiliary segmentation task. In this work, instead of proposing a new depth estimation method, we demonstrate that existing depth estimation models can be adapted to generate higher-quality results by fusing estimations from inputs at multiple resolutions.

In this paper, we propose an image-based pipeline to estimate the scale-aware depth of monocular endoscopic scenes. It utilizes surgical tools with a cylindrical shaft of known radius as a geometric constraint to recover the absolute scale. In particular, we first improve the quality of relative depth estimation with a multi-resolution depth fusion strategy. Then, we present a geometry-based optimization model to compute the real 3D pose of the instrument from simple geometric primitives (i.e., boundaries and tip of instruments). Based on the 3D pose, the precise scale between relative depth and the real world is recovered. Finally, we qualitatively and quantitatively evaluate our pipeline on in-house endoscopic surgery videos and simulated data. Our method outperforms conventional stereo-based approaches, showing that the pipeline can achieve depth estimation with real scale on monocular surgical scenes.

2 Method

2.1 Overview of the Scale-aware Depth Estimation Framework

Fig. 1 shows an overview of our proposed scale-aware monocular depth estimation framework, which consists of three parts. First, we fuse estimations at different image resolutions and use a relative depth estimation network to generate high-quality depth maps. Second, we propose a geometric modeling method to track the poses of the surgical instruments using only geometric primitives from the endoscopic images. Third, the 3D poses of the instruments and the depth map are combined to recover the scale between relative depth and real-world values. Thus, the absolute depth of the monocular endoscopic scenes can be estimated via scale recovery based on surgical instrument geometry, as sketched below.
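For orientation, the following sketch connects the three modules at inference time; the function names (estimate_relative_depth, estimate_tool_axis_depths, recover_scale) are placeholders for the components described in Sections 2.2-2.4, not the released API.

```python
import numpy as np

def scale_aware_depth(image, K, r_s,
                      estimate_relative_depth,    # Sec. 2.2: multi-resolution fusion
                      estimate_tool_axis_depths,  # Sec. 2.3: geometric modeling
                      recover_scale):             # Sec. 2.4: least-squares fit
    """Hypothetical end-to-end inference pass of the framework in Fig. 1."""
    D_rel = estimate_relative_depth(image)                      # relative depth map D_i
    z_axis, pixels = estimate_tool_axis_depths(image, K, r_s)   # z(c_fk) and pixels p_sk
    d_rel = np.array([D_rel[v, u] for (u, v) in pixels])        # relative values D_i[p_sk]
    eta, gamma = recover_scale(z_axis, d_rel)                   # scale parameters of Eq. (5)
    return (D_rel - gamma) / eta                                # absolute depth map A_i
```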

Figure 1: Overview of our proposed scale-aware monocular depth estimation framework, which consists of modules for relative depth estimation, surgical instrument pose estimation with geometric modeling, and scale recovery.

2.2 Enhancing Monocular Depth via Multi-Resolution Fusion

Our relative depth estimation network uses Monodepth2 [5] as its backbone. In the training phase, the network is trained by minimizing the photometric loss between the input images and the corresponding synthetic frames generated through novel view synthesis. However, since the input images are cropped to a relatively low resolution before being fed to the network for training, many details are lost during depth estimation. To address this issue, we develop a depth enhancement module that fuses two estimates of the same endoscopic image at different resolutions to improve the quality of the relative depth maps.

Our depth model can handle arbitrary image sizes for depth estimation. As shown in Section A of the supplementary material, when inputs of different resolutions are fed into the network, we observe a specific trend in the depth maps. At low resolutions close to the training resolution, the estimated depth consistently captures the overall structure but may lack high-frequency details. When the same image is input to the model at a higher resolution, more details are captured, while the structural consistency of the result gradually degrades. The second observation is that when a fusion operation combines relative depth estimates from different resolutions, the details in the depth estimated from the higher-resolution input are transferred to the lower-resolution depth map while structural consistency is maintained. Following these observations, we present a module to improve the low-resolution depth maps.

In our enhancement module shown in Fig. 1 (a), we generate two depth estimations of the same endoscopic image. One depth map, estimated from the network with a lower-resolution input, has a consistent structure but lacks fine details. The other, predicted from a higher-resolution input, contains more details. Thus, we adopt a guided low-pass fusion filter [6], with the low-resolution depth as the guide, and apply it to the high-resolution depth to generate a higher-quality relative depth with good structural consistency and details. Based on the above fusion process, the enhanced relative depth map $\mathbf{D}_{i}$ for each monocular image is predicted.
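A minimal sketch of this fusion step, assuming the filter of [6] is built from plain box filters and that the low-resolution depth is bilinearly upsampled before serving as the guide (the radius and regularization values are illustrative choices, not necessarily the paper's exact settings):

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Guided filter (He et al. [6]) implemented with box (mean) filters."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_g = cv2.boxFilter(guide, cv2.CV_64F, ksize)
    mean_s = cv2.boxFilter(src, cv2.CV_64F, ksize)
    corr_gs = cv2.boxFilter(guide * src, cv2.CV_64F, ksize)
    corr_gg = cv2.boxFilter(guide * guide, cv2.CV_64F, ksize)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    mean_a = cv2.boxFilter(a, cv2.CV_64F, ksize)
    mean_b = cv2.boxFilter(b, cv2.CV_64F, ksize)
    return mean_a * guide + mean_b

def fuse_depths(depth_low, depth_high):
    """Transfer high-res details onto the structurally consistent low-res depth."""
    h, w = depth_high.shape
    guide = cv2.resize(depth_low, (w, h), interpolation=cv2.INTER_LINEAR)
    return guided_filter(guide.astype(np.float64), depth_high.astype(np.float64))
```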

2.3 Pose Estimation of Instruments with Geometric Modeling

As illustrated in Fig. 1(b), most surgical instruments have a cylindrical shaft with a constant radius $r_s$. Hereby, given an endoscopic frame $i$, we can calculate the 3D pose ${}^{e}\mathbf{T}_{i}\in\mathbb{R}^{3\times 4}$ of the tool based on its geometric primitives (i.e., boundaries and tip), where $e$ denotes the endoscope coordinate frame. The axis of the tool at pose ${}^{e}\mathbf{T}_{i}$ under the endoscope frame can be represented with the Plücker coordinates $\left({}^{e}\mathbf{s}_{i},{}^{e}\mathbf{m}_{i}\right)$, where ${}^{e}\mathbf{s}_{i}\in\mathbb{R}^{3\times 1}$ is a unit vector denoting the direction of the tool's shaft and ${}^{e}\mathbf{m}_{i}\in\mathbb{R}^{3\times 1}$ is the moment of the tool. Further, assume ${}^{e}\mathbf{c}_{j}$ is a 3D point on the tool's surface and ${}^{e}\widetilde{\mathbf{c}}_{j}=[{}^{e}\mathbf{c}_{j}\;1]^{\mathsf{T}}$ is its homogeneous coordinate. Based on the geometric modeling [2] of cylindrical objects, the following relationship can be derived:

\[
{}^{e}\widetilde{\mathbf{c}}_{j}^{\mathsf{T}}\cdot\begin{bmatrix}[{}^{e}\mathbf{s}_{i}]_{\times}\,[{}^{e}\mathbf{s}_{i}]_{\times}^{\mathsf{T}} & [{}^{e}\mathbf{s}_{i}]_{\times}\,{}^{e}\mathbf{m}_{i}\\ {}^{e}\mathbf{m}_{i}^{\mathsf{T}}\,[{}^{e}\mathbf{s}_{i}]_{\times}^{\mathsf{T}} & \|{}^{e}\mathbf{m}_{i}\|^{2}-r_{s}^{2}\end{bmatrix}\cdot{}^{e}\widetilde{\mathbf{c}}_{j}=0. \tag{1}
\]

According to perspective projection theory [2], the Plücker coordinates of the tool's axis can be associated with the edge boundaries $\left(\mathbf{l}_{i}^{-}\in\mathbb{R}^{3},\mathbf{l}_{i}^{+}\in\mathbb{R}^{3}\right)$ of the shaft in the image plane as follows:

\[
\mathbf{l}_{i}^{-}=\mathbf{K}^{-\mathsf{T}}\left(\mathbf{I}-\alpha[{}^{e}\mathbf{s}_{i}]_{\times}\right){}^{e}\mathbf{m}_{i},\qquad
\mathbf{l}_{i}^{+}=\mathbf{K}^{-\mathsf{T}}\left(\mathbf{I}+\alpha[{}^{e}\mathbf{s}_{i}]_{\times}\right){}^{e}\mathbf{m}_{i}, \tag{2}
\]

where $\mathbf{K}$ is the camera intrinsic matrix and $\alpha=\frac{r_{s}}{\sqrt{\|{}^{e}\mathbf{m}_{i}\|^{2}-r_{s}^{2}}}$. Please refer to Section B of the supplementary material for the detailed derivation. Besides, based on the shaft mask predicted by the tool segmentor, we can compute the edge boundaries from the mask contour. After that, $\left({}^{e}\mathbf{s}_{i},{}^{e}\mathbf{m}_{i}\right)$ can be calculated by combining Eq. 2 with the boundary-line equations obtained from the 2D mask. In this regard, we can obtain the 3D pose ${}^{e}\mathbf{T}_{i}$ of the surgical instrument from the Plücker coordinates $\left({}^{e}\mathbf{s}_{i},{}^{e}\mathbf{m}_{i}\right)$.
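As an illustration of how $\left({}^{e}\mathbf{s}_{i},{}^{e}\mathbf{m}_{i}\right)$ can be recovered, the sketch below fits Eq. 2 to the two observed boundary lines with a generic nonlinear least-squares solver. The parameterization (two angles for the direction, two coefficients for the moment in the plane orthogonal to it) and the residual on normalized homogeneous lines are our own choices; the paper solves the problem with algebraic geometry and may use a different formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fit_shaft_axis(l_minus, l_plus, K, r_s):
    """Fit Plücker coordinates (s, m) of the shaft axis so that the projected
    cylinder boundaries of Eq. (2) match the two observed image lines
    (homogeneous 3-vectors, defined up to scale)."""
    K_inv_T = np.linalg.inv(K).T

    def unpack(p):
        theta, phi, a, b = p
        s = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])                     # unit direction
        e1 = np.cross(s, [0.0, 0.0, 1.0])                 # basis of the plane ⟂ s
        if np.linalg.norm(e1) < 1e-8:
            e1 = np.cross(s, [0.0, 1.0, 0.0])
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(s, e1)
        return s, a * e1 + b * e2                         # moment m, perpendicular to s

    def residuals(p):
        s, m = unpack(p)
        alpha = r_s / np.sqrt(max(m @ m - r_s ** 2, 1e-9))
        pred_m = K_inv_T @ ((np.eye(3) - alpha * skew(s)) @ m)
        pred_p = K_inv_T @ ((np.eye(3) + alpha * skew(s)) @ m)
        err = lambda pred, obs: np.cross(pred / np.linalg.norm(pred),
                                         obs / np.linalg.norm(obs))
        return np.concatenate([err(pred_m, l_minus), err(pred_p, l_plus)])

    # start with the axis roughly along the optical axis, about 20 radii away
    x0 = np.array([0.3, 0.3, 0.0, 20.0 * r_s])
    return unpack(least_squares(residuals, x0).x)
```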

2.4 Scale Recovery by 3D Poses

In this section, we first calculate 3D points on the surface of the shaft on the basis of the calculated pose. By jointly taking these points and the corresponding depth values from the relative depth map estimated in Section 2.2, we can compute the transformation scale parameters between the relative depth and the real world. First, $\left({}^{e}\mathbf{s}_{i},{}^{e}\mathbf{m}_{i}\right)$ can be transformed into the Plücker matrix $\mathbf{L}_{i}$ representing the axis:

\[
\mathbf{L}_{i}=\begin{bmatrix}[{}^{e}\mathbf{m}_{i}]_{\times} & {}^{e}\mathbf{s}_{i}\\ -{}^{e}\mathbf{s}_{i}^{\mathsf{T}} & 0\end{bmatrix}. \tag{3}
\]

The projection of the tool's axis onto the image plane is calculated as $\mathbf{l}_{s}=\mathbf{K}^{-\mathsf{T}}\,{}^{e}\mathbf{m}_{i}$. As shown in Fig. 1 (b), the shaft tip $\mathbf{p}_{s0}=[u_{s0}\;v_{s0}\;1]^{\mathsf{T}}$ along $\mathbf{l}_{s}$ is first predicted from the endoscopic image, where $(u_{s0},v_{s0})$ denotes the image pixel. The corresponding 3D point $\mathbf{c}_{s0}$ along the axis of the shaft is computed as follows:

\[
\mathbf{w}=[\mathbf{K}\,|\,\mathbf{0}_{3\times 1}]\cdot\mathbf{L}_{i}\cdot\left[\mathbf{K}[0][0]\;\;0\;\;\mathbf{K}[0][2]-u_{s0}\;\;0\right]^{\mathsf{T}},\qquad
\mathbf{c}_{s0}=\left[\tfrac{\mathbf{w}[0]}{\mathbf{w}[3]}\;\;\tfrac{\mathbf{w}[1]}{\mathbf{w}[3]}\;\;\tfrac{\mathbf{w}[2]}{\mathbf{w}[3]}\right], \tag{4}
\]

where $\mathbf{w}$ is a vector computed from $\mathbf{L}_{i}$ and the camera intrinsics. The depth of the point $\mathbf{c}_{f0}$ on the surface of the shaft is then derived (see Section C of the supplementary material). Based on the above process, the depth values $\left(z(\mathbf{c}_{f0}),\cdots,z(\mathbf{c}_{fn})\right)$ of the 3D points re-projected from the pixels $\left(\mathbf{p}_{s0},\cdots,\mathbf{p}_{sn}\right)$ along $\mathbf{l}_{s}$ can be computed.
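As one concrete reading of Eq. 3-4, the sketch below builds the Plücker matrix $\mathbf{L}_{i}$ and intersects the axis with the plane back-projected from the vertical image line $u=u_{s0}$ (the bracketed 4-vector in Eq. 4, assuming zero-skew intrinsics). It returns the Euclidean axis point directly, so the $[\mathbf{K}\,|\,\mathbf{0}]$ projection factor is omitted here, and the surface-point adjustment of Section C in the supplementary material is not included; this is an interpretation, not the authors' exact implementation.

```python
import numpy as np

def plucker_matrix(s, m):
    """Plücker matrix L_i of the shaft axis, as in Eq. (3)."""
    L = np.zeros((4, 4))
    L[:3, :3] = np.array([[0.0, -m[2], m[1]],
                          [m[2], 0.0, -m[0]],
                          [-m[1], m[0], 0.0]])   # [m]_x block
    L[:3, 3] = s
    L[3, :3] = -s
    return L

def axis_point_at_u(L, K, u_s):
    """3D point where the axis meets the plane back-projected from u = u_s."""
    pi = np.array([K[0][0], 0.0, K[0][2] - u_s, 0.0])   # plane of Eq. (4)
    w = L @ pi                                          # homogeneous 3D point on the axis
    return w[:3] / w[3]                                 # its z-component is the axis depth
```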

Then, we obtain the relative depth values of $\left(\mathbf{p}_{s0},\cdots,\mathbf{p}_{sn}\right)$ from the depth map $\mathbf{D}_{i}$, which are defined as $\left(\mathbf{D}_{i}[\mathbf{p}_{s0}],\cdots,\mathbf{D}_{i}[\mathbf{p}_{sn}]\right)$. Therefore, the transformation scale parameters between the relative depth and the real world can be solved in closed form as a standard least-squares problem:

\[
\left(\eta,\gamma\right)=\mathop{\arg\min}_{\eta,\gamma}\sum_{k=1}^{n}\left(\eta\,z(\mathbf{c}_{fk})+\gamma-\mathbf{D}_{i}[\mathbf{p}_{sk}]\right)^{2}. \tag{5}
\]

Afterward, as shown in Fig. 1 (c), we can recover the absolute depth $\mathbf{A}_{i}$ of the monocular endoscopic image utilizing the parameters $\left(\eta,\gamma\right)$.
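Given the sampled axis depths $z(\mathbf{c}_{fk})$ and the corresponding relative values $\mathbf{D}_{i}[\mathbf{p}_{sk}]$, the fit of Eq. 5 and the final scale recovery reduce to a few lines. This is a minimal sketch assuming both quantities are passed as 1D arrays:

```python
import numpy as np

def recover_scale(z_axis, d_rel):
    """Closed-form least-squares fit of Eq. (5): relative ≈ eta * metric + gamma."""
    A = np.stack([np.asarray(z_axis, float), np.ones(len(z_axis))], axis=1)
    (eta, gamma), *_ = np.linalg.lstsq(A, np.asarray(d_rel, float), rcond=None)
    return eta, gamma

def to_absolute_depth(D_rel, eta, gamma):
    """Invert the fitted affine map to express the whole relative map in metric units."""
    return (D_rel - gamma) / eta
```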

3 Experiments

Datasets. We evaluate the accuracy of the proposed scale-aware depth estimation pipeline on our in-house surgery and simulator data. We extract a total of 95 clips from 6 surgical videos of our in-house da Vinci robotic prostatectomy data. The monocular endoscopic sequences in these clips are used for training and evaluation. The instruments used in surgery are standard da Vinci instruments with a cylindrical shaft of constant radius 4.5 mm. For training, validation, and testing, there are 3947, 1189, and 534 frames, respectively. In the experiments, the frames are resized from a resolution of 1280×1024 to 640×512. We also compute ground-truth depth maps for each frame in our in-house data via standard stereo matching [16]. Furthermore, we collect several video clips from 5 tasks in the SurRoL [19] surgical simulator for quantitative 3D pose evaluation of the scale-aware depth estimation.

Evaluation Metrics. We report the difference between the predicted and ground-truth depth maps using six popular depth metrics [4]: absolute relative error (Abs Rel), squared relative error (Sq Rel), root mean squared error (RMSE), log-scale RMSE (RMSE_log), the accuracy $\delta<1.25$, i.e., the percentage of pixels whose ratio to the ground truth (or its inverse) is below 1.25, and mean absolute error (MAE).
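For reference, these metrics follow the standard definitions of [4]; a compact implementation might look as follows (the valid-pixel masking is our own convention):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth error/accuracy metrics used in Table 1."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    valid = gt > 0                      # ignore pixels without ground truth
    pred, gt = pred[valid], gt[valid]
    ratio = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": np.mean(np.abs(pred - gt) / gt),
        "sq_rel": np.mean((pred - gt) ** 2 / gt),
        "rmse": np.sqrt(np.mean((pred - gt) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "mae": np.mean(np.abs(pred - gt)),
        "delta_1.25": np.mean(ratio < 1.25),
    }
```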

Implementation Details. We follow the training process of Monodepth2 [5] to learn relative depth estimation, including pre-training on ImageNet and training on our medical data. For tool segmentation, we employ a lightweight U-Net with VGG11 [14] as the backbone. The network consists of five scales of down-sampling layers, yielding a runtime of approximately 12 ms per frame. To train the segmentation model, we utilize a publicly available dataset [1] and then apply it to predict binary tool masks on our surgical datasets. To address the domain gap between the training and testing data, we apply morphological operations to refine the tool boundaries in the masks.
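The paper does not specify which morphological operations are used; a common choice is an opening followed by a closing with a small elliptical kernel, sketched below (the kernel size is a guess):

```python
import cv2

def refine_tool_mask(mask, kernel_size=5):
    """Illustrative mask clean-up: opening removes small spurious blobs,
    closing fills small holes along the tool boundary."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```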

Table 1: Quantitative comparisons for scale-aware depth estimation on in-house data. Sq Rel, RMSE, and MAE are in mm. The closer the scale is to 1, the better. The best results are indicated in bold.
Method | Scale | Abs Rel↓ | Sq Rel↓ | RMSE↓ | RMSE_log↓ | MAE↓ | δ<1.25↑
EndoSfM [9] | NA | 0.165±0.057 | 2.481±1.723 | 11.032±3.646 | 0.200±0.057 | 8.564±2.804 | 0.769±0.115
AF-SfMLearner [13] | NA | 0.211±0.086 | 4.432±3.731 | 13.435±4.873 | 0.250±0.085 | 10.342±3.799 | 0.725±0.118
ManyDepth [15] | NA | 0.165±0.047 | 2.489±1.526 | 11.691±3.599 | 0.204±0.052 | 9.608±2.717 | 0.742±0.110
Depth Anything [21] | NA | 0.179±0.053 | 3.734±2.272 | 15.371±5.206 | 0.220±0.062 | 11.177±3.747 | 0.710±0.134
DPT [11] | NA | 0.180±0.060 | 3.201±2.876 | 13.024±4.952 | 0.222±0.062 | 9.997±3.584 | 0.719±0.128
MonoDepth Stereo [12] | 1.197±0.147 | 0.182±0.052 | 3.078±1.668 | 13.194±3.785 | 0.226±0.056 | 10.285±2.904 | 0.699±0.129
Ours | 0.959±0.043 | 0.110±0.043 | 1.388±1.340 | 8.637±3.958 | 0.148±0.053 | 6.347±2.943 | 0.880±0.102

3.1 Results

Evaluation on Scale-Aware Depth Estimation. We present the quantitative depth comparison results on our in-house surgery data in Table 1, where the results are re-scaled using the ground-truth median scaling method. Apart from the depth metrics, we calculate the means and standard errors of the re-scaling factors to showcase the scale-awareness capability. Our proposed depth model demonstrates the best up-to-scale accuracy across all metrics. Specifically, the state-of-the-art baseline method only achieves an MAE of 8.564 mm, an RMSE of 11.032 mm, and an accuracy of 76.9%. In contrast, our method demonstrates superior performance with an MAE of 6.347 mm, an RMSE of 8.637 mm, and an accuracy of 88.0%. Notably, our model also achieves near-perfect accuracy in terms of absolute scale. These quantitative results show that the proposed method indeed extracts the absolute scale with geometric modeling, resulting in fine scale-aware depth estimation. Furthermore, four typical images are selected for qualitative depth comparison. As shown in Fig. 2, our method predicts scale-aware depth with smaller errors, sharper boundaries, and finer-grained details compared to other approaches. Moreover, the runtime of the proposed method is around 66 ms per frame.

Figure 2: Qualitative comparisons on in-house data. Our method outperforms EndoSfM (EndoS) [9], AF-SfMLearner (AF) [13], ManyDepth (ManyD) [15], Depth Anything (DepthA) [21], DPT [11], and MonoDepth Stereo (Stereo) [12] in depth quality.

Evaluation on Instrument Pose Estimation. We present the quantitative pose evaluation results on simulator data in Table 2. In addition, as illustrated in Fig. 3, the calculated poses of the surgical instruments are rendered together with the ground-truth depth maps for qualitative comparison. Table 2 shows that the average pose estimation errors in orientation and translation are 1.459°, 3.220°, 1.220 mm, 1.150 mm, and 2.356 mm, indicating that our geometric modeling-based method can predict the tool's poses with high accuracy. Besides, the qualitative comparison results demonstrate that the calculated 3D poses of the instruments align well with the ground-truth depth, proving the effectiveness of the proposed pose estimation method.

Table 2: Quantitative evaluation of 3D pose estimation of surgical instruments. The orientation errors are measured in degrees; the localization errors are in mm.
Task | Im. # | Ori. X | Ori. Y | Tip X | Tip Y | Tip Z
Peg Transfer | 49 | 1.249±1.291 | 2.909±3.053 | 1.076±0.676 | 1.141±0.759 | 2.415±1.787
Needle Pick | 36 | 1.312±1.274 | 2.877±3.065 | 1.342±0.672 | 1.127±0.727 | 2.484±2.322
Needle Reach | 42 | 1.503±1.380 | 3.367±3.243 | 1.121±0.752 | 1.105±0.728 | 2.241±1.835
Gauze Retrieve | 43 | 1.723±1.525 | 3.680±2.954 | 1.273±0.757 | 1.178±0.812 | 2.325±2.415
Bimanual Peg Transfer | 37 | 1.507±1.617 | 3.267±2.841 | 1.290±0.697 | 1.198±0.863 | 2.316±2.075
Mean | 41 | 1.459 | 3.220 | 1.220 | 1.150 | 2.356
Figure 3: Qualitative comparison of 3D pose estimation. Green cylinders represent the rendered calculated poses of surgical tools.

Ablation Studies. To study the impact of different input resolutions in the enhancement module, we perform a quantitative ablation on the enhancement module of our relative depth estimation network in Table 3. We observe that the multi-resolution fusion strategy indeed improves the quality of relative depth estimation. Furthermore, when the high-resolution input images are set to 480×384, the relative depth network obtains higher depth estimation accuracy.

Table 3: Ablation studies of different input resolutions in the enhancement module (high-res: 640×512 or 480×384; low-res: 320×256).
Abs Rel↓ | RMSE↓ | MAE↓ | δ<1.25↑
0.113±0.045 | 9.008±3.038 | 6.524±2.068 | 0.878±0.073
0.113±0.045 | 8.881±3.028 | 6.424±2.068 | 0.881±0.072
0.110±0.045 | 8.698±3.065 | 6.322±2.091 | 0.886±0.074

4 Conclusion

This paper presents a novel image-based method for high-quality and scale-aware monocular depth estimation with geometric modeling. We first improve the relative depth via a fusion strategy. Then, we compute the instrument poses using algebraic geometry. Leveraging these 3D poses, the accurate scale is recovered, enabling scale-aware depth estimation from monocular images. We also evaluate the accuracy of the proposed method on our surgical and simulated data. In the future, the scale-aware depth estimation will be utilized in robotic ENT surgery.


Acknowledgements

This work was supported in part by the Shenzhen Portion of Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone under HZQB-KCZYB-20200089, in part by the National Natural Science Foundation of China under Project No. 62322318, in part by the ANR/RGC Joint Research Scheme of the Research Grants Council of the Hong Kong Special Administrative Region, China and the French National Research Agency (Project No. A-CUHK402/23), and in part by Hong Kong Innovation and Technology Commission under Project No. PRP/026/22FX.

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

References

  • [1] Allan, M., Shvets, A., Kurmann, T., Zhang, Z., Duggal, R., Su, Y.H., Rieke, N., Laina, I., Kalavakonda, N., Bodenstedt, S., et al.: 2017 robotic instrument segmentation challenge. arXiv preprint arXiv:1902.06426 (2019)
  • [2] Doignon, C., de Mathelin, M.: A degenerate conic-based method for a direct fitting and 3-d pose of cylinders with a single perspective view. In: Proceedings 2007 IEEE International Conference on Robotics and Automation. pp. 4220–4225 (2007)
  • [3] Dong, X., Garratt, M.A., Anavatti, S.G., Abbass, H.A.: Towards real-time monocular depth estimation for robotics: A survey. IEEE Transactions on Intelligent Transportation Systems 23(10), 16940–16961 (2022)
  • [4] Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. Advances in Neural Information Processing Systems 27 (2014)
  • [5] Godard, C., Mac Aodha, O., Firman, M., Brostow, G.J.: Digging into self-supervised monocular depth estimation. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 3828–3838 (2019)
  • [6] He, K., Sun, J., Tang, X.: Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(6), 1397–1409 (2012)
  • [7] Li, B., Liu, B., Zhu, M., Luo, X., Zhou, F.: Image intrinsic-based unsupervised monocular depth estimation in endoscopy. IEEE Journal of Biomedical and Health Informatics (2024)
  • [8] Lin, S., Zhi, Y., Yip, M.C.: Semhint-md: Learning from noisy semantic labels for self-supervised monocular depth estimation. arXiv preprint arXiv:2303.18219 (2023)
  • [9] Ozyoruk, K.B., Gokceler, G.I., Bobrow, T.L., Coskun, G., Incetan, K., Almalioglu, Y., Mahmood, F., Curto, E., Perdigoto, L., Oliveira, M., et al.: Endoslam dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Medical image analysis 71, 102058 (2021)
  • [10] Petrovai, A., Nedevschi, S.: Exploiting pseudo labels in a self-supervised learning framework for improved monocular depth estimation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 1578–1588 (2022)
  • [11] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 12179–12188 (2021)
  • [12] Recasens, D., Lamarca, J., Fácil, J.M., Montiel, J., Civera, J.: Endo-depth-and-motion: Reconstruction and tracking in endoscopic videos using depth networks and photometric constraints. IEEE Robotics and Automation Letters 6(4), 7225–7232 (2021)
  • [13] Shao, S., Pei, Z., Chen, W., Zhu, W., Wu, X., Sun, D., Zhang, B.: Self-supervised monocular depth and ego-motion estimation in endoscopy: Appearance flow to the rescue. Medical image analysis 77, 102338 (2022)
  • [14] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [15] Watson, J., Mac Aodha, O., Prisacariu, V., Brostow, G., Firman, M.: The temporal opportunist: Self-supervised multi-frame monocular depth. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 1164–1174 (2021)
  • [16] Wei, R., Li, B., Mo, H., Lu, B., Long, Y., Yang, B., Dou, Q., Liu, Y., Sun, D.: Stereo dense scene reconstruction and accurate localization for learning-based navigation of laparoscope in minimally invasive surgery. IEEE Transactions on Biomedical Engineering 70(2), 488–500 (2022)
  • [17] Wei, R., Li, B., Mo, H., Zhong, F., Long, Y., Dou, Q., Liu, Y.H., Sun, D.: Distilled visual and robot kinematics embeddings for metric depth estimation in monocular scene reconstruction. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 8072–8077 (2022)
  • [18] Wei, R., Li, B., Zhong, F., Mo, H., Dou, Q., Liu, Y.H., Sun, D.: Absolute monocular depth estimation on robotic visual and kinematics data via self-supervised learning. IEEE Transactions on Automation Science and Engineering (2024)
  • [19] Xu, J., Li, B., Lu, B., Liu, Y.H., Dou, Q., Heng, P.A.: Surrol: An open-source reinforcement learning centered and dvrk compatible platform for surgical robot learning. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 1821–1828 (2021)
  • [20] Xue, F., Zhuo, G., Huang, Z., Fu, W., Wu, Z., Ang, M.H.: Toward hierarchical self-supervised monocular absolute depth estimation for autonomous driving applications. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 2330–2337 (2020)
  • [21] Yang, L., Kang, B., Huang, Z., Xu, X., Feng, J., Zhao, H.: Depth anything: Unleashing the power of large-scale unlabeled data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10371–10381 (2024)
  • [22] Yang, Z., Pan, J., Dai, J., Sun, Z., Xiao, Y.: Self-supervised lightweight depth estimation in endoscopy combining cnn and transformer. IEEE Transactions on Medical Imaging (2024)
  • [23] Yip, M., Salcudean, S., Goldberg, K., Althoefer, K., Menciassi, A., Opfermann, J.D., Krieger, A., Swaminathan, K., Walsh, C.J., Huang, H., et al.: Artificial intelligence meets medical robotics. Science 381(6654), 141–146 (2023)
  • [24] Zhang, S., Zhang, J., Tao, D.: Towards scale-aware, robust, and generalizable unsupervised monocular depth estimation by integrating imu motion dynamics. In: European Conference on Computer Vision. pp. 143–160. Springer (2022)