
Design and Visual Servoing Control of a Hybrid Dual-Segment Flexible Neurosurgical Robot for Intraventricular Biopsy

Jian Chen, Mingcong Chen, Qingxiang Zhao, Shuai Wang, Yihe Wang, Ying Xiao, Jian Hu,
  Danny Tat Ming Chan, Kam Tong Leo Yeung, David Yuen Chung Chan and Hongbin Liu
Jian Chen is with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China, also with the State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China, and also with the Centre of AI and Robotics, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences (CAIR-HKISI-CAS), HKSAR ([email protected]).
Mingcong Chen is with the Department of Biomedical Engineering, City University of Hong Kong, HKSAR ([email protected]).
Qingxiang Zhao, Shuai Wang, Jian Hu and Yihe Wang are with CAIR-HKISI-CAS, HKSAR (qingxiang.zhao, shuai.wang, hujian, [email protected]).
Ying Xiao is with the State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA, Beijing 100190, China ([email protected]).
Danny Tat Ming Chan, Kam Tong Leo Yeung, and David Yuen Chung Chan are with the Department of Surgery, The Chinese University of Hong Kong, HKSAR (tmdanny, leoyeung, [email protected]).
Hongbin Liu is with the State Key Laboratory of Management and Control for Complex Systems, CASIA, Beijing 100190, China, also with CAIR-HKISI-CAS, HKSAR, and also with the School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EU, UK ([email protected]).
Corresponding author: Hongbin Liu.
Abstract

Traditional rigid endoscopes struggle to treat tumors located deep in the brain flexibly, and their limited operability and fixed viewing angles constrain their further development. This study introduces MicroNeuro, a novel dual-segment flexible robotic endoscope designed to perform biopsies with dexterous surgical manipulation deep in the brain. Taking the uncertainty of the control model into account, an image-based visual servoing scheme with online robot Jacobian estimation has been implemented to enhance motion accuracy. Furthermore, the application of model predictive control with constraints significantly bolsters the flexible robot's ability to adaptively track moving targets and resist external interference. Experimental results underscore that the proposed control system enhances motion stability and precision. Phantom testing substantiates its considerable potential for deployment in neurosurgery.

I Introduction

Tumors located within the brain's ventricular system pose significant health risks and present considerable treatment challenges due to their difficult-to-reach locations and proximity to critical neurological structures [1]. Over the past three decades, rigid endoscopes have emerged as the primary tool for visualization in diverse intraventricular neurosurgical procedures [2]. For instance, the MINOP endoscope (Aesculap Inc., PA, USA) is employed for intraventricular indications, while the LOTTA endoscope (Karl Storz SE & Co. KG, Tuttlingen, Germany) is preferred for patients with small ventricles. Unfortunately, conventional neurosurgery with rigid endoscopes still has two primary limitations: (i) the rigid structure limits maneuverability [2] within the complex anatomy of the brain, where abrupt or incorrect movements may cause brain trauma and complications; and (ii) rigid instruments provide only fixed viewing angles, complicating the biopsy of tumors in difficult locations, as shown in Fig. 1(a). While flexible robots can enhance endoscope dexterity, their use has been limited by lower-resolution visualization [3], the poor accessibility of the single flexible segment on traditional endoscopes, and the procedural complexities of combined rigid and flexible endoscopy [4]. The confined intracranial space also demands high dexterity and compliance from flexible surgical tools [5, 6], presenting additional control challenges [7].

Figure 1: (a) Traditional rigid intraventricular endoscopes can only move forward and backward along their axis. (b) The MicroNeuro flexible robot system reaches the target through a single burr hole.

With real-time visual feedback from the robot tip, image-based visual servoing (IBVS) is particularly compatible with this eye-in-hand configuration [8]. Classical IBVS has been widely used to solve tracking [9], shape control [10], and depth estimation [11] problems for flexible endoscopes. During neurosurgical endoscopic operations, external interference, such as the insertion of internal instruments, may cause problems for a proportional controller. These issues can manifest as slow convergence [12] and degraded tracking performance [9]. To enhance robustness, Jiang et al. [13] combined sliding mode control (SMC) with IBVS to overcome system uncertainties. For environmental interaction, Oliva et al. [14] presented a dynamic IBVS controller with an Extended Kalman Filter (EKF) to improve tracking speed and stability.

However, most of the above-mentioned methods did not take constraints from the surrounding anatomy into account, which are indispensable in neurosurgery. During intraventricular biopsies, unconstrained movement may damage important nerves or blood vessels [15]. Model predictive control (MPC) [16, 17] uses constraints to ensure that control actions and system states remain within desired bounds throughout the control horizon. An MPC framework within a visual servoing scheme was proposed in [18] to achieve precision manipulation despite model inaccuracies. Notably, the inherent robustness characteristics of IBVS and MPC significantly improve controller performance [19]. Chen et al. [20] utilized a QPSO-MPC based tracking method for a continuum robot arm. Chien et al. [21] also used an MPC method to control the position of a continuum robot, with estimated inverse kinematics as the basis. Therefore, the complex model transfer chain can be represented by a Jacobian, and the surroundings observed by the endoscopic camera can be passed as constraints into the MPC scheme, making the approach well suited to MIS-oriented scenarios for continuum robots.

To address the design and control issues mentioned above, this work makes two main contributions: (i) a cable-driven hybrid dual-segment flexible endoscope for intraventricular neurosurgery is proposed, which can pass through a single burr hole and provides sufficient dexterity for biopsy within the narrow ventricle, as shown in Fig. 1(b); (ii) a visual model predictive control framework with online Jacobian estimation is proposed to enhance the robustness of visual servoing control. The rest of this work is organized as follows. Section II details the design rules and the prototype. In Section III, the kinematics model of the robot and camera is established with an online Jacobian estimation. Section IV introduces the visual MPC algorithm. Section V illustrates the effectiveness of the robot and the proposed methods. Finally, Section VI concludes this work.

II Mechanical Design

II-A Design Goals

The MicroNeuro is designed for intraventricular neurosurgery. Based on knowledge of brain anatomy and clinical demands from surgeons, the main design goals are summarized as follows:

  1. Dimension: The mean diameters of the foramen of Monro (FM) were 5.7 mm on the axial image, 7.8 mm on the coronal image, and 5.6 mm on the sagittal image [22]. Thus, the outer diameter of the flexible endoscope should be less than 5.4 mm to avoid collision with the FM.

  2. Endoscope features: The MicroNeuro should provide high-quality images and a working channel for biopsy instruments. Since the procedure is performed in a fluid-filled environment, the MicroNeuro also needs to provide irrigation and suction functions.

  3. Dexterity: The deflectable length of the MicroNeuro should be short, and the robot should be able to bend with a large curvature.

Figure 2: Overview of the robot system. (a) The MicroNeuro system and MicroNeuro surgical robot. (b) Steering mode 1 without insertion of the distal segment. (c) Steering mode 2 with an S-shaped configuration. (d) Endoscopic view of the inner endoscope. (e) Endoscopic view of the outer sheath. (f) Endoscope features.

II-B System Overview

This work was developed based on the surgical robot system for neurosurgery designated MicroNeuro [23]. As shown in Fig. 2(a), the system mainly consists of the MicroNeuro and its actuation units, which are mounted on the end of a 7-DoF robot arm (ER7 Pro, ROKAE). The quick-release mechanism of the MicroNeuro facilitates the individual disinfection of endoscopes. In addition, a control console is built for master-slave teleoperation, with four monitors, a foot pedal, a joystick (TCA, THRUSTMASTER), and a master device (TouchX, 3D SYSTEM).

The MicroNeuro consists of two bendable flexible robots connected to a rigid tube. As shown in Fig. 2(d), (e) and (f), it provides several functions, including multi-view imaging, water irrigation and suction, a working channel (diameter 1.2 mm), and illumination. The distal end of the inner endoscope and the rigid catheter are each equipped with a camera (OV6946). Unlike conventional dual-segment flexible robots with fixed length, the two segments of the MicroNeuro can be axially translated relative to each other, so two combined bending modes can be realized: (i) mode 1 [see Fig. 2(b)], in which the inner endoscope has no axial movement and only the outer flexible sheath bends; and (ii) mode 2 [see Fig. 2(c)], in which the inner endoscope can be inserted independently (maximum distance 40 mm).

II-C Hybrid Dual-Segment Flexible Endoscope Design

The backbones of the two flexible robots are manufactured by femtosecond laser cutting of superelastic nitinol tubes. Fig. 3 shows the parameter definitions and values. Each robot has multiple pairs of notched joints distributed along the axial direction, and each joint has a bidirectional symmetrical rectangular notch. Three nitinol cables, driven by brushless coreless motors (ASSUN), are welded to the distal end of each flexible robot and routed along crimped grooves.

Figure 3: Mechanical design of the MicroNeuro robot. (a) Axial section view of the outer sheath with the cable distribution diagram. (b) Nitinol backbone of the outer sheath. (c) Axial section view of the inner endoscope. (d) Nitinol backbone of the inner endoscope. (e) Illustration of coordinate frames.

III Modelling

III-A Kinematics of MicroNeuro

The distribution of notches in the backbone makes its axial stiffness larger than its lateral stiffness, so the backbone bends when the eccentrically fixed cables are pulled. Following the piecewise constant curvature (PCC) model [24], each segment of the MicroNeuro bends with a constant curvature along its length, similar to a circular arc, when actuated. As shown in Fig. 3, the MicroNeuro can be geometrically parameterized in the configuration space by $\bm{\Phi}=\left(\begin{matrix}z_{b}&\theta_{s}&\varphi_{s}&z_{e}&\theta_{e}&\varphi_{e}\end{matrix}\right)^{\mathrm{T}}$, where $z_{b}$ is the overall insertion distance provided by the robot arm, $\theta_{s}$ and $\theta_{e}$ are the bending angles, $\varphi_{s}$ and $\varphi_{e}$ are the rotation angles between the bending plane and the $oxz$ plane, and $z_{e}$ is the variable length of the inner endoscope, provided by the servo motors. $\theta_{s}$, $\varphi_{s}$, $\theta_{e}$, $\varphi_{e}$ can be calculated from the actuator space variables $\mathbf{q}=\left(\begin{matrix}z_{b}&l_{s,1}&l_{s,2}&l_{s,3}&z_{e}&l_{e,1}&l_{e,2}&l_{e,3}\end{matrix}\right)^{\mathrm{T}}$:

\begin{split}\theta_{i}&=\frac{2\sqrt{l^{2}_{i,1}+l^{2}_{i,2}+l^{2}_{i,3}-l_{i,1}l_{i,2}-l_{i,2}l_{i,3}-l_{i,1}l_{i,3}}}{3\rho_{i}}\\ \varphi_{i}&=\mathrm{atan2}\left(l_{i,1}+l_{i,3}-2l_{i,2},\ \sqrt{3}(l_{i,3}-l_{i,1})\right)\end{split} (1)

where $i\in\{s,e\}$, with the subscripts $s$ and $e$ denoting the outer sheath and the inner endoscope, respectively, $\rho_{i}$ is the distance between the center of a cable and the center of the robot, and $l_{i,m}$, $m\in\{1,2,3\}$, are the lengths of the driving guide wires in each flexible robot. The transformation matrix ${}^{b}_{et}\mathbf{T}\in\mathbb{R}^{4\times 4}$ from the base frame $O_{b}X_{b}Y_{b}Z_{b}$ to the robot tip frame $O_{et}X_{et}Y_{et}Z_{et}$ is:

\begin{split}{}_{\mathbf{et}}^{\mathbf{b}}\mathbf{T}=&\ \bm{\tau}_{\mathbf{z}}(z_{b})\,\mathbf{R}_{\mathbf{z}}(\varphi_{s})\,\bm{\tau}_{\mathbf{x}}\!\left(\frac{z_{s}}{\theta_{s}}\right)\mathbf{R}_{\mathbf{y}}(\theta_{s})\,\bm{\tau}_{\mathbf{x}}\!\left(-\frac{z_{s}}{\theta_{s}}\right)\\ &\ \mathbf{R}_{\mathbf{z}}(\varphi_{e})\,\bm{\tau}_{\mathbf{x}}\!\left(\frac{z_{e}}{\theta_{e}}\right)\mathbf{R}_{\mathbf{y}}(\theta_{e})\,\bm{\tau}_{\mathbf{x}}\!\left(-\frac{z_{e}}{\theta_{e}}\right)\end{split} (2)

where $\bm{\tau}_{\mathbf{j}}$ and $\mathbf{R}_{\mathbf{j}}\in\mathbb{R}^{4\times 4}$ respectively denote translation and rotation about axis $j$, and $z_{s}$ is the length of the outer sheath. Considering the offset $d$ of the camera frame $O_{c}X_{c}Y_{c}Z_{c}$ from the robot tip, the camera pose w.r.t. the base is

{}_{\mathbf{c}}^{\mathbf{b}}\mathbf{T}={}_{\mathbf{et}}^{\mathbf{b}}\mathbf{T}\,\mathbf{R}_{\mathbf{z}}(-\varphi_{s}-\varphi_{e})\,\bm{\tau}_{\mathbf{y}}(-d) (3)
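For illustration, the following is a minimal numpy sketch of the mapping from cable lengths to configuration variables and of the transform chain in Eqs. (1)-(3). It assumes $z_{s}$ and $z_{e}$ denote segment arc lengths; the near-straight guard and the function names are illustrative choices not specified in the paper.

```python
import numpy as np

def config_from_cables(l1, l2, l3, rho):
    """Bending angle and bending-plane angle from the three cable lengths, Eq. (1)."""
    theta = 2.0 * np.sqrt(l1**2 + l2**2 + l3**2 - l1*l2 - l2*l3 - l1*l3) / (3.0 * rho)
    phi = np.arctan2(l1 + l3 - 2.0 * l2, np.sqrt(3.0) * (l3 - l1))
    return theta, phi

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    return T

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[0, 0], T[0, 2], T[2, 0], T[2, 2] = c, s, -s, c
    return T

def trans(axis, d):
    T = np.eye(4)
    T[{'x': 0, 'y': 1, 'z': 2}[axis], 3] = d
    return T

def segment(L, theta, phi):
    """One constant-curvature segment of arc length L, following the pattern of Eq. (2)."""
    if abs(theta) < 1e-6:                       # near-straight limit: pure translation along z
        return rot_z(phi) @ trans('z', L)
    r = L / theta                               # arc radius
    return rot_z(phi) @ trans('x', r) @ rot_y(theta) @ trans('x', -r)

def camera_pose(z_b, z_s, theta_s, phi_s, z_e, theta_e, phi_e, d=0.0):
    """Tip and camera poses w.r.t. the base, Eqs. (2)-(3)."""
    T_tip = trans('z', z_b) @ segment(z_s, theta_s, phi_s) @ segment(z_e, theta_e, phi_e)
    T_cam = T_tip @ rot_z(-phi_s - phi_e) @ trans('y', -d)
    return T_tip, T_cam
```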

The Jacobian matrix $\mathbf{J_{r}}\in\mathbb{R}^{3\times 6}$ is used to analytically establish the approximate relationship between the camera velocity and the joint velocity. Considering the translational motion, at discrete instant $k$ the iterative form is $\Delta\mathbf{P}(k)=\mathbf{J_{r}}(k)\Delta\bm{\Phi}(k)$, where $\Delta\mathbf{P}(k)=\mathbf{P}(k+1)-\mathbf{P}(k)$ is the small displacement of the camera, $\Delta\bm{\Phi}(k)=\bm{\Phi}(k+1)-\bm{\Phi}(k)$, and $\mathbf{J_{r}}(k)$ can be derived from the forward kinematics ${}_{\mathbf{et}}^{\mathbf{b}}\mathbf{T}$. To reach a given target position $\mathbf{P_{G}}\in\mathbb{R}^{3}$ of the robot tip in $O_{et}X_{et}Y_{et}Z_{et}$, the appropriate joint configuration must be solved inversely. The damped least squares method [25] provides an alternative to the Jacobian pseudo-inverse that avoids excessive joint velocities near singularities, i.e.

\Delta\bm{\Phi}(k)=\mathbf{J}_{\mathbf{r}}^{\mathrm{T}}(k)\left(\mathbf{J_{r}}(k)\mathbf{J}_{\mathbf{r}}^{\mathrm{T}}(k)+\sigma I\right)^{-1}\left(\mathbf{P_{G}}(k)-\mathbf{P}(k)\right) (4)
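A minimal sketch of the damped least-squares update in Eq. (4) is given below; the damping value $\sigma$ used here is an assumed tuning parameter, not a value reported in the paper.

```python
import numpy as np

def dls_step(J_r, P_goal, P_now, sigma=1e-3):
    """Damped least-squares configuration increment, Eq. (4).
    J_r is the 3x6 translational Jacobian; sigma is an assumed damping factor."""
    err = P_goal - P_now                              # Cartesian position error
    JJt = J_r @ J_r.T + sigma * np.eye(3)             # damped 3x3 system
    return J_r.T @ np.linalg.solve(JJt, err)          # returns delta_Phi, shape (6,)
```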

III-B Visual Servoing Modeling

However, material nonlinearity, segment interaction, external loads, etc. may have a significant negative impact on the accuracy of the PCC model. In this work, we consider a moving camera while the targets are fixed at any instant $k$. As shown in Fig. 3(e), for a given point $\mathbf{A}\in\mathbb{R}^{3}$ in $O_{c}X_{c}Y_{c}Z_{c}$, its coordinates in the image frame $O_{I}xy$ and the pixel frame $O_{p}uv$ are $\mathbf{A}(k)=(x,y)^{\mathrm{T}}$ and $\bm{\varsigma}(k)=(u,v)^{\mathrm{T}}$, respectively. According to the pinhole camera model, the perspective equations can be obtained from the relationship between similar triangles, i.e.

u=\frac{\lambda_{x}x}{\lambda}+c_{x},\quad v=\frac{\lambda_{y}y}{\lambda}+c_{y} (5)

The motion of the features $\Delta\bm{\varsigma}(k)$ on the pixel plane can be predicted using the interaction matrix:

\Delta\bm{\varsigma}(k)=\mathbf{L_{m}}(k)\,\Delta\mathbf{P}(k) (6)

where $\mathbf{L_{m}}\in\mathbb{R}^{2\times 3}$ is the block of $\mathbf{L_{o}}=[\,\mathbf{L_{m}}^{2\times 3}\,|\,\mathbf{L_{\omega}}^{2\times 3}\,]$ related to the linear velocity, and

\mathbf{L_{o}}=\begin{bmatrix}-\frac{\lambda}{z_{c}}&0&\frac{x}{z_{c}}&\frac{xy}{\lambda}&-\frac{\lambda^{2}+x^{2}}{\lambda}&y\\ 0&-\frac{\lambda}{z_{c}}&\frac{y}{z_{c}}&-\frac{\lambda^{2}+y^{2}}{\lambda}&-\frac{xy}{\lambda}&-x\end{bmatrix} (7)

where $\lambda_{x}$, $\lambda_{y}$ are the focal lengths in pixels, $c_{x}$, $c_{y}$ are the optical center coordinates in pixels, $\lambda$ is the focal length in millimeters, and $z_{c}$ is the depth of the feature in the camera frame. Define $\mathbf{J_{a}}(k)\in\mathbb{R}^{6\times 8}$ as the Jacobian matrix between the actuator space and the configuration space from Eq. (1), that is, $\Delta\bm{\Phi}(k)=\mathbf{J_{a}}(k)\Delta\mathbf{q}(k)$. Combining Eqs. (4) and (6), the overall Jacobian matrix between the pixel velocity and the actuator velocity can be derived as follows:

\Delta\bm{\varsigma}(k)=\mathbf{L_{m}}(k)\,\mathbf{J_{r}}(k)\,\mathbf{J_{a}}(k)\,\Delta\mathbf{q}(k) (8)
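As a concrete reading of Eqs. (6)-(8), the sketch below builds the translational block $\mathbf{L_{m}}$ of the interaction matrix and composes it with the robot and actuator Jacobians; variable names and array shapes are illustrative assumptions.

```python
import numpy as np

def interaction_matrix_linear(x, y, z_c, lam):
    """Left 2x3 block L_m of L_o in Eq. (7): image coordinates (x, y),
    feature depth z_c, focal length lam (mm)."""
    return np.array([[-lam / z_c, 0.0,         x / z_c],
                     [0.0,        -lam / z_c,  y / z_c]])

def feature_increment(x, y, z_c, lam, J_r, J_a, dq):
    """Predicted pixel-feature motion per Eq. (8): d_varsigma = L_m J_r J_a dq."""
    L_m = interaction_matrix_linear(x, y, z_c, lam)
    return L_m @ J_r @ J_a @ dq   # (2,) pixel displacement for an (8,) actuator increment dq
```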

III-C Jacobian Matrix Estimation

In classic IBVS [26], there are several choices for the depth $z_{c}$ in the matrix $\mathbf{L_{m}}(k)$. In this study, the depth $z_{c}^{*}$ at the desired position is used, and $\hat{\mathbf{L}}_{\mathbf{m}}(k)$ denotes the resulting estimated interaction matrix.

As a continuum robot, the MicroNeuro has infinite DoFs. When subject to model mismatch caused by disturbances or manufacturing errors, the model-based robot Jacobian matrix $\mathbf{J}(k)=\mathbf{J_{r}}(k)\mathbf{J_{a}}(k)$ may cause control deviations and therefore needs to be estimated online. First, the Jacobian estimate at $k=0$ is obtained offline; then the Jacobian is updated iteratively online during robot movement.

  1. Initialization: A small actuator movement $\Delta\mathbf{q}_{+}(0)$ is imposed on the MicroNeuro while it is located outside the brain, and an external electromagnetic sensor (NDI Aurora) mounted on the tip of the MicroNeuro measures the displacement. The $i$-th independent actuator variable $\Delta q_{+,i}(0)$ causes a position deviation of the camera $\Delta\mathbf{P}_{\mathbf{c},i}(0)$. Hence, $\hat{\mathbf{J}}_{+}(0)$ is constructed as:

    \hat{\mathbf{J}}_{+}(0)=\left[\begin{matrix}\frac{\Delta\mathbf{P}_{\mathbf{c},1}(0)}{\Delta q_{+,1}(0)}&\cdots&\frac{\Delta\mathbf{P}_{\mathbf{c},8}(0)}{\Delta q_{+,8}(0)}\end{matrix}\right] (9)

    To reduce the influence of manufacturing error, $\hat{\mathbf{J}}_{-}(0)$ is constructed in the same way while an opposite displacement $\Delta\mathbf{q}_{-}(0)=-\Delta\mathbf{q}_{+}(0)$ is imposed. $\hat{\mathbf{J}}(0)$ is then set as:

    \hat{\mathbf{J}}(0)=0.5\left(\hat{\mathbf{J}}_{+}(0)+\hat{\mathbf{J}}_{-}(0)\right) (10)
  2. Online Estimation: The changes in the MicroNeuro position and Jacobian matrix between adjacent instants are small; thus, the current analytical Jacobian matrix $\mathbf{J}(k)$ can be appropriately adjusted using $\hat{\mathbf{J}}(k-1)$ (see the sketch after this list):

    \hat{\mathbf{J}}(k)=(1-\omega(k))\,\mathbf{J}(k)+\omega(k)\,\hat{\mathbf{J}}(k-1) (11)

    where $\omega(k)=\frac{1}{1+\epsilon(k)}$ is the weighting factor and $\epsilon(k)=\|\bm{\varsigma}(k)-\bm{\varsigma}_{\mathbf{G}}(k)\|_{2}$ denotes the distance between the measured feature $\bm{\varsigma}(k)$ and the target feature $\bm{\varsigma}_{\mathbf{G}}(k)$. Normalized $\bm{\varsigma}(k)$ and $\bm{\varsigma}_{\mathbf{G}}(k)$ can be used in $\omega(k)$.
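The sketch below summarizes the offline initialization of Eqs. (9)-(10) and the online blend of Eq. (11); the array shapes and the omission of feature normalization are assumptions made for illustration.

```python
import numpy as np

def init_jacobian(dP_plus, dq_plus, dP_minus, dq_minus):
    """Finite-difference initialization, Eqs. (9)-(10).
    dP_plus, dP_minus : (3, 8) camera displacements measured by the EM sensor
    dq_plus, dq_minus : (8,) positive / negative actuator perturbations."""
    J_plus = dP_plus / dq_plus               # divide each column by its perturbation
    J_minus = dP_minus / dq_minus
    return 0.5 * (J_plus + J_minus)

def update_jacobian(J_model, J_prev, feat, feat_goal):
    """Online blend of the analytical and previous Jacobian estimates, Eq. (11)."""
    eps = np.linalg.norm(feat - feat_goal)   # pixel error (may be normalized first)
    w = 1.0 / (1.0 + eps)
    return (1.0 - w) * J_model + w * J_prev
```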

Figure 4: The visual MPC controller using an IMC scheme.

IV Visual Model Predictive Controller

IV-A Predictive Model

The goal of the IBVS task is to minimize the feature error $\epsilon(k)$. Inspired by [27, 28], to reduce the negative impact of model inaccuracy and external disturbances, an internal model control (IMC) scheme [29] is applied in the visual MPC controller, as shown in Fig. 4. The predictive error is defined as $\mathbf{e}(k)=\bm{\varsigma}(k)-\bm{\varsigma}_{\mathbf{L}}(k)$, where $\bm{\varsigma}_{\mathbf{L}}(k)$ is the feature predicted by the internal model, and $\bm{\varsigma}_{\mathbf{R}}(k)$ denotes the reference image feature without the predictive error. Thus, we can obtain:

\bm{\varsigma}_{\mathbf{R}}(k)-\bm{\varsigma}_{\mathbf{L}}(k)=\bm{\varsigma}_{\mathbf{G}}(k)-\bm{\varsigma}(k) (12)

The objective of the visual MPC controller is thus transformed into minimizing the tracking error of the prediction model with respect to $\bm{\varsigma}_{\mathbf{R}}(k)$. Let $\Delta\bm{\varsigma}_{\mathbf{L}}(k)=\bm{\varsigma}_{\mathbf{L}}(k+1)-\bm{\varsigma}_{\mathbf{L}}(k)$; then Eq. (8) can be rewritten in the following state-space representation:

\left\{\begin{matrix}\mathbf{x}(k+1)=\mathbf{x}(k)+\mathbf{B}(k)\mathbf{u}(k)\\ \mathbf{y}(k)=\mathbf{x}(k)\end{matrix}\right. (13)

where the system state is $\mathbf{x}(k)=\bm{\varsigma}_{\mathbf{L}}(k)$, the control variable is $\mathbf{u}(k)=\Delta\mathbf{q}(k)$, $\mathbf{y}(k)$ is the output, and $\mathbf{B}(k)=\hat{\mathbf{L}}_{\mathbf{m}}(k)\,\hat{\mathbf{J}}(k)$.
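To make the prediction model concrete, the sketch below computes the IMC reference of Eq. (12) and rolls Eq. (13) forward over the horizon; holding $\mathbf{B}(k)$ constant over the horizon is an assumption made here for illustration.

```python
import numpy as np

def imc_reference(feat_goal, feat_meas, feat_model):
    """IMC reference per Eq. (12): varsigma_R = varsigma_L + (varsigma_G - varsigma)."""
    return feat_model + (feat_goal - feat_meas)

def predict_features(x0, B, U):
    """Roll the local model of Eq. (13) forward.
    x0 : (2,) current feature state varsigma_L(k)
    B  : (2, 8) matrix L_m_hat(k) J_hat(k), held constant over the horizon
    U  : (Np, 8) candidate actuator increments
    Returns the (Np, 2) predicted feature sequence."""
    x = x0.astype(float)
    Y = np.zeros((U.shape[0], 2))
    for i in range(U.shape[0]):
        x = x + B @ U[i]          # x(k+1) = x(k) + B(k) u(k)
        Y[i] = x                  # y(k) = x(k)
    return Y
```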

IV-B Constraints

In addition, several constraints should be considered. To ensure that the MicroNeuro remains stable and avoids undesirable contact with the walls of the brain ventricles, the camera position should satisfy the constraint:

\mathbf{P}^{\min}\leq\mathbf{P}(k)\leq\mathbf{P}^{\max} (14)

Correspondingly, considering the physical hard constraints on the MicroNeuro, such as the limited capability of the motors, the actuator constraint is defined as follows:

\mathbf{q}^{\min}\leq\mathbf{q}(k)\leq\mathbf{q}^{\max} (15)

Moreover, to ensure that the target of interest always remains within the field of view and away from areas with large camera distortion, the output constraint is described as follows:

\bm{\varsigma}_{\mathbf{L}}^{\min}\leq\bm{\varsigma}_{\mathbf{L}}(k)\leq\bm{\varsigma}_{\mathbf{L}}^{\max} (16)

IV-C Optimization Objective

At each sample time $k$, the current measured system state is set as the initial state of a constrained optimal control problem (OCP), and the current control action is determined by solving this problem over the next $N_{p}$ sampling periods. Only the first input of the optimal input sequence of length $N_{c}$ is applied to the system. $N_{p}$ and $N_{c}$ are the prediction horizon and the control horizon, respectively. The objective is described as follows:

\begin{split}\min_{\mathbf{U}(k)}\mathbf{V}(\mathbf{U}(k))&=\sum_{i=k}^{k+N_{p}-1}\|\mathbf{Y}(k)-\mathbf{S}(k)\|^{2}_{\mathbf{Q}}\\ &=\sum_{i=k}^{k+N_{p}-1}\left(\mathbf{y}(i|k)-\bm{\varsigma}_{\mathbf{R}}(i|k)\right)^{\mathrm{T}}\mathbf{Q}\left(\mathbf{y}(i|k)-\bm{\varsigma}_{\mathbf{R}}(i|k)\right)\end{split} (17)

subject to Eqs. (12), (14), (15) and (16). In Eq. (17), $\mathbf{U}(k)\in\mathbb{R}^{8N_{p}\times 1}$, $\mathbf{U}(k)=\left(\mathbf{u}(k|k)\ \cdots\ \mathbf{u}(k+N_{c}-1|k)\ \cdots\ \mathbf{u}(k+N_{c}-1|k)\right)^{\mathrm{T}}$ is the control sequence, in which the last input is held beyond the control horizon, $\mathbf{Y}(k),\mathbf{S}(k)\in\mathbb{R}^{2N_{p}\times 1}$ are the output and reference sequences, and $\mathbf{Q}\in\mathbb{R}^{2\times 2}$ is the weight matrix. $\mathbf{y}(i|k)$ denotes the predicted output at the $i$-th sample time. Problem (17) can further be reduced to a constrained quadratic programming (QP) problem. Specifically, in our implementation, problem (17) is formulated in CasADi [30] and solved using its built-in optimization solvers.
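The sketch below shows one possible CasADi formulation of problem (17) with the actuator and output box constraints of Eqs. (15)-(16). It is illustrative only: the position constraint of Eq. (14) is omitted for brevity, $\mathbf{B}(k)$ is held constant over the horizon, and the IPOPT backend is an assumed choice, since the paper only states that CasADi's built-in solvers are used.

```python
import casadi as ca
import numpy as np

def solve_visual_mpc(x0, q0, B, s_ref, q_min, q_max, s_min, s_max, Np=10, Q=np.eye(2)):
    """One receding-horizon solve of problem (17) using the local model of Eq. (13).
    x0, q0 : current feature state (2,) and actuator state (8,)
    B      : (2, 8) matrix L_m_hat(k) J_hat(k)
    s_ref  : (Np, 2) reference feature sequence varsigma_R."""
    opti = ca.Opti()
    U = opti.variable(8, Np)                     # actuator increments over the horizon
    x = ca.DM(x0)
    q = ca.DM(q0)
    cost = 0
    for i in range(Np):
        x = x + ca.mtimes(B, U[:, i])            # feature prediction, Eq. (13)
        q = q + U[:, i]                          # accumulated actuator positions
        e = x - ca.DM(s_ref[i])
        cost = cost + ca.mtimes([e.T, ca.DM(Q), e])        # stage cost of Eq. (17)
        opti.subject_to(opti.bounded(q_min, q, q_max))     # actuator bounds, Eq. (15)
        opti.subject_to(opti.bounded(s_min, x, s_max))     # output bounds, Eq. (16)
    opti.minimize(cost)
    opti.solver('ipopt')                         # illustrative backend choice
    sol = opti.solve()
    return np.array(sol.value(U[:, 0])).ravel()  # apply only the first optimal input
```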

V Experiment and Validation

In this section, we implemented four IBVS scenarios to evaluate the effectiveness of the proposed MicroNeuro robot and visual MPC controller. The camera was carefully calibrated [31] with a low mean reprojection error of only 0.2 pixels, and the images were resized from the original resolution of $400\times 400$ pixels to $710\times 710$ pixels. The vision system was designed to track AprilTags [32], which served as detection features and provided high-accuracy localization. The tracking error in the following was quantified as the Euclidean distance $\epsilon(k)$ between the measured and target coordinates of the features. In the following experiments, the initial configuration of the robot is steering mode 1 with the segments straight. The kinematics was initialized with $\hat{\mathbf{J}}(0)$ and iterated online using $\epsilon(k)$. In the proposed visual MPC controller, the control horizon and prediction horizon are set to $N_{c}=N_{p}=10$ and $\mathbf{Q}=\mathrm{diag}\{1,1\}$. According to [33], the average tumor size in the pineal region is 26 mm. Based on Eq. (5) and the camera parameters, the Maximum Permissible Error (MPE) was defined as 2.6 mm, which corresponds to a pixel error of 30 pixels.


Figure 5: Tracking the static object on a plane. (a) Experiment setup. (b) Tags movement trajectories in the image plane. (c) Tracking errors.


Figure 6: Setups for: (a) Dynamic target tracking. (b) Biopsy in a brain phantom.


Figure 7: Two test results of tracking the dynamic object on a linear guide.



Figure 8: "CAIR" trajectory-following results.
Figure 9: Various scenarios for target tracking in a 3D printed brain phantom.

V-A Static Target Tracking

In this experiment, the region of interest (ROI) was defined as the center of the image, $\bm{\varsigma}_{\mathbf{G}}(k)=(355,355)^{\mathrm{T}}$. As shown in Fig. 5(b), the MicroNeuro system was commanded to bring specifically chosen markers, distributed at $60^{\circ}$ intervals on a printed circle, to the ROI. The experimental analysis involved six trials, and the effectiveness was demonstrated through the measured trajectories of the markers, as depicted in Fig. 5(c). In each instance, the robot successfully brought the marker to the center, with an average terminal error of 21.8 pixels. The average time required to complete the tracking task across the six experiments was 11.25 s. This result highlights the robustness and reliability of the proposed method in achieving fast and precise tracking.

V-B Dynamic Target Tracking

This experiment was designed to evaluate the stability of the proposed system when following a target in a dynamic environment. As shown in Fig. 6(a), the robot tracked an AprilTag marker attached to a linear guide positioned 20 mm from the robot's camera. The guide reciprocated at a speed of 2.5 mm/s over a 20 mm stroke. Fig. 7 illustrates that the tracking errors decreased significantly once the marker was captured by the camera, with errors reduced to below the MPE within 6 s in Test 1 and reaching a lowest error of 2.23 pixels. After the initial stable tracking of the target was accomplished, the standard deviations (SD) of the errors for Tests 1 and 2 were 20.85 and 21.81 pixels, respectively, which further supports the effectiveness of the system in maintaining precise tracking of the target.

V-C Trajectory Following

This experiment was designed to evaluate the robot's ability to follow a set trajectory that guides the marker along a defined path in the captured image. The experimental setup was the same as in Fig. 5(a). Under the guidance of the controller, the robot automatically tracked multiple key target points on different trajectories so as to approximately follow curves in the image plane. These discrete key target points were placed along the letters "CAIR". The experimental results in Fig. 8 show that the controller has good tracking performance for the key points of each trajectory. The root mean square errors (RMSE) of the four curves were 11.66, 11.62, 11.30, and 11.95 pixels, respectively.

V-D Biopsy in a Brain Phantom

In clinical procedures, the use of endoscopic instruments such as biopsy forceps and electrocoagulation tools, inserted via the working channel, can significantly disrupt the flexible endoscope's view, leading to loss of lesion visibility or inadequate operating angles. This experiment aims to assess the robustness of the proposed method against external disturbances, ensuring that the endoscope stays focused on the ROI. In the 3D-printed brain shown in Fig. 6(b), we placed a marker in the pineal gland region to mark the area of interest. Initially, the robot was manually operated to roughly approach the target area through a single burr hole, after which the visual MPC controller quickly tracked the target, as shown in Fig. 9. The insertion and operation of the biopsy forceps introduced rapid disturbances to the robot, significantly increasing the tracking error. However, the controller adjusted the tool within ten steps, reducing the error to less than 30 pixels. This result demonstrates the controller's ability to enhance the MicroNeuro robot's resistance to interference, suggesting its potential application in neurosurgery.

VI Conclusion

This paper proposes a novel hybrid dual-segment flexible endoscope for neurosurgery. The dual-segment design allows for dexterous maneuverability within the complex structure of the deep brain. This approach substantially assists surgeons in performing procedures on the pineal region through a single burr hole, thereby enhancing surgical efficiency. The robot meets the mechanical design requirements derived from clinical needs and provides comprehensive endoscopic functionality. In addition, a visual servoing control system with online estimation of the Jacobian matrix is constructed to improve the motion performance of the robot. Considering unknown disturbances, a visual MPC with constraints has been designed. The experiments verified that the MicroNeuro robot is capable of executing precise visual servoing despite external interference and demonstrated great potential for clinical applications in neurosurgery. In the future, this work will further consider the nonlinear dynamic model and the impact of contact forces during intracranial surgery to enhance the performance of the visual model predictive controller.

VII Acknowledgements

This work was supported by the Centre of AI and Robotics, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences, sponsored by InnoHK Funding, HKSAR, and partially supported by Sichuan Science and Technology Program (Grant number: 2023YFH0093). Parts of Fig. 1(a) were created using templates from Servier Medical Art (http://smart.servier.com/), licensed under a Creative Common Attribution 3.0 Generic License.

References

  • [1] M. G. Yaşargil and S. I. Abdulrauf, “Surgery of intraventricular tumors,” Neurosurgery, vol. 62, no. 6, pp. SHC1029–SHC1041, 2008.
  • [2] L. Rigante, H. Borghei-Razavi, P. F. Recinos, and F. Roser, “An overview of endoscopy in neurologic surgery,” Cleve Clin J Med, vol. 86, no. 10, pp. 16ME–24ME, 2019.
  • [3] S. A. Chowdhry and A. R. Cohen, “Intraventricular neuroendoscopy: complication avoidance and management,” World neurosurgery, vol. 79, no. 2, pp. S15–e1, 2013.
  • [4] M. A. I. Amer and H. I. S. Elatrozy, “Combined endoscopic third ventriculostomy and tumor biopsy in the management of pineal region tumors, safety considerations,” Egyptian Journal of Neurosurgery, vol. 33, no. 1, pp. 1–6, 2018.
  • [5] W. Zeng, J. Yan, K. Yan, X. Huang, X. Wang, and S. S. Cheng, “Modeling a symmetrically-notched continuum neurosurgical robot with non-constant curvature and superelastic property,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6489–6496, 2021.
  • [6] B. Qi, Z. Yu, Z. K. Varnamkhasti, Y. Zhou, and J. Sheng, “Toward a telescopic steerable robotic needle for minimally invasive tissue biopsy,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1989–1996, 2021.
  • [7] H.-S. Yoon, H.-J. Cha, J. Chung, and B.-J. Yi, “Compact design of a dual master-slave system for maxillary sinus surgery,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2013, pp. 5027–5032.
  • [8] M. Chen, Y. Huang, J. Chen, T. Zhou, J. Chen, and H. Liu, “Fully robotized 3d ultrasound image acquisition for artery,” in 2023 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2023, pp. 2690–2696.
  • [9] Y. Li, W. Y. Ng, W. Li, Y. Huang, H. Zhang, Y. Xian, J. Li, Y. Sun, P. W. Y. Chiu, and Z. Li, “Towards semi-autonomous colon screening using an electromagnetically actuated soft-tethered colonoscope based on visual servo control,” IEEE Transactions on Biomedical Engineering, 2023.
  • [10] F. Xu, Y. Zhang, J. Sun, and H. Wang, “Adaptive visual servoing shape control of a soft robot manipulator using bezier curve features,” IEEE/ASME Transactions on Mechatronics, vol. 28, no. 2, pp. 945–955, 2022.
  • [11] M. M. Fallah, S. Norouzi-Ghazbi, A. Mehrkish, and F. Janabi-Sharifi, “Depth-based visual predictive control of tendon-driven continuum robots,” in 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM).   IEEE, 2020, pp. 488–494.
  • [12] A. A. Nazari, K. Zareinia, and F. Janabi-Sharifi, “Visual servoing of continuum robots: Methods, challenges, and prospects,” The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 18, no. 3, p. e2384, 2022.
  • [13] J. Jiang, Y. Wang, Y. Jiang, H. Xie, H. Tan, and H. Zhang, “A robust visual servoing controller for anthropomorphic manipulators with field-of-view constraints and swivel-angle motion: Overcoming system uncertainty and improving control performance,” IEEE Robotics & Automation Magazine, vol. 29, no. 4, pp. 104–114, 2022.
  • [14] A. A. Oliva, E. Aertbeliën, J. De Schutter, P. R. Giordano, and F. Chaumette, “Towards dynamic visual servoing for interaction control and moving targets,” in 2022 International Conference on Robotics and Automation (ICRA).   IEEE, 2022, pp. 150–156.
  • [15] W. A. Azab, K. Nasim, and W. Salaheddin, “An overview of the current surgical options for pineal region tumors,” Surgical neurology international, vol. 5, 2014.
  • [16] J. Rawlings, D. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design. Nob Hill Publishing, 2017.
  • [17] C. Lin, S. Liang, J. Chen, and X. Gao, “A multi-objective optimal torque distribution strategy for four in-wheel-motor drive electric vehicles,” IEEE Access, vol. 7, pp. 64 627–64 640, 2019.
  • [18] B. Calli and A. M. Dollar, “Vision-based model predictive control for within-hand precision manipulation with underactuated grippers,” in 2017 IEEE international conference on robotics and automation (ICRA).   IEEE, 2017, pp. 2839–2845.
  • [19] C. P. Bechlioulis, S. Heshmati-Alamdari, G. C. Karras, and K. J. Kyriakopoulos, “Robust image-based visual servoing with prescribed performance under field of view constraints,” IEEE Transactions on Robotics, vol. 35, no. 4, pp. 1063–1070, 2019.
  • [20] Q. Chen, Y. Qin, and G. Li, “Qpso-mpc based tracking algorithm for cable-driven continuum robots,” Frontiers in Neurorobotics, vol. 16, p. 1014163, 2022.
  • [21] J. L. Chien, L. T. L. Clarissa, J. Liu, J. Low, and S. Foong, “Kinematic model predictive control for a novel tethered aerial cable-driven continuum robot,” in 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM).   IEEE, 2021, pp. 1348–1354.
  • [22] X. L. Zhu, R. Gao, G. K. C. Wong, H. T. Wong, R. Y. T. Ng, Y. Yu, R. K. M. Wong, and W. S. Poon, “Single burr hole rigid endoscopic third ventriculostomy and endoscopic tumor biopsy: what is the safe displacement range for the foramen of monro?” Asian Journal of Surgery, vol. 36, no. 2, pp. 74–82, 2013.
  • [23] innovationhub@HK, "MicroNeuro," 2023. [Online]. Available: https://www.innovationhub.hk/article/microneuro
  • [24] R. J. Webster III and B. A. Jones, “Design and kinematic modeling of constant curvature continuum robots: A review,” The International Journal of Robotics Research, vol. 29, no. 13, pp. 1661–1683, 2010.
  • [25] S. R. Buss, “Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods,” IEEE Journal of Robotics and Automation, vol. 17, no. 1-19, p. 16, 2004.
  • [26] F. Chaumette and S. Hutchinson, “Visual servo control. i. basic approaches,” IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.
  • [27] G. Allibert, E. Courtial, and F. Chaumette, “Predictive control for constrained image-based visual servoing,” IEEE Transactions on Robotics, vol. 26, no. 5, pp. 933–939, 2010.
  • [28] S. Norouzi-Ghazbi, A. Mehrkish, M. M. Fallah, and F. Janabi-Sharifi, “Constrained visual predictive control of tendon-driven continuum robots,” Robotics and Autonomous Systems, vol. 145, p. 103856, 2021.
  • [29] S. Saxena and Y. V. Hote, “Advances in internal model control technique: A review and future prospects,” IETE Technical Review, vol. 29, no. 6, pp. 461–472, 2012.
  • [30] J. A. E. Andersson, J. Gillis, G. Horn, J. B. Rawlings, and M. Diehl, “CasADi – A software framework for nonlinear optimization and optimal control,” Mathematical Programming Computation, vol. 11, no. 1, pp. 1–36, 2019.
  • [31] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
  • [32] E. Olson, “AprilTag: A robust and flexible visual fiducial system,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).   IEEE, May 2011, pp. 3400–3407.
  • [33] H. G. Vuong, T. N. Ngo, and I. F. Dunn, “Incidence, prognostic factors, and survival trend in pineal gland tumors: a population-based analysis,” Frontiers in Oncology, vol. 11, p. 780173, 2021.