Design and Visual Servoing Control of a Hybrid Dual-Segment Flexible Neurosurgical Robot for Intraventricular Biopsy
Abstract
Traditional rigid endoscopes struggle to flexibly treat tumors located deep in the brain, and their low operability and fixed viewing angles limit their development. This study introduces MicroNeuro, a novel dual-segment flexible robotic endoscope designed to perform biopsies deep in the brain with dexterous surgical manipulation. Taking into account the uncertainty of the control model, an image-based visual servoing scheme with online robot Jacobian estimation has been implemented to enhance motion accuracy. Furthermore, the application of model predictive control with constraints significantly bolsters the flexible robot's ability to adaptively track moving targets and reject external disturbances. Experimental results show that the proposed control system enhances motion stability and precision, and phantom testing substantiates its considerable potential for deployment in neurosurgery.
I Introduction
Tumors located within the brain's ventricular system pose significant health risks and present considerable treatment challenges due to their difficult-to-reach locations and proximity to critical neurological structures [1]. Over the past three decades, rigid endoscopes have emerged as the primary tool for visualization in diverse intraventricular neurosurgical procedures [2]. For instance, the MINOP endoscope (Aesculap Inc., PA, USA) is employed for intraventricular indications, while the LOTTA endoscope (Karl Storz SE & Co. KG, Tuttlingen, Germany) is preferred for patients with small ventricles. Unfortunately, conventional neurosurgery with rigid endoscopes still has two primary limitations: (i) the rigid structure limits maneuverability [2] within the complex anatomy of the brain, where abrupt or incorrect movements may lead to brain trauma and complications; and (ii) the fixed viewing angle of rigid instruments complicates the biopsy of tumors in difficult locations, as shown in Fig. 1(a). While flexible robots can enhance endoscope dexterity, their use has been limited by lower-resolution visualization [3], the poor accessibility of the single flexible segment on traditional endoscopes, and the procedural complexity of combined rigid and flexible endoscopy [4]. The confined intracranial space also demands high dexterity and compliance from flexible surgical tools [5, 6], presenting additional control challenges [7].

With real-time visual feedback from the robot tip, image-based visual servoing (IBVS) is particularly compatible with this eye-in-hand configuration [8]. Classical IBVS has been widely used to solve tracking [9], shape control [10], and depth estimation [11] problems for flexible endoscopes. During neurosurgical endoscopic operations, external interference, such as the insertion of internal instruments, may lead to potential issues with a proportional controller, manifesting as slow convergence [12] and degraded tracking performance [9]. To enhance robustness, Jiang et al. [13] combined sliding mode control (SMC) with IBVS to overcome system uncertainties. For environmental interaction, Oliva et al. [14] presented a dynamic IBVS controller with an Extended Kalman Filter (EKF) to improve tracking speed and stability.
However, most of the above-mentioned methods do not take surrounding constraints into account, which is indispensable in neurosurgery: during intraventricular biopsies, unconstrained movement may damage important nerves or blood vessels [15]. Model predictive control (MPC) [16, 17] uses constraints to ensure that control actions and system states remain within desired bounds throughout the control horizon. An MPC framework within a visual servoing scheme was proposed in [18] to achieve precision manipulation despite model inaccuracies. Notably, the inherent robustness characteristics of IBVS and MPC significantly improve controller performance [19]. Chen et al. [20] utilized a QPSO-MPC based tracking method for a continuum robot arm. Chien et al. [21] also used MPC to control the position of a continuum robot, based on an estimated inverse kinematics. Therefore, the complex model transfer chain can be represented by a Jacobian, and the surroundings observed by the endoscopic camera can be encoded as constraints in the MPC scheme, which makes the approach well suited to MIS-oriented scenarios for continuum robots.
To address the design and control issues mentioned above, this work makes two main contributions: (i) a cable-driven hybrid dual-segment flexible endoscope for intraventricular neurosurgery is proposed, which can pass through a single burr hole and provides sufficient dexterity for biopsy in the narrow ventricle, as shown in Fig. 1(b); (ii) a visual model predictive control framework with online Jacobian estimation is proposed to enhance the robustness of visual servoing control. The rest of this work is organized as follows. Section II details the design rules and prototype. In Section III, the kinematic models of the robot and camera are established, together with an online Jacobian estimation. Section IV introduces the visual MPC algorithm. Section V illustrates the effectiveness of the robot and the proposed methods. Finally, Section VI concludes this work.
II Mechanical Design
II-A Design Goals
The MicroNeuro is designed for intraventricular neurosurgery. Based on knowledge of brain anatomy and clinical demands from surgeons, the main design goals are summarized as follows:
1. Dimension: The mean diameters of the foramen of Monro (FM) were 5.7 mm on the axial image, 7.8 mm on the coronal image, and 5.6 mm on the sagittal image [22]. Thus, the outer diameter of the flexible endoscope should be less than 5.4 mm to avoid collision with the FM.
2. Endoscope features: The MicroNeuro should provide high-quality images and a working channel for biopsy instruments. Since the clinical procedure is performed underwater, the MicroNeuro also needs to provide irrigation and suction functions.
3. Dexterity: The deflectable length of the MicroNeuro should be short, and the robot should be able to bend with large curvature.

II-B System Overview
This work builds on the surgical robot system for neurosurgery designated MicroNeuro [23]. As shown in Fig. 2(a), the system mainly consists of the MicroNeuro and its actuation units, which are mounted on the end of a 7-DoF robot arm (ER7 Pro, ROKAE). A quick-release mechanism facilitates individual disinfection of the endoscopes. In addition, a control console is built for master-slave teleoperation, with four monitors, a foot pedal, a joystick (TCA, THRUSTMASTER), and a master device (TouchX, 3D SYSTEMS).
The MicroNeuro consists of two bendable flexible robots connected to a rigid tube. As shown in Fig. 2(d), (e) and (f), it provides several functions, including multi-view imaging, water irrigation and suction, a working channel (diameter 1.2 mm), and illumination. The distal end of the inner endoscope and the rigid catheter are each equipped with a camera (OV6946). Unlike conventional dual-segment flexible robots of fixed length, the two robots of the MicroNeuro can be axially translated relative to each other, so two combined bending modes can be realized: (i) mode 1 [see Fig. 2(b)], in which the inner endoscope has no axial movement and only the outer flexible sheath bends; and (ii) mode 2 [see Fig. 2(c)], in which the inner endoscope can be inserted independently (maximum distance 40 mm).
II-C Hybrid Dual-Segment Flexible Endoscope Design
The backbones of the two flexible robots are manufactured by femtosecond laser cutting of superelastic nitinol tubes; Fig. 3 shows the parameter definitions and values. Each robot has multiple pairs of notched joints distributed along the axial direction, and each joint has a bidirectional symmetric rectangular notch. Three nitinol cables, driven by brushless coreless motors (ASSUN), are welded to the distal end of each flexible robot and routed along crimped grooves.

III Modelling
III-A Kinematics of MicroNeuro
The distribution of notches makes the backbone's axial stiffness larger than its lateral stiffness, so the backbone bends when the eccentrically fixed cables are pulled. Referring to the piecewise constant curvature (PCC) model [24], each segment of the MicroNeuro bends with a constant curvature along its length, similar to a circular arc, when actuated. As shown in Fig. 3, the MicroNeuro can be geometrically parameterized in the configuration space by $\boldsymbol{\psi} = [d, \theta_1, \delta_1, \theta_2, \delta_2, l_2]^{\mathrm{T}}$, where $d$ is the overall insertion distance provided by the robot arm, $\theta_1$ and $\theta_2$ are the bending angles, $\delta_1$ and $\delta_2$ are the rotation angles between each bending plane and the reference plane, and $l_2$ is the variable insertion length of the inner endoscope, provided by the servo motors. $\theta_i$ and $\delta_i$ can be calculated from the actuator space variables $\boldsymbol{q} = [l_{11}, l_{12}, l_{13}, l_{21}, l_{22}, l_{23}]^{\mathrm{T}}$, with the three cables of each segment spaced $120^{\circ}$ apart:

$$\theta_i = \frac{2\sqrt{l_{i1}^2 + l_{i2}^2 + l_{i3}^2 - l_{i1}l_{i2} - l_{i2}l_{i3} - l_{i3}l_{i1}}}{3r}, \qquad \delta_i = \operatorname{atan2}\!\left(\sqrt{3}\,(l_{i3} - l_{i2}),\; l_{i2} + l_{i3} - 2l_{i1}\right), \tag{1}$$

where the subscripts $i = 1$ and $i = 2$ represent the outer sheath and inner endoscope, respectively, $r$ is the distance between the center of each cable and the center of the robot, and $l_{i1}$, $l_{i2}$, $l_{i3}$ are the lengths of the driving guide wires in each flexible robot. The transformation matrix from the base frame $\{B\}$ to the robot tip frame $\{T\}$ is:
$${}^{B}\mathbf{T}_{T} = \mathrm{Trans}_z(d)\,\prod_{i=1}^{2}\mathrm{Rot}_z(\delta_i)\,\mathrm{Trans}_x\!\left(\tfrac{l_i}{\theta_i}(1-\cos\theta_i)\right)\mathrm{Trans}_z\!\left(\tfrac{l_i}{\theta_i}\sin\theta_i\right)\mathrm{Rot}_y(\theta_i)\,\mathrm{Rot}_z(-\delta_i), \tag{2}$$

where $\mathrm{Trans}_j$ and $\mathrm{Rot}_j$ respectively denote translation and rotation about axis $j$, and $l_1$ is the length of the outer robot. Considering the offset of the camera frame $\{C\}$ from the robot tip, the camera pose w.r.t. the base is

$${}^{B}\mathbf{T}_{C} = {}^{B}\mathbf{T}_{T}\,{}^{T}\mathbf{T}_{C}. \tag{3}$$
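To make the kinematic chain of Eqs. (1)-(3) concrete, a minimal Python sketch is given below. This is an illustration, not the authors' implementation: the cable indexing convention assumed in Eq. (1) and the helper names (`config_from_cables`, `segment_T`, `camera_pose`) are our assumptions, and a small-angle branch guards the straight configuration.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    return T

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[0, 0], T[0, 2], T[2, 0], T[2, 2] = c, s, -s, c
    return T

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def config_from_cables(l1, l2, l3, r):
    """Eq. (1): bending angle and bending-plane angle of one segment from
    its three cable lengths; the expressions are invariant to a common
    length offset, so absolute lengths can be used directly."""
    root = np.sqrt(l1**2 + l2**2 + l3**2 - l1*l2 - l2*l3 - l3*l1)
    theta = 2.0 * root / (3.0 * r)
    delta = np.arctan2(np.sqrt(3.0) * (l3 - l2), l2 + l3 - 2.0 * l1)
    return theta, delta

def segment_T(theta, delta, seg_len):
    """Constant-curvature arc transform of one segment (PCC model [24])."""
    if abs(theta) < 1e-9:                       # straight-segment limit
        arc = trans(0.0, 0.0, seg_len)
    else:
        rho = seg_len / theta                   # arc radius
        arc = trans(rho * (1.0 - np.cos(theta)), 0.0,
                    rho * np.sin(theta)) @ rot_y(theta)
    return rot_z(delta) @ arc @ rot_z(-delta)   # one factor of Eq. (2)

def camera_pose(d, q1, q2, r, l1_len, l2_len, T_tc):
    """Eqs. (2)-(3): base-to-camera transform from insertion d and the two
    cable-length triplets q1 (outer sheath) and q2 (inner endoscope)."""
    th1, de1 = config_from_cables(*q1, r)
    th2, de2 = config_from_cables(*q2, r)
    T_bt = trans(0, 0, d) @ segment_T(th1, de1, l1_len) @ segment_T(th2, de2, l2_len)
    return T_bt @ T_tc                          # camera offset, Eq. (3)
```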
The Jacobian matrix is used to analytically establish the approximate relationship between camera velocity and joint velocity. Considering only the translational motion, at discrete instant $k$ the iterative form is $\mathbf{p}_{k+1} = \mathbf{p}_k + \Delta\mathbf{p}_k \approx \mathbf{p}_k + \mathbf{J}_r\,\Delta\boldsymbol{\psi}_k$, where $\Delta\mathbf{p}_k$ is the small displacement of the camera and $\mathbf{J}_r = \partial\mathbf{p}/\partial\boldsymbol{\psi}$ can be derived through the forward kinematics of Eqs. (2) and (3). To reach a given target position of the robot tip in $\{B\}$, we need to inversely solve for an appropriate joint configuration. The damped least squares method [25] provides an alternative Jacobian inverse that avoids excessive joint velocities near singularities, i.e.

$$\mathbf{J}_r^{*} = \mathbf{J}_r^{\mathrm{T}}\left(\mathbf{J}_r\mathbf{J}_r^{\mathrm{T}} + \lambda^2\mathbf{I}\right)^{-1}, \tag{4}$$

where $\lambda$ is the damping factor.
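As a brief illustration of Eq. (4), the damped pseudo-inverse and one resolved-rate step can be written as follows; the damping value is a placeholder, not a tuned parameter from the paper.

```python
import numpy as np

def dls_inverse(J, lam=0.01):
    """Damped least squares (Eq. (4), [25]): J* = J^T (J J^T + lam^2 I)^-1.
    The damping factor lam bounds joint velocities near singularities."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(m), np.eye(m))

# One resolved-rate step toward a small Cartesian displacement dp:
# d_psi = dls_inverse(J_r) @ dp
```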
III-B Visual Servoing Modeling
However, material nonlinearity, segment interaction, external loads, etc., may have a significant negative impact on the accuracy of the PCC model. In this work, we consider a moving camera while the targets are fixed at any instant $k$. As shown in Fig. 3(e), for a given point $\mathbf{P} = (X, Y, Z)$ in the camera frame $\{C\}$, its coordinates in the image frame and pixel frame are $(x, y)$ and $(u, v)$, respectively. According to the pinhole camera model, the perspective equation can be obtained from the relationship between similar triangles, i.e.

$$x = f\frac{X}{Z},\quad y = f\frac{Y}{Z}, \qquad u = f_x\frac{X}{Z} + c_x,\quad v = f_y\frac{Y}{Z} + c_y. \tag{5}$$
The motion of the features on the pixel plane can be predicted using the interaction matrix:

$$\dot{\mathbf{s}} = \begin{bmatrix}\dot{u} \\ \dot{v}\end{bmatrix} = \mathbf{L}_v\,\mathbf{v}_c, \tag{6}$$

where $\mathbf{L}_v$ is the block of the interaction matrix $\mathbf{L}_s$ related to the linear velocity $\mathbf{v}_c$ of the camera, and

$$\mathbf{L}_v = \begin{bmatrix} -\dfrac{f_x}{Z} & 0 & \dfrac{u - c_x}{Z} \\[4pt] 0 & -\dfrac{f_y}{Z} & \dfrac{v - c_y}{Z} \end{bmatrix}, \tag{7}$$
where $f_x$ and $f_y$ are the focal lengths in pixels, $c_x$ and $c_y$ are the coordinates of the optical center in pixels, and $f$ is the focal length in millimeters. Define $\mathbf{J}_q$ as the Jacobian matrix between the actuator space and the configuration space obtained from Eq. (1), that is, $\dot{\boldsymbol{\psi}} = \mathbf{J}_q\,\dot{\mathbf{q}}$. Combining Eqs. (4) and (6), the overall Jacobian matrix between pixel velocity and actuator velocity can be derived as follows:

$$\dot{\mathbf{s}} = \mathbf{L}_v\,\mathbf{J}_r\,\mathbf{J}_q\,\dot{\mathbf{q}} \triangleq \mathbf{J}\,\dot{\mathbf{q}}. \tag{8}$$
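A compact sketch of Eqs. (5)-(8) follows; function names are illustrative, and the feature depth $Z$ would in practice be the desired-position depth chosen in Section III-C.

```python
import numpy as np

def pixel_projection(P_cam, fx, fy, cx, cy):
    """Eq. (5): pinhole projection of a camera-frame point P = (X, Y, Z)."""
    X, Y, Z = P_cam
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def interaction_matrix_v(u, v, Z, fx, fy, cx, cy):
    """Eq. (7): 2x3 translational block of the interaction matrix, mapping
    camera linear velocity to pixel velocity of the feature (u, v)."""
    return np.array([[-fx / Z, 0.0, (u - cx) / Z],
                     [0.0, -fy / Z, (v - cy) / Z]])

def overall_jacobian(L_v, J_r, J_q):
    """Eq. (8): chain from actuator velocity to pixel velocity."""
    return L_v @ J_r @ J_q
```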
III-C Jacobian Matrix Estimation
In classic IBVS [26], there are several choices for the depth $Z$ in the matrix $\mathbf{L}_v$. In this study, the depth $Z^{*}$ at the desired position is used, and $\hat{\mathbf{L}}_v$ denotes the resulting estimated matrix.
As a continuum robot, the MicroNeuro has infinitely many DoFs. When subject to model mismatch caused by disturbances or manufacturing errors, the model-dependent robot Jacobian may cause control deviations and therefore needs to be estimated online. First, a Jacobian estimate at $k = 0$ is obtained offline; the Jacobian is then updated iteratively online during robot movement.
1. Initialization: A small actuator movement is imposed on the MicroNeuro while it is located outside the brain, and an external electromagnetic sensor (NDI Aurora) mounted on the tip of the MicroNeuro measures the displacement. The $j$-th independent actuator increment $\Delta q_j$ causes a position deviation $\Delta\mathbf{p}_j$ of the camera. Hence, $\hat{\mathbf{J}}^{+}$ is constructed as:

$$\hat{\mathbf{J}}^{+} = \left[\frac{\Delta\mathbf{p}_1}{\Delta q_1},\; \frac{\Delta\mathbf{p}_2}{\Delta q_2},\; \cdots,\; \frac{\Delta\mathbf{p}_n}{\Delta q_n}\right]. \tag{9}$$

To reduce the effect of manufacturing errors, $\hat{\mathbf{J}}^{-}$ is similarly constructed while an opposite displacement is imposed. The initial estimate $\hat{\mathbf{J}}_0$ is set as:

$$\hat{\mathbf{J}}_0 = \frac{1}{2}\left(\hat{\mathbf{J}}^{+} + \hat{\mathbf{J}}^{-}\right). \tag{10}$$

2. Online Estimation: The changes in the MicroNeuro position and Jacobian matrix between adjacent instants are small; thus, the current Jacobian estimate can be appropriately adjusted using the error change $\Delta\mathbf{e}$:

$$\hat{\mathbf{J}}_k = \hat{\mathbf{J}}_{k-1} + \alpha\,\frac{\left(\Delta\mathbf{e}_k - \hat{\mathbf{J}}_{k-1}\Delta\mathbf{q}_k\right)\Delta\mathbf{q}_k^{\mathrm{T}}}{\Delta\mathbf{q}_k^{\mathrm{T}}\Delta\mathbf{q}_k}, \tag{11}$$

where $\alpha$ is the weighting factor and $\mathbf{e} = \mathbf{s} - \mathbf{s}^{*}$ denotes the distance between the measured feature $\mathbf{s}$ and the target feature $\mathbf{s}^{*}$. Normalized $\Delta\mathbf{e}$ and $\Delta\mathbf{q}$ can be used in Eq. (11). A minimal implementation sketch of this initialization and update is given after this list.
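The sketch below illustrates the offline initialization (Eqs. (9)-(10)) and the online update (Eq. (11)). It is an illustration under stated assumptions: `apply_dq` and `measure_p` are hypothetical interfaces to the actuators and the NDI Aurora sensor, and the rank-one Broyden-type form reflects our reading of Eq. (11).

```python
import numpy as np

def init_jacobian(apply_dq, measure_p, n, eps=0.5):
    """Eqs. (9)-(10): central-difference initialization of the Jacobian
    outside the brain. apply_dq(j, step) drives the j-th actuator by step;
    measure_p() reads the EM-tracked tip position (hypothetical interfaces)."""
    J_plus = np.zeros((3, n))
    J_minus = np.zeros((3, n))
    for j in range(n):
        p0 = measure_p()
        apply_dq(j, +eps)
        J_plus[:, j] = (measure_p() - p0) / eps      # Eq. (9), forward step
        apply_dq(j, -2.0 * eps)                      # opposite displacement
        J_minus[:, j] = (p0 - measure_p()) / eps     # backward counterpart
        apply_dq(j, +eps)                            # return to start
    return 0.5 * (J_plus + J_minus)                  # Eq. (10): averaged estimate

def broyden_update(J, de, dq, alpha=0.1):
    """Eq. (11) (assumed Broyden-type): rank-one correction of the Jacobian
    from the observed error change de and actuator change dq, weighted by alpha."""
    de = de.reshape(-1, 1)
    dq = dq.reshape(-1, 1)
    return J + alpha * (de - J @ dq) @ dq.T / float(dq.T @ dq)
```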

IV Visual Model Predictive Controller
IV-A Predictive Model
The goal of the IBVS task is to minimize the error $\mathbf{e} = \mathbf{s} - \mathbf{s}^{*}$. Inspired by [27, 28], to reduce the negative impact of model inaccuracy and external disturbances, an internal model control (IMC) scheme [29] is applied in the visual MPC controller, as shown in Fig. 4. The predictive error $\mathbf{e}_p$ is defined as the difference between the measured and predicted features, that is, $\mathbf{e}_p = \mathbf{s}_m - \mathbf{s}_p$, and $\mathbf{s}_{\mathrm{ref}}$ denotes the reference image feature with the predictive error removed. Thus, we can obtain:

$$\mathbf{s}_{\mathrm{ref}} = \mathbf{s}^{*} - \mathbf{e}_p. \tag{12}$$
The objective of the visual MPC controller is then transformed into minimizing the tracking error of the prediction model with respect to $\mathbf{s}_{\mathrm{ref}}$. Letting $\mathbf{x}_k = \mathbf{s}_k$, Eq. (8) can be rewritten in the following state-space representation:

$$\mathbf{x}_{k+1} = \mathbf{x}_k + T_s\,\mathbf{J}_k\,\mathbf{u}_k, \qquad \mathbf{y}_k = \mathbf{C}\,\mathbf{x}_k, \tag{13}$$

where $\mathbf{x}_k$ is the system state, the control variable $\mathbf{u}_k = \dot{\mathbf{q}}_k$, $\mathbf{y}_k$ is the output, $T_s$ is the sampling period, and $\mathbf{C} = \mathbf{I}$.
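For illustration, rolling the prediction model of Eq. (13) forward, together with the IMC-corrected reference of Eq. (12), can be sketched as follows; the Jacobian is held constant over the horizon, as the linearized model assumes.

```python
import numpy as np

def predict_features(s_meas, u_seq, J, Ts):
    """Eq. (13): roll x_{k+1} = x_k + Ts * J * u_k over the horizon and
    return the predicted feature sequence."""
    s = np.asarray(s_meas, dtype=float)
    traj = []
    for u in u_seq:
        s = s + Ts * J @ u                  # one prediction step
        traj.append(s.copy())
    return np.array(traj)

def imc_reference(s_star, s_measured, s_predicted):
    """Eq. (12): shift the desired feature s* by the predictive error
    e_p = s_m - s_p to obtain the reference fed to the MPC."""
    return np.asarray(s_star) - (np.asarray(s_measured) - np.asarray(s_predicted))
```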
IV-B Constraints
In addition, some constraints should be considered. To ensure that the MicroNeuro remains stable and avoids undesirable contact with the brain ventricles, the camera position $\mathbf{p}_k$ should satisfy:

$$\mathbf{p}_{\min} \le \mathbf{p}_k \le \mathbf{p}_{\max}. \tag{14}$$
Correspondingly, considering physical hard constraints on the MicroNeuro, such as the limited capability of the motors, the actuator constraint is defined as follows:

$$\mathbf{u}_{\min} \le \mathbf{u}_k \le \mathbf{u}_{\max}. \tag{15}$$
Moreover, to ensure that the target of interest always remains within the field of view and away from areas with large camera distortion, the output constraint is described as follows:

$$\mathbf{s}_{\min} \le \mathbf{y}_k \le \mathbf{s}_{\max}. \tag{16}$$
IV-C Optimization Objective
At each sampling instant $k$, the current measured system state is set as the initial state of an optimal control problem (OCP) with constraints, and the current control action is determined by solving this problem over the coming sampling periods. Only the first optimal input of the optimal input sequence of length $N_c$ is applied to the system. $N_p$ and $N_c$ are the prediction horizon and control horizon, respectively. The objective is described as follows:

$$\min_{\mathbf{U}_k}\; \sum_{i=1}^{N_p} \left\| \hat{\mathbf{y}}(k+i\,|\,k) - \mathbf{s}_{\mathrm{ref}}(k+i) \right\|_{\mathbf{Q}}^{2}, \tag{17}$$

subject to Eqs. (12), (14), (15) and (16). In Eq. (17), $\mathbf{U}_k = [\mathbf{u}(k\,|\,k), \ldots, \mathbf{u}(k+N_c-1\,|\,k)]$ is the control sequence, $\hat{\mathbf{y}}$ and $\mathbf{s}_{\mathrm{ref}}$ are the output and reference sequences, and $\mathbf{Q}$ is the weight matrix. $\hat{\mathbf{y}}(k+i\,|\,k)$ denotes the predicted value of the output at the $(k+i)$-th sampling instant. Problem (17) can thus be reduced to a quadratic program (QP) with constraints. In our implementation, problem (17) is formulated in CasADi [30] and solved using its built-in optimization solvers. A minimal sketch of one such OCP is given below.
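The following is a minimal CasADi sketch of one OCP instance, assuming the camera-position constraint of Eq. (14) is handled separately; the use of the Opti interface with IPOPT and all variable names are illustrative choices, not the authors' exact implementation.

```python
import casadi as ca
import numpy as np

def solve_visual_mpc(s_meas, s_ref, J, Ts, Np, Nc, Q, u_max, s_min, s_max):
    """Solve one instance of the OCP in Eq. (17) subject to the prediction
    model of Eq. (13) and the constraints of Eqs. (15)-(16); return the
    first optimal actuator velocity (receding-horizon principle)."""
    opti = ca.Opti()
    n_u = J.shape[1]
    U = opti.variable(n_u, Nc)                   # control sequence U_k
    Jd, Qd = ca.DM(J), ca.DM(Q)
    s = ca.DM(s_meas)                            # initial state x_k = s_k
    cost = 0
    for i in range(Np):
        u_i = U[:, min(i, Nc - 1)]               # hold last input beyond Nc
        s = s + Ts * Jd @ u_i                    # prediction model, Eq. (13)
        e = s - ca.DM(s_ref)                     # track IMC reference, Eq. (12)
        cost += e.T @ Qd @ e                     # Q-weighted tracking term
        opti.subject_to(opti.bounded(s_min, s, s_max))        # FoV, Eq. (16)
    opti.subject_to(opti.bounded(-u_max, ca.vec(U), u_max))   # actuators, Eq. (15)
    opti.minimize(cost)
    opti.solver("ipopt", {"print_time": False}, {"print_level": 0})
    sol = opti.solve()
    return np.array(sol.value(U[:, 0])).flatten()
```

Since the model is linear and the cost quadratic with box constraints, a dedicated QP solver would also apply; IPOPT is used here only because it is CasADi's default general-purpose solver.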
V Experiment and Validation
In this section, we implemented four IBVS scenarios to evaluate the effectiveness of the proposed MicroNeuro robot and visual MPC controller. The camera was calibrated [31] with a low mean reprojection error of merely 0.2 pixels, and the images were downsampled from the camera's original resolution. The vision system was designed to track AprilTags [32], which served as detection features and provided high-accuracy localization. The tracking error in the following was quantified as the Euclidean distance between the measured and target pixel coordinates of the features. In the following experiments, the robot started in bending mode 1 with both segments straight. The Jacobian estimate was initialized with Eq. (10) and iterated online with Eq. (11). In the proposed visual MPC controller, the prediction horizon, control horizon, and diagonal weight matrix $\mathbf{Q}$ were fixed across trials. According to [33], the average tumor size in the pineal region is 26 mm. Based on Eq. (5) and the camera parameters, a Maximum Permissible Error (MPE) was defined in task space, corresponding to a pixel error of 30 pixels.
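For reference, feature extraction and the Euclidean tracking error can be sketched as follows, assuming the `pupil_apriltags` Python binding of the detector in [32]; the tag family is an assumption, as the paper does not specify it.

```python
import cv2
import numpy as np
from pupil_apriltags import Detector   # assumed AprilTag binding

detector = Detector(families="tag36h11")  # tag family is an assumption

def feature_and_error(frame_bgr, s_target):
    """Detect the AprilTag center in the endoscope image and return the
    pixel feature s = (u, v) and its Euclidean tracking error."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(gray)
    if not detections:
        return None, None                      # marker out of view
    s = np.array(detections[0].center)         # tag center in pixels
    return s, float(np.linalg.norm(s - np.asarray(s_target)))
```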

V-A Static Target Tracking
In this experiment, the region of interest (ROI) was defined as the center of the image. As shown in Fig. 5(b), the MicroNeuro system was commanded to bring specifically chosen markers, distributed at equal intervals on a printed circle, to the ROI. The experimental analysis involved six trials, and the effectiveness was demonstrated through the measured trajectories of the markers, as depicted in Fig. 5(c). In each instance, the robot successfully returned the marker to the center, with an average terminal error of 21.8 pixels. The average time required to complete the tracking task across the six experiments was 11.25 s. This highlights the robustness and reliability of the proposed method in achieving fast and precise tracking.
V-B Dynamic Target Tracking
This experiment was designed to evaluate the stability of the proposed system when following a target in a dynamic environment. As shown in Fig. 6(a), the robot tracked an AprilTag marker attached to a linear guide, positioned 20 mm from the robot's camera. The guide reciprocated at a constant speed over a fixed stroke. Fig. 7 illustrates that tracking errors decreased significantly once the marker was captured by the camera, with errors reduced to below the MPE within 6 s in Test 1, reaching a lowest error of 2.23 pixels. After initial stable tracking of the target was accomplished, the standard deviations (SD) of the errors for Tests 1 and 2 were 20.85 and 21.81 pixels, respectively, further supporting the system's ability to maintain precise tracking of the target.
V-C Trajectory Following
This experiment was designed to evaluate the robot's ability to follow a set trajectory that guides the marker along a defined path in the captured image. The experimental setup was the same as in Fig. 5(a). Under the guidance of the controller, the robot automatically tracks multiple key target points on different trajectories, thereby approximately tracking curves in the image plane. These discrete key target points were set along letter-shaped paths. The experimental results in Fig. 8 show that the controller has good tracking performance for the key points of each trajectory. The root mean square errors (RMSE) for the four curves were 11.66, 11.62, 11.30 and 11.95 pixels, respectively.
V-D Biopsy in a Brain Phantom
In clinical procedures, the use of endoscopic instruments such as biopsy grippers and electrocoagulation tools, inserted via the working channel, can significantly disrupt the flexible endoscope's view, leading to loss of lesion visibility or inadequate operating angles. This experiment assesses the robustness of the proposed method against such external disturbances, ensuring the endoscope stays focused on the ROI. In the 3D-printed brain shown in Fig. 6(b), we placed a marker in the pineal gland region to mark the area of interest. Initially, the robot was manually operated to roughly approach the target area through a single burr hole, after which the visual MPC controller quickly tracked the target, as shown in Fig. 9. The insertion and operation of the biopsy forceps introduced rapid disturbances to the robot, significantly increasing the tracking error. However, the controller re-centered the tool within ten steps, reducing the error to less than 30 pixels. This result demonstrates the controller's ability to enhance the MicroNeuro robot's resistance to interference, suggesting its potential application in neurosurgery.
VI Conclusion
This paper proposes a novel hybrid dual-segment flexible endoscope for neurosurgery. The dual-segment design allows dexterous maneuverability within the deep brain's complex structure and substantially assists surgeons in performing procedures on the pineal region through a single burr hole, thereby enhancing surgical efficiency. The robot meets the mechanical design requirements derived from clinical needs and provides comprehensive endoscopic functionality. In addition, a visual servoing control system with online estimation of the Jacobian matrix is constructed to improve the motion performance of the robot, and, to handle unknown disturbances, a visual MPC with constraints has been designed. The experiments verified that the MicroNeuro robot is capable of executing precise visual servoing despite external interference and demonstrated great potential for clinical applications in neurosurgery. In future work, we will further consider the nonlinear dynamic model and the impact of contact forces during intracranial surgery to enhance the performance of the visual model predictive controller.
VII Acknowledgements
This work was supported by the Centre of AI and Robotics, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences, sponsored by InnoHK Funding, HKSAR, and partially supported by the Sichuan Science and Technology Program (Grant number: 2023YFH0093). Parts of Fig. 1(a) were created using templates from Servier Medical Art (http://smart.servier.com/), licensed under a Creative Commons Attribution 3.0 Generic License.
References
- [1] M. G. Yaşargil and S. I. Abdulrauf, “Surgery of intraventricular tumors,” Neurosurgery, vol. 62, no. 6, pp. SHC1029–SHC1041, 2008.
- [2] L. Rigante, H. Borghei-Razavi, P. F. Recinos, and F. Roser, “An overview of endoscopy in neurologic surgery,” Cleve Clin J Med, vol. 86, no. 10, pp. 16ME–24ME, 2019.
- [3] S. A. Chowdhry and A. R. Cohen, “Intraventricular neuroendoscopy: complication avoidance and management,” World neurosurgery, vol. 79, no. 2, pp. S15–e1, 2013.
- [4] M. A. I. Amer and H. I. S. Elatrozy, “Combined endoscopic third ventriculostomy and tumor biopsy in the management of pineal region tumors, safety considerations,” Egyptian Journal of Neurosurgery, vol. 33, no. 1, pp. 1–6, 2018.
- [5] W. Zeng, J. Yan, K. Yan, X. Huang, X. Wang, and S. S. Cheng, “Modeling a symmetrically-notched continuum neurosurgical robot with non-constant curvature and superelastic property,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6489–6496, 2021.
- [6] B. Qi, Z. Yu, Z. K. Varnamkhasti, Y. Zhou, and J. Sheng, “Toward a telescopic steerable robotic needle for minimally invasive tissue biopsy,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1989–1996, 2021.
- [7] H.-S. Yoon, H.-J. Cha, J. Chung, and B.-J. Yi, “Compact design of a dual master-slave system for maxillary sinus surgery,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013, pp. 5027–5032.
- [8] M. Chen, Y. Huang, J. Chen, T. Zhou, J. Chen, and H. Liu, “Fully robotized 3d ultrasound image acquisition for artery,” in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 2690–2696.
- [9] Y. Li, W. Y. Ng, W. Li, Y. Huang, H. Zhang, Y. Xian, J. Li, Y. Sun, P. W. Y. Chiu, and Z. Li, “Towards semi-autonomous colon screening using an electromagnetically actuated soft-tethered colonoscope based on visual servo control,” IEEE Transactions on Biomedical Engineering, 2023.
- [10] F. Xu, Y. Zhang, J. Sun, and H. Wang, “Adaptive visual servoing shape control of a soft robot manipulator using bezier curve features,” IEEE/ASME Transactions on Mechatronics, vol. 28, no. 2, pp. 945–955, 2022.
- [11] M. M. Fallah, S. Norouzi-Ghazbi, A. Mehrkish, and F. Janabi-Sharifi, “Depth-based visual predictive control of tendon-driven continuum robots,” in 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 2020, pp. 488–494.
- [12] A. A. Nazari, K. Zareinia, and F. Janabi-Sharifi, “Visual servoing of continuum robots: Methods, challenges, and prospects,” The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 18, no. 3, p. e2384, 2022.
- [13] J. Jiang, Y. Wang, Y. Jiang, H. Xie, H. Tan, and H. Zhang, “A robust visual servoing controller for anthropomorphic manipulators with field-of-view constraints and swivel-angle motion: Overcoming system uncertainty and improving control performance,” IEEE Robotics & Automation Magazine, vol. 29, no. 4, pp. 104–114, 2022.
- [14] A. A. Oliva, E. Aertbeliën, J. De Schutter, P. R. Giordano, and F. Chaumette, “Towards dynamic visual servoing for interaction control and moving targets,” in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 150–156.
- [15] W. A. Azab, K. Nasim, and W. Salaheddin, “An overview of the current surgical options for pineal region tumors,” Surgical neurology international, vol. 5, 2014.
- [16] J. B. Rawlings, D. Q. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design. Nob Hill Publishing, 2017.
- [17] C. Lin, S. Liang, J. Chen, and X. Gao, “A multi-objective optimal torque distribution strategy for four in-wheel-motor drive electric vehicles,” IEEE Access, vol. 7, pp. 64 627–64 640, 2019.
- [18] B. Calli and A. M. Dollar, “Vision-based model predictive control for within-hand precision manipulation with underactuated grippers,” in 2017 IEEE international conference on robotics and automation (ICRA). IEEE, 2017, pp. 2839–2845.
- [19] C. P. Bechlioulis, S. Heshmati-Alamdari, G. C. Karras, and K. J. Kyriakopoulos, “Robust image-based visual servoing with prescribed performance under field of view constraints,” IEEE Transactions on Robotics, vol. 35, no. 4, pp. 1063–1070, 2019.
- [20] Q. Chen, Y. Qin, and G. Li, “Qpso-mpc based tracking algorithm for cable-driven continuum robots,” Frontiers in Neurorobotics, vol. 16, p. 1014163, 2022.
- [21] J. L. Chien, L. T. L. Clarissa, J. Liu, J. Low, and S. Foong, “Kinematic model predictive control for a novel tethered aerial cable-driven continuum robot,” in 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 2021, pp. 1348–1354.
- [22] X. L. Zhu, R. Gao, G. K. C. Wong, H. T. Wong, R. Y. T. Ng, Y. Yu, R. K. M. Wong, and W. S. Poon, “Single burr hole rigid endoscopic third ventriculostomy and endoscopic tumor biopsy: what is the safe displacement range for the foramen of monro?” Asian Journal of Surgery, vol. 36, no. 2, pp. 74–82, 2013.
- [23] innovationhub@HK, "MicroNeuro," 2023. [Online]. Available: https://www.innovationhub.hk/article/microneuro
- [24] R. J. Webster III and B. A. Jones, “Design and kinematic modeling of constant curvature continuum robots: A review,” The International Journal of Robotics Research, vol. 29, no. 13, pp. 1661–1683, 2010.
- [25] S. R. Buss, “Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods,” IEEE Journal of Robotics and Automation, vol. 17, no. 1-19, p. 16, 2004.
- [26] F. Chaumette and S. Hutchinson, “Visual servo control. i. basic approaches,” IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.
- [27] G. Allibert, E. Courtial, and F. Chaumette, “Predictive control for constrained image-based visual servoing,” IEEE Transactions on Robotics, vol. 26, no. 5, pp. 933–939, 2010.
- [28] S. Norouzi-Ghazbi, A. Mehrkish, M. M. Fallah, and F. Janabi-Sharifi, “Constrained visual predictive control of tendon-driven continuum robots,” Robotics and Autonomous Systems, vol. 145, p. 103856, 2021.
- [29] S. Saxena and Y. V. Hote, “Advances in internal model control technique: A review and future prospects,” IETE Technical Review, vol. 29, no. 6, pp. 461–472, 2012.
- [30] J. A. E. Andersson, J. Gillis, G. Horn, J. B. Rawlings, and M. Diehl, “CasADi – A software framework for nonlinear optimization and optimal control,” Mathematical Programming Computation, vol. 11, no. 1, pp. 1–36, 2019.
- [31] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
- [32] E. Olson, “AprilTag: A robust and flexible visual fiducial system,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, May 2011, pp. 3400–3407.
- [33] H. G. Vuong, T. N. Ngo, and I. F. Dunn, “Incidence, prognostic factors, and survival trend in pineal gland tumors: a population-based analysis,” Frontiers in Oncology, vol. 11, p. 780173, 2021.