A Human Motion Compensation Framework for
a Supernumerary Robotic Arm
Abstract
Supernumerary robotic arms (SRAs) can serve as a third arm to complement and augment the abilities of human users. A user carrying an SRA forms a connected kinodynamic chain, which can be viewed as a special class of floating-base robot system. However, unlike general floating-base robot systems, the bases of SRAs are human users, who have their own subjective behaviors and motions. This implies that human body motions can unintentionally affect the SRA's end-effector movements. To address this challenge, we propose a framework to compensate for the human whole-body motions that interfere with the SRA's end-effector trajectories. The SRA system in this study consists of a 6-degree-of-freedom lightweight arm and a wearable interface. The wearable interface allows users to adjust the installation position of the SRA to fit different body shapes. An inertial measurement unit (IMU)-based sensory interface provides body-skeleton motion feedback of the human user in real time. By simplifying the floating-base kinematics model, we design an effective motion planner by reconstructing the Jacobian matrix of the SRA. Under the proposed framework, the performance of the reconstructed Jacobian method is assessed by comparing its results with those of the classical nullspace-based method through two sets of experiments.
I INTRODUCTION
Supernumerary robotic limbs (SRLs) provide additional robotic limbs to complement and augment the physical dexterity and capabilities of their human users. Over the past decades, various SRLs (e.g., legs [1], arms [2], and fingers [3]) have been developed to assist humans in supporting, grasping, and manipulation tasks. Furthermore, SRLs can reduce risk by minimizing human users' exposure to potential hazards. For instance, they can be used to handle dangerous materials or carry out tasks in sub-areas that may be inaccessible or risky for human operators. Developing intelligent SRLs has therefore attracted considerable attention in the robotics field.
As one of the most representative SRLs, supernumerary robotic arms (SRAs) can seamlessly cooperate with users regardless of work location. The pioneering work [4, 5] developed two-arm SRA prototypes that can share the static workload acting on human arms in an overhead assembly task. In human-robot collaboration, the kinematic chains of SRAs are not independent of the human limbs, so safe human-robot interfaces and collaborative control frameworks are important aspects that should be taken into consideration. Veronneau et al. [6] designed a 3-degree-of-freedom (DoF) SRA with a teleoperation control system for agricultural harvest tasks. Saraiji et al. [7] proposed two anthropomorphic SRAs and developed a teleoperation interface mapping the users' feet and leg motion to control the robotic arms. Penaloza et al. [8] used a non-invasive brain-machine interface to detect the user's intention and control the SRA. Khoramshahi et al. [9] proposed a collision-avoidance algorithm for a 7-DoF SRA using a quadratic programming approach. In our previous work [10], a nullspace-based controller was proposed to realize collision avoidance by maximizing the distance between the human's head and the robot.
The human-SRA system can be considered a special class of floating-base robot system, and its motion depends not only on the system itself but also on the movement of the human user. However, unlike general floating-base robot systems, the movement of humans is not completely controllable or predictable, which poses a unique challenge for the integrated human-SRA system. For example, while carrying out delicate and/or constrained manipulation tasks or inspecting an external environment, the spontaneous or unintentional movements of human users will disturb the operational accuracy of the SRA. Therefore, the SRA system should be able to compensate for the unpredictable or unintentional motions induced by the users. The concept of human motion compensation is shown in Fig. 1. This also means that the SRA needs to estimate/monitor the human body state to reduce the induced motion. In this context, we integrate an inertial measurement unit (IMU)-based sensory interface into our human-SRA system and develop a motion compensation control framework to address the above issues. To the best of our knowledge, this paper is the first work on motion compensation control between human users and SRAs. The main contributions of this work can be summarized as follows:
• A physical and kinematic integration of the human-robot system is developed by combining our SRA and an IMU-based sensory interface.
• A motion compensation control framework is proposed to compensate for the 3-dimensional (3D) translational motion induced by human users.
• Based on a simplified floating-base kinematics model, an effective motion planner is designed by reconstructing the Jacobian matrix.
We focus on the translational axes, even though the theory is general and also covers the rotational axes. However, since most SRAs have a limited number of DoFs, isolating all translational and rotational axes would not be feasible. The rest of the paper is organized as follows. In Section II, the human-robot integration system consisting of our SRA and the IMU-based sensory interface is introduced. The floating-base kinematics modeling and analysis are described in Section III. The motion compensation control framework is described in Section IV. The experiments and results are illustrated in Section V. Finally, Section VI concludes the paper.
II HUMAN-ROBOT INTEGRATION SYSTEM
In this section, we introduce the design philosophy of our human-SRA system, where the human-robot integration is twofold: the wearable interface keeps the user and the SRA physically integrated, while the IMU-based sensory interface works as a bridge between the user and the SRA, guaranteeing the kinematic chain integration.
II-A Supernumerary Robotic Arm
Here we briefly introduce the general layout of our SRA; a detailed description is provided in our previous work [10]. The 6-DoF robotic arm is mounted on the user's shoulder, which aims to guarantee the maximum collision-free workspace between the SRA and the human arm. The unique advantage of our SRA is its body adaptability. We design an adjustable mechanical interface that allows users to manually adjust the installation position of the manipulator to fit their body dimensions and to switch between the left and right shoulders. The soft shoulder support keeps the SRA stable and comfortable on the user's body.
II-B IMU-Based Sensory Interface
To achieve the kinematic chain integration of the SRA and the human, the IMU-based motion capture system (Xsens MVN) is chosen as the sensory interface due to its portability. This motion capture system combines a biomechanical model of the human user with an IMU sensor network to estimate the 3D kinematic data of the user. First, the user's body dimensions are measured, including body height, foot size, arm span, ankle height, hip height, hip width, knee height, shoulder width, and shoe sole height. Then, the user performs a calibration process wearing a flexible suit and body straps that attach 17 wireless IMU tracker modules, and the IMU data are transmitted to the master (the transmission station and the USB dongle). Through our Xsens-to-ROS bridge [11], the user's motion is detected in real time, which serves as essential feedback for controlling the SRA.
III KINEMATICS MODELING AND ANALYSIS
As shown in Fig. 2, the integration system is a floating-base system (with the human user as its base), where $\Sigma_I$ is the inertial frame at the initial calibration position, $\Sigma_F$ is the right foot frame at the local working position, $\Sigma_B$ is the base frame of the manipulator, $\Sigma_E$ is the end-effector (EE) frame of the manipulator, and $\Sigma_T$ is the human torso frame. As stated before, we focus on 3D translational motion compensation control. Two assumed preconditions are given as follows:
(P1) The user keeps the orientation of the torso frame stable when moving near the local working position.
(P2) With the help of the soft shoulder support, $\Sigma_T$ and $\Sigma_B$ keep the same orientation, and their relative pose remains unchanged when the user performs translational motions.
III-A Floating-Base Kinematics
The velocity-level kinematics of the floating-base system [12] can be established as follows:

$$\dot{x}_e = J_b \dot{x}_b + J_m \dot{q} \qquad (1)$$

where $\dot{x}_e = [\dot{p}_e^{T}\ \omega_e^{T}]^{T}$ is the generalized velocity of the EE, $\dot{x}_b = [\dot{p}_b^{T}\ \omega_b^{T}]^{T}$ is the generalized velocity of the human base, $q$ is the joint position of the manipulator, and $J_b$ and $J_m$ are Jacobian matrices of the base and the manipulator defined in $\Sigma_I$. $J_b$ is defined as
$$J_b = \begin{bmatrix} I_3 & -S(p_{be}) \\ 0_{3\times 3} & I_3 \end{bmatrix} \qquad (2)$$

where $I_3$ is the identity matrix. The symbol $S(\cdot)$ denotes a skew-symmetric matrix; namely, for an arbitrary vector $a = [a_x\ a_y\ a_z]^{T}$, $S(a)$ is defined as
$$S(a) = \begin{bmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{bmatrix} \qquad (3)$$
and $p_{be}$ is the position vector from $\Sigma_B$ to $\Sigma_E$ in $\Sigma_I$:

$$p_{be} = R_b\, {}^{b}p_{be} \qquad (4)$$

where $R_b$ is the orientation of $\Sigma_B$ w.r.t. $\Sigma_I$, and ${}^{b}p_{be}$ is the position vector from $\Sigma_B$ to $\Sigma_E$ in $\Sigma_B$. Meanwhile, we can deduce
$$J_m = \begin{bmatrix} R_b & 0_{3\times 3} \\ 0_{3\times 3} & R_b \end{bmatrix} {}^{b}J_m \qquad (5)$$

where ${}^{b}J_m$ is the Jacobian matrix of the manipulator defined in $\Sigma_B$. According to the assumed precondition (P1), we know $\omega_b = 0$ and $\dot{R}_b = 0$ when the human user is performing the translational motion at the local working position. Therefore, we can obtain a simplified kinematics model:
$$\dot{p}_e = \dot{p}_b + J_v \dot{q}, \qquad \omega_e = J_\omega \dot{q} \qquad (6)$$

where $J_v$ and $J_\omega$ are the translational and rotational Jacobian matrices of $J_m$, respectively.
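To make the model concrete, the following NumPy sketch (with illustrative names, not the authors' implementation) builds $S(a)$ and $J_b$ as in Eqs. (2)-(3) and evaluates the simplified kinematics of Eq. (6):

```python
import numpy as np

def skew(a):
    """S(a) of Eq. (3): skew(a) @ b equals the cross product a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def base_jacobian(p_be):
    """Base Jacobian J_b of Eq. (2): identity blocks and -S(p_be)."""
    J_b = np.eye(6)
    J_b[:3, 3:] = -skew(p_be)
    return J_b

def ee_twist(p_dot_b, J_v, J_w, q_dot):
    """Simplified kinematics of Eq. (6) under (P1): the base translation
    adds directly to the EE linear velocity, while the angular velocity
    depends on the manipulator joints only."""
    return p_dot_b + J_v @ q_dot, J_w @ q_dot
```

Under (P1), the base contributes no angular velocity, which is why the coupling block $-S(p_{be})$ of $J_b$ drops out of Eq. (6).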
III-B Compensation Problem Analysis
Our basic objective is to keep the EE stable while the user is moving. Due to the translational motion of the human base ($\dot{p}_b \neq 0$), we compensate for it by planning $\dot{q}$ so that $p_e$ keeps still (or follows a desired trajectory). As our manipulator does not have redundant DoFs, it is hard to keep the 6D pose of the EE unchanged at every point of the workspace. Since this problem cannot be solved exactly, we make a compromise by prioritizing subtasks: the EE position compensation is assigned as the primary task and the EE orientation compensation as the secondary task. According to (6), the primary task of the compensation problem is formulated as

$$J_v \dot{q} = \dot{p}_e^{d} - \dot{p}_b \qquad (7)$$
Moreover, we assume $\dot{q}^{*}$ is the solution of (7), and the secondary task is to meet

$$\min_{\dot{q}}\ \left\| J_\omega \dot{q} - \omega_e^{d} \right\| \qquad \text{(8a)}$$
$$\text{s.t.}\quad J_v \dot{q} = J_v \dot{q}^{*} \qquad \text{(8b)}$$

which means the solution of (8) lies within the set of solutions optimizing the primary task [13]. Meanwhile, the planned trajectories should meet the physical constraints ($q^{-} \le q \le q^{+}$ and $\dot{q}^{-} \le \dot{q} \le \dot{q}^{+}$), where $q^{-}$ and $q^{+}$ are the lower and upper boundaries of the joint position vector, and $\dot{q}^{-}$ and $\dot{q}^{+}$ are the lower and upper boundaries of the joint velocity vector.
III-C Inverse Kinematics Solutions
In this subsection, two approaches are used to solve the above problem: 1) the nullspace-based method (NBM) and 2) the reconstructed Jacobian method (RJM).
III-C1 NBM
It is one of the most classical approaches to the task-priority planning problem. This method projects the secondary task into the null space of the Jacobian of the primary task and generates an internal motion without affecting the primary task. In our case, the NBM is formulated as

$$\dot{q} = J_v^{\dagger}\left(\dot{p}_e^{d} - \dot{p}_b\right) + N_v z \qquad (9)$$
$$N_v = I_6 - J_v^{\dagger} J_v \qquad (10)$$

where $J_v N_v z = 0$ holds for any $z$, $J_v^{\dagger}$ is the pseudo-inverse of $J_v$, namely the Moore-Penrose pseudo-inverse, $N_v$ is the nullspace projection matrix of $J_v$, and $z$ is an arbitrary vector which is the key to incorporating the secondary task. Here we use the quaternion representation to describe the orientation, namely $Q = \{\eta, \epsilon\}$, where $\eta$ is the scalar part and $\epsilon$ is the vector part. Let $Q_d$ and $Q_e$ represent the desired and current orientation; the orientation error is expressed as
$$e_Q = Q_d * Q_e^{-1} \qquad (11)$$

where $Q_e^{-1} = \{\eta_e, -\epsilon_e\}$ and $*$ is the quaternion product symbol [14]. We use the vector part $\tilde{\epsilon}$ of the quaternion error [15] to design $z$, as

$$z = J_\omega^{\dagger} K_o \tilde{\epsilon} \qquad (12)$$

where $K_o$ is the diagonal gain matrix for the EE orientation.
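One NBM planning step can be sketched in NumPy as follows, assuming `J_v` and `J_w` are the 3×6 translational and rotational Jacobians and `eps_err` is the vector part of the quaternion error; names are illustrative:

```python
import numpy as np

def nbm_step(p_dot_b, J_v, J_w, eps_err, K_o, p_dot_e_des=None):
    """One NBM step, Eqs. (9)-(12): solve the position task with the
    pseudo-inverse, then project the orientation correction z into the
    nullspace of J_v so it cannot disturb the primary task."""
    if p_dot_e_des is None:
        p_dot_e_des = np.zeros(3)                      # keep the EE still
    Jv_pinv = np.linalg.pinv(J_v)
    N_v = np.eye(J_v.shape[1]) - Jv_pinv @ J_v         # Eq. (10)
    z = np.linalg.pinv(J_w) @ (K_o @ eps_err)          # Eq. (12)
    return Jv_pinv @ (p_dot_e_des - p_dot_b) + N_v @ z # Eq. (9)
```

Because $J_v N_v = 0$, the returned $\dot{q}$ satisfies the primary task exactly regardless of the orientation correction.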
III-C2 RJM
Considering that an arbitrary 6D EE pose is not always reachable, we propose a simple method that reconstructs the Jacobian matrix. As we know, $N_v$ is a projection matrix. Our idea is to release some DoFs of the EE orientation and reconstruct a new Jacobian matrix. According to (6), we can obtain

$$\omega_e^{s} = J_\omega^{s} \dot{q} \qquad (13)$$

where $J_\omega^{s}$ is the sub-Jacobian matrix for orientation, i.e., the rows of $J_\omega$ associated with the retained orientation DoFs. The new Jacobian matrix is reconstructed as

$$J_r = \begin{bmatrix} J_v \\ J_\omega^{s} \end{bmatrix} \qquad (14)$$

where $\omega_e^{s}$ collects the retained angular-velocity components, and our objective is keeping $\dot{p}_e = 0$, $\omega_e^{s} = 0$, and the planned trajectories within the physical constraints. According to the closed-loop inverse kinematics algorithm [14], we can deduce

$$\dot{q} = J_r^{\dagger} \begin{bmatrix} -\dot{p}_b - K_p \Delta p_e \\ K_o^{s} \tilde{\epsilon}^{s} \end{bmatrix} \qquad (15)$$

where $K_p$ is the diagonal gain matrix for the EE position and $\Delta p_e$ is the position variation of the EE, used as the error signal to be nulled.
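The RJM step can be sketched as follows, under the assumption that the retained orientation rows and the corresponding quaternion-error components have been selected beforehand (the exact selection is design-dependent); here `delta_p_e` denotes the desired-minus-current EE position error:

```python
import numpy as np

def rjm_step(p_dot_b, delta_p_e, eps_sel, J_v, J_w_sel, K_p, K_o_sel):
    """One RJM step, Eqs. (14)-(15) (sketch): stack J_v with the retained
    orientation rows J_w_sel and solve the closed-loop IK once, instead of
    handling the orientation in the nullspace."""
    J_r = np.vstack((J_v, J_w_sel))                    # Eq. (14)
    task = np.concatenate((K_p @ delta_p_e - p_dot_b,  # position feedback
                           K_o_sel @ eps_sel))         # retained orientation
    return np.linalg.pinv(J_r) @ task                  # Eq. (15)
```

With three position rows plus fewer than three orientation rows, $J_r$ has full row rank away from singularities, so the stacked task is solved exactly with a single pseudo-inverse.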
IV Motion Compensation Control Framework
In this section, we introduce our compensation framework. As we aim to plan feasible and safe joint trajectories for real-time control, singularity handling and joint limits are considered in the proposed framework.
IV-A Compensation Control Framework
The human motion compensation framework is given in Fig. 3, which includes five parts: (a) the physical integration, (b) the kinematic chain integration, (c) the coordinate frame transformation, (d) the motion planning, and (e) the motion control. First, the human user wears the SRA with the IMU-based sensory interface to establish the human-robot integration system. After the calibration, the inertial frame $\Sigma_I$ lies at the right foot of the user at the initial calibration position (Fig. 2), and all the kinematic information of the human user is defined w.r.t. $\Sigma_I$. As the user usually works at a local working position, the compensation motion control should be performed at that local position. Therefore, we set a local home frame, and the human motion is defined w.r.t. this local home frame.
IV-B Coordinate Frame Transformation
As shown in Fig. 3b, we aim to achieve the motion compensation at a local working position, so the human torso frame at time $t_0$ (denoted $\Sigma_{T_0}$) is selected as the local home frame. Therefore, the human translational motion should be defined w.r.t. $\Sigma_{T_0}$. $\Sigma_{T_0}$ is set as follows: first, the human user reaches the local working position and stands firmly; then, we record the local torso frame with its current position $p_{t_0}$ and its current orientation $R_{t_0}$ w.r.t. $\Sigma_I$. According to the assumed precondition (P1), we can deduce the position variation of the local home frame from $t_0$ to $t$ w.r.t. $\Sigma_I$ as
$$\Delta p_t = p_t - p_{t_0} \qquad (16)$$

where $p_t$ is the position of $\Sigma_T$ at $t$ w.r.t. $\Sigma_I$. Similarly, we can obtain the local position variation of the base frame from $t_0$ to $t$ w.r.t. $\Sigma_I$ as
$$\Delta p_b = \Delta p_t + \left(p_{tb,t} - p_{tb,t_0}\right) \qquad (17)$$

where $p_{tb,t_0}$ and $p_{tb,t}$ are the relative position vectors from $\Sigma_T$ to $\Sigma_B$ at $t_0$ and $t$ w.r.t. $\Sigma_I$, respectively. According to the assumed precondition (P2), we know $p_{tb,t} = p_{tb,t_0}$, so we can deduce

$$\Delta p_b = \Delta p_t \qquad (18)$$
$$v_b = v_t \qquad (19)$$

where $v_t$ and $v_b$ are the velocity vectors of $\Sigma_T$ and $\Sigma_B$ at $t$ w.r.t. $\Sigma_I$, respectively. Moreover, we can deduce the local position variation of $\Sigma_E$, namely $\Delta p_e$ at $t$ w.r.t. $\Sigma_{T_0}$, which will be used to control our SRA:
$$\Delta p_e = \Delta p_b + \left(p_{be,t} - p_{be,t_0}\right) \qquad (20)$$

where $p_{be,t}$ is the relative position from $\Sigma_B$ to $\Sigma_E$ at $t$ w.r.t. $\Sigma_I$ and expressed as

$$p_{be,t} = R_t\, R_{tb}\, {}^{b}p_{be,t} \qquad (21)$$

where $R_t$, $R_{tb}$, and $R_b = R_t R_{tb}$ are the rotation matrices of $\Sigma_T$ w.r.t. $\Sigma_I$, $\Sigma_B$ w.r.t. $\Sigma_T$, and $\Sigma_B$ w.r.t. $\Sigma_I$, respectively. According to the assumed preconditions (P1) and (P2), each of them is constant, and (21) can be reduced into

$$p_{be,t} = R_{b,t_0}\, {}^{b}p_{be,t} \qquad (22)$$
For the EE orientation stabilization, controlling it w.r.t. $\Sigma_{T_0}$ is the same as controlling it w.r.t. $\Sigma_B$, so we use the following quaternion error to control our SRA:

$$\tilde{e}_Q = {}^{b}Q_d * \left({}^{b}Q_e\right)^{-1} \qquad (23)$$
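The quaternion error used in Eqs. (11) and (23) can be computed with a small helper; the `[w, x, y, z]` component order below is an assumption, as the paper does not fix a convention:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.concatenate(([w], v))

def quat_err_vec(q_des, q_cur):
    """Vector part of the error Q_d * Q_c^{-1} (unit quaternions), used as
    the orientation feedback term in Eqs. (12) and (15)."""
    q_cur_inv = np.concatenate(([q_cur[0]], -q_cur[1:]))
    return quat_mul(q_des, q_cur_inv)[1:]
```

The vector part vanishes exactly when the current orientation matches the desired one, which makes it a convenient proportional error signal.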
IV-C Singularity Handling
When the manipulator is close to a singular configuration, the inverse kinematics becomes unstable and the joint increments become large. The singular value filtering (SVF) method [16] is adopted in our control framework, which guarantees lower-bounded singular values. According to the singular value decomposition, we can deduce

$$J = \sum_{i=1}^{r} \sigma_i\, u_i v_i^{T} \qquad (24)$$

where $v_i$ and $u_i$ are the input and output singular vectors, i.e., the $i$th columns of $V$ and $U$, respectively, $\sigma_i$ is the singular value meeting $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r \ge 0$, with $r = \min(m, n)$ for $J \in \mathbb{R}^{m \times n}$. The pseudo-inverse of $J$ is $J^{\dagger} = \sum_{i=1}^{r} \sigma_i^{-1} v_i u_i^{T}$. The general damped least-squares pseudo-inverse is expressed as
$$J_{\lambda}^{\dagger} = \sum_{i=1}^{r} \frac{\sigma_i}{\sigma_i^{2} + \lambda^{2}}\, v_i u_i^{T} \qquad (25)$$

where $\lambda$ is the damping factor, but a constant small value of $\lambda$ cannot always guarantee good performance over the workspace. Using the SVF method, the filtered Jacobian matrix is defined as
$$J_F = \sum_{i=1}^{r} h_{\sigma_0,\nu}(\sigma_i)\, u_i v_i^{T} \qquad (26)$$
$$h_{\sigma_0,\nu}(\sigma) = \frac{\sigma^{3} + \nu \sigma^{2} + 2\sigma + 2\sigma_0}{\sigma^{2} + \nu \sigma + 2} \qquad (27)$$

where $h_{\sigma_0,\nu}(\cdot)$ is the filtering function, $\sigma_0$ is the minimum value that we want to impose on the singular values of $J_F$, $\nu$ is a shape factor, and the corresponding pseudo-inverse of $J_F$ is

$$J_F^{\dagger} = \sum_{i=1}^{r} \frac{1}{h_{\sigma_0,\nu}(\sigma_i)}\, v_i u_i^{T} \qquad (28)$$
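The SVF pseudo-inverse of Eqs. (26)-(28) takes only a few lines of NumPy; the `sigma0` and `nu` values below are illustrative defaults, not the paper's tuning:

```python
import numpy as np

def svf_pinv(J, sigma0=0.05, nu=10.0):
    """Pseudo-inverse with singular value filtering, Eqs. (26)-(28):
    each singular value s is replaced by
    h(s) = (s^3 + nu*s^2 + 2s + 2*sigma0) / (s^2 + nu*s + 2),
    so h(0) = sigma0 bounds the inverse near singularities while
    h(s) ~ s leaves well-conditioned directions untouched."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    h = (s**3 + nu * s**2 + 2.0 * s + 2.0 * sigma0) / (s**2 + nu * s + 2.0)
    return Vt.T @ np.diag(1.0 / h) @ U.T
```

Unlike a fixed damping factor in Eq. (25), the filter only distorts directions whose singular values approach zero.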
IV-D Joint Clamping
To respect the joint velocity and position limits, we use the joint clamping algorithm [17] to constrain the joint increments:

$$\dot{q}_i^{c} = \operatorname{sgn}(\dot{q}_i)\, \min\left(|\dot{q}_i|,\ \dot{q}_i^{\max}\right) \qquad (29)$$

where $\dot{q}_i^{\max}$ is the $i$th joint velocity boundary. For the joint position at step $k+1$, we know $q_{k+1} = q_k + H \dot{q}^{c} \Delta t$, where $\Delta t$ is the sampling period and $H$ is a diagonal activation matrix,

$$H_{ii} = \begin{cases} 0, & \left(q_i \le q_i^{\min}\ \text{and}\ \dot{q}_i^{c} < 0\right)\ \text{or}\ \left(q_i \ge q_i^{\max}\ \text{and}\ \dot{q}_i^{c} > 0\right) \\ 1, & \text{otherwise} \end{cases} \qquad (30)$$

where $q_i^{\min}$ and $q_i^{\max}$ are the $i$th joint position minimum and maximum boundaries, respectively. Through the joint clamping algorithm, we can guarantee that the planned joint trajectories do not violate the joint limits.
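One clamping step can be sketched as element-wise velocity saturation followed by the diagonal activation, as in Eqs. (29)-(30); variable names are illustrative:

```python
import numpy as np

def clamp_step(q, q_dot, q_min, q_max, qd_max, dt):
    """Joint clamping, Eqs. (29)-(30): saturate the planned velocities,
    then zero out any joint that would be driven past a position limit."""
    qd = np.clip(q_dot, -qd_max, qd_max)        # Eq. (29)
    h = np.ones_like(q)                         # diagonal of H, Eq. (30)
    h[(q <= q_min) & (qd < 0)] = 0.0            # deactivate at lower limit
    h[(q >= q_max) & (qd > 0)] = 0.0            # deactivate at upper limit
    return q + h * qd * dt, qd
```

Deactivating (rather than re-projecting) the offending joints keeps the update cheap enough for the real-time planning loop.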
V EXPERIMENTAL VERIFICATIONS
In this section, we verify our control framework through two sets of experiments: the NBM set and the RJM set. Two types of user motion (1D motion and 3D motion) are investigated in the experiments. The optical tracking motion capture system (Fig. 4a) is used to obtain the EE position coordinates as absolute coordinates. Although the origin of the optical tracking frame is not the same as the Xsens origin frame $\Sigma_I$, its results can be used to evaluate the control performance in Cartesian space. A video of the experiments is available on our YouTube channel (https://youtu.be/hBFH2gtw55I).
V-A Experiments and Results
We conduct three 1D motion experiments, namely, the up-down (U-D) motion (Fig. 4), the left-right (L-R) motion (Fig. 5), and the forward-backward (F-B) motion (Fig. 6). Although the user tries to perform a pure 1D motion, the human body movements are inevitably coupled at times. Note that we focus on the z-axis stabilization in the U-D motion, the y-axis stabilization in the L-R motion, and the x-axis stabilization in the F-B motion. As shown in Fig. 7, the human user moves randomly in the 3D motion experiments, and we focus on the EE stabilization in 3D space. Accordingly, the experimental results of the NBM and RJM sets are given in Figs. 8 and 9, respectively. Note that the EE motion is defined w.r.t. $\Sigma_{T_0}$ and the absolute EE position is defined w.r.t. the origin of the optical tracking system. Moreover, we can see clearly that the z-axis and x-axis are coupled in the U-D motion and that the x-axis and y-axis are coupled in the F-B motion. The planned joint velocity trajectories (Figs. 8b and 9b) fall within the joint velocity limits (rad/s). The EE motion (Figs. 8d and 9d) shows the opposite trend w.r.t. the human motion (Figs. 8a and 9a). Figs. 8e and 9e show that the EE returns to its initial orientation after the compensation. Figs. 8f and 9f show the compensation performance in Cartesian space.
V-B Evaluation and Analysis
Two indices are used to evaluate the 1D motion compensation performance:

$$\bar{p}_j = \frac{1}{N}\sum_{k=1}^{N} p_j(k) \qquad (31)$$
$$\sigma_j = \sqrt{\frac{1}{N}\sum_{k=1}^{N} \left(p_j(k) - \bar{p}_j\right)^{2}} \qquad (32)$$

where $p_j$ represents the absolute EE position obtained from the optical tracking system, $\bar{p}_j$ represents the mean value of $p_j$, $j \in \{x, y, z\}$ represents the x, y, and z coordinates, and $\sigma_j$ represents the standard deviation of $p_j$. Meanwhile, we define $\bar{e}_j = |\bar{p}_j - \bar{p}_j^{0}|$ to represent the mean EE position error, where $\bar{p}_j^{0}$ is the initial mean value of $p_j$.
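The per-axis indices of Eqs. (31)-(32) and the mean-error index can be computed directly from the logged EE positions; the array shapes below are assumptions about the logging format:

```python
import numpy as np

def stability_indices(p, p0_mean):
    """Evaluation indices of Eqs. (31)-(32).
    p: (N, 3) array of absolute EE positions from the optical tracker;
    p0_mean: (3,) initial per-axis mean position.
    Returns the per-axis mean, standard deviation, and mean error."""
    p_mean = p.mean(axis=0)             # Eq. (31)
    p_std = p.std(axis=0)               # Eq. (32), population std
    e_mean = np.abs(p_mean - p0_mean)   # mean EE position error
    return p_mean, p_std, e_mean
```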
In the NBM set, the initial positions are , , and . For the U-D motion, , , and . For the L-R motion, , , and . For the F-B motion, , , and . By comparison, in the RJM set, the initial positions are , , and . For the U-D motion, , , and . For the L-R motion, , , and . For the F-B motion, , , and . Furthermore, we propose a distance-related index to evaluate the 3D motion compensation performance in Cartesian space.
$$\bar{d} = \frac{1}{N}\sum_{k=1}^{N} \left\| p(k) - \bar{p} \right\| \qquad (33)$$

where $p = [p_x\ p_y\ p_z]^{T}$ and $\bar{p} = [\bar{p}_x\ \bar{p}_y\ \bar{p}_z]^{T}$. In the 3D random motion, $\bar{d}$ of the NBM set is and $\bar{d}$ of the RJM set is . Meanwhile, the evaluation results ($\bar{e}_j$, $\sigma_j$, and $\bar{d}$) of the two experiments are illustrated in Fig. 10.
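The distance-related index of Eq. (33) reduces to a single expression; here the reference point is taken as the sample centroid, consistent with Eq. (31):

```python
import numpy as np

def distance_index(p):
    """3D compensation index of Eq. (33): mean Euclidean distance of the
    EE position samples (N, 3) from their per-axis mean."""
    return np.linalg.norm(p - p.mean(axis=0), axis=1).mean()
```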
V-C Discussion
From the experimental results, we can see that the proposed control framework effectively compensates for the human translational motion. However, the compensation motion sometimes lags behind the user motion due to the joint clamping (for safety) and the sampling frequency of the IMU-based sensory interface (60 Hz). For example, when the compensation of the previous step has not yet finished, the compensation motion in Fig. 6(d)-(g) still moves backward. From the evaluation results, we can see that both methods achieve the same order of magnitude in $\bar{e}_j$ and $\sigma_j$ (RJM is slightly better than NBM, yet simpler).
VI CONCLUSIONS AND FUTURE WORK
The motion compensation problem of the human-robot integration system was investigated in this work. As the system is a special class of floating-base robot, a simplified floating-base kinematics model was deduced to analyze its kinematics. As our SRA only has 6 DoFs, which is insufficient to keep the 6D pose of the EE stable, we reformulated the compensation problem by prioritizing the EE position compensation as the primary task and the EE orientation compensation as the secondary task. By virtue of our sensory interfaces, we proposed a motion compensation control framework and the RJM to solve the task-priority planning problem. Under the proposed framework, the EE position was stabilized within a small range of mean error. Compared with the NBM, the proposed RJM achieves competitive control performance with a simpler algorithm structure.
Currently, we do not compensate for the rotational motion of users. In the future, the rotational motion and the users’ intention detection will be studied in our compensation framework.
References
- [1] F. Parietti, K. C. Chan, B. Hunter, and H. H. Asada, “Design and control of supernumerary robotic limbs for balance augmentation,” in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 5010–5017.
- [2] M. Muehlhaus, M. Koelle, A. Saberpour, and J. Steimle, “I need a third arm! eliciting body-based interactions with a wearable robotic arm,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1–15.
- [3] I. Hussain, G. Spagnoletti, G. Salvietti, and D. Prattichizzo, “Toward wearable supernumerary robotic fingers to compensate missing grasping abilities in hemiparetic upper limb,” The International Journal of Robotics Research, vol. 36, no. 13-14, pp. 1414–1436, 2017.
- [4] B. Llorens Bonilla, F. Parietti, and H. H. Asada, “Demonstration-based control of supernumerary robotic limbs,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 3936–3942.
- [5] F. Parietti and H. H. Asada, “Supernumerary robotic limbs for aircraft fuselage assembly: body stabilization and guidance by bracing,” in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 1176–1183.
- [6] C. Véronneau, J. Denis, L.-P. Lebel, M. Denninger, V. Blanchard, A. Girard, and J.-S. Plante, “Multifunctional remotely actuated 3-dof supernumerary robotic arm based on magnetorheological clutches and hydrostatic transmission lines,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2546–2553, 2020.
- [7] M. Y. Saraiji, T. Sasaki, K. Kunze, K. Minamizawa, and M. Inami, “Metaarms: Body remapping using feet-controlled artificial arms,” in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, 2018, pp. 65–74.
- [8] C. Penaloza, D. Hernandez-Carmona, and S. Nishio, “Towards intelligent brain-controlled body augmentation robotic limbs,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2018, pp. 1011–1015.
- [9] M. Khoramshahi, A. Poignant, G. Morel, and N. Jarrassé, “A practical control approach for safe collaborative supernumerary robotic arms,” in 2023 IEEE Conference on Advanced Robotics and its Social Impact, 2023.
- [10] Y. Du, X. Zhang, M. Leonori, P. Balatti, J. Jin, Q. Wang, and A. Ajoudani, “Bi-directional human-robot handover using a novel supernumerary robotic system,” in 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO). IEEE, 2023, pp. 153–158.
- [11] M. Leonori, M. Lorenzini, L. Fortini, J. M. Gandarias, and A. Ajoudani, “The bridge between xsens motion-capture and robot operating system (ros): Enabling robots with online 3d human motion tracking,” arXiv preprint arXiv:2306.17738, 2023.
- [12] X. Zhang, J. Liu, Y. Tong, Y. Liu, and Z. Ju, “Attitude decoupling control of semifloating space robots using time-delay estimation and supertwisting control,” IEEE Transactions on Aerospace and Electronic Systems, vol. 57, no. 6, pp. 4280–4295, 2021.
- [13] O. Kanoun, “Real-time prioritized kinematic control under inequality constraints for redundant manipulators,” in Robotics: Science and Systems, vol. 7, 2012, p. 145.
- [14] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control, ser. Advanced Textbooks in Control and Signal Processing. Springer London, 2010.
- [15] J. Yuan, “Closed-loop manipulator control using quaternion feedback,” IEEE Journal on Robotics and Automation, vol. 4, no. 4, pp. 434–440, 1988.
- [16] A. Colomé and C. Torras, “Closed-loop inverse kinematics for redundant robots: Comparative assessment and two enhancements,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 2, pp. 944–955, 2014.
- [17] D. Raunhardt and R. Boulic, “Progressive clamping,” in Proceedings 2007 IEEE International Conference on Robotics and Automation. IEEE, 2007, pp. 4414–4419.