
A Human Motion Compensation Framework for
a Supernumerary Robotic Arm

Xin Zhang1,2,3, Pietro Balatti3, Mattia Leonori3, Arash Ajoudani3

*This work was supported in part by the European Union's Horizon 2020 research and innovation programme SOPHIA (Grant No. 871237), in part by the ERC-StG Ergo-Lean (Grant No. 850932), in part by the National Natural Science Foundation of China (Grant No. 62103407), and in part by the China Scholarship Council.
1State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016, Shenyang, China. Email: [email protected]
2Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, 110169, Shenyang, China.
3Human-Robot Interfaces and Interaction (HRI2) Lab, Istituto Italiano di Tecnologia, 16163 Genova, Italy. Email: [email protected], [email protected], [email protected], [email protected]
This work was carried out at the HRI2 lab. Corresponding author: Xin Zhang.
Abstract

Supernumerary robotic arms (SRAs) can serve as a third arm to complement and augment the abilities of human users. A user carrying an SRA forms a connected kinodynamic chain, which can be viewed as a special class of floating-base robot system. However, unlike general floating-base robot systems, the base of an SRA is a human user who moves according to their own intentions. This implies that human body motions can unintentionally affect the SRA's end-effector movements. To address this challenge, we propose a framework to compensate for the human whole-body motions that interfere with the SRA's end-effector trajectories. The SRA system in this study consists of a 6-degree-of-freedom lightweight arm and a wearable interface. The wearable interface allows users to adjust the installation position of the SRA to fit different body shapes. An inertial measurement unit (IMU)-based sensory interface provides real-time feedback of the user's body skeleton motion. By simplifying the floating-base kinematics model, we design an effective motion planner by reconstructing the Jacobian matrix of the SRA. Under the proposed framework, the performance of the reconstructed Jacobian method is assessed by comparing its results with the classical nullspace-based method through two sets of experiments.

I INTRODUCTION

Supernumerary robotic limbs (SRLs) provide additional robotic limbs to complement and augment the physical dexterity and capabilities of their human users. Over the past decades, various SRLs (e.g., legs [1], arms [2], and fingers [3]) have been developed to assist humans in supporting, grasping, and manipulation tasks. Furthermore, SRLs can reduce risk by minimizing human users' exposure to potential hazards. For instance, they can be used to handle dangerous materials or carry out tasks in sub-areas that may be inaccessible or risky for human operators. Developing intelligent SRLs has therefore attracted considerable attention in the robotics field.


Figure 1: Concept illustration of human motion compensation: when the user body moves, the SRA compensates for the induced motion and decouples it from the end-effector trajectories.

As one of the most representative SRLs, supernumerary robotic arms (SRAs) can seamlessly cooperate with users regardless of work location. The pioneering works [4, 5] developed two-arm SRA prototypes that can share the static workload acting on human arms in an overhead assembly task. In human-robot collaboration, the kinematic chains of SRAs are not independent of the human limbs, so safe human-robot interfaces and collaborative control frameworks are important aspects to take into consideration. Veronneau et al. [6] designed a 3-degree-of-freedom (DoF) SRA with a teleoperation control system for agricultural harvesting tasks. Saraiji et al. [7] proposed two anthropomorphic SRAs and developed a teleoperation interface mapping the user's feet and leg motion to control the robotic arms. Penaloza et al. [8] used a non-invasive brain-machine interface to detect the user's intention and control the SRA. Khoramshahi et al. [9] proposed a collision-avoidance algorithm for a 7-DoF SRA using a quadratic programming approach. In our previous work [10], a nullspace-based controller was proposed to realize collision avoidance by maximizing the distance between the human's head and the robot.

Human-SRA systems can be considered a special class of floating-base robot systems, whose motion depends not only on the system itself but also on the movement of the human user. However, unlike general robotic floating-base systems, human movement is not completely controllable or predictable, which poses a unique challenge for the integrated human-SRA system. For example, while carrying out delicate and/or constrained manipulation tasks or inspecting an external environment, the spontaneous or unintentional movements of the human user will disturb the operational accuracy of the SRA. Therefore, the SRA system should be able to compensate for the unpredictable or unintentional motions induced by its user. The concept of human motion compensation is shown in Fig. 1. This also means that the SRA needs to estimate/monitor the human body state to reduce the induced motion. In this context, we integrate an inertial measurement unit (IMU)-based sensory interface into our human-SRA system and develop a motion compensation control framework to address the above issues. To the best of our knowledge, this paper is the first work on motion compensation control between human users and SRAs. The main contributions of this work can be summarized as:

  • A physical and kinematic integration of the human-robot system is developed by combining our SRA and an IMU-based sensory interface.

  • A motion compensation control framework is proposed to compensate for the 3-dimensional (3D) translational motion induced by human users.

  • Based on a simplified floating-base kinematics model, an effective motion planner is designed by reconstructing the Jacobian matrix.

We focus on the translational axes, even though the theory is general and also covers the rotational axes; since most SRAs have a limited number of DoFs, isolating all translational and rotational axes would not be feasible. The rest of the paper is organized as follows. In Section II, the human-robot integrated system consisting of our SRA and the IMU-based sensory interface is introduced. The floating-base kinematics modeling and analysis are described in Section III. The motion compensation control framework is described in Section IV. The experiments and results are illustrated in Section V. Finally, Section VI concludes the paper.

II HUMAN-ROBOT INTEGRATION SYSTEM

In this section, we introduce the design philosophy of our human-SRA system, where the human-robot integration is twofold. The wearable interface keeps the user and the SRA physically integrated, while the IMU-based sensory interface works as a bridge between the user and the SRA, guaranteeing the kinematic chain integration.

II-A Supernumerary Robotic Arm


Figure 2: The human-SRA system from the initial calibration position to a local working position for an inspection task.

Here we briefly introduce the general layout of our SRA; a detailed description is provided in our previous work [10]. The 6-DoF robotic arm is mounted on the user's shoulder, which aims to guarantee the maximum collision-free workspace between the SRA and the human arm. The unique advantage of our SRA is its body adaptability. We designed an adjustable mechanical interface that allows users to manually adjust the installation position of the manipulator to fit different body dimensions and to switch between the left and right shoulders. The soft shoulder support keeps the SRA stable and comfortable on the user's body.

II-B IMU-Based Sensory Interface

To achieve the kinematic chain integration of the SRA and the human, an IMU-based motion capture system (Xsens MVN) is chosen as the sensory interface due to its portability. This system combines a biomechanical model of the human user with an IMU sensor network to estimate the user's 3D kinematic data. First, the user's body dimensions are measured, including body height, foot size, arm span, ankle height, hip height, hip width, knee height, shoulder width, and shoe sole height. Then, the user performs a calibration process wearing a flexible suit and body straps that attach 17 wireless IMU tracker modules, and the IMU data is transmitted to the master (the transmission station and the USB dongle). Through our Xsens-to-ROS bridge [11], the user's motion is detected in real time and serves as essential feedback for controlling the SRA.
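For illustration only, the following minimal sketch (Python with rospy) shows how a torso-pose stream from such a bridge could be consumed on the robot side; the topic name and message type here are hypothetical assumptions, as the actual interface of the bridge is documented in [11].

```python
import rospy
from geometry_msgs.msg import PoseStamped

def torso_callback(msg):
    # msg.pose.position: torso position w.r.t. the inertial frame Sigma_I (assumed convention)
    rospy.loginfo("torso at (%.3f, %.3f, %.3f)",
                  msg.pose.position.x, msg.pose.position.y, msg.pose.position.z)

if __name__ == "__main__":
    rospy.init_node("sra_motion_feedback")
    # "/xsens/torso_pose" is a hypothetical topic name; see [11] for the real interface
    rospy.Subscriber("/xsens/torso_pose", PoseStamped, torso_callback)
    rospy.spin()
```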

III KINEMATICS MODELING AND ANALYSIS

As shown in Fig. 2, the integrated system is a floating-base system (with the human user as its base), where $\Sigma_I$ is the inertial frame at the initial calibration position, $\Sigma_F$ is the right foot frame at the local working position, $\Sigma_B$ is the base frame of the manipulator, $\Sigma_E$ is the end-effector (EE) frame of the manipulator, and $\Sigma_H$ is the human torso frame. As stated before, we focus on 3D translational motion compensation control. Two preconditions are assumed as follows:

  1. (P1) The user keeps the orientation of the torso frame $\Sigma_H$ stable when moving near the local working position.

  2. (P2) With the help of the soft shoulder support, $\Sigma_B$ and $\Sigma_H$ keep the same orientation, and their relative pose remains unchanged while the user performs translational motions.

III-A Floating-Base Kinematics

The velocity-level kinematics of the floating-base system [12] can be established as follows:

$$\boldsymbol{\dot{x}}_E = \boldsymbol{J}_B\boldsymbol{\dot{x}}_B + \boldsymbol{J}_M\boldsymbol{\dot{\theta}}_M \quad (1)$$

where $\boldsymbol{\dot{x}}_E = [\boldsymbol{v}_E^T, \boldsymbol{\omega}_E^T]^T \in \mathbb{R}^6$ is the generalized velocity of the EE, $\boldsymbol{\dot{x}}_B = [\boldsymbol{v}_B^T, \boldsymbol{\omega}_B^T]^T \in \mathbb{R}^6$ is the generalized velocity of the human base, $\boldsymbol{\theta}_M = [\theta_1, \theta_2, \cdots, \theta_6]^T \in \mathbb{R}^6$ is the joint position vector of the manipulator, and $\boldsymbol{J}_B \in \mathbb{R}^{6\times 6}$ and $\boldsymbol{J}_M \in \mathbb{R}^{6\times 6}$ are the Jacobian matrices of the base and the manipulator defined in $\Sigma_I$. $\boldsymbol{J}_B$ is defined as

$$\boldsymbol{J}_B = \begin{bmatrix} \boldsymbol{E}_{3\times 3} & -\boldsymbol{\widetilde{p}}_{BE} \\ \boldsymbol{0}_{3\times 3} & \boldsymbol{E}_{3\times 3} \end{bmatrix} \quad (2)$$

where $\boldsymbol{E}$ is the identity matrix. The symbol $(\tilde{\cdot})$ denotes a skew-symmetric matrix; namely, for an arbitrary vector $\boldsymbol{k} = [k_x, k_y, k_z]^T \in \mathbb{R}^3$, $\tilde{\boldsymbol{k}}$ is defined as

$$\tilde{\boldsymbol{k}} = \begin{bmatrix} 0 & -k_z & k_y \\ k_z & 0 & -k_x \\ -k_y & k_x & 0 \end{bmatrix} \quad (3)$$

and $\boldsymbol{p}_{BE} \in \mathbb{R}^3$ is the position vector from $\Sigma_B$ to $\Sigma_E$ in $\Sigma_I$:

$$\boldsymbol{p}_{BE} = \boldsymbol{R}_B \cdot \boldsymbol{p}_{BE}^B \quad (4)$$

where $\boldsymbol{R}_B \in \mathbb{R}^{3\times 3}$ is the rotation matrix of $\Sigma_B$ w.r.t. $\Sigma_I$, and $\boldsymbol{p}_{BE}^B \in \mathbb{R}^3$ is the position vector from $\Sigma_B$ to $\Sigma_E$ expressed in $\Sigma_B$. Meanwhile, we can deduce

$$\boldsymbol{J}_M = \begin{bmatrix} \boldsymbol{R}_B & \boldsymbol{0}_{3\times 3} \\ \boldsymbol{0}_{3\times 3} & \boldsymbol{R}_B \end{bmatrix} \cdot \boldsymbol{J}_M^B \quad (5)$$

where $\boldsymbol{J}_M^B \in \mathbb{R}^{6\times 6}$ is the Jacobian matrix of the manipulator defined in $\Sigma_B$. According to the assumed precondition (P1), we have $\boldsymbol{\omega}_B \approx \boldsymbol{0}$ and $\boldsymbol{J}_B\boldsymbol{\dot{x}}_B = [\boldsymbol{v}_B^T, \boldsymbol{0}^T]^T$ when the human user performs translational motion at the local working position. Therefore, we obtain the simplified kinematics model:

$$\begin{bmatrix} \boldsymbol{v}_E \\ \boldsymbol{\omega}_E \end{bmatrix} = \begin{bmatrix} \boldsymbol{v}_B \\ \boldsymbol{0} \end{bmatrix} + \begin{bmatrix} \boldsymbol{J}_{Mv} \\ \boldsymbol{J}_{M\omega} \end{bmatrix} \boldsymbol{\dot{\theta}}_M \quad (6)$$

where $\boldsymbol{J}_{Mv}$ and $\boldsymbol{J}_{M\omega} \in \mathbb{R}^{3\times 6}$ are the translational and rotational parts of $\boldsymbol{J}_M$, respectively.
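For concreteness, the following minimal sketch (Python/NumPy; function names are illustrative assumptions, not part of the paper's implementation) assembles the skew-symmetric operator of (3), the base Jacobian of (2), and the simplified EE twist of (6).

```python
import numpy as np

def skew(k):
    """Skew-symmetric matrix of a 3-vector, as in (3)."""
    kx, ky, kz = k
    return np.array([[0.0, -kz,  ky],
                     [ kz,  0.0, -kx],
                     [-ky,  kx,  0.0]])

def base_jacobian(p_BE):
    """Base Jacobian J_B of (2), with p_BE the base-to-EE vector in Sigma_I."""
    J_B = np.eye(6)
    J_B[:3, 3:] = -skew(p_BE)
    return J_B

def simplified_ee_twist(v_B, J_M, dtheta_M):
    """Simplified model (6): under (P1), omega_B ~ 0, so only v_B enters the base term."""
    xdot_E = np.hstack([v_B, np.zeros(3)]) + J_M @ dtheta_M
    return xdot_E[:3], xdot_E[3:]  # v_E, omega_E
```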

III-B Compensation Problem Analysis

Our basic objective is to keep the EE stable while the user is moving. Due to the translational motion of the human ($\boldsymbol{v}_B \neq \boldsymbol{0}$), we compensate for $\boldsymbol{v}_B$ by planning $\boldsymbol{\dot{\theta}}_M$ so that $\boldsymbol{x}_E$ remains still (or follows a desired trajectory). As our manipulator has no redundant DoFs, it is difficult to keep the 6D pose of the EE unchanged at every point of the workspace. Since this problem cannot be solved exactly, we make a compromise by prioritizing subtasks: we assign the EE position compensation ($\boldsymbol{v}_E = \boldsymbol{0}$) as the primary task and the EE orientation compensation ($\boldsymbol{\omega}_E = \boldsymbol{0}$) as the secondary task. According to (6), the primary task of the compensation problem is formulated as

$$\min_{\boldsymbol{\dot{\theta}}_M} \; \|\boldsymbol{J}_{Mv}\boldsymbol{\dot{\theta}}_M + \boldsymbol{v}_B\|^2 \quad (7)$$

Moreover, we assume $\boldsymbol{\dot{\theta}}_M^{\star} \in \mathbb{R}^6$ is a solution of (7), and the secondary task is to solve

$$\min_{\boldsymbol{\dot{\theta}}_M} \; \|\boldsymbol{J}_{M\omega}\boldsymbol{\dot{\theta}}_M\|^2 \quad (8a)$$
$$\text{s.t.} \quad \boldsymbol{J}_{Mv}\boldsymbol{\dot{\theta}}_M = \boldsymbol{J}_{Mv}\boldsymbol{\dot{\theta}}_M^{\star} \quad (8b)$$

which means the solution of (8) lies within the solution set of the primary task [13]. Meanwhile, the planned trajectories should satisfy the physical constraints $\boldsymbol{\theta}_{min} \leq \boldsymbol{\theta}_M \leq \boldsymbol{\theta}_{max}$ and $\boldsymbol{\dot{\theta}}_{min} \leq \boldsymbol{\dot{\theta}}_M \leq \boldsymbol{\dot{\theta}}_{max}$, where $\boldsymbol{\theta}_{min}, \boldsymbol{\theta}_{max} \in \mathbb{R}^6$ are the lower and upper bounds of the joint position vector, and $\boldsymbol{\dot{\theta}}_{min}, \boldsymbol{\dot{\theta}}_{max} \in \mathbb{R}^6$ are the lower and upper bounds of the joint velocity vector.

III-C Inverse Kinematics Solutions

In this subsection, two approaches are used to solve the above problem: 1) the nullspace-based method (NBM) and 2) the reconstructed Jacobian method (RJM).

III-C1 NBM

It is one of the most classical approaches to the task-priority planning problem. This method projects the secondary task into the nullspace of the Jacobian of the primary task, generating an internal motion that does not affect the primary task. In our case, the NBM is formulated as

$$\boldsymbol{\dot{\theta}}_M = \boldsymbol{J}_{Mv}^{\dagger}(-\boldsymbol{v}_B) + \boldsymbol{N}\boldsymbol{\xi}_0 \quad (9)$$
$$\boldsymbol{N} = \boldsymbol{E}_6 - \boldsymbol{J}_{Mv}^{\dagger}\boldsymbol{J}_{Mv} \quad (10)$$

where, for any $\boldsymbol{J} \in \mathbb{R}^{m\times n}$ ($m < n$), $\boldsymbol{J}^{\dagger} = \boldsymbol{J}^T(\boldsymbol{J}\boldsymbol{J}^T)^{-1}$ is the Moore-Penrose pseudo-inverse of $\boldsymbol{J}$, $\boldsymbol{N}$ is the nullspace projection matrix of $\boldsymbol{J}_{Mv}$, and $\boldsymbol{\xi}_0 \in \mathbb{R}^6$ is an arbitrary vector, which is the key to incorporating the secondary task. Here we use the quaternion representation to describe the orientation, namely $\boldsymbol{Q} = \{\eta, \boldsymbol{\epsilon}\}$, where $\eta$ is the scalar part and $\boldsymbol{\epsilon} = [\epsilon_x, \epsilon_y, \epsilon_z]^T \in \mathbb{R}^3$ is the vector part. Let $\boldsymbol{Q}_d$ and $\boldsymbol{Q}_c$ represent the desired and current orientations; the orientation error is expressed as

$$\Delta\boldsymbol{Q} = \boldsymbol{Q}_d \otimes \boldsymbol{Q}_c^{-1} \quad (11)$$

where $\Delta\boldsymbol{Q} = \{\Delta\eta, \Delta\boldsymbol{\epsilon}\}$ and $\otimes$ denotes the quaternion product [14]. We use the vector part of the quaternion error, $\Delta\boldsymbol{\epsilon}$ [15], to design $\boldsymbol{\xi}_0$ as

$$\boldsymbol{\xi}_0 = \boldsymbol{J}_{M\omega}^{\dagger}\boldsymbol{K}_O\Delta\boldsymbol{\epsilon} \quad (12)$$

where $\boldsymbol{K}_O = \mathrm{diag}(K_{O1}, K_{O2}, K_{O3}) \in \mathbb{R}^{3\times 3}$ is the diagonal gain matrix for the EE orientation.
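A minimal sketch of one NBM planning step, per (9)-(12), is given below (Python/NumPy; illustrative names, assuming unit quaternions ordered as $(\eta, \epsilon_x, \epsilon_y, \epsilon_z)$).

```python
import numpy as np

def pinv_wide(J):
    """Moore-Penrose pseudo-inverse J^T (J J^T)^(-1) for a full-row-rank wide J."""
    return J.T @ np.linalg.inv(J @ J.T)

def quat_err_vec(Q_d, Q_c):
    """Vector part of dQ = Q_d (x) Q_c^(-1) in (11)."""
    eta_d, eps_d = Q_d[0], np.asarray(Q_d[1:], dtype=float)
    eta_c, eps_c = Q_c[0], -np.asarray(Q_c[1:], dtype=float)  # conjugate = inverse (unit quaternion)
    return eta_d * eps_c + eta_c * eps_d + np.cross(eps_d, eps_c)

def nbm_step(J_Mv, J_Mw, v_B, Q_d, Q_c, K_O):
    """One NBM velocity command, (9)-(12): primary EE position, secondary orientation."""
    J_Mv_pinv = pinv_wide(J_Mv)
    N = np.eye(6) - J_Mv_pinv @ J_Mv                          # nullspace projector (10)
    xi_0 = pinv_wide(J_Mw) @ (K_O @ quat_err_vec(Q_d, Q_c))   # secondary task (12)
    return J_Mv_pinv @ (-v_B) + N @ xi_0                      # (9)
```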

III-C2 RJM

Considering that an arbitrary EE pose cannot be maintained exactly, we propose a simple method that reconstructs the Jacobian matrix. $\boldsymbol{J}_M$ is a $6\times 6$ projection matrix; our idea is to release some DoFs of the EE orientation and build a new Jacobian matrix. According to (6), we can obtain

$$\begin{bmatrix} \omega_{Ex} \\ \omega_{Ey} \\ \omega_{Ez} \end{bmatrix} = \begin{bmatrix} J_{M\omega x} \\ J_{M\omega y} \\ J_{M\omega z} \end{bmatrix} \boldsymbol{\dot{\theta}}_M \quad (13)$$

where $J_{M\omega x}, J_{M\omega y}, J_{M\omega z} \in \mathbb{R}^{1\times 6}$ are the row sub-Jacobians for the orientation. The new Jacobian matrix $\boldsymbol{J}_{MR}$ is reconstructed as

$$\begin{bmatrix} \boldsymbol{v}_E - \boldsymbol{v}_B \\ \omega_{Ey} \\ \omega_{Ez} \end{bmatrix} = \underbrace{\begin{bmatrix} \boldsymbol{J}_{Mv} \\ J_{M\omega y} \\ J_{M\omega z} \end{bmatrix}}_{\boldsymbol{J}_{MR}} \boldsymbol{\dot{\theta}}_M \quad (14)$$

where $\boldsymbol{J}_{MR} \in \mathbb{R}^{5\times 6}$, and our objective is to keep $\boldsymbol{v}_E = \boldsymbol{0}$, $\omega_{Ey} = 0$, and $\omega_{Ez} = 0$. According to the closed-loop inverse kinematics algorithm [14], we can deduce

$$\boldsymbol{\dot{\theta}}_M = \boldsymbol{J}_{MR}^{\dagger}\begin{bmatrix} -\boldsymbol{v}_B + \boldsymbol{K}_P\Delta\boldsymbol{p}_E \\ K_{Oy}\Delta\epsilon_y \\ K_{Oz}\Delta\epsilon_z \end{bmatrix} \quad (15)$$

where $\boldsymbol{K}_P = \mathrm{diag}(K_{P1}, K_{P2}, K_{P3}) \in \mathbb{R}^{3\times 3}$ is the diagonal gain matrix for the EE position and $\Delta\boldsymbol{p}_E \in \mathbb{R}^3$ is the position variation of the EE.

Figure 3: The motion compensation control framework. (a) The physical integration. (b) The kinematic chain integration. (c) The frame transformation. (d) The motion planning. (e) The motion control.
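A minimal sketch of one RJM planning step, per (14)-(15), follows (Python/NumPy; illustrative names; $\Delta\boldsymbol{p}_E$ and the quaternion error vector are assumed to be provided by the frame transformation of Section IV-B).

```python
import numpy as np

def rjm_step(J_Mv, J_Mw, v_B, dp_E, deps, K_P, K_Oy, K_Oz):
    """One RJM velocity command, (14)-(15); deps is the quaternion error vector of (23)."""
    J_MR = np.vstack([J_Mv, J_Mw[1:2, :], J_Mw[2:3, :]])  # 5x6 reconstructed Jacobian (14)
    rhs = np.hstack([-v_B + K_P @ dp_E,                   # closed-loop position term
                     K_Oy * deps[1],                      # released x-rotation is dropped
                     K_Oz * deps[2]])
    return np.linalg.pinv(J_MR) @ rhs                     # (15)
```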

IV Motion Compensation Control Framework

In this section, we introduce our compensation framework. As we aim to plan feasible and safe joint trajectories for real-time control, singularity handling and joint limits are considered in the proposed framework.

IV-A Compensation Control Framework

The human motion compensation framework is shown in Fig. 3 and includes five parts: (a) the physical integration, (b) the kinematic chain integration, (c) the coordinate frame transformation, (d) the motion planning, and (e) the motion control. First, the human user wears the SRA with the IMU-based sensory interface to establish the human-robot integrated system. After the calibration, the inertial frame $\Sigma_I$ lies at the right foot of the user at the initial calibration position (Fig. 2), and all the kinematic information of the human user is defined w.r.t. $\Sigma_I$. As the user usually works at a local working position, the compensation motion control should be accomplished at the local position. Therefore, we set a local home frame, and the human motion is defined w.r.t. this local home frame.

IV-B Coordinate Frame Transformation

As shown in Fig. 3b, we aim to achieve motion compensation at a local working position, so the human torso frame $\Sigma_H$ at $t_0$ (denoted $\Sigma_{H,t_0}$) is selected as the local home frame. Therefore, the human translational motion should be defined w.r.t. $\Sigma_{H,t_0}$. $\Sigma_{H,t_0}$ is set as follows: first, the human user reaches the local working position and stands firmly; then, we record the local torso frame $\Sigma_{H,t_0}$ with its current position $\boldsymbol{p}_{H,t_0}$ and orientation $\boldsymbol{R}_{H,t_0}^I$ w.r.t. $\Sigma_I$. According to the assumed precondition (P1), we can deduce the position variation of the local home frame $\Sigma_H$ from $t_0$ to $t_k$ w.r.t. $\Sigma_{H,t_0}$ as

$$\Delta\boldsymbol{p}_{H,t_k}^{H,t_0} = \boldsymbol{R}_I^{H,t_0}\left(\boldsymbol{p}_{H,t_0} - \boldsymbol{p}_{H,t_k}\right) \quad (16)$$

where $\boldsymbol{p}_{H,t_k}$ is the position of $\Sigma_H$ at $t_k$ w.r.t. $\Sigma_I$. Similarly, we can obtain the local position variation of the base frame $\Sigma_B$ from $t_0$ to $t_k$ w.r.t. $\Sigma_{H,t_0}$ as

$$\Delta\boldsymbol{p}_{B,t_k}^{H,t_0} = \boldsymbol{R}_I^{H,t_0}\left(\boldsymbol{p}_{H,t_0} + \boldsymbol{p}_{HB,t_0} - \boldsymbol{p}_{H,t_k} - \boldsymbol{p}_{HB,t_k}\right) \quad (17)$$

where $\boldsymbol{p}_{HB,t_0}, \boldsymbol{p}_{HB,t_k} \in \mathbb{R}^3$ are the relative position vectors from $\Sigma_H$ to $\Sigma_B$ at $t_0$ and $t_k$ w.r.t. $\Sigma_I$, respectively. According to the assumed precondition (P2), $\boldsymbol{p}_{HB,t_0} = \boldsymbol{p}_{HB,t_k}$, so we can deduce

$$\Delta\boldsymbol{p}_{B,t_k}^{H,t_0} = \Delta\boldsymbol{p}_{H,t_k}^{H,t_0} \quad (18)$$
$$\boldsymbol{v}_{B,t_k}^{H,t_0} = \boldsymbol{v}_{H,t_k}^{H,t_0} = \boldsymbol{R}_I^{H,t_0}\boldsymbol{v}_{H,t_k} \quad (19)$$

where $\boldsymbol{v}_{H,t_k} \in \mathbb{R}^3$ is the velocity of $\Sigma_H$ at $t_k$ w.r.t. $\Sigma_I$, and $\boldsymbol{v}_{B,t_k}^{H,t_0}, \boldsymbol{v}_{H,t_k}^{H,t_0} \in \mathbb{R}^3$ are the velocities of $\Sigma_B$ and $\Sigma_H$ at $t_k$ w.r.t. $\Sigma_{H,t_0}$, respectively. Moreover, we can deduce the local position variation of $\Sigma_E$, namely $\Delta\boldsymbol{p}_E$ at $t_k$ w.r.t. $\Sigma_{H,t_0}$, which will be used to control our SRA:

$$\Delta\boldsymbol{p}_{E,t_k}^{H,t_0} = \Delta\boldsymbol{p}_{B,t_k}^{H,t_0} + \Delta\boldsymbol{p}_{BE,t_k}^{H,t_0} \quad (20)$$

where $\Delta\boldsymbol{p}_{BE,t_k}^{H,t_0} \in \mathbb{R}^3$ is the relative position variation from $\Sigma_B$ to $\Sigma_E$ at $t_k$ w.r.t. $\Sigma_{H,t_0}$, expressed as

$$\Delta\boldsymbol{p}_{BE,t_k}^{H,t_0} = \boldsymbol{R}_{H,t_k}^{H,t_0}\cdot\boldsymbol{R}_{B,t_k}^{H,t_k}\cdot\boldsymbol{p}_{E,t_k}^{B,t_k} - \boldsymbol{R}_{B,t_0}^{H,t_0}\cdot\boldsymbol{p}_{E,t_0}^{B,t_0} \quad (21)$$

where $\boldsymbol{R}_{H,t_k}^{H,t_0}, \boldsymbol{R}_{B,t_k}^{H,t_k}, \boldsymbol{R}_{B,t_0}^{H,t_0} \in \mathbb{R}^{3\times 3}$ are the rotation matrices of $\Sigma_{H,t_k}$ w.r.t. $\Sigma_{H,t_0}$, $\Sigma_{B,t_k}$ w.r.t. $\Sigma_{H,t_k}$, and $\Sigma_{B,t_0}$ w.r.t. $\Sigma_{H,t_0}$, respectively. According to the assumed preconditions (P1) and (P2), each of them equals $\boldsymbol{E}_{3\times 3}$, and (21) reduces to

$$\Delta\boldsymbol{p}_{BE,t_k}^{H,t_0} = \boldsymbol{p}_{E,t_k}^{B,t_k} - \boldsymbol{p}_{E,t_0}^{B,t_0} \quad (22)$$

For the EE orientation stabilization, controlling it w.r.t. $\Sigma_B$ is equivalent to controlling it w.r.t. $\Sigma_H$, so we use the following quaternion error to control our SRA:

$$\Delta\boldsymbol{Q}_{E,t_k}^{B} = \boldsymbol{Q}_{E,t_0}^{B,t_0} \otimes \left(\boldsymbol{Q}_{E,t_k}^{B,t_k}\right)^{-1} = \left\{\Delta\eta_{E,t_k}^{B},\ \Delta\boldsymbol{\epsilon}_{E,t_k}^{B}\right\} \quad (23)$$
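The transformations (16), (19), (20), and (22) reduce to a few vector operations; a minimal sketch (Python/NumPy, illustrative names) is given below.

```python
import numpy as np

def local_base_velocity(R_I_Ht0, v_H_tk):
    """Base velocity in the local home frame, (19)."""
    return R_I_Ht0 @ v_H_tk

def local_ee_variation(R_I_Ht0, p_H_t0, p_H_tk, p_E_B_t0, p_E_B_tk):
    """Local EE position variation, combining (16), (18), (20), and (22)."""
    dp_B = R_I_Ht0 @ (p_H_t0 - p_H_tk)   # base variation, equal to the torso variation by (P2)
    dp_BE = p_E_B_tk - p_E_B_t0          # base-to-EE variation in the reduced form (22)
    return dp_B + dp_BE                  # (20)
```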

IV-C Singularity Handling

When the manipulator is close to a singular configuration, the inverse kinematics becomes unstable and the joint increments become large. The singular value filtering (SVF) method [16] is adopted in our control framework, which guarantees lower-bounded singular values. From the singular value decomposition, we have

$$\boldsymbol{J} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T = \sum_{i=1}^{r}\sigma_i\boldsymbol{u}_i\boldsymbol{v}_i^T \quad (24)$$

where $\boldsymbol{u}_i$ and $\boldsymbol{v}_i$ are the output and input singular vectors, i.e., the $i$th columns of $\boldsymbol{U} \in \mathbb{R}^{m\times m}$ and $\boldsymbol{V} \in \mathbb{R}^{n\times n}$, respectively, and $\sigma_i$ are the singular values satisfying $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r > 0$, with $r \leq \min\{m,n\}$ being the rank of $\boldsymbol{J}$. The pseudo-inverse of $\boldsymbol{J}$ is $\boldsymbol{J}^{\dagger} = \boldsymbol{V}\boldsymbol{\Sigma}^{\dagger}\boldsymbol{U}^T = \sum_{i=1}^{r}\frac{1}{\sigma_i}\boldsymbol{v}_i\boldsymbol{u}_i^T$. The general damped least-squares pseudo-inverse is expressed as

$$\boldsymbol{J}_D^{\dagger} = \sum_{i=1}^{r}\frac{\sigma_i}{\sigma_i^2+\lambda^2}\boldsymbol{v}_i\boldsymbol{u}_i^T \quad (25)$$

where $\lambda$ is the damping factor; however, a constant small value of $\lambda$ cannot always guarantee good performance across the workspace. Using the SVF method, the filtered Jacobian matrix is defined as

$$\boldsymbol{J}_F = \sum_{i=1}^{r}f(\sigma_i)\boldsymbol{u}_i\boldsymbol{v}_i^T \quad (26)$$
$$f(\sigma) = \frac{\sigma^3+\upsilon\sigma^2+2\sigma+2\sigma_0}{\sigma^2+\upsilon\sigma+2} \quad (27)$$

where $f(\sigma)$ is the filtering function, $\sigma_0$ is the minimum value that we want to impose on the singular values of $\boldsymbol{J}$, and $\upsilon$ is a shape factor. The corresponding pseudo-inverse of $\boldsymbol{J}_F$ is

$$\boldsymbol{J}_F^{\dagger} = \sum_{i=1}^{r}\frac{1}{f(\sigma_i)}\boldsymbol{v}_i\boldsymbol{u}_i^T \quad (28)$$
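A minimal sketch of the SVF pseudo-inverse (26)-(28) via NumPy's SVD is given below; the values of $\sigma_0$ and $\upsilon$ are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def svf_pinv(J, sigma_0=0.01, upsilon=10.0):
    """Singular-value-filtered pseudo-inverse, (26)-(28)."""
    U, S, Vt = np.linalg.svd(J, full_matrices=False)
    f = (S**3 + upsilon * S**2 + 2.0 * S + 2.0 * sigma_0) / (S**2 + upsilon * S + 2.0)  # (27)
    return (Vt.T / f) @ U.T  # sum_i (1 / f(sigma_i)) v_i u_i^T
```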

IV-D Joint Clamping

To avoid violating the joint velocity and position limits, we use the joint clamping algorithm [17] to constrain the joint increments:

$$\dot{\theta}_i = \begin{cases} -\dot{\theta}_{iLimit}, & \dot{\theta}_i < -\dot{\theta}_{iLimit} \\ \dot{\theta}_{iLimit}, & \dot{\theta}_i > +\dot{\theta}_{iLimit} \\ \dot{\theta}_i, & \text{otherwise} \end{cases} \quad (29)$$

where $\dot{\theta}_{iLimit}$ is the $i$th joint velocity bound. For the joint position at step $k$, we have $\boldsymbol{\theta}_M(k+1) = \boldsymbol{\theta}_M(k) + \boldsymbol{H}\Delta\boldsymbol{\theta}_M$, where $\Delta\boldsymbol{\theta}_M = \boldsymbol{\dot{\theta}}_M\Delta t$ and $\boldsymbol{H} = \mathrm{diag}(H_1, H_2, \ldots, H_6) \in \mathbb{R}^{6\times 6}$ is a diagonal activation matrix,

$$H_i = \begin{cases} 1, & \theta_{iMin} < \theta_i < \theta_{iMax} \\ 0, & \text{otherwise} \end{cases} \quad (30)$$

where $\theta_{iMin}$ and $\theta_{iMax}$ are the $i$th joint position minimum and maximum bounds, respectively. Through the joint clamping algorithm, we can guarantee that the planned joint trajectories do not violate the joint limits.
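A minimal sketch of one clamped integration step, per (29)-(30), is given below (Python/NumPy; illustrative names).

```python
import numpy as np

def clamped_step(theta, dtheta, dt, dtheta_limit, theta_min, theta_max):
    """One clamped integration step, (29)-(30)."""
    dtheta = np.clip(dtheta, -dtheta_limit, dtheta_limit)          # velocity clamp (29)
    H = ((theta > theta_min) & (theta < theta_max)).astype(float)  # activation diagonal (30)
    return theta + H * (dtheta * dt)
```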

V EXPERIMENTAL VERIFICATIONS

In this section, we verify our control framework with two sets of experiments: the NBM set and the RJM set. Two types of user motions (1D motion and 3D motion) are investigated. An optical tracking motion capture system (Fig. 4a) is used to obtain the EE position coordinates as absolute coordinates. Although the origin of the optical tracking frame is not the same as the Xsens origin frame $\Sigma_I$, its measurements can be used to evaluate the control performance in Cartesian space. A video of the experiments is available on our YouTube channel (https://youtu.be/hBFH2gtw55I).


Figure 4: The U-D motion compensation. (a) The optical tracking system. (b) The initial configuration. (c,d) The downward motion. (e-g) The upward motion. (h) The final configuration.


Figure 5: The L-R motion compensation. (a) The initial configuration. (b,c) The left motion. (d-f) The right motion. (g,h) The second left motion. (i) The final configuration.


Figure 6: The F-B motion compensation. (a) The initial configuration. (b,c) The forward motion. (d-h) The backward motion. (i,j) The second forward motion. (k) The final configuration.

V-A Experiments and Results

We conduct three 1D motion experiments, namely the up-down (U-D) motion (Fig. 4), the left-right (L-R) motion (Fig. 5), and the forward-backward (F-B) motion (Fig. 6). Although the user tries to perform a 1D motion, couplings with the other axes of the human body are sometimes inevitable. Note that we focus on the z-axis stabilization in the U-D motion, the y-axis stabilization in the L-R motion, and the x-axis stabilization in the F-B motion. As shown in Fig. 7, the human user moves randomly in the 3D motion experiments, and we focus on the EE stabilization in 3D space. The experimental results of the NBM and RJM sets are given in Figs. 8 and 9, respectively. Note that the EE motion is defined w.r.t. $\Sigma_{B,t_0}$ and the absolute EE position is defined w.r.t. the origin of the optical tracking system. Moreover, we can clearly see that the z-axis and x-axis are coupled in the U-D motion and that the x-axis and y-axis are coupled in the F-B motion. The planned joint velocity trajectories (Figs. 8b and 9b) remain within the joint velocity limits $(-0.1, 0.1)$ rad/s. The EE motion (Figs. 8d and 9d) shows the opposite trend w.r.t. the human motion (Figs. 8a and 9a). Figs. 8e and 9e show that the EE returns to its initial orientation after the compensation. Figs. 8f and 9f show the compensation performance in Cartesian space.


Figure 7: The random motion compensation. (a) The initial configuration. (b,c) The lower left motion. (d,e) The upper right motion. (f,g) The forward left motion. (h,i) The right motion. (j,k) The left motion.


Figure 8: The motion compensation control performance of the NBM set. (a) The human motion w.r.t. $\Sigma_{H,t_0}$. (b) The planned joint velocity. (c) The planned joint position. (d) The EE motion w.r.t. $\Sigma_{B,t_0}$. (e) The EE orientation w.r.t. $\Sigma_{B,t_0}$. (f) The EE position in the optical tracking system.


Figure 9: The motion compensation control performance of the RJM set. (a) The human motion w.r.t. $\Sigma_{H,t_0}$. (b) The planned joint velocity. (c) The planned joint position. (d) The EE motion w.r.t. $\Sigma_{B,t_0}$. (e) The EE orientation w.r.t. $\Sigma_{B,t_0}$. (f) The EE position in the optical tracking system.

Figure 10: Evaluation results of the two compensation control methods.

V-B Evaluation and Analysis

Two indices are used to evaluate the 1D motion compensation performance.

$$\bar{\rho}_{E\chi} = \frac{\sum_{i=k}^{n+k}\rho_{E\chi}(i)}{n} \quad (31)$$
$$s_{\chi} = \sqrt{\frac{\sum_{i=k}^{n+k}\left(\rho_{E\chi}(i) - \bar{\rho}_{E\chi}\right)^2}{n-1}} \quad (32)$$

where $\rho_{E\chi}$ represents the absolute EE position obtained from the optical tracking system, $\bar{\rho}_{E\chi}$ represents the mean value of $\rho_{E\chi}$, $\chi$ denotes the x, y, or z coordinate, and $s_{\chi}$ represents the standard deviation of $\rho_{E\chi}(i)$. Meanwhile, we define $e_{\chi} = |\bar{\rho}_{E\chi} - \bar{\rho}_{EI\chi}|$ as the mean EE position error, where $\bar{\rho}_{EI\chi}$ is the initial mean value of $\rho_{E\chi}$ ($t \leq 3\,s$).

In the NBM set, the initial positions are $\bar{\rho}_{EIx}=0.1280$, $\bar{\rho}_{EIy}=-0.5606$, and $\bar{\rho}_{EIz}=1.9089$. For the U-D motion, $\bar{\rho}_{Ez}=1.9377$, $s_z=0.0390$, and $e_z=0.0288$. For the L-R motion, $\bar{\rho}_{Ey}=-0.5291$, $s_y=0.0411$, and $e_y=0.0315$. For the F-B motion, $\bar{\rho}_{Ex}=0.0884$, $s_x=0.0191$, and $e_x=0.0396$. By comparison, in the RJM set, the initial positions are $\bar{\rho}_{EIx}=-0.0188$, $\bar{\rho}_{EIy}=-0.4993$, and $\bar{\rho}_{EIz}=1.9649$. For the U-D motion, $\bar{\rho}_{Ez}=1.9489$, $s_z=0.0497$, and $e_z=0.0160$. For the L-R motion, $\bar{\rho}_{Ey}=-0.4589$, $s_y=0.0347$, and $e_y=0.0404$. For the F-B motion, $\bar{\rho}_{Ex}=-0.0024$, $s_x=0.0248$, and $e_x=0.0164$. Furthermore, we propose a distance-related index to evaluate the 3D motion compensation performance in Cartesian space:

$$D_E = \frac{\sqrt{e_x^2 + e_y^2 + e_z^2}}{\sqrt{e_{x,max}^2 + e_{y,max}^2 + e_{z,max}^2}} \quad (33)$$

where $e_{\chi,max} = \max(|\bar{\rho}_{E\chi} - \rho_{E\chi}(i)|)$. In the 3D random motion, $D_E$ of the NBM set is $0.3939$ and $D_E$ of the RJM set is $0.3709$. Meanwhile, the evaluation results ($\bar{\rho}_{E\chi}$, $s_{\chi}$, and $D_E$) of the two experiment sets are illustrated in Fig. 10.
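The indices (31)-(33) can be computed directly from the logged EE positions; a minimal sketch (Python/NumPy, illustrative names) is given below.

```python
import numpy as np

def axis_indices(rho, rho_init):
    """Per-axis mean (31), standard deviation (32), and mean error e_chi."""
    rho_bar = rho.mean(axis=0)                   # rho: (n, 3) EE positions over the window
    s = rho.std(axis=0, ddof=1)                  # sample standard deviation
    e = np.abs(rho_bar - rho_init.mean(axis=0))  # rho_init: samples with t <= 3 s
    return rho_bar, s, e

def distance_index(rho, rho_init):
    """Normalized distance index D_E of (33)."""
    rho_bar, _, e = axis_indices(rho, rho_init)
    e_max = np.abs(rho - rho_bar).max(axis=0)    # per-axis maximum deviation from the mean
    return np.linalg.norm(e) / np.linalg.norm(e_max)
```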

V-C Discussion

From the experimental results, we can see that the proposed control framework can effectively compensate for the human translational motion. However, the compensation motion sometimes lags behind the user motion due to the joint clamping (for safety) and the sampling frequency of the IMU-based sensory interface (60 Hz). For example, if the compensation of the previous step has not been completed, the motion compensation in Fig. 6(d-g) still proceeds backward. From the evaluation results, we can see that both methods achieve the same order of magnitude in $e_{\chi}$ and $D_E$ (RJM is slightly better than NBM, yet simpler).

VI CONCLUSIONS AND FUTURE WORK

The motion compensation problem of the human-robot integrated system was investigated in this work. As the system is a special class of floating-base robot, a simplified floating-base kinematics model was deduced to analyze its kinematics. As our SRA only has 6 DoFs, which is insufficient to keep the 6D pose of the EE stable, we reformulated the compensation problem by prioritizing the EE position and orientation as the primary and secondary tasks, respectively. By virtue of our sensory interfaces, we proposed a motion compensation control framework and an RJM method to solve the task-priority planning problem. Under the proposed framework, the EE position was stabilized within a small range, with a mean error below 5 cm. Compared with the NBM, the proposed RJM achieves competitive control performance with a simpler algorithm structure.

Currently, we do not compensate for the rotational motion of users. In the future, rotational motion and user intention detection will be studied within our compensation framework.

References

  • [1] F. Parietti, K. C. Chan, B. Hunter, and H. H. Asada, “Design and control of supernumerary robotic limbs for balance augmentation,” in 2015 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2015, pp. 5010–5017.
  • [2] M. Muehlhaus, M. Koelle, A. Saberpour, and J. Steimle, “I need a third arm! eliciting body-based interactions with a wearable robotic arm,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1–15.
  • [3] I. Hussain, G. Spagnoletti, G. Salvietti, and D. Prattichizzo, “Toward wearable supernumerary robotic fingers to compensate missing grasping abilities in hemiparetic upper limb,” The International Journal of Robotics Research, vol. 36, no. 13-14, pp. 1414–1436, 2017.
  • [4] B. Llorens Bonilla, F. Parietti, and H. H. Asada, “Demonstration-based control of supernumerary robotic limbs,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 3936–3942.
  • [5] F. Parietti and H. H. Asada, “Supernumerary robotic limbs for aircraft fuselage assembly: body stabilization and guidance by bracing,” in 2014 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2014, pp. 1176–1183.
  • [6] C. Véronneau, J. Denis, L.-P. Lebel, M. Denninger, V. Blanchard, A. Girard, and J.-S. Plante, “Multifunctional remotely actuated 3-dof supernumerary robotic arm based on magnetorheological clutches and hydrostatic transmission lines,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2546–2553, 2020.
  • [7] M. Y. Saraiji, T. Sasaki, K. Kunze, K. Minamizawa, and M. Inami, “Metaarms: Body remapping using feet-controlled artificial arms,” in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, 2018, pp. 65–74.
  • [8] C. Penaloza, D. Hernandez-Carmona, and S. Nishio, “Towards intelligent brain-controlled body augmentation robotic limbs,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC).   IEEE, 2018, pp. 1011–1015.
  • [9] M. Khoramshahi, A. Poignant, G. Morel, and N. Jarrassé, “A practical control approach for safe collaborative supernumerary robotic arms,” in 2023 IEEE Conference on Advanced Robotics and its Social Impact, 2023.
  • [10] Y. Du, X. Zhang, M. Leonori, P. Balatti, J. Jin, Q. Wang, and A. Ajoudani, “Bi-directional human-robot handover using a novel supernumerary robotic system,” in 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO).   IEEE, 2023, pp. 153–158.
  • [11] M. Leonori, M. Lorenzini, L. Fortini, J. M. Gandarias, and A. Ajoudani, “The bridge between xsens motion-capture and robot operating system (ros): Enabling robots with online 3d human motion tracking,” arXiv preprint arXiv:2306.17738, 2023.
  • [12] X. Zhang, J. Liu, Y. Tong, Y. Liu, and Z. Ju, “Attitude decoupling control of semifloating space robots using time-delay estimation and supertwisting control,” IEEE Transactions on Aerospace and Electronic Systems, vol. 57, no. 6, pp. 4280–4295, 2021.
  • [13] O. Kanoun, “Real-time prioritized kinematic control under inequality constraints for redundant manipulators,” in Robotics: Science and Systems, vol. 7, 2012, p. 145.
  • [14] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control, ser. Advanced Textbooks in Control and Signal Processing.   Springer London, 2010.
  • [15] J. Yuan, “Closed-loop manipulator control using quaternion feedback,” IEEE Journal on Robotics and Automation, vol. 4, no. 4, pp. 434–440, 1988.
  • [16] A. Colomé and C. Torras, “Closed-loop inverse kinematics for redundant robots: Comparative assessment and two enhancements,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 2, pp. 944–955, 2014.
  • [17] D. Raunhardt and R. Boulic, “Progressive clamping,” in Proceedings 2007 IEEE International Conference on Robotics and Automation.   IEEE, 2007, pp. 4414–4419.