
Object-Aware Impedance Control for Human-Robot Collaborative Task with Online Object Parameter Estimation

Jinseong Park1, Yong-Sik Shin1∗ and Sanghyun Kim2∗

*This research was supported by the National Research Council of Science & Technology as part of the project entitled “Development of Core Machinery Technologies for Autonomous Operation and Manufacturing” (NK242G) and the Industrial Strategic Technology Development Program (no. 20018745) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). (Corresponding authors: Yong-Sik Shin and Sanghyun Kim)
1J. Park and Y.-S. Shin are senior researchers at the Korea Institute of Machinery & Materials, Daejeon, South Korea. [email protected]; [email protected]
2S. Kim is with the Department of Mechanical Engineering, Kyung Hee University, Yongin, South Korea. [email protected]
Abstract

Physical human-robot interactions (pHRIs) can improve robot autonomy and reduce physical demands on humans. In this paper, we consider a collaborative task with a considerably long object and no prior knowledge of the object’s parameters. An integrated control framework with an online object parameter estimator and a Cartesian object-aware impedance controller is proposed to realize complicated scenarios. During the transportation task, the object parameters are estimated online while a robot and human lift an object. The perturbation motion is incorporated into the null space of the desired trajectory to enhance the estimator accuracy. An object-aware impedance controller is designed using the real-time estimation results to effectively transmit the intended human motion to the robot through the object. Experimental demonstrations of collaborative tasks, including object transportation and assembly tasks, are implemented to show the effectiveness of our proposed method.

Note to Practitioners

This research was motivated by the need to facilitate collaboration between humans and robots in handling heavy or considerably long objects, which can be challenging for a single operator. This paper proposes a physical human-robot interaction (pHRI) approach that enables physical interaction between the human and the robot through the object, without additional sensors such as cameras or human-machine interfaces. To achieve the collaborative task, separating the intended human motion from the object dynamics is essential. Most real-world situations involve uncertain or unknown objects, making it difficult to assume prior knowledge of the target object's properties. Therefore, this research introduces a real-time approach for estimating the dynamic parameters of unknown objects during collaboration, without requiring additional operational time. Consequently, an object-aware impedance controller can be designed by incorporating the object dynamics in real time. Collaborative transportation and assembly tasks are demonstrated with a whole-body-controlled mobile manipulator and a 1.5 m long object. In future research, we will focus on enhancing the estimation precision through the use of physics-informed neural network methods.

Index Terms:
Physical human-robot interaction, Online object parameter estimation, Object-aware control, Cartesian impedance control, Mobile manipulation
Figure 1: The human-robot collaborative task scenario. The coordinates of the robot and human are represented: {w}:world, {h}:human, {o}:object, {r}:end effector of the robot, {m}:mobile base.

I Introduction

Robots have been increasingly used in human environments for repetitive and simple operations, owing to notable improvements in the safety and performance of hardware and controllers. Nevertheless, their applicability is still limited in industrial and service-related scenarios in which flexibility is needed, including manufacturing, logistics, and construction. On the other hand, robots have several advantages over humans, such as high precision in repetitive tasks and the absence of fatigue during payload lifting. Therefore, the development of robotic technologies for physical interactions between humans and robots is important to improve the autonomy of robots with human decision-making capabilities and to reduce the excessive physical demand on humans during payload lifting [1, 2, 3, 4, 5, 6, 7]. In physical human-robot interactions (pHRIs), two or more agents are physically coupled and interact and communicate to perform tasks. pHRIs enable the collaborative transportation of heavy or bulky objects and manipulations involving multiple grasping points, which would be difficult to achieve for either humans or robots alone [9, 11].

In this paper, we focus on human-robot collaborative transportation and assembly of bulky objects without prior knowledge of the object’s parameters. Due to the considerable length of the target object, the human cannot directly contact the robot in the leader-follower collaboration task. Consequently, the intended human motion can be transmitted to the robot only indirectly through the object. Additionally, due to the lack of knowledge regarding the object’s parameters, an online object parameter estimation process is necessary to accurately determine the intended human motion.

Human-robot collaborative transportation tasks have been studied in various applications [12, 13, 14]. These tasks are mainly performed under a leader-follower architecture, in which the desired trajectory of the robot is explicitly determined based on the estimated intended human motion. The effects of dynamic role allocation on physical robotic assistants have been investigated, and the load distribution between the partners was efficiently determined using redundancies of the control inputs [4, 13]. However, only a planar task was performed, without considering the inertial effects of the object. In pHRI-based cooperative object manipulation tasks, estimator-based intention recognition was presented to reduce human effort during collaborative tasks [7, 8]. However, the demonstration with a human involved direct manipulation of the robot, and the geometry of the object was not considered. Online adaptation of the admittance control parameters reduced human physical effort during collaborative transportation tasks [14]. Human intention was inferred by monitoring the magnitude and direction of the desired acceleration and the present velocity. However, the inertial parameters and shapes of the objects were not considered.

The use of external sensors has also been studied for collaborative tasks with physical interactions. Human-robot collaborative transportation tasks with unknown deformable objects have been performed by utilizing both haptic and kinematic information to estimate the intended human motion [2]. A low-cost cover made of soft, large-area capacitive tactile sensors was placed around the platform base to measure the interaction forces applied to the robot base, and an expanded Cartesian impedance controller and follow-me controllers were implemented [5]. A human-robot object-lifting task was presented in which biceps activity, measured via surface electromyography (EMG), was used to estimate changes in the user's hand height [15]. Invariance control was also presented for safe human-robot interactions in dynamic environments by enforcing dynamic constraints even in the presence of external forces [16]. Safety-aware human-robot collaborative transportation with aerial robots carrying cable-suspended payloads has also been demonstrated, in which a safe distance is maintained between the human operator and each agent without affecting performance [17]. The use of external sensors enables the direct acquisition of motion and wrench information for the human participant, enhancing the performance and stability of collaborative tasks. However, in practice, it may not always be feasible to equip systems with additional sensors. Therefore, within the context of collaborative tasks, the estimation of the intended human motion even in the absence of such sensors is a noteworthy approach, notwithstanding the acceptance of estimation inaccuracies.

Since the dynamic object parameters are mostly unknown in unstructured environments, online identification strategies for estimating the object dynamics should be developed to accurately understand the intended human motion during collaborative tasks. Bias in the object dynamics leads to inaccurately calculated robot wrenches, which may disturb the human during interactions and affect the trust and interaction behavior of the human partner [10, 13, 4]. An online payload identification approach was presented based on a momentum observer using proprioceptive sensors, with a calibration scheme that compensates for the static offset between the virtual and real object [18]. However, directly applying this method to collaborative tasks is not straightforward due to the presence of a dynamically coupled human partner. Alternatively, in some studies, only the weight of the object was estimated during the lifting phase of robot-only [11] or human-robot collaborative tasks [19]. However, other dynamic parameters, such as the center of mass and moments of inertia, are also needed depending on the task. An online grasp pose and object parameter estimation strategy for pHRI was also presented [10, 9]. This strategy aims to induce identification-relevant motions that minimally disturb the desired human motion while preventing undesired interaction torques with the human. The use of probing signals for precise estimation and online estimation strategies is similar to our objective; however, these strategies require measurements of the interaction torques for both the human and the robot and tracking of human motion, which may not always be possible in real-life scenarios. Indeed, without acquiring wrench and motion information on the human side, it can be difficult to precisely estimate the object's parameters. However, the objective of this study is to accurately determine the intended human motion while the human and robot are collaboratively lifting the object. Thus, it is adequate to estimate the effective object parameters, which correspond to the load borne by the robot during the collaboration task in this specific configuration. Consequently, in this study, we estimate the effective object parameters without the use of additional sensors on the human side. Note that researchers have mainly adopted estimators based on least squares methods ([9, 20, 18]) or Kalman filter-based methods ([21, 22]) for parameter estimation. We adopt the extended Kalman filter (EKF) because the estimation system varies over time.

In this paper, a Cartesian object-aware impedance controller with online object parameter estimation is proposed. Object parameter estimation enables a precise understanding of the intended human motion during human-robot collaborative tasks. Furthermore, to effectively transmit the intended human motion to the robot through long objects, the desired inertia shaping of the impedance is performed by considering the geometry of the object.

The contributions of this paper can be summarized as follows:

  • The effective object parameters are estimated online by inducing identification motions in the null space of the Cartesian trajectory, without the use of additional sensors on the human side, under load-sharing conditions.

  • A Cartesian object-aware impedance controller is designed that effectively transmits the intended human motion to the robot through the object by reflecting the object dynamics to the impedance relations.

  • Experimental demonstrations of the collaborative transportation and assembly of a long object using a mobile manipulator are presented.

The remainder of the paper is organized as follows. In Section II, an online object parameter estimation method is presented by deriving the object dynamic equations. Based on a whole-body control architecture, a Cartesian object-aware impedance controller is designed in Section III. Experimental demonstrations and results are presented in Section IV, and the conclusions are discussed in Section V.

II Online Object Parameter Estimation

II-A Object Dynamics

We investigate a human-robot collaborative task scenario involving an elongated object in which parameters such as the mass, center of mass, and moment of inertia are not provided in advance.

The dynamics of the object are as follows [9, 18]:

$\boldsymbol{M}_o\ddot{\boldsymbol{x}}_o+\boldsymbol{C}_o(\boldsymbol{x}_o,\dot{\boldsymbol{x}}_o)=\boldsymbol{F}_o$ (2)

where

$\boldsymbol{M}_o=\begin{bmatrix}m_o\boldsymbol{I}_3&\boldsymbol{0}_3\\\boldsymbol{0}_3&\boldsymbol{J}_o\end{bmatrix},\qquad \boldsymbol{C}_o=\begin{bmatrix}-m_o\boldsymbol{g}\\\boldsymbol{\omega}_o\times\boldsymbol{J}_o\boldsymbol{\omega}_o\end{bmatrix}$

$\boldsymbol{x}_o=[\boldsymbol{p}_o^T,\boldsymbol{q}_o^T]^T\in SE(3)$ is the state of the object expressed in the world frame {w}, as illustrated in Fig. 1, and $\boldsymbol{p}_o,\boldsymbol{q}_o$ represent the translation and orientation, respectively. Let $\dot{\boldsymbol{x}}_o=[\boldsymbol{v}_o^T,\boldsymbol{\omega}_o^T]^T$ and $\ddot{\boldsymbol{x}}_o=[\dot{\boldsymbol{v}}_o^T,\dot{\boldsymbol{\omega}}_o^T]^T$ be the twist and acceleration vectors, respectively. $\boldsymbol{F}_o=[\boldsymbol{f}_o^T,\boldsymbol{t}_o^T]^T\in\mathbb{R}^6$ is the total object wrench, where $\boldsymbol{f}_o\in\mathbb{R}^3$ is the force and $\boldsymbol{t}_o\in\mathbb{R}^3$ is the torque acting on the object. $\boldsymbol{M}_o\in\mathbb{R}^{6\times 6}$ and $\boldsymbol{C}_o\in\mathbb{R}^6$ are the inertia matrix and the nonlinear vector containing Coriolis and gravity effects, respectively.

Because human and robot wrenches both act on the object, the relative kinematics can be represented as follows:

$\boldsymbol{F}_o=\boldsymbol{G}\begin{bmatrix}\boldsymbol{F}_r\\\boldsymbol{F}_h\end{bmatrix}$ (4)

where $\boldsymbol{F}_r=[\boldsymbol{f}_r^T,\boldsymbol{t}_r^T]^T\in\mathbb{R}^6$ and $\boldsymbol{F}_h=[\boldsymbol{f}_h^T,\boldsymbol{t}_h^T]^T\in\mathbb{R}^6$ are the robot and human wrenches at each grasping point, respectively. $\boldsymbol{G}$ represents the kinematic relations of the robot and human frames with respect to the object frame and is defined as follows:

$\boldsymbol{G}=\begin{bmatrix}{}^o\boldsymbol{G}_r&{}^o\boldsymbol{G}_h\end{bmatrix}=\begin{bmatrix}\boldsymbol{I}_3&\boldsymbol{0}_3&\boldsymbol{I}_3&\boldsymbol{0}_3\\|{}^o\boldsymbol{p}_r|_\times&\boldsymbol{I}_3&|{}^o\boldsymbol{p}_h|_\times&\boldsymbol{I}_3\end{bmatrix}$ (6)

where ${}^o\boldsymbol{p}_r\in\mathbb{R}^3$ and ${}^o\boldsymbol{p}_h\in\mathbb{R}^3$ are the vectors from the object frame to the robot and human grasp points, respectively. $|{}^o\boldsymbol{p}_r|_\times\in\mathbb{R}^{3\times 3}$ denotes the skew-symmetric matrix representing the cross product with the vector ${}^o\boldsymbol{p}_r$. ${}^o\boldsymbol{G}_r\in\mathbb{R}^{6\times 6}$ and ${}^o\boldsymbol{G}_h\in\mathbb{R}^{6\times 6}$ are the partial grasp matrices from the object to the robot and human grasp points, respectively.
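As a concrete illustration, the following sketch (in Python with NumPy) assembles the grasp matrix of (6) from the skew-symmetric operator; the grasp-point coordinates are hypothetical placeholders, not values from our setup:

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix |p|_x such that |p|_x @ v = p x v."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def partial_grasp_matrix(p):
    """Partial grasp matrix of (6) for a grasp point at position p,
    expressed in the object frame."""
    G = np.eye(6)
    G[3:, :3] = skew(p)
    return G

# Illustrative geometry: robot and human grasp the two ends of a 1.5 m
# object whose frame sits at its midpoint (placeholder values).
o_p_r = np.array([0.0, 0.75, 0.0])
o_p_h = np.array([0.0, -0.75, 0.0])
G = np.hstack([partial_grasp_matrix(o_p_r), partial_grasp_matrix(o_p_h)])  # 6 x 12
```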

The kinematic relations between the object frame and robot frame can be represented in terms of the velocity:

$\begin{bmatrix}\boldsymbol{v}_o\\\boldsymbol{\omega}_o\end{bmatrix}=\begin{bmatrix}\boldsymbol{v}_r+\boldsymbol{\omega}_r\times{}^r\boldsymbol{p}_o\\\boldsymbol{\omega}_r\end{bmatrix}=\begin{bmatrix}\boldsymbol{I}_3&-|{}^r\boldsymbol{p}_o|_\times\\\boldsymbol{0}_3&\boldsymbol{I}_3\end{bmatrix}\begin{bmatrix}\boldsymbol{v}_r\\\boldsymbol{\omega}_r\end{bmatrix}$ (8)

They can also be represented in terms of the acceleration as follows:

$\begin{bmatrix}\dot{\boldsymbol{v}}_o\\\dot{\boldsymbol{\omega}}_o\end{bmatrix}=\begin{bmatrix}\dot{\boldsymbol{v}}_r+\dot{\boldsymbol{\omega}}_r\times{}^r\boldsymbol{p}_o+\boldsymbol{\omega}_r\times(\boldsymbol{\omega}_r\times{}^r\boldsymbol{p}_o)\\\dot{\boldsymbol{\omega}}_r\end{bmatrix}$ (10)

According to (4), $\boldsymbol{F}_o$ can be represented by ${}^r\boldsymbol{F}_o$ as follows:

$\boldsymbol{F}_o={}^o\boldsymbol{G}_r\,{}^r\boldsymbol{F}_o$ (12)

where

${}^o\boldsymbol{G}_r=\begin{bmatrix}\boldsymbol{I}_3&\boldsymbol{0}_3\\|{}^o\boldsymbol{p}_r|_\times&\boldsymbol{I}_3\end{bmatrix}$

${}^r\boldsymbol{F}_o=\begin{bmatrix}{}^r\boldsymbol{f}_o\\{}^r\boldsymbol{t}_o\end{bmatrix}=\begin{bmatrix}\boldsymbol{f}_r+\boldsymbol{f}_h\\\boldsymbol{t}_r+{}^r\boldsymbol{p}_h\times\boldsymbol{f}_h+\boldsymbol{t}_h\end{bmatrix}$

The variable ${}^r\boldsymbol{F}_o$ denotes the wrench exerted on the object with respect to the robot frame and can be directly estimated based on the robot's sensory feedback and measurement capabilities.

By substituting (10) and (12) into (2) and manipulating the result with the parallel axis theorem, the object dynamics (2) can be represented with respect to the robot frame {r} as follows:

$\begin{bmatrix}m_o\{\dot{\boldsymbol{v}}_r+\dot{\boldsymbol{\omega}}_o\times{}^r\boldsymbol{p}_o+\boldsymbol{\omega}_o\times(\boldsymbol{\omega}_o\times{}^r\boldsymbol{p}_o)\}-m_o\boldsymbol{g}\\{}^r\boldsymbol{J}_o\dot{\boldsymbol{\omega}}_o+\boldsymbol{\omega}_o\times{}^r\boldsymbol{J}_o\boldsymbol{\omega}_o+m_o\,{}^r\boldsymbol{p}_o\times(\dot{\boldsymbol{v}}_r-\boldsymbol{g})\end{bmatrix}=\begin{bmatrix}{}^r\boldsymbol{f}_o\\{}^r\boldsymbol{t}_o\end{bmatrix}$ (15)

where ${}^r\boldsymbol{J}_o$ is the object inertia matrix expressed in the robot frame.

II-B Unknown Object Parameter Estimation

For human-robot collaboration tasks, obtaining human motion and wrench measurements requires that the human participant be equipped with additional sensors for motion tracking or force/torque feedback [9]. However, in most real-world scenarios, it is challenging to use additional sensors, leading to limited available human information. Therefore, we estimate the object parameters in a scenario without human motion information or wrench measurements. In this study, the objective of the parameter estimation process is not to precisely obtain the real parameters of the object. Instead, the aim is to accurately recognize the intended human motion by compensating for the effects of the object dynamics, thereby enabling safe and precise human-robot collaboration. Consequently, in the proposed approach, we focus on estimating the effective object parameters that the robot needs during the collaborative task.

According to (4) and (12), the wrench ${}^r\boldsymbol{F}_o$ can be represented as follows:

${}^r\boldsymbol{F}_o=\begin{bmatrix}\boldsymbol{I}_6&{}^r\boldsymbol{G}_h\end{bmatrix}\begin{bmatrix}\boldsymbol{F}_r\\\boldsymbol{F}_h\end{bmatrix}=\boldsymbol{F}_r+{}^r\boldsymbol{G}_h\boldsymbol{F}_h$ (17)

where

${}^r\boldsymbol{G}_h=\begin{bmatrix}\boldsymbol{I}_3&\boldsymbol{0}_3\\|{}^r\boldsymbol{p}_h|_\times&\boldsymbol{I}_3\end{bmatrix}$

As previously mentioned, since we do not account for the human wrench during the estimation process, $\boldsymbol{F}_h$ is set to zero as follows:

${}^r\boldsymbol{F}_o=\boldsymbol{F}_r$ (19)

By substituting (19) into (15), the object dynamics affecting the robot can be represented as follows:

$\boldsymbol{F}_r=\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\boldsymbol{\phi}_{eff}$ (21)

where $\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})$ is a data matrix containing the linear and angular accelerations and the angular velocities [18]. The parameter vector $\boldsymbol{\phi}\in\mathbb{R}^{n_\phi}$, $n_\phi=10$, consists of the object's dynamic parameters and is defined as follows:

$\boldsymbol{\phi}=\begin{bmatrix}m_o&m_o{}^r\boldsymbol{p}_o^T&{}^r\boldsymbol{J}_{o,vec}^T\end{bmatrix}^T$ (23)

where $m_o$ is the object mass, ${}^r\boldsymbol{p}_o$ is the position of the object's center of mass expressed in the robot frame, and ${}^r\boldsymbol{J}_{o,vec}$ collects the moments of inertia, with ${}^r\boldsymbol{J}_{o,vec}=\begin{bmatrix}{}^rJ_{o,xx}&{}^rJ_{o,xy}&{}^rJ_{o,xz}&{}^rJ_{o,yy}&{}^rJ_{o,yz}&{}^rJ_{o,zz}\end{bmatrix}^T$.
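For reference, a minimal sketch of one standard way to build the data matrix $\boldsymbol{A}$ so that (15) reads as $\boldsymbol{F}_r=\boldsymbol{A}\boldsymbol{\phi}_{eff}$ follows, reusing the `skew` helper from the earlier sketch. The paper does not spell out this construction, so the layout below is our assumption of the usual rigid-body regressor:

```python
import numpy as np

def L_mat(w):
    """Maps the inertia vector [Jxx, Jxy, Jxz, Jyy, Jyz, Jzz] to J @ w."""
    wx, wy, wz = w
    return np.array([[wx, wy, wz, 0.0, 0.0, 0.0],
                     [0.0, wx, 0.0, wy, wz, 0.0],
                     [0.0, 0.0, wx, 0.0, wy, wz]])

def data_matrix(a, dw, w, g):
    """Data matrix A(a, dw, w, g) of (21), so that F_r = A @ phi with
    phi = [m, m * p, J_vec] as in (23). a: linear acceleration of the
    robot frame, dw/w: angular acceleration/velocity, g: gravity."""
    A = np.zeros((6, 10))
    A[:3, 0] = a - g                              # force: m * (a - g)
    A[:3, 1:4] = skew(dw) + skew(w) @ skew(w)     # force: terms in m * p
    A[3:, 1:4] = -skew(a - g)                     # torque: (m p) x (a - g)
    A[3:, 4:] = L_mat(dw) + skew(w) @ L_mat(w)    # torque: J dw + w x (J w)
    return A
```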

The extended Kalman filter (EKF) is adopted as the estimator because $\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})$ is time varying. With the filter state $\hat{\boldsymbol{x}}(k)=\hat{\boldsymbol{\phi}}_{eff}$, the prediction step of the EKF is formulated as follows:

$\hat{\boldsymbol{\phi}}_{eff}(k|k-1)=\hat{\boldsymbol{\phi}}_{eff}(k-1)$ (25)
$\boldsymbol{P}(k|k-1)=\boldsymbol{P}(k-1)+\boldsymbol{Q}(k)$ (27)

where $\boldsymbol{Q}\in\mathbb{R}^{n_\phi\times n_\phi}$ is the process noise covariance matrix. Then, the correction step is formulated with the effective object dynamics as follows:

$\boldsymbol{K}(k)=\boldsymbol{P}(k|k-1)\boldsymbol{H}^T(k)\left[\boldsymbol{H}(k)\boldsymbol{P}(k|k-1)\boldsymbol{H}^T(k)+\boldsymbol{R}(k)\right]^{-1}$ (30)
$\hat{\boldsymbol{\phi}}_{eff}(k)=\hat{\boldsymbol{\phi}}_{eff}(k|k-1)+\boldsymbol{K}(k)\{\bar{\boldsymbol{F}}_r(k)-\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\hat{\boldsymbol{\phi}}_{eff}(k|k-1)\}$ (33)
$\boldsymbol{P}(k)=(\boldsymbol{I}-\boldsymbol{K}(k)\boldsymbol{H}(k))\boldsymbol{P}(k|k-1)$ (35)

where $\boldsymbol{R}\in\mathbb{R}^{6\times 6}$ is the measurement noise covariance matrix, $\boldsymbol{P}$ is the estimation error covariance matrix, and $\boldsymbol{H}(k)=\frac{\partial}{\partial\boldsymbol{x}}\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\hat{\boldsymbol{\phi}}_{eff}\big|_{\hat{\boldsymbol{x}}(k|k-1)}$ is obtained by backward differentiation. $\bar{\boldsymbol{F}}_r(k)$ is directly measured based on the robot sensory feedback, and $\hat{\boldsymbol{\phi}}_{eff}$ is the value estimated by this filter.
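The filter reduces to a compact recursion; below is a minimal sketch continuing the NumPy snippets above. Since the measurement model (21) is linear in $\boldsymbol{\phi}_{eff}$, the Jacobian in this sketch simply equals the current data matrix $\boldsymbol{A}$; the initial values and noise covariances are the defaults reported in Section IV-A:

```python
import numpy as np

class EffectiveParamEKF:
    """Minimal sketch of (25)-(35): a random-walk parameter model with
    the measured wrench F_r = A @ phi_eff as the observation."""
    def __init__(self, n=10, Q=None, R=None):
        self.phi = np.zeros(n)              # phi_eff(0) = 0_10 (Sec. IV-A)
        self.P = np.eye(n)                  # P(0) = I_10
        self.Q = Q if Q is not None else 1e-5 * np.eye(n)
        self.R = R if R is not None else np.eye(6)

    def step(self, A, F_r_meas):
        # Prediction (25), (27): parameters modeled as constant.
        self.P = self.P + self.Q
        # Correction (30)-(35); H = A because the model is linear in phi.
        H = A
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.phi = self.phi + K @ (F_r_meas - A @ self.phi)
        self.P = (np.eye(len(self.phi)) - K @ H) @ self.P
        return self.phi
```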

With the estimated effective object dynamics, the real object dynamics can be expressed according to (17) as follows:

$\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\boldsymbol{\phi}_{eff}+{}^r\boldsymbol{G}_h\boldsymbol{F}_h=\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\boldsymbol{\phi}_{real}$ (37)

Here, 𝑭h\boldsymbol{F}_{h} denotes the intended human motion transmitted to the robot through 𝑮hr{}^{r}\boldsymbol{G}_{h}. Therefore, the real object dynamics are expressed as a sum of the effective object dynamics and the intended human motion projected by the object geometry.

III Object-Aware Impedance Controller

An object-aware impedance controller is designed based on the estimated effective object dynamics for human-robot collaborative mobile manipulation tasks.

III-A Dynamics of the Mobile Manipulator

The dynamics of the nonholonomic mobile manipulator are briefly reviewed [23]. The mobile manipulator consists of a four-wheel differential-drive mobile base and a manipulator, as depicted in Fig. 1. The generalized coordinates include the 3-degree-of-freedom (DOF) rigid body motion ($\boldsymbol{\zeta}$) due to the presence of a floating base, the wheel angular motion ($\boldsymbol{q}_b$), and the arm joint motion ($\boldsymbol{q}_m$), and are written as follows:

$\boldsymbol{q}=\begin{bmatrix}\boldsymbol{\zeta}^T&\boldsymbol{q}_b^T&\boldsymbol{q}_m^T\end{bmatrix}^T$ (39)

where $\boldsymbol{\zeta}=\begin{bmatrix}x_c&y_c&\phi\end{bmatrix}^T\in\mathbb{R}^3$, $\boldsymbol{q}_b=\begin{bmatrix}\theta_r&\theta_l\end{bmatrix}^T\in\mathbb{R}^2$, and $\boldsymbol{q}_m\in\mathbb{R}^{n_m}$. Note that $\boldsymbol{q}_b$ has only two components despite the presence of four wheels because the left and right pairs of wheels are dependent on each other. As the manipulator used in our system has 7-DOF motion, the total DOF of the generalized coordinates is $n=n_{\boldsymbol{\zeta}}+n_b+n_m=3+2+7=12$.

The differential-drive mobile base is characterized by two nonholonomic constraints: no lateral slip and pure rolling motion. The constraint matrix 𝑨c(𝒒)\boldsymbol{A}_{c}(\boldsymbol{q}) can be represented as follows:

$\boldsymbol{A}_c(\boldsymbol{q})\dot{\boldsymbol{q}}=0$ (41)

where $\boldsymbol{A}_c(\boldsymbol{q})\in\mathbb{R}^{n_{\boldsymbol{\zeta}}\times n}$. The transformation matrix $\boldsymbol{S}(\boldsymbol{q})$ lies in the null space of the constraint matrix $\boldsymbol{A}_c(\boldsymbol{q})$ [24].

$\dot{\boldsymbol{q}}=\boldsymbol{S}(\boldsymbol{q})\boldsymbol{\eta}$ (43)

where $\boldsymbol{S}(\boldsymbol{q})\in\mathbb{R}^{n\times(n_b+n_m)}$ satisfies $\boldsymbol{A}_c(\boldsymbol{q})\boldsymbol{S}(\boldsymbol{q})=0$ and $\boldsymbol{\eta}=\begin{bmatrix}\dot{\boldsymbol{q}}_b^T&\dot{\boldsymbol{q}}_m^T\end{bmatrix}^T\in\mathbb{R}^{n_b+n_m}$ is the reduced velocity vector of the generalized coordinates of the robot. The Jacobian matrix between the Cartesian velocity space and the reduced joint velocity space can be derived as follows:

$\boldsymbol{J}(\boldsymbol{q})=\tilde{\boldsymbol{J}}(\boldsymbol{q})\boldsymbol{S}(\boldsymbol{q})$ (45)

where $\boldsymbol{J}\in\mathbb{R}^{6\times(n_b+n_m)}$. The nonholonomic constraints can be included in the equation of motion of the mobile manipulator by introducing a Lagrangian multiplier as follows:

$\tilde{\boldsymbol{M}}(\boldsymbol{q})\ddot{\boldsymbol{q}}+\tilde{\boldsymbol{C}}(\boldsymbol{q},\dot{\boldsymbol{q}})\dot{\boldsymbol{q}}+\tilde{\boldsymbol{g}}(\boldsymbol{q})=\tilde{\boldsymbol{\tau}}_{input}+\tilde{\boldsymbol{\tau}}_{ext}+\boldsymbol{A}_c^T(\boldsymbol{q})\boldsymbol{\lambda}$ (47)

where $\tilde{\boldsymbol{\tau}}_{input}\in\mathbb{R}^n$ and $\tilde{\boldsymbol{\tau}}_{ext}\in\mathbb{R}^n$ are the joint torque input vector and the external disturbance vector, respectively. $\tilde{\boldsymbol{M}}(\boldsymbol{q})\in\mathbb{R}^{n\times n}$, $\tilde{\boldsymbol{C}}(\boldsymbol{q},\dot{\boldsymbol{q}})\in\mathbb{R}^{n\times n}$, and $\tilde{\boldsymbol{g}}(\boldsymbol{q})\in\mathbb{R}^n$ are the inertia matrix, the Coriolis and centrifugal matrix, and the gravity vector, respectively. $\boldsymbol{\lambda}\in\mathbb{R}^{n_{\boldsymbol{\zeta}}}$ is the Lagrangian multiplier for the nonholonomic constraints.

By substituting (43) and its time derivative into (47) and multiplying by 𝑺T(𝒒)\boldsymbol{S}^{T}(\boldsymbol{q}) on both sides of the equation, the nonconstrained equation of motion can be reformulated as follows:

$\boldsymbol{M}(\boldsymbol{q})\dot{\boldsymbol{\eta}}+\boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\boldsymbol{\eta}+\boldsymbol{g}(\boldsymbol{q})=\boldsymbol{\tau}_{input}+\boldsymbol{\tau}_{ext}$ (49)

where

$\boldsymbol{M}(\boldsymbol{q})=\boldsymbol{S}^T(\boldsymbol{q})\tilde{\boldsymbol{M}}(\boldsymbol{q})\boldsymbol{S}(\boldsymbol{q})$

$\boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})=\boldsymbol{S}^T(\boldsymbol{q})\left[\tilde{\boldsymbol{M}}(\boldsymbol{q})\dot{\boldsymbol{S}}(\boldsymbol{q})+\tilde{\boldsymbol{C}}(\boldsymbol{q},\dot{\boldsymbol{q}})\boldsymbol{S}(\boldsymbol{q})\right]$

$\boldsymbol{g}(\boldsymbol{q})=\boldsymbol{S}^T(\boldsymbol{q})\tilde{\boldsymbol{g}}(\boldsymbol{q})$

$\boldsymbol{\tau}_{input}=\boldsymbol{S}^T(\boldsymbol{q})\tilde{\boldsymbol{\tau}}_{input}$

$\boldsymbol{\tau}_{ext}=\boldsymbol{S}^T(\boldsymbol{q})\tilde{\boldsymbol{\tau}}_{ext}$

Consequently, the 3-DOF rigid body motion is eliminated from the state vector of the reformulated equation, enabling the use of the nonholonomic constraints.
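As a concrete illustration of the transformation in (43), the following sketch (continuing the NumPy snippets above) assembles $\boldsymbol{S}(\boldsymbol{q})$ for a differential-drive base with a 7-DOF arm; the wheel radius and half track width are illustrative placeholders, not our platform's calibrated values:

```python
import numpy as np

def S_matrix(phi, r=0.165, d=0.28, n_m=7):
    """Transformation S(q) of (43) for a differential-drive base with an
    n_m-joint arm. q = [x_c, y_c, phi, th_r, th_l, q_m] and
    eta = [dth_r, dth_l, dq_m]; r: wheel radius, d: half track width
    (placeholder values)."""
    n = 3 + 2 + n_m
    S = np.zeros((n, 2 + n_m))
    c, s = np.cos(phi), np.sin(phi)
    S[0, 0] = S[0, 1] = 0.5 * r * c               # dx_c from wheel rates
    S[1, 0] = S[1, 1] = 0.5 * r * s               # dy_c (no lateral slip)
    S[2, 0], S[2, 1] = r / (2 * d), -r / (2 * d)  # dphi (pure rolling)
    S[3, 0] = 1.0                                 # wheel coordinates
    S[4, 1] = 1.0                                 # pass through directly
    S[5:, 2:] = np.eye(n_m)                       # arm joints pass through
    return S
```

By construction, the columns of this matrix satisfy $\boldsymbol{A}_c(\boldsymbol{q})\boldsymbol{S}(\boldsymbol{q})=0$ for the no-lateral-slip and pure-rolling constraints of (41).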

III-B QP-Based Whole-Body Controller

Due to the redundancy of the whole-body robot, quadratic programming (QP) optimization can be introduced to develop a multipriority control framework. The redundancy is due to the difference between the number of joints and the number of Cartesian DOFs, and the Jacobian can be introduced to address this redundancy. The QP optimization can be formulated as follows:

$\begin{aligned}\min_{\dot{\boldsymbol{\eta}}}\;\;& w_j\lVert\dot{\boldsymbol{\eta}}-\dot{\boldsymbol{\eta}}_{d1}\rVert_2+w_t\lVert\ddot{\boldsymbol{x}}_{d1}-\boldsymbol{J}_1\dot{\boldsymbol{\eta}}-\dot{\boldsymbol{J}}_1\boldsymbol{\eta}\rVert_2\\\text{s.t.}\;\;&\dot{\boldsymbol{\eta}}=\dot{\boldsymbol{\eta}}_{d0}\\&\ddot{\boldsymbol{x}}_{d0}=\boldsymbol{J}_0\dot{\boldsymbol{\eta}}+\dot{\boldsymbol{J}}_0\boldsymbol{\eta}\\&\underline{\dot{\boldsymbol{\eta}}}_l<\dot{\boldsymbol{\eta}}<\bar{\dot{\boldsymbol{\eta}}}_u\end{aligned}$ (54)

where $w_j$ and $w_t$ are the weighting factors in the cost function. $\dot{\boldsymbol{\eta}}_{d1}$ and $\ddot{\boldsymbol{x}}_{d1}$ are the desired joint and Cartesian space accelerations introduced in the cost function, and $\dot{\boldsymbol{\eta}}_{d0}$ and $\ddot{\boldsymbol{x}}_{d0}$ are the desired accelerations introduced in the constraints of the QP formulation. $\underline{\dot{\boldsymbol{\eta}}}_l$ and $\bar{\dot{\boldsymbol{\eta}}}_u$ are the lower and upper limits of the joint acceleration, respectively.

If a solution of (54) exists, it must satisfy the equality and inequality constraints. On the other hand, the goal of the optimization problem is to find a solution that minimizes the resultant cost of the loss function under the given constraints; consequently, it is not necessary for each term in the loss function to converge to zero. From this perspective, the tasks designated by the constraints must be satisfied and have the highest priority (priority = 0), while the tasks in the loss function have lower priority (priority = 1). In summary, the QP optimization process can be formulated by selecting tasks from the candidates, such as priority 0 tasks ($\dot{\boldsymbol{\eta}}_{d0}$, $\ddot{\boldsymbol{x}}_{d0}$, $\underline{\dot{\boldsymbol{\eta}}}_l$, $\bar{\dot{\boldsymbol{\eta}}}_u$) and/or priority 1 tasks ($\dot{\boldsymbol{\eta}}_{d1}$, $\ddot{\boldsymbol{x}}_{d1}$).
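To illustrate this priority structure, the sketch below solves a simplified reading of (54), with the priority-0 Cartesian task as an equality constraint and the priority-1 terms in a squared-norm cost; the joint-acceleration limits and the joint-space equality are omitted for brevity, and a full QP solver (e.g., an off-the-shelf library) would handle them in practice:

```python
import numpy as np

def solve_priority_qp(J0, dJ0_eta, xdd_d0, J1, dJ1_eta, xdd_d1, eta_d1,
                      w_j=1.0, w_t=1.0):
    """Equality-constrained version of (54). Decision variable: the
    reduced joint acceleration deta. Priority 0 (constraint):
    J0 @ deta = xdd_d0 - dJ0_eta. Priority 1 (cost):
    w_j ||deta - eta_d1||^2 + w_t ||xdd_d1 - J1 @ deta - dJ1_eta||^2,
    where dJ0_eta, dJ1_eta are the precomputed products dJ @ eta."""
    n = J0.shape[1]
    # Quadratic cost 0.5 deta^T H deta + g^T deta from the two LS terms.
    H = 2.0 * (w_j * np.eye(n) + w_t * J1.T @ J1)
    g = -2.0 * (w_j * eta_d1 + w_t * J1.T @ (xdd_d1 - dJ1_eta))
    A_eq, b_eq = J0, xdd_d0 - dJ0_eta
    m = A_eq.shape[0]
    # Solve the KKT system of the equality-constrained QP.
    KKT = np.block([[H, A_eq.T], [A_eq, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b_eq])
    return np.linalg.solve(KKT, rhs)[:n]
```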

In the joint space, the desired acceleration can be obtained through a PD controller as follows:

$\dot{\boldsymbol{\eta}}_{di}=\begin{bmatrix}\ddot{\boldsymbol{q}}_b\\\ddot{\boldsymbol{q}}_m\end{bmatrix}-\boldsymbol{K}_{P\boldsymbol{\eta}}\left(\begin{bmatrix}\boldsymbol{q}_b\\\boldsymbol{q}_m\end{bmatrix}-\begin{bmatrix}\boldsymbol{q}_b\\\boldsymbol{q}_m\end{bmatrix}_d\right)-\boldsymbol{K}_{D\boldsymbol{\eta}}\left(\begin{bmatrix}\dot{\boldsymbol{q}}_b\\\dot{\boldsymbol{q}}_m\end{bmatrix}-\begin{bmatrix}\dot{\boldsymbol{q}}_b\\\dot{\boldsymbol{q}}_m\end{bmatrix}_d\right)$ (57)

where $i=0,1$ denotes the priority. $\boldsymbol{K}_{P\boldsymbol{\eta}}$ and $\boldsymbol{K}_{D\boldsymbol{\eta}}$ are the proportional and derivative gains in the joint space, respectively, and $\begin{bmatrix}\boldsymbol{q}_b^T&\boldsymbol{q}_m^T\end{bmatrix}_d^T$ denotes the desired joint angles. Additionally, the desired acceleration in Cartesian space can be obtained as follows:

$\ddot{\boldsymbol{x}}_{d1}=\ddot{\boldsymbol{x}}-\boldsymbol{K}_{P\boldsymbol{x}}(\boldsymbol{x}-\boldsymbol{x}_d)-\boldsymbol{K}_{D\boldsymbol{x}}(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)$ (59)

where $\boldsymbol{K}_{P\boldsymbol{x}}$ and $\boldsymbol{K}_{D\boldsymbol{x}}$ are the proportional and derivative gains in Cartesian space, respectively, and $\boldsymbol{x}_d,\dot{\boldsymbol{x}}_d$ are the desired pose and velocity, respectively. In this paper, instead of (59), we adopt a Cartesian space object-aware impedance controller.

With the solution $\dot{\boldsymbol{\eta}}^*$ of (54), the desired input torque for the real robot can be obtained with the feedback linearization technique as follows:

$\boldsymbol{\tau}_{input}=\boldsymbol{M}(\boldsymbol{q})\dot{\boldsymbol{\eta}}^*+\boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\boldsymbol{\eta}+\boldsymbol{g}(\boldsymbol{q})-\boldsymbol{\tau}_{ext}$ (61)
Figure 2: Mobile manipulator system and assembly jig for the human-robot collaborative task scenario.

III-C Object-Aware Impedance Controller Formulation

For a human-robot collaborative task with an object, the external force is determined by the real object dynamics as follows:

$\boldsymbol{\tau}_{ext}=\boldsymbol{J}^T\boldsymbol{F}_{ext}=\boldsymbol{J}^T\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\boldsymbol{\phi}_{real}$ (63)

By substituting (37) into (49), the equation of motion considering the object dynamics can be represented as follows:

$\boldsymbol{M}(\boldsymbol{q})\dot{\boldsymbol{\eta}}+\boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\boldsymbol{\eta}+\boldsymbol{g}(\boldsymbol{q})=\boldsymbol{\tau}_{input}+\boldsymbol{J}^T\{{}^r\boldsymbol{G}_h\boldsymbol{F}_h+\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\boldsymbol{\phi}_{eff}\}$ (66)

The external forces acting on the robot include the human wrench, which is exerted through the object, and the effective object dynamics that the robot endures in the specific configuration. If there is no object, ${}^r\boldsymbol{G}_h$ becomes an identity matrix, reducing the external forces to $\boldsymbol{F}_{ext}={}^r\boldsymbol{G}_h\boldsymbol{F}_h=\boldsymbol{F}_h$, signifying that the intended human motion is applied directly to the robot. On the other hand, in the absence of human-robot collaboration, in which only the robot performs the task, the robot bears all the object dynamics. Hence, (37) becomes $\boldsymbol{F}_{ext}=\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\hat{\boldsymbol{\phi}}_{eff}=\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\hat{\boldsymbol{\phi}}_{real}$, representing that the real object dynamics are manifested as external forces.

For human-robot collaborative tasks, the target impedance can be designed as follows:

$\boldsymbol{M}_e(\ddot{\boldsymbol{x}}-\ddot{\boldsymbol{x}}_d)+\boldsymbol{C}_e(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)+\boldsymbol{K}_e(\boldsymbol{x}-\boldsymbol{x}_d)=\boldsymbol{F}_h$ (68)

The crucial point is that the external force in the desired impedance is 𝑭h\boldsymbol{F}_{h} and not 𝑮hr𝑭h{}^{r}\boldsymbol{G}_{h}\boldsymbol{F}_{h}. When the human and the robot are cooperatively holding an object, the desired impedance represents the relationship between the robot motion and the intended human motion, excluding the influence of the object. Note that if the stiffness of the desired impedance is set to zero, the robot can actively follow the human operator based on the estimated human motion.

To implement the impedance controller using the QP formulation, the Cartesian space task 𝒙¨d0\ddot{\boldsymbol{x}}_{d0} with priority 0 can be expressed as follows:

$\ddot{\boldsymbol{x}}_{d0}=\ddot{\boldsymbol{x}}_d-\boldsymbol{M}_e^{-1}\boldsymbol{C}_e(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)-\boldsymbol{M}_e^{-1}\boldsymbol{K}_e(\boldsymbol{x}-\boldsymbol{x}_d)+\boldsymbol{M}_e^{-1}\boldsymbol{F}_h$ (71)

The above equation is similar to (59); however, instead of a PD controller, an impedance controller including the desired inertia and external force is employed.
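A minimal sketch of this priority-0 task (71) follows, continuing the NumPy snippets above; the follow-me case of Section IV-B is obtained by zeroing $\boldsymbol{K}_e$, and the 6D pose and velocity errors are assumed to be computed elsewhere:

```python
import numpy as np

def impedance_xdd_d0(e_x, e_dx, xdd_d, F_h, Me, Ce, Ke):
    """Commanded Cartesian acceleration from (71), i.e., the target
    impedance (68) solved for the acceleration. e_x, e_dx: 6D pose and
    velocity errors (x - x_d, dx - dx_d); F_h: intended human wrench."""
    Me_inv = np.linalg.inv(Me)
    return xdd_d - Me_inv @ (Ce @ e_dx) - Me_inv @ (Ke @ e_x) + Me_inv @ F_h
```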

By assigning the impedance controller as the highest priority task and solving the QP problem, the object-aware impedance controller can be represented as follows:

$\dot{\boldsymbol{\eta}}=\boldsymbol{J}_0^{+}\{\ddot{\boldsymbol{x}}_d+\boldsymbol{M}_e^{-1}\boldsymbol{F}_h-\boldsymbol{M}_e^{-1}(\boldsymbol{C}_e(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)+\boldsymbol{K}_e(\boldsymbol{x}-\boldsymbol{x}_d))-\dot{\boldsymbol{J}}_0\boldsymbol{\eta}\}$ (74)

where $\boldsymbol{J}_0^{+}$ is the pseudoinverse of $\boldsymbol{J}_0$. Without loss of generality, the existence of a feasible solution for the impedance controller is guaranteed because it has the highest priority. Therefore, the desired input torque can be represented by substituting (74) into (66) as follows:

$\boldsymbol{\tau}_{input}=\boldsymbol{M}\boldsymbol{J}_0^{+}\{\ddot{\boldsymbol{x}}_d+\boldsymbol{M}_e^{-1}\boldsymbol{F}_h-\boldsymbol{M}_e^{-1}(\boldsymbol{C}_e(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)+\boldsymbol{K}_e(\boldsymbol{x}-\boldsymbol{x}_d))-\dot{\boldsymbol{J}}_0\boldsymbol{\eta}\}+\boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\boldsymbol{\eta}+\boldsymbol{g}(\boldsymbol{q})-\boldsymbol{J}^T\{{}^r\boldsymbol{G}_h\boldsymbol{F}_h+\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\hat{\boldsymbol{\phi}}_{eff}\}$ (80)

Alternatively, by gathering the terms related to $\boldsymbol{F}_h$, (80) can be represented as follows:

$\boldsymbol{\tau}_{input}=\boldsymbol{M}\boldsymbol{J}_0^{+}\{\ddot{\boldsymbol{x}}_d-\boldsymbol{M}_e^{-1}(\boldsymbol{C}_e(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)+\boldsymbol{K}_e(\boldsymbol{x}-\boldsymbol{x}_d))-\dot{\boldsymbol{J}}_0\boldsymbol{\eta}\}+\boldsymbol{J}_0^T\boldsymbol{M}_F\boldsymbol{F}_h+\boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\boldsymbol{\eta}+\boldsymbol{g}(\boldsymbol{q})-\boldsymbol{J}^T\boldsymbol{A}(\boldsymbol{a},\dot{\boldsymbol{\omega}},\boldsymbol{\omega},\boldsymbol{g})\hat{\boldsymbol{\phi}}_{eff}$ (86)

where $\boldsymbol{M}_F=\boldsymbol{M}_x\boldsymbol{M}_e^{-1}-{}^r\boldsymbol{G}_h$ and $\boldsymbol{M}_x=\boldsymbol{J}_0^{+T}\boldsymbol{M}\boldsymbol{J}_0^{+}$ is the inertia in Cartesian space [25]. For a general impedance controller, external force feedback can be avoided when inertia shaping is not performed, as expressed by $\boldsymbol{M}_e=\boldsymbol{M}_x$ [26]. However, in scenarios in which an object is present between the robot and the human, avoiding external force feedback requires specific inertia shaping that considers the object geometry, expressed as $\boldsymbol{M}_e={}^r\boldsymbol{G}_h^{-1}\boldsymbol{M}_x$, which results in $\boldsymbol{M}_F=0$. Accordingly, the desired impedance is formulated as follows:

${}^h\boldsymbol{G}_r\boldsymbol{M}_x(\ddot{\boldsymbol{x}}-\ddot{\boldsymbol{x}}_d)+\boldsymbol{C}_e(\dot{\boldsymbol{x}}-\dot{\boldsymbol{x}}_d)+\boldsymbol{K}_e(\boldsymbol{x}-\boldsymbol{x}_d)=\boldsymbol{F}_h$ (89)

where ${}^r\boldsymbol{G}_h^{-1}={}^h\boldsymbol{G}_r$. However, understanding the intended human motion, denoted as $\boldsymbol{F}_h$, necessitates the use of external force measurements. Consequently, further shaping of the desired inertia is employed to enable efficient interactions between the human and the robot through the object as follows:

$\boldsymbol{M}_e=({}^r\boldsymbol{G}_h+\boldsymbol{P}_d)^{-1}\boldsymbol{M}_x\;\longrightarrow\;\boldsymbol{M}_F=\boldsymbol{P}_d$ (91)

where $\boldsymbol{P}_d\in\mathbb{R}^{6\times 6}$ is the desired weighting matrix of the external wrenches, which is determined in the subsequent section.
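The shaping in (91) and the resulting wrench weighting can be written compactly; the sketch below assumes $\boldsymbol{M}_x$ and ${}^r\boldsymbol{G}_h$ are available from the current configuration:

```python
import numpy as np

def shaped_inertia(Mx, r_G_h, Pd):
    """Desired inertia from (91): Me = (G_h + Pd)^{-1} Mx, which makes
    the external-wrench weighting M_F of (86) equal to Pd."""
    return np.linalg.inv(r_G_h + Pd) @ Mx

def wrench_weight(Mx, Me, r_G_h):
    """M_F = Mx Me^{-1} - G_h from (86); returns Pd when Me is produced
    by shaped_inertia."""
    return Mx @ np.linalg.inv(Me) - r_G_h
```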

Figure 3: Schematic diagram of the unknown object parameter estimation process and object-aware impedance controller.

IV Experiments

The mobile manipulator was realized, as illustrated in Fig. 2, to experimentally verify the performance of the proposed method in human-robot collaborative task scenarios by integrating a 7-DOF torque-controlled manipulator (Research 3, Franka Emika) and a velocity-controlled four-wheel mobile base (Husky, Clearpath Robotics). The mobile base operates as a differential drive, and the left and right pairs of wheels are dependent on each other, resulting in 2-DOF motion. A 1-DOF two-fingered gripper (2F-85, Robotiq) is attached to the end effector of the robot arm. The control algorithms are implemented on an Intel Core i7 at 4.7 GHz with 16 GB RAM, with a robot operating system (ROS) interface and the PREEMPT_RT kernel.

A schematic diagram of the proposed human-robot collaborative task, including the unknown object parameter estimation process and the Cartesian space object-aware impedance controller, is shown in Fig. 3. The parameter estimation process is carried out during the estimation phase, and after the parameters are estimated, the object dynamics, $\boldsymbol{A}\hat{\boldsymbol{\phi}}_{eff}$, can be transmitted to the controller. To enhance the estimation precision, a perturbation signal is introduced into the null space of $\boldsymbol{x}_d$, as detailed in the next section. The measurements are the states of the manipulator, $\boldsymbol{q}$, $\dot{\boldsymbol{q}}$, and the mobile base, $\boldsymbol{q}_b$, $\dot{\boldsymbol{q}}_b$; the position and velocity of the mobile manipulator, $\boldsymbol{\zeta}$, $\dot{\boldsymbol{\zeta}}$, are provided by a localization algorithm [27]. The external force, $\boldsymbol{F}_{ext}$, exerted on the end effector is also estimated based on the torque sensors equipped at each joint of the manipulator. The desired torque input computed by our controller is directly transmitted to the robot arm, while for the velocity-controlled mobile base, the torque inputs are transformed into velocity inputs with the admittance interface [28]. The manipulator is controlled in real time with a 1 kHz control frequency, and the mobile base is controlled by ROS topic communication with a 10 Hz sampling rate.

IV-A Unknown Object Parameter Estimation Results

Figure 4: Object parameter estimation results. The duration of the estimation process is set to 20 sec. The estimation results of the mass and center of mass converge within 10 sec. The robot generates perturbative motion, as depicted by the linear and angular velocity signals of robot end-effector motion, while the linear velocity signal of the human motion remains static. The signals are expressed in the robot end effector frame {r}.

For parameter estimation, the development of estimation methods such as least squares or adaptive algorithms is crucial. However, to enhance accuracy and precision, the design of the probing signal is equally important [29]. The object should be perturbed so as to satisfy the persistent excitation condition, which is achieved by non-collinear angular motion. However, the input perturbation signal should not affect the human motion during collaborative tasks, to guarantee safety. Pivoted angular motion with respect to the human grasp point enables the object to be perturbed without significantly affecting the human motion.

Similar to (8), the kinematic relation between the robot and hand grasping frames can be represented as follows:

$\dot{\boldsymbol{x}}_h={}^h\boldsymbol{G}_r^T\dot{\boldsymbol{x}}_r$ (93)

The pivoted perturbation motion can be realized with reduced kinematics considering only translational motion as follows:

$\dot{\boldsymbol{x}}_{h,trans}={}^h\boldsymbol{G}_{r,t}^T\dot{\boldsymbol{x}}_r$ (95)

where ${}^h\boldsymbol{G}_{r,t}^T=\begin{bmatrix}\boldsymbol{I}_3&|{}^h\boldsymbol{p}_r|_\times\end{bmatrix}$. The robot end-effector motion in Cartesian space is redundant with respect to producing only the translational motion of the human grasping frame, which allows the desired robot motion to be created as follows:

$\dot{\boldsymbol{x}}_r={}^h\boldsymbol{G}_{r,t}^{T+}\dot{\boldsymbol{x}}_{h,t}+\{\boldsymbol{I}_6-{}^h\boldsymbol{G}_{r,t}^{T+}\,{}^h\boldsymbol{G}_{r,t}^T\}\boldsymbol{z}_{id}$ (97)

where $\boldsymbol{z}_{id}\in\mathbb{R}^6$ is an arbitrary vector projected into the null space of ${}^h\boldsymbol{G}_{r,t}^T$. By introducing an angular perturbation motion through the vector $\boldsymbol{z}_{id}$, collaborative transportation (translational motion) and estimation with object perturbations can be performed simultaneously, with the human grasping point maintained as a pivot [9]. In this paper, the angular motions are chosen as $\boldsymbol{z}_{id}=\begin{bmatrix}\boldsymbol{0}_{1\times 3}&\boldsymbol{\omega}_{id}^T\end{bmatrix}^T$, where $\boldsymbol{\omega}_{id}=-0.2\cos(2\pi F_f t)\times\boldsymbol{I}_3$ [rad/s], with $F_f=0.4$ [Hz]. The object to be estimated is a 1.5 m long aluminum object with additional attached weights, and the true parameters are presented in Table I. The coordinates representing the center of mass are illustrated in Fig. 1; the y-axes of the human and robot frames are aligned with the longitudinal direction of the object, expressed as ${}^r\boldsymbol{p}_h=[0,-1.5,0]^T$ [m].
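A minimal sketch of the commanded end-effector twist (97) follows, injecting the identification signal of this section in the null space of ${}^h\boldsymbol{G}_{r,t}^T$; following the choice of $\boldsymbol{\omega}_{id}$ above, the same sinusoid is applied on all three angular axes:

```python
import numpy as np

def perturbed_robot_twist(h_G_rt, dx_h_t, t, Ff=0.4, amp=0.2):
    """Commanded robot twist from (97). h_G_rt: 3x6 reduced grasp map of
    (95); dx_h_t: desired translational hand velocity; t: time [s]. The
    angular identification signal z_id acts in the null space of h_G_rt,
    so the human grasp point stays pivoted."""
    G_pinv = np.linalg.pinv(h_G_rt)          # 6x3 pseudoinverse
    N = np.eye(6) - G_pinv @ h_G_rt          # null-space projector
    w_id = -amp * np.cos(2.0 * np.pi * Ff * t) * np.ones(3)
    z_id = np.concatenate([np.zeros(3), w_id])
    return G_pinv @ dx_h_t + N @ z_id
```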

TABLE I: Object parameter estimation results.
            mass [kg]   com_x [mm]   com_y [mm]   com_z [mm]
True        2.23        -683         0            49
Effective   1.115       0            0            49
Estimated   1.131       8            -7           44

In this paper, the object length is assumed to be known before the object is held by the human and robot; well-known artificial intelligence (AI)-based algorithms such as the region-based convolutional neural network (R-CNN) can be used to estimate the object length.

The effective object parameters depend on the human-robot grasping configuration. In this experiment, the human and the robot grasp both ends of the object as rigidly as possible; additionally, the human participant attempts to hold the object as parallel to the robot as possible. Accordingly, the effective parameters of the object are presented in Table I. Additional sensors are not employed to estimate human information, such as the human wrench or motion.

The objective of this paper is to effectively eliminate the object dynamics during collaborative tasks, not to estimate the true object parameters. Thus, the moment of inertia of the object is not estimated, because this parameter is less important than the mass and center of mass when motion with large accelerations does not occur during the task. Moreover, perturbing the object with large accelerations is undesirable because such motion can potentially affect human safety. In this section, the human and the robot remain static to accurately verify the parameter estimation performance; in the following section, the estimation is performed during the collaborative transportation task.

The initial values of the estimator are $\hat{\boldsymbol{\phi}}_{eff}(0)=\boldsymbol{0}_{10}$ and $\boldsymbol{P}(0)=\boldsymbol{I}_{10}$. The weighting matrices of the estimator are set to $\boldsymbol{Q}=10^{-5}\times diag(1.0,0.01,1.0,0.001,0,0,0,0,0,0)$ and $\boldsymbol{R}=diag(200,1000,1000,1000,1000,50)$. The results are presented in Fig. 4 and Table I. Since the angular perturbation motion $\boldsymbol{z}_{id}$ was realized in the null space of ${}^h\boldsymbol{G}_{r,t}^T$, the robot also exhibited translational motion. On the other hand, the hand motion, estimated with (95), was well pivoted, as the linear motion remained static apart from some fluctuations, which may be introduced by the difficulty of holding the object consistently for the duration of the estimation. The angular velocities of the human, robot, and object are the same, as they are rigidly connected during the task.

For the estimation, most unmodeled bias and disturbance terms are subtracted through a calibration motion based on the measured external wrench, resulting in the calibrated external wrench depicted in Fig. 4. The estimate $\hat{m}_o$ converges within $t=5$ [sec] to $\hat{m}_o=1.131$ [kg], a relative error with respect to the effective mass of 0.016 [kg]. The estimate ${}^r\hat{\boldsymbol{p}}_o$ converges within $t=8$ [sec] to ${}^r\hat{\boldsymbol{p}}_o=[8,-7,44]^T$ [mm], a relative error with respect to the effective center of mass of $[-8,7,-5]^T$ [mm]. Because the estimation results for the mass and the center of mass both converge within 10 seconds, the required estimation time for our collaborative task scenario is set to 10 seconds.

IV-B Human-Robot Collaborative Task Results

A scenario was selected to demonstrate the effectiveness of our proposed methods, including the online parameter estimation strategy and the Cartesian impedance controller considering object dynamics. The scenario involves human-robot collaborative transportation and assembly tasks with an unknown object. A jig, as depicted in Fig. 2, is utilized for the object assembly task. The specific steps of the scenario are as follows:

Figure 5: Demonstration of human-robot collaborative transportation and assembly tasks with the Cartesian object-aware impedance controller. (1) Object lifting; (2) Object parameter estimation during the collaborative transportation task; (3) continuation of the collaborative transportation task; (4) applying estimated object dynamics; the intended human motion following control mainly on the (5) y-axis; (6) x-axis; and (7) z-axis; (8) object assembly onto the jig. The yellow arrows in the snapshots represent the directions of the intended human motion to align the object with the assembly jig. During the transportation task, the mobile manipulator is controlled with a reference tracking control scheme; after compensating for the object dynamics, an object-aware impedance control scheme is applied to follow the intended human motion. The signals are expressed in the robot end effector frame {r}.
  1. Object lifting: The human and the robot lift an object together.

  2. Object parameter estimation during the collaborative transportation task: The human and robot transport the object and approach the jig. During the transportation task, the parameter estimation is performed while inducing an object perturbation signal in the null space of the desired robot motion.

  3. Continuation of the collaborative transportation task: Once the estimation process is completed, the perturbation signal is no longer applied, regardless of whether the transportation task has been completed. In this scenario, the transportation task continues even after the estimation is completed.

  4. Object dynamics compensation: After the transportation phase, the object dynamics are compensated for to complete the design of the object-aware impedance controller, as represented in (86).

  5. Following the intended human motion: The human intention following control, also called follow-me control [5], is implemented by setting the stiffness of the desired impedance, represented in (68), to zero. This enables active tracking of the intended human motion to locate the end of the object on the robot grasping side at the assembly point of the jig.

  6. Object assembly onto the jig: The human accomplishes the assembly task by securely attaching both ends of the object to the jig. Meanwhile, the robot returns to its initial position.

As reflected in the above scenario, we attempted to minimize the duration of the transportation task by conducting the parameter estimation during the task itself, employing a nonthreatening perturbation signal. Therefore, by incorporating the object dynamics into the impedance controller in real time, the intended human motion can be transmitted to the robot without any additional action during the scenario. Note that the human worker performs the task with one hand holding the object, while the other hand manipulates a joystick for the step transitions. Therefore, the human can successfully perform the scenario without additional control PCs or coworkers. The whole scenario is available at https://www.youtube.com/watch?v=bGH6GAFlRgA, where the task was captured in a single continuous take.

After several practice runs of the collaborative task, we observed that transmitting the intended human motion to the robot through the object becomes more challenging as the object becomes longer. The results showed that using the rotational wrench on the robot side is more efficient for inferring the translational human motion. To incorporate this observation into the weighting matrix of the external wrenches, $\boldsymbol{M}_F$, presented in (86), we defined $\boldsymbol{P}_d$ as follows:

$\boldsymbol{P}_d=\begin{bmatrix}\begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}&\begin{bmatrix}0&0&0\\0&0&-5\\0&-4&0\end{bmatrix}\\\boldsymbol{0}_3&\boldsymbol{0}_3\end{bmatrix}$ (99)

With the coordinate system illustrated in Fig. 1, the translational motion of the robot along the +y and +z directions can be adjusted based on the rotational motion of the human along the -z and -y axes. Moreover, the robot's +x translational motion can be adjusted based on the +x translational motion of the human due to the alignment of the x-axis of the robot's end effector, the human grasping point, and the object.

The overall results of the scenario are presented in Fig. 5. The object parameters estimated during the nonstationary transportation task are a mass of $1.154\,\mathrm{kg}$ and ${}^{r}\boldsymbol{p}_{h}=[-0.026, -0.03, 0.057]^{T}\,\mathrm{m}$. These results exhibit larger errors than those obtained in the stationary task. The bias may be caused by the human's inability to maintain the alignment between the robot's grasping position and the object during the transportation phase; in particular, the negative bias in the x- and y-components of the estimated center of gravity indicates that the human participant tended to walk faster and raise the object higher than the robot, based on the coordinate system in Fig. 1. Nevertheless, the estimation errors are acceptable, as the robot could maintain its posture without tilting downward during the human intention following control.

During the transportation phase, an impedance controller with nonzero stiffness and damping, $\boldsymbol{K}_{e}=\mathrm{diag}[100,100,100,400,400,600]$ and $\boldsymbol{C}_{e}=0.5\times(2\sqrt{\boldsymbol{K}_{e}})$, was employed to move the robot's end effector from the initial pose, $\boldsymbol{p}_{initial}=[0,0,0]^{T}$, $\boldsymbol{q}_{initial}=[0,0,0,1]^{T}$, to the final pose, $\boldsymbol{p}_{final}=[0,1.5,0]^{T}$, $\boldsymbol{q}_{final}=[0,0,0,1]^{T}$, by tracking the desired trajectory. In the whole-body controller defined in (54), the high-priority tasks (priority 0) are Cartesian trajectory tracking and input torque limiting, while the low-priority task (priority 1) maintains the initial robot arm configuration. Notably, the tracking task guides the human along the desired trajectory, although the end effector does not follow it exactly due to the presence of the identification input. After the transportation phase was completed, the input force and torques were modulated by the feedforward incorporation of the object dynamics, as shown in Fig. 5.
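For reference, the spring-damper part of this impedance law with the gains above can be sketched as follows. This is a minimal illustration only: the full controller also shapes the desired inertia and adds the object compensation of (86), and the error variables here are placeholders.

```python
import numpy as np

# Transportation-phase gains: diagonal stiffness, with damping chosen as
# 0.5 * (2 * sqrt(K_e)), i.e., half the critical damping for unit inertia.
K_e = np.diag([100.0, 100.0, 100.0, 400.0, 400.0, 600.0])
C_e = 0.5 * (2.0 * np.sqrt(K_e))   # elementwise sqrt is valid for a diagonal matrix

def impedance_wrench(x_err, xdot_err):
    """Spring-damper portion of the Cartesian impedance law.
    x_err    : (6,) pose error (position + local orientation error)
    xdot_err : (6,) twist error
    """
    return K_e @ x_err + C_e @ xdot_err

# Placeholder errors for illustration
F_imp = impedance_wrench(np.zeros(6), np.zeros(6))
```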

To attach the object to the jig, precise alignment is necessary. To achieve this, the intended human motion following control was executed by configuring the highest-priority task of the whole-body controller to maintain the initial Cartesian position with an impedance controller with zero translational stiffness. The priority 1 task maintains the initial configuration of the robot arm, as in the previous experiment. The results show that the external wrench $\boldsymbol{F}_{ext}$ does not significantly increase during the object alignment, indicating that the robot successfully follows the intended human motion. Moreover, the x-, y-, and z-axis positions of the object are adjusted by the x-axis external force and the z- and y-axis external torques, as represented in (99). Note that some vibrations appear in $\boldsymbol{F}_{ext}$ during the transportation phase due to the motion of the mobile base; these vibrations can be eliminated by tuning the velocity controller.
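A minimal sketch of this mode switch is given below, under the assumption that a small translational damping is retained so the compliant axes remain well behaved; the damping value is assumed for illustration and is not reported in the experiment.

```python
import numpy as np

def follow_me_gains(K_e):
    """Impedance gains for intended-human-motion-following control:
    zero translational stiffness so the end effector complies with the
    human-applied wrench, while rotational stiffness keeps the object level."""
    K_fm = K_e.copy()
    K_fm[:3, :3] = 0.0                    # comply in translation
    C_fm = 0.5 * (2.0 * np.sqrt(K_fm))    # damping from the remaining stiffness
    C_fm[:3, :3] = 10.0 * np.eye(3)       # assumed small translational damping
    return K_fm, C_fm

K_e = np.diag([100.0, 100.0, 100.0, 400.0, 400.0, 600.0])
K_fm, C_fm = follow_me_gains(K_e)
```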

Consequently, the proposed human-robot collaborative method can realize transportation and assembly tasks in a single operation.

V Conclusions

In this paper, an approach for physical human-robot interaction that accounts for unknown object dynamics is proposed, comprising an online estimation method and a Cartesian impedance controller. The EKF estimator with an object perturbation input estimates the effective object parameters in a specific configuration involving a human, a robot, and an object. The object-aware impedance control strategy incorporates the object dynamics in real time, enabling accurate transmission of the intended human motion.

Notably, a key feature of this approach is that the follow-me control scheme is realized through physical contact with the object rather than through direct contact between the robot and the human. To achieve this, the direction of the intended human motion is transformed from rotational to linear motion by shaping the desired inertia matrix. To demonstrate the feasibility of the proposed method in real-world scenarios, transportation and assembly tasks were performed. The results showed that safe and accurate interactions with long objects can be successfully achieved.

In future work, we will focus on enhancing the parameter estimation performance by effectively removing the uncertainties of human motion and the bias in the external force estimators using physics-informed neural networks (PINNs).

References

[1] D. Sirintuna, A. Giammarino, and A. Ajoudani, “An Object Deformation-Agnostic Framework for Human-Robot Collaborative Transportation,” arXiv preprint arXiv:2207.13427, 2022.
[2] D. Sirintuna, A. Giammarino, and A. Ajoudani, “Human-Robot Collaborative Carrying of Objects with Unknown Deformation Characteristics,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
[3] A. De Santis, B. Siciliano, A. De Luca, and A. Bicchi, “An atlas of physical human-robot interaction,” Mech. Mach. Theory, vol. 43, no. 3, pp. 253–270, 2008.
[4] A. Mörtl, M. Lawitzky, A. Kucukyilmaz, M. Sezgin, C. Basdogan, and S. Hirche, “The role of roles: Physical cooperation between humans and robots,” Int. J. Robot. Res., vol. 31, no. 13, pp. 1657–1675, 2012.
[5] M. Leonori, J. M. Gandarias, and A. Ajoudani, “MOCA-S: A Sensitive Mobile Collaborative Robotic Assistant exploiting Low-Cost Capacitive Tactile Cover and Whole-Body Control,” IEEE Robot. Autom. Lett., pp. 1–8, 2022.
[6] L. Rapetti, C. Sartore, M. Elobaid, Y. Tirupachuri, F. Draicchio, T. Kawakami, T. Yoshiike, and D. Pucci, “A Control Approach for Human-Robot Ergonomic Payload Lifting,” arXiv preprint arXiv:2305.08499, 2023.
[7] K. I. Alevizos, C. P. Bechlioulis, and K. J. Kyriakopoulos, “Physical Human-Robot Cooperation Based on Robust Motion Intention Estimation,” Robotica, vol. 38, no. 10, pp. 1842–1866, 2021.
[8] K. I. Alevizos, C. P. Bechlioulis, and K. J. Kyriakopoulos, “Bounded Energy Collisions in Human-Robot Cooperative Transportation,” IEEE/ASME Trans. Mechatronics, 2021.
[9] D. Ćehajić, P. Budde gen. Dohmann, and S. Hirche, “Estimating unknown object dynamics in human-robot manipulation tasks,” IEEE International Conference on Robotics and Automation (ICRA), pp. 1730–1737, 2017.
[10] D. Ćehajić, S. Erhart, and S. Hirche, “Grasp pose estimation in human-robot manipulation tasks using wearable motion sensors,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1031–1036, 2015.
[11] D. Torielli, L. Muratore, A. De Luca, and N. Tsagarakis, “A Shared Telemanipulation Interface to Facilitate Bimanual Grasping and Transportation of Objects of Unknown Mass,” IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 2022.
[12] T. Asfour et al., “ARMAR-6: A Collaborative Humanoid Robot for Industrial Environments,” IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 447–454, 2018.
[13] M. Lawitzky, A. Mörtl, and S. Hirche, “Load sharing in human-robot cooperative manipulation,” 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 185–191, 2010.
[14] F. Benzi, C. Mancus, and C. Secchi, “Whole-Body Control of a Mobile Manipulator for Passive Collaborative Transportation,” arXiv preprint arXiv:2211.13680, 2022.
[15] J. DelPreto and D. Rus, “Sharing the Load: Human-Robot Team Lifting Using Muscle Activity,” IEEE International Conference on Robotics and Automation (ICRA), 2019.
[16] M. Kimmel and S. Hirche, “Invariance control for safe human-robot interaction in dynamic environments,” IEEE Trans. Robot., vol. 33, no. 6, pp. 1327–1342, 2017.
[17] X. Liu, G. Li, and G. Loianno, “Safety-Aware Human-Robot Collaborative Transportation and Manipulation with Multiple MAVs,” arXiv preprint arXiv:2210.05894, 2022.
[18] A. Kurdas, M. Hamad, J. Vorndamme, N. Mansfeld, S. Abdolshah, and S. Haddadin, “Online Payload Identification for Tactile Robots Using the Momentum Observer,” IEEE International Conference on Robotics and Automation (ICRA), pp. 5953–5959, 2022.
[19] A. Bussy, A. Kheddar, A. Crosnier, and F. Keith, “Human-Humanoid Haptic Joint Object Transportation Case Study,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3633–3638, 2012.
[20] A. Franchi, A. Petitti, and A. Rizzo, “Decentralized parameter estimation and observation for cooperative mobile manipulation of an unknown load using noisy measurements,” IEEE International Conference on Robotics and Automation (ICRA), 2015.
[21] V. Wüest, V. Kumar, and G. Loianno, “Online Estimation of Geometric and Inertia Parameters for Multirotor Aerial Vehicles,” IEEE International Conference on Robotics and Automation (ICRA), 2019.
[22] J. Jang and J. H. Park, “Parameter Identification of an Unknown Object in Human-Robot Collaborative Manipulation,” 20th International Conference on Control, Automation and Systems (ICCAS), 2020.
[23] S. Kim, K. Jang, S. Park, Y. Lee, S.-Y. Lee, and J. Park, “Whole-body Control of Non-holonomic Mobile Manipulator Based on Hierarchical Quadratic Programming and Continuous Task Transition,” IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), 2019.
[24] R. Dhaouadi and A.-A. Hatab, “Dynamic Modelling of Differential-Drive Mobile Robots using Lagrange and Newton-Euler Methodologies: A Unified Framework,” Advances in Robotics & Automation, vol. 2, no. 2, 2013.
[25] K. Bussmann, A. Dietrich, and C. Ott, “Whole-Body Impedance Control for a Planetary Rover with Robotic Arm: Theory, Control Design, and Experimental Validation,” IEEE International Conference on Robotics and Automation (ICRA), 2018.
[26] C. Ott, Cartesian Impedance Control of Redundant and Flexible-Joint Robots. Springer, 2008.
[27] “Husky,” ROS Wiki, http://wiki.ros.org/robots/husky.
[28] A. Dietrich, K. Bussmann, F. Petit, P. Kotyczka, C. Ott, B. Lohmann, and A. Albu-Schäffer, “Whole-body impedance control of wheeled mobile manipulators,” Auton. Robots, vol. 40, no. 3, pp. 505–517, 2016.
[29] J. Park and Y. Park, “Optimal Input Design for Fault Identification of Overactuated Electric Ground Vehicles,” IEEE Trans. Veh. Technol., vol. 65, no. 4, pp. 1912–1923, 2016.
Jinseong Park (Member, IEEE) received B.S., M.S., and Ph.D. degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea, in 2008, 2010, and 2016, respectively. He was a senior researcher at Hyundai Robotics Co., Ltd., Ulsan, Republic of Korea, from 2016 to 2017. He has been a senior researcher at the Korea Institute of Machinery and Materials (KIMM), Daejeon, Republic of Korea, since 2017. His research interests include optimal control, active vibration control, and fault diagnosis, with an emphasis on mobile manipulators, vehicles, and magnetic levitation systems.
Young-Sik Shin (Member, IEEE) received a B.S. degree in electrical engineering from Inha University, Incheon, South Korea, in 2013, and M.S. and Ph.D. degrees in civil and environmental engineering with a dual degree from the Robotics Program, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2015 and 2020, respectively. He is currently a senior researcher with the Korea Institute of Machinery and Materials (KIMM), Daejeon. His research interests include robust simultaneous localization and mapping and navigation using heterogeneous sensors.
Sanghyun Kim (Member, IEEE) received a B.S. degree in mechanical engineering and a Ph.D. degree in intelligent systems from Seoul National University, South Korea, in 2012 and 2020, respectively. He was a postdoctoral researcher at the University of Edinburgh, U.K., in 2020, and a senior researcher at the Korea Institute of Machinery and Materials (KIMM), South Korea, from 2020 to 2023. He is currently an assistant professor of mechanical engineering at Kyung Hee University. His main research interests include the optimal control of mobile manipulators.