
EHC-MM: Embodied Holistic Control for Mobile Manipulation

Jiawen Wang1, Yixiang Jin1, Jun Shi1, Yong A1, Dingzhe Li1, Bin Fang2†, Fuchun Sun3

1The authors are with Samsung R&D Institute China-Beijing. [email protected], {yixiang.j, jun7.shi, yong.a, dingzhe.li}@samsung.com
2The author is with Beijing University of Posts and Telecommunications. fangbin1120@bupt.edu.cn
3The author is with Tsinghua University.
†Bin Fang is the corresponding author.
Abstract

Mobile manipulation typically entails a base for mobility, an arm for accurate manipulation, and a camera for perception. Holistic control should therefore follow the principle of Distant Mobility, Close Grasping (DMCG). We propose Embodied Holistic Control for Mobile Manipulation (EHC-MM) with the embodied function sig(ω): by formulating the DMCG principle as a quadratic programming (QP) problem, sig(ω) dynamically balances the robot's emphasis between movement and manipulation while accounting for the robot's state and environment. In addition, we propose Monitor-Position-Based Servoing (MPBS) with sig(ω), which keeps the target tracked during operation. This approach enables coordinated control of the robot's base, arm, and camera. Through extensive simulations and real-world experiments, our approach significantly improves both the success rate and efficiency of mobile manipulation tasks, achieving a 95.6% success rate in the real-world scenario and a 52.8% increase in time efficiency.

I INTRODUCTION

Mobile manipulation is expected to play a significant role in industrial and domestic settings. However, in many practical robot applications, planning for the mobile base and the robot arm is performed separately. This separation can lead to unnatural overall motion and the loss of optimal solutions [1]. Consequently, it significantly increases the failure rate of planning and is unsuitable for complex, unstructured environments. For mobile manipulation, Holistic Control, also known as Whole-Body Control, is essential. Two distinct challenges currently persist in implementing Holistic Control in physical robot motion:

Lack of proprioception. Previous works mostly treat the entire robot as a single entity without considering the functional differences among its joints [1, 2, 3, 4]. While the robot is aware of the states of individual joints, it lacks self-awareness of their distinct roles. It is therefore crucial to coordinate the base and arm by considering the mobility and accuracy of each joint: the mobile base has greater mobility but lacks accuracy, while the robot arm offers higher accuracy in manipulation but has a limited workspace. For tasks involving both movement and manipulation, we thus want the robot to follow the principle of Distant Mobility, Close Grasping (DMCG): prioritizing base movement when far from the target and arm grasping when close to it [5], as shown in Fig. 1. Given the inevitable perception and motion errors of real-world robots, integrating closed-loop systems with real-time planning and servoing is essential [5, 6].

(a) Real-world scenarios
(b) Simulation
Figure 1: EHC-MM. The function sig(ω) of EHC-MM dynamically balances the robot's emphasis between movement and manipulation with consideration of the robot's state and environment. It coordinates all joints of the robot simultaneously and has been validated in both simulation and real-world scenarios.

Perception-motion coordination. Traditionally, visual sensing and robotic manipulation are combined in an open-loop fashion: first 'perceiving', then 'moving' [7, 8]. In unstructured, unknown environments, visual perception relies entirely on cameras mounted on the mobile manipulator [9, 10]. However, when the camera is mounted at the end of the robot arm, as in Fig. 1, or on top of the robot, effective perception-motion coordination mechanisms are lacking.

To address these challenges, we propose Embodied Holistic Control for Mobile Manipulation (EHC-MM), as shown in Fig. 2.

Our main contributions include:

  • We formulate the DMCG problem in mobile manipulation as a quadratic programming problem, incorporating both obstacle avoidance and joint angle constraints. This enables efficient simultaneous planning of all robot joints.

  • We design sig(ω) to achieve embodied control. Specifically, sig(ω) accounts for both the reachability of the robot's joints and the task objectives, allowing the robot to balance movement and manipulation. Additionally, we develop a Monitor-Position-Based Servoing (MPBS) function to prevent the robot from losing track of the target during operation.

  • We conduct comprehensive comparative experiments and ablation studies in both simulation and real-world scenarios. The results validate that our approach achieves higher success rates and efficiency in grasping.

II RELATED WORK

II-A Mobile Manipulation

Mobile manipulation has been extensively explored across various domains in recent years. Notably, Chitta et al. [11] utilize visual, tactile, and proprioceptive sensor data to execute pick-and-place tasks with the PR2 robot. In addition, HERB [12] enables a robotic agent to identify grasping targets and navigate through complex environments. These contributions lay the foundation for mobile manipulation but treat manipulation and navigation as separate tasks; this partitioning of the planning domain is inherently artificial and introduces unnecessary complexity into the planning space.

To address the challenge of whole-body control in mobile grasping, Haviland et al. [5] adapt NEO [13] from stationary to mobile manipulators, proposing a novel taskable reactive mobile manipulation system. They treat the arm and base degrees of freedom as a unified structure, significantly enhancing the speed and fluidity of motion.

Drawing inspiration from the Virtual Kinematic Chain (VKC) concept [14], Jiao et al. [4, 1] develop a Virtual Kinematic Chain framework for mobile manipulation tasks. This approach integrates the kinematics of the mobile base, robot arm, and manipulated object, simplifies task planning by defining abstract actions, and reduces complexity in describing intermediate poses. Implementing a task planner using PDDL with VKC shows significant improvements in planning efficiency, with experiments demonstrating the effectiveness of VKC-based planning in generating feasible motion plans and trajectories.

II-B Embodied Holistic Control

Previous work has introduced the concept of Embodied Control into optimization control of quadrupedal robots[15], swarm robots[16], soft robots[17], and human-robot interaction[18, 19]. This approach has streamlined control system design, enhancing robot adaptability and stability.

In mobile manipulation, each robot component serves a distinct role. The mobile base typically exhibits lower precision, on the order of centimeters, but provides spatial mobility. In contrast, the robot arm offers millimeter-level precision, albeit within a limited workspace, and is used primarily for object manipulation, as shown in Fig. 1.

Figure 2: Proposed framework for mobile manipulation.

III ARCHITECTURE of EHC-MM

We present an architectural overview in Fig. 2. We divide the entire mobile manipulation pipeline into four parts: Real Robot, Target Perception, Motion Planning, and Controller, which are common to most robot frameworks. Our enhancements are concentrated within the blue rectangular box in Fig. 2. The specific responsibilities of each component are elaborated below.

  • Embodied Weight Function: enabling the robot to prioritize specific components of its motion at different distances.

  • Monitor-Position-Based Servoing (MPBS) Controller: allowing the robot to select the target end-effector pose at varying distances so as not to lose the target object.

  • GS-Net*: generating a more stable grasping pose.

III-A Kinematic Model and Control of the Mobile Manipulation

According to the chain rule in robot kinematics, the pose of the robot’s end-effector can be represented as:

{}_{e}^{o}\mathbf{T} = {}_{v}^{o}\mathbf{T}(x, y, \theta)\cdot{}_{b}^{v}\mathbf{T}\cdot{}_{a}^{b}\mathbf{T}\cdot{}_{e}^{a}\mathbf{T} \in \mathbf{SE}(3)   (1)

where {o} is the world reference frame. The matrix {}_{v}^{o}\mathbf{T} denotes the transformation from the virtual base frame {v} to the world frame, realized through two perpendicular prismatic joints and a revolute joint; (x, y, \theta) represents the mobile base's position and orientation in the 2-D plane. {}_{b}^{v}\mathbf{T} is a constant transformation from the mobile base frame {b} to the virtual base frame. {}_{a}^{b}\mathbf{T} is a constant transformation from the base frame to the base of the manipulator arm {a}, and {}_{e}^{a}\mathbf{T} represents the forward kinematics of the arm, where {e} is the end-effector frame. This configuration is illustrated in Fig. 2.
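As a minimal sketch (not the authors' code), the chain rule of Eq. (1) can be composed with homogeneous transforms in NumPy; the frame names follow the text, while the helper function names are our own:

```python
import numpy as np

def virtual_base_transform(x, y, theta):
    """T of the virtual base in Eq. (1): two perpendicular prismatic
    joints (x, y) plus a revolute joint theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = x, y
    return T

def end_effector_pose(T_vb, T_ba, T_ae, x, y, theta):
    """Chain rule of Eq. (1): world -> virtual base -> base -> arm -> end-effector."""
    return virtual_base_transform(x, y, theta) @ T_vb @ T_ba @ T_ae
```

With identity transforms for the fixed offsets, the end-effector pose reduces to the virtual base pose, which is a quick sanity check of the composition order.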

The resulting expression can also be written as the following differential kinematics:

{}^{0}\boldsymbol{\nu}_{e} = {}^{0}\mathbf{J}_{e}(x, y, \theta, \boldsymbol{q}) \begin{pmatrix} \dot{\boldsymbol{q}}_{base} \\ \dot{\boldsymbol{q}}_{arm} \end{pmatrix}   (2)

For obstacle avoidance, building on previous works [13, 20], consider the minimum distance d between a point L on a robot link and a point O on an obstacle in 3-D space. The goal is to ensure d \geq S, where S is a predefined safety threshold; if d falls below S, a collision is considered to have occurred. The collision-free criterion is thus defined as follows:

\left\|\boldsymbol{p}_{L}(\boldsymbol{\theta}(t)) - \boldsymbol{p}_{O}(t)\right\|_{2} \geq S   (3)

where \boldsymbol{p}_{L}(\boldsymbol{\theta}(t)) \in \mathbb{R}^{3} and \boldsymbol{p}_{O}(t) \in \mathbb{R}^{3} are the coordinates of points L and O, respectively. Inequality (3) is equivalent to

(\boldsymbol{p}_{L}(\boldsymbol{\theta}(t)) - \boldsymbol{p}_{O}(t))^{\top}(\boldsymbol{p}_{L}(\boldsymbol{\theta}(t)) - \boldsymbol{p}_{O}(t)) \geq S^{2}   (4)
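The squared form of Eq. (4) avoids the square root, which is convenient when the check runs at every control step; a minimal sketch:

```python
import numpy as np

def collision_free(p_L, p_O, S):
    """Collision-free criterion of Eqs. (3)-(4): True iff the squared
    distance between link point p_L and obstacle point p_O is at least S^2."""
    diff = np.asarray(p_L, dtype=float) - np.asarray(p_O, dtype=float)
    return bool(diff @ diff >= S * S)
```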

Building upon the model introduced by Haviland et al. [5, 13], we enhance the controller for mobile manipulation by formulating it as a quadratic programming (QP) problem:

\min_{\boldsymbol{x}} f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^{\top}\mathcal{Q}\boldsymbol{x} + \mathcal{C}^{\top}\boldsymbol{x}   (5)
subject to \mathcal{J}\boldsymbol{x} = \boldsymbol{\nu}_{e} - \boldsymbol{\delta}(t)
G(\boldsymbol{x}) \leq \boldsymbol{r}_{I}

where

G(\boldsymbol{x}) = \begin{bmatrix} -H(\boldsymbol{x})^{\top}H(\boldsymbol{x}) \\ \boldsymbol{x} \\ -\boldsymbol{x} \end{bmatrix} \in \mathbb{R}^{k}   (6)
\boldsymbol{r}_{I} = \begin{bmatrix} -S^{2} \\ \mathcal{X}^{+} \\ -\mathcal{X}^{-} \end{bmatrix} \in \mathbb{R}^{k}
H(\boldsymbol{x}) = \boldsymbol{p}_{L}\left(\boldsymbol{\theta}(0) + \int_{0}^{t}\boldsymbol{x}\,dt\right) - \boldsymbol{p}_{O}^{rel}(t)
\boldsymbol{p}_{O}^{rel}(t) = \beta\psi\left({}_{v}^{o}\mathbf{T}^{-1}(x, y, \theta)\cdot{}_{obs}^{\ \ o}\mathbf{T}\right)

\boldsymbol{x} = \dot{\boldsymbol{\theta}}(t) is the decision variable, and \boldsymbol{\delta}(t) represents the difference between the desired and actual end-effector velocity, relaxing the trajectory constraint. \mathcal{Q} weights the joint velocities of the mobile base and the robot arm, \mathcal{C} incorporates manipulability maximization and auxiliary performance tasks, and \mathcal{J} is an augmented manipulator Jacobian. \boldsymbol{p}_{O}^{rel}(t) is the position of the obstacle point O relative to the virtual base, and {}_{obs}^{\ \ o}\mathbf{T} \in \mathbf{SE}(3) is the pose of the obstacle in the world frame. \beta \in \mathbb{R}^{+} is a gain term, and \psi(\cdot): \mathbb{R}^{4\times 4} \mapsto \mathbb{R}^{6} converts a homogeneous transformation matrix to a spatial twist.
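A toy version of the QP in Eq. (5) can be sketched with SciPy's SLSQP solver as a stand-in for a dedicated QP solver (the paper does not name its solver). The inequality here is a linearized G x ≤ r_I rather than the paper's nonlinear G(x), and the slack δ(t) is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

def solve_holistic_qp(Q, C, J, nu_e, G=None, r_I=None):
    """Minimize 0.5 x^T Q x + C^T x  s.t.  J x = nu_e and optional G x <= r_I.
    Simplified sketch of Eq. (5): nonlinear obstacle terms are assumed
    pre-linearized into (G, r_I), and the velocity slack is dropped."""
    n = Q.shape[0]
    cons = [{"type": "eq", "fun": lambda x: J @ x - nu_e}]
    if G is not None:
        cons.append({"type": "ineq", "fun": lambda x: r_I - G @ x})
    res = minimize(lambda x: 0.5 * x @ Q @ x + C @ x, np.zeros(n),
                   jac=lambda x: Q @ x + C, constraints=cons, method="SLSQP")
    return res.x
```

For instance, with Q = I, C = 0, and the single equality x₁ + x₂ = 1, the minimum-norm solution splits the end-effector velocity evenly between the two joints.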

III-B EHC Function

For mobile manipulation, the accuracy and motion ability of each joint vary. Generally, robot arms exhibit higher accuracy but have a limited workspace and higher computational complexity. Conversely, while the mobile base extends the operational workspace significantly, it introduces a greater degree of motion inaccuracy than the robot arm. Hence, we design the Embodied Holistic Control (EHC) function as follows:

\mathcal{Q} = \begin{pmatrix} \operatorname{diag}(\textbf{sig}(\omega)) & \mathbf{0}_{n_{base}\times n_{arm}} \\ \mathbf{0}_{n_{arm}\times n_{base}} & \mathbf{I}_{n_{arm}} \end{pmatrix}   (7)
\mathcal{C} = \begin{pmatrix} \mathbf{J}_{m} \\ \mathbf{0}_{6\times 1} \end{pmatrix}
\mathcal{Q} \in \mathbb{R}^{(n+6)\times(n+6)}, \quad \mathcal{C} \in \mathbb{R}^{(n+6)}

where

\omega = \frac{\left\|e_{td}^{*}(\boldsymbol{\theta}(t), e(t))\right\|_{2}}{\left\|e(t)\right\|_{2}}   (8)
e_{td}^{*}(\boldsymbol{\theta}(t), e(t)) = \mathop{\arg\min}_{p \in \boldsymbol{e}_{td}(\boldsymbol{\theta}(t))} \angle(p, e(t))
\boldsymbol{e}_{td}(\boldsymbol{\theta}(t)) = \{p \in \mathbf{SE}(3) \mid R(p, \boldsymbol{\theta}(t)) > 0.95\}

sig(ω) dynamically balances the emphasis between movement and manipulation, allowing the robot to switch its focus between efficient movement and precise manipulation as the task requires. In particular, e(t) \in \mathbf{SE}(3) is the end-effector's pose relative to the target obtained from perception in Fig. 2. \boldsymbol{e}_{td}(\boldsymbol{\theta}(t)) is the set of end-effector poses with reachability greater than 95% for the given joint angles \boldsymbol{\theta}(t), while e_{td}^{*}(\boldsymbol{\theta}(t), e(t)) is the element of this set most closely aligned with the direction of the current task error e(t), serving as a threshold for motion emphasis under the current task objective. R(\cdot) denotes the reachability of the robot arm's end-effector. The activation function sig(\cdot) is incorporated into the robot control function, amplifying the emphasis on the appropriate motion and improving the overall efficiency of motion planning. \mathbf{J}_{m} is the manipulability Jacobian.
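To make Eq. (8) concrete, here is a hedged sketch: the error norms are taken over a 6-D task-error vector, and since the paper does not give the exact form of sig(·), we assume a logistic sigmoid whose steepness k and midpoint ω₀ are illustrative choices:

```python
import numpy as np

def sig(omega, k=10.0, omega_0=1.0):
    """Assumed activation: logistic sigmoid in omega. The steepness k and
    midpoint omega_0 are illustrative, not values from the paper."""
    return 1.0 / (1.0 + np.exp(-k * (omega - omega_0)))

def embodied_weight(e_td_star, e):
    """omega of Eq. (8): norm of the closest highly-reachable pose error
    e_td^* divided by the norm of the current task error e(t)."""
    return np.linalg.norm(e_td_star) / np.linalg.norm(e)
```

Far from the target, e(t) is large, so ω and sig(ω) are small, and the diag(sig(ω)) block of Q in Eq. (7) makes base motion cheap; near the target, ω grows, base motion is penalized, and the arm takes over.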

By employing this approach, the robot achieves EHC: when positioned far from the target object, the mobile base primarily handles approaching the target. As the robot nears the target, within an embodied threshold \omega that accounts for the joint states \boldsymbol{\theta}(t), the robot arm takes over as the primary actor in manipulation. Notably, this strategy enables simultaneous, integrated planning of locomotion and manipulation, a significant advancement over traditional two-stage methods.

III-C Monitor-Position-Based Servoing

As illustrated in Fig. 3(a), current Position-Based Servoing (PBS) [5] methods face the challenge of target loss during reaching, which is particularly detrimental for reactive tasks. When the grasping orientation is not aligned with the forward-facing direction, such as downward or sideways grasps, the end-effector changes its orientation prematurely during the reaching phase, losing visual contact with the target. Burgess-Limerick et al. [21] propose improvements for grasping objects on the move; however, in their work the object's motion is known to the robot. In complex real-world scenarios, the object's position is accessible only through sensors, so the robot must keep the camera focused on the target while moving.

(a) Controller based on PBS: the robot loses the target during reaching.
(b) Controller based on MPBS: the robot remains focused on the target throughout reaching.
Figure 3: A comparison of mobile manipulation with PBS and MPBS.

We employ a controller that builds upon PBS and incorporates Monitor-Position-Based Servoing (MPBS). This controller works in parallel with the EHC function, dynamically balancing the emphasis between monitoring and manipulation based on sig(ω). MPBS is formulated in Equation (9), using Rodrigues' rotation formula.

{}_{e}^{b}\mathbf{T}^{*} = \textbf{sig}(\omega)\cdot{}_{e}^{b}\mathbf{T} + (1 - \textbf{sig}(\omega))\cdot{}_{m}^{\ b}\mathbf{T}   (9)
{}_{m}^{\ b}\mathbf{T} = \begin{bmatrix} \boldsymbol{R}_{d} = e^{\mathbf{K}\theta} & \boldsymbol{t}_{e} \\ \mathbf{0} & 1 \end{bmatrix}
\mathbf{K} = \begin{bmatrix} 0 & -k_{z} & k_{y} \\ k_{z} & 0 & -k_{x} \\ -k_{y} & k_{x} & 0 \end{bmatrix}

where {}_{e}^{b}\mathbf{T}^{*} \in \mathbf{SE}(3) is the end-effector target pose and {}_{m}^{\ b}\mathbf{T} \in \mathbf{SE}(3) is the monitoring pose. sig(ω) acts as the switching weight between the grasping pose and the monitoring pose, as shown in Fig. 1. \mathbf{K} is the skew-symmetric matrix of the axis about which the z-axis is rotated towards the target, and \theta is the rotation angle.
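A sketch of Eq. (9), assuming a unit rotation axis k = (k_x, k_y, k_z); R_d = e^{Kθ} is expanded with Rodrigues' rotation formula. The element-wise pose blend mirrors the equation as written; a production implementation would interpolate on SE(3) instead:

```python
import numpy as np

def skew(k):
    """The skew-symmetric matrix K of Eq. (9) for axis k = (kx, ky, kz)."""
    kx, ky, kz = k
    return np.array([[0.0, -kz, ky],
                     [kz, 0.0, -kx],
                     [-ky, kx, 0.0]])

def monitoring_pose(t_e, k_axis, theta):
    """Monitoring pose T_m: rotation R_d = exp(K theta) via Rodrigues'
    formula, with the end-effector translation t_e kept unchanged."""
    K = skew(np.asarray(k_axis, float) / np.linalg.norm(k_axis))
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t_e
    return T

def mpbs_target(w, T_grasp, T_monitor):
    """Eq. (9): blend grasping and monitoring poses with weight w = sig(omega)."""
    return w * T_grasp + (1.0 - w) * T_monitor
```

With w near 0 (robot far away), the target pose is dominated by the monitoring pose; as w approaches 1 near the target, the grasping pose takes over.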

IV EXPERIMENTS

First, we validate the effectiveness of our method against the baselines in simulation. Then, we compare the continuous grasping performance of our method and the baselines in real-world experiments. Finally, we verify through an ablation study that MPBS improves grasping of objects with different poses.

IV-A Hardware Setups

We use a RealMan compound robot for the experiments. The robot integrates a Water2 non-holonomic mobile base with two drive wheels and four omni-directional wheels, enabling both in-place rotation and straightforward linear motion. The manipulator is an RM65-B six-degree-of-freedom robot arm. For visual perception, an Intel RealSense D435 camera is mounted at the end-effector of the manipulator. The gripper has a maximum aperture of 70 mm and a maximum gripping payload of 1.5 kg. Computation is performed on an Alienware laptop equipped with an Intel Core i9-9900K CPU and an Nvidia GeForce RTX 2080 GPU.

IV-B Grasp Pose Generation

To obtain high-quality grasp poses, we enhance GS-Net [22], which considers only grasping with a fixed arm in static scenes, and name the improved version GS-Net*.

Figure 4: Top-10 and top-1 grasp poses for GS-Net and our GS-Net*

GS-Net* improves mobile manipulation stability by generating grasp poses aligned with the object's center of gravity and parallel to the object. After the approach-net phase, view scores are adjusted based on the angle to the center of gravity, with smaller angles receiving higher weights. Post-processing then aligns the grasp poses with the object's point cloud along the y-axis. Fig. 4 shows that GS-Net* produces more stable, centered grasp poses. Compared with the original GS-Net, GS-Net* increases the success rate by 28.9% on the same tasks.
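The paper states only that views at smaller angles to the object's centre of gravity receive higher weights; a cosine weighting is one hedged way to realize that (the exact weighting function is our assumption, not GS-Net*'s):

```python
import numpy as np

def reweight_view_scores(view_dirs, scores, dir_to_cog):
    """Scale per-view grasp scores so approach directions aligned with the
    direction to the object's centre of gravity score higher (weight 1 when
    aligned, 0 when opposite). The cosine form is an illustrative choice."""
    g = np.asarray(dir_to_cog, float)
    g = g / np.linalg.norm(g)
    v = np.asarray(view_dirs, float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cos_angle = np.clip(v @ g, -1.0, 1.0)
    return np.asarray(scores, float) * 0.5 * (1.0 + cos_angle)
```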

IV-C Evaluation Metrics and Baselines

Evaluation Metrics.

  • Time for Grasping (s): In simulation, the average time taken to reach the target pose. In real-world experiments, the time taken to successfully grasp the target, counted only for successful trials.

  • Time for Monitoring (%): The ratio of the time the target object remains within the camera's field of view to the total operation time, referred to as TfM.

  • Joint Velocity (rad/s for rotation, m/s for translation): For the base, which is modeled with virtual translational and rotational joints, translation and rotation are reported separately as Base-T and Base-R. For the arm, the combined velocity of the six rotational joints.

  • Success Rate (%): In simulation, the percentage of trials in which the end-effector successfully reaches the target pose. In real-world experiments, the percentage of trials in which the end-effector successfully grasps the object.
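The TfM metric above can be computed from per-step visibility flags; a minimal sketch (the per-step sampling scheme is our assumption):

```python
def time_for_monitoring(in_view):
    """TfM (%): fraction of control steps during which the target stays
    inside the camera's field of view, from boolean per-step flags."""
    return 100.0 * sum(in_view) / len(in_view)
```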

Baselines. Our improvements build primarily on the NEO controller [5, 13], which therefore serves as the baseline. The specific configurations are outlined below:

  1. TSMM: Traditional Two-Stage Mobile Manipulation. The robot first moves to a position a distance d from the target object, measured from the end-effector, and then grasps the object. In real-world experiments, d is set to 0.2.

  2. NEO(e): The methodology of NEO [5], where the weight of the base is simply set to \frac{1}{\left\|e(t)\right\|_{2}}.

  3. NEO(c): The weight of the base in the original method is set to a constant value c, to further validate the motivation and direction of our improvement.

Figure 5: Setup of Experiment 2. The fixed grasping position in the figure is used only by the TSMM baseline.

IV-D Experiment-1: Randomly Reaching

Experimental Protocol. We conduct this performance test in simulation, introducing different levels of noise to various joints, as shown in Table I. In this task, the robot is asked to reach 50 random points in sequence.

Results and Analysis. We compare NEO(c), NEO(e), and EHC. In each set of experiments, the three approaches individually reach the same 50 random points. We conduct 10 sets of experiments in total, so each method approaches 500 random points. The results are shown in Table II. Our approach aligns with the DMCG principle, prioritizing base movement for navigation at a distance and arm manipulation for grasping or approaching at close range. Because randomly generated target poses can be hard to reach, we set a maximum time limit of 30 seconds per attempt; exceeding it counts as a failure. Our method's coordination of movement and manipulation enables the robot to reach the target pose in less time, resulting in fewer failures.

TABLE I: The Magnitude of Noise Added to Each Joint.
Joints Base-T Base-R Arm
Noise 0.05 0.05 0.002
TABLE II: Randomly Reaching in Simulation
Method Time (s) Distant (Base-T/Base-R/Arm) Close (Base-T/Base-R/Arm) # of Failed (/500)
NEO(c) 11.69 1.644/0.691/5.879 0.091/0.062/0.641 37
NEO(e) 11.81 1.630/0.683/5.870 0.087/0.056/0.607 35
EHC 9.98 1.716/0.733/7.068 0.080/0.042/0.870 18

Notes: bold values indicate the best for the respective metric.

TABLE III: Sequentially Grasping Multiple Objects in Real-World Experiments
Method Time (s) Distant (Base-T/Base-R/Arm) Close (Base-T/Base-R/Arm) Success Rate (%)
TSMM 34.89 NAN/NAN/NAN NAN/NAN/NAN 80.0
NEO(c) 30.03 8.776/0.578/1.971 1.294/0.335/0.345 11.1
NEO(e) 27.80 8.709/0.517/2.284 1.270/0.416/0.638 51.1
EHC 13.54 8.826/0.908/2.594 1.259/0.162/1.369 95.6

Notes: bold values indicate the best for the respective metric.

TABLE IV: Grasping the Object with Different Poses
Method TfM(%)
Forward Downward Sideway
NEO(without MPBS) 100.0 31.6 38.7
EHC(without MPBS) 100.0 34.3 36.2
EHC(ours) 100.0 100.0 100.0

IV-E Experiment-2: Sequentially Grasping Multiple Objects

Experimental Protocol. In the real-world experiment, the robot starts from a fixed location and sequentially grasps three objects at different distances on a table 1.5 meters away. The three objects are placed at the front, middle, and back of the table, as shown in Fig. 5. The reported time is the average from the start of the robot's movement to the completion of the grasp, considering only successful trials.

Figure 6: The distance between the end-effector pose and the target pose over time.

Results and Analysis. We compare TSMM, NEO(c), NEO(e), and EHC. We conduct 15 sets of experiments, with each method grasping 45 objects in total. The results are shown in Table III, where NAN denotes a value that was not measured: for TSMM, measuring joint changes is meaningless because it comprises two separate stages. Apart from TSMM, we set a maximum grasping time of 30 seconds per object. The real-world results align with the simulations: EHC achieves shorter grasping times and higher success rates. We observe that NEO(c) and NEO(e) still frequently perform large base movements even when close to the target; given the limited precision of base motion, the robot then struggles to reach the target smoothly, sometimes leading to failure. TSMM frequently fails to grasp the most distant object because the table obstructs the base. Fig. 6 also shows visually that EHC enables smoother and faster mobile manipulation in both simulation and real-world experiments. Thus, EHC demonstrates excellent performance in real-world scenarios.

IV-F Experiment-3: Grasping the Object with Different Poses

Experimental Protocol. In the real-world experiment, the robot starts from a fixed location and grasps a single object positioned 1.5 meters away using different grasp orientations: forward, downward, and sideways. The reported time is the average time cost for each grasping pose.

Results and Analysis. As an ablation, we compare NEO (without MPBS), EHC (without MPBS), and EHC (ours). We conduct 15 sets of experiments, each involving 5 instances of forward, downward, and sideways grasping. The results are shown in Table IV. A higher TfM value indicates that the robot is less likely to lose track of the target. For downward and sideways grasping orientations, the robot without MPBS is more likely to lose track of the target prematurely, as shown in Fig. 3(a), because target tracking is not considered as a constraint in the QP problem. In EHC (ours), MPBS ensures that the robot keeps monitoring the target while it is still at a distance.

V CONCLUSIONS

In this paper, we propose EHC-MM, leveraging the function sig(ω). By formulating the DMCG principle as a quadratic programming (QP) problem, we enable simultaneous planning of all the robot's joints rather than relying on traditional two-stage mobile manipulation. The function sig(ω) dynamically balances the robot's emphasis between movement and manipulation with consideration of the robot's state and environment, improving both the success rate and efficiency of manipulation tasks. Additionally, sig(ω) is integrated into MPBS, allowing the robot to balance monitoring and manipulation so that it does not lose track of the target during operation. Through simulation and real-world validation, we observe improved efficiency in mobile manipulation tasks using EHC; in real-world experiments it achieves a grasp success rate of 95.6%, which we consider strong performance for real-world deployment. As robots grow more complex with increasing degrees of freedom, EHC effectively coordinates joint actions within robot systems. In future work, we will further validate the efficiency of EHC across various robot platforms, including but not limited to mobile manipulators, dual-arm robots, and humanoid robots.

References

  • [1] Z. Jiao, Z. Zhang, W. Wang, D. Han, S.-C. Zhu, Y. Zhu, and H. Liu, “Efficient task planning for mobile manipulation: a virtual kinematic chain perspective,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8288–8294, IEEE, 2021.
  • [2] X. Zhang, Y. Zhu, Y. Ding, Y. Jiang, Y. Zhu, P. Stone, and S. Zhang, “Symbolic state space optimization for long horizon mobile manipulation planning,” in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 866–872, 2023.
  • [3] Y. Cho, D. Shin, and B. Kim, “ω2: Optimal hierarchical planner for object search in large environments via mobile manipulation,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7888–7895, IEEE, 2022.
  • [4] Z. Jiao, Z. Zhang, X. Jiang, D. Han, S.-C. Zhu, Y. Zhu, and H. Liu, “Consolidating kinematic models to promote coordinated mobile manipulations,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 979–985, IEEE, 2021.
  • [5] J. Haviland, N. Sünderhauf, and P. Corke, “A holistic approach to reactive mobile manipulation,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 3122–3129, 2022.
  • [6] M. Mittal, D. Hoeller, F. Farshidian, M. Hutter, and A. Garg, “Articulated object interaction in unknown scenes with whole-body mobile manipulation,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2022.
  • [7] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. Aparicio, and K. Goldberg, “Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” in Robotics: Science and Systems XIII, Sep 2017.
  • [8] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” The International Journal of Robotics Research, p. 705–724, Apr 2015.
  • [9] K. He, R. Newbury, T. Tran, J. Haviland, B. Burgess-Limerick, D. Kulić, P. Corke, and A. Cosgun, “Visibility maximization controller for robotic manipulation,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 8479–8486, 2022.
  • [10] A. Angelopoulos, M. Verber, C. McKinney, J. Cahoon, and R. Alterovitz, “High-accuracy injection using a mobile manipulation robot for chemistry lab automation,” in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10102–10109, IEEE, 2023.
  • [11] S. Chitta, E. Jones, M. Ciocarlie, and K. Hsiao, “Mobile manipulation in unstructured environments: Perception, planning, and execution,” IEEE Robotics & Automation Magazine, vol. 19, p. 58–71, Jun 2012.
  • [12] S. S. Srinivasa, D. Ferguson, C. J. Helfrich, D. Berenson, A. Collet, R. Diankov, G. Gallagher, G. Hollinger, J. Kuffner, and M. V. Weghe, “Herb: a home exploring robotic butler,” Autonomous Robots, vol. 28, p. 5–20, Jan 2010.
  • [13] J. Haviland and P. Corke, “Neo: A novel expeditious optimisation algorithm for reactive motion control of manipulators,” IEEE Robotics and Automation Letters, p. 1043–1050, Apr 2021.
  • [14] J. Pratt, P. Dilworth, and G. Pratt, “Virtual model control of a bipedal walking robot,” in Proceedings of International Conference on Robotics and Automation, Nov 2002.
  • [15] J. Degrave, K. Caluwaerts, J. Dambre, and F. Wyffels, “Developing an embodied gait on a compliant quadrupedal robot,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4486–4491, 2015.
  • [16] P. J. O’Dowd, A. F. Winfield, and M. Studley, “The distributed co-evolution of an embodied simulator and controller for swarm robot behaviours,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4995–5000, 2011.
  • [17] M. Cianchetti, M. Follador, B. Mazzolai, P. Dario, and C. Laschi, “Design and development of a soft robotic octopus arm exploiting embodied intelligence,” in 2012 IEEE International Conference on Robotics and Automation, pp. 5271–5276, 2012.
  • [18] C. Hieida, K. Abe, M. Attamimi, T. Shimotomai, T. Nagai, and T. Omori, “Physical embodied communication between robots and children: An approach for relationship building by holding hands,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3291–3298, 2014.
  • [19] R. Fujimura, K. Nakadai, M. Imai, and R. Ohmura, “Prot — an embodied agent for intelligible and user-friendly human-robot interaction,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3860–3867, 2010.
  • [20] W. Li, Z. Yi, R. Rao, D. Guo, Y. Pan, and K. Huang, “A quadratic programming solution to position-level repetitive motion planning and obstacle avoidance of joint-constrained robots,” IEEE Transactions on Industrial Electronics, 2024.
  • [21] B. Burgess-Limerick, C. Lehnert, J. Leitner, and P. Corke, “An architecture for reactive mobile manipulation on-the-move,” Dec 2022.
  • [22] C. Wang, H.-S. Fang, M. Gou, H. Fang, J. Gao, and C. Lu, “Graspness discovery in clutters for fast and accurate grasp detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15964–15973, 2021.