BiRP: Learning Robot Generalized Bimanual Coordination using
Relative Parameterization Method on Human Demonstration
Abstract
Human bimanual manipulation can perform more complex tasks than a simple combination of two single arms, which is credited to the spatio-temporal coordination between the arms. However, the description of bimanual coordination is still an open topic in robotics. This makes it difficult to give an explainable coordination paradigm, let alone apply one to robots. In this work, we divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination. We then propose a relative parameterization method to learn these types of coordination from human demonstration. It represents the coordination extracted from bimanual demonstrations as Gaussian mixture models and describes, in terms of probability, how the importance of coordination changes throughout the motion. The learned coordination representation can be generalized to new task parameters while ensuring spatio-temporal coordination. We demonstrate the method using synthetic motions and human demonstration data and deploy it to a humanoid robot to perform generalized bimanual coordination motions. We believe that this easy-to-use bimanual learning from demonstration (LfD) method has the potential to serve as a data augmentation plugin for training large robot manipulation models. The corresponding code is open-sourced at https://github.com/Skylark0924/Rofunc.
I Introduction
Humanoid robots with high redundancy are expected to perform complex manipulation tasks with human-like behavior. However, ensuring coordination between multiple degrees of freedom is still an open problem in robotics, and it is often the key to the success of most human daily activities, such as stir-frying, pouring water, sweeping the floor, and putting away clothes. Thus, it is necessary to provide an explainable paradigm to describe and learn coordination. Learning humanoid robot manipulation by observing human motion and behavior is a straightforward idea [1], but the technology behind it is still challenging: it requires understanding human motion data and designing a bridge that connects humans and robots. In this work, we focus on learning and generalizing bimanual coordination motions from human demonstration.
Learning from demonstration (LfD) is a type of machine-learning approach that allows robots to learn tasks or skills from human demonstrations. Instead of programming robot motions with explicit instructions that are defined manually for each task [2][3], LfD enables robots to learn skills by observing human performance [4]. It is implemented by the following processes: recording human demonstration data, learning the representation of multiple demonstrations, transferring the data to the workspace of robots, and finally designing a controller for generating the smooth trajectory and its corresponding control commands. LfD has become an increasingly popular approach for training robots, as it can be faster and more efficient than traditional programming methods. It also allows robots to learn tasks that may be difficult to program explicitly, such as those that involve complex movements or interactions with a dynamic environment. Meanwhile, another important feature of LfD is that it enables robots to adapt to new or changing environments [5], as they can learn from demonstrations in different settings and apply that knowledge to new situations.
Bimanual robots are much more complex to teach from demonstration than single-armed robots, which can be taught by kinesthetic teaching [6]. Some previous works tried to combine trajectories taught over multiple passes to realize kinesthetic teaching of highly redundant robots [7]; however, this also makes the demonstration data less reliable. Recently, several works proposed feasible frameworks for learning directly from human demonstration. Krebs et al. provided a taxonomy of human bimanual manipulation in daily activities by focusing on different types of coordination [8]. Liu et al. regarded leader-follower coordination as sequence transduction and designed a Transformer-based coordination mechanism to achieve a human-level stir-fry task [9]. Besides, offline reinforcement learning algorithms have been used to let robots learn bimanual coordination tasks from offline demonstration datasets [10], allowing the robot to learn the most efficient and effective ways to coordinate its arms for a given task.
In this work, we aim to propose an explainable paradigm for learning generalized coordination from demonstration. The main contributions can be summarized as follows:
- Coordination parameterization: We propose a relative parameterization method (BiRP) for extracting the coordination relationship from human demonstration and embedding it into the motion generation of each arm.
- Leader-follower motion generation: We provide conditional coordinated motion generation for bimanual tasks with different roles between the arms, allowing the follower's motion to be generated according to the leader's.
- Synergistic motion generation: For tasks without an obvious role difference between the arms, we also provide a motion generation method that enables both arms to adapt to new situations synergistically.
II Construct Bimanual Coordination by Relative Parameterization
Relative parameterization is a way to parameterize the relative relationship between the two arms and embed this relationship into the representation of each arm. The relative relationship can take many forms, depending on task-specific coordination characteristics. For example, if the two arms are asked to grasp the same object simultaneously and keep holding it until it is placed, the relative relationship can be the relative displacement of the end-effectors. The definitions of symbols are listed in Table I.
| Symbol | Definition |
|---|---|
| $d$ | State dimensions |
| $o$ | Order of the controller |
| $P$ | Number of reference frames |
| $H$ | Number of arms; $h \in \{l, r\}$ refers to the left or right arm here |
| $T$ | Time horizon, $t \in \{1, \dots, T\}$ |
| $K$ | Number of Gaussian components in a mixture model |
| $\boldsymbol{\xi}$ | Demonstration motion |
| $\boldsymbol{\xi}^{(j)}$ | Motion in frame $j$, $j \in \{1, \dots, P\}$ |
| $\boldsymbol{u}$ | Control command |
| $\boldsymbol{x}$ | Robot motion |
| $\boldsymbol{Q}$ | The required tracking precision matrix |
| $\boldsymbol{R}$ | The cost matrix on control commands |
In this section, we first briefly introduce the fundamental learning from demonstration method used in uni-manual scenarios (Sec. II-A), which consists of two parts: demonstration representation and motion reproduction or generation. We add the concept of relative parameterization to both parts so that both the representation learning process (Sec. II-B) and the control process (Sec. II-C) take the bimanual coordination characteristics in the demonstration data into account. These two methods can be used independently or jointly. A feasible weighting approach is also proposed to increase the importance of the coordination characteristics in representation and control (Sec. II-D). The whole framework, illustrated by a leader-follower example, is shown in Fig. 2.
II-A Demonstration Representation and Motion Generation
A learning from demonstration method is a bridge between humans and robots: it must be able to extract the characteristics of human skills, plan trajectories, and control the robot to perform similar skills. Thus, it is necessary to combine human skill learning with robot motion planning and control in the same encoding approach. A popular way is to link them probabilistically, for example with a Hidden Markov Model (HMM) or a Gaussian Mixture Model (GMM). Besides, considering that the application scenarios of service robots are unstructured and require adaptation to changing situations, a class of task-parameterized models has been proposed to address this problem [11]. The task parameters are variables describing the task-specific situation, such as the position of an object in a pick-and-place task. By contrast, some task-independent information can also be extracted from the demonstration data, which reflects the nature of the skill itself, namely the skill parameters. The idea of task-parameterized models is to observe the skill from multiple frames, such as from the starting points and ending points, describe the impedance of the system by the variations and correlations in the data, and then use a linear quadratic regulator to control the robot.
Task-parameterized Gaussian Mixture Model (TP-GMM) is a typical method that probabilistically encodes datapoints and the relevance of candidate frames by mixture models, and it has good generalization capability [12]. Formally, if we define the task parameters of frame $j$ as $\{\boldsymbol{A}_j, \boldsymbol{b}_j\}_{j=1}^{P}$, the demonstrations $\boldsymbol{\xi}$ can be observed as $\boldsymbol{\xi}^{(j)} = \boldsymbol{A}_j^{-1}(\boldsymbol{\xi} - \boldsymbol{b}_j)$ in each frame $j$. These transformed demonstrations are then represented as a GMM $\{\pi_k, \boldsymbol{\mu}_k^{(j)}, \boldsymbol{\Sigma}_k^{(j)}\}_{k=1}^{K}$ by log-likelihood maximization, where $\pi_k$ refers to the prior probability of the $k$-th Gaussian component, and $\boldsymbol{\mu}_k^{(j)}$ and $\boldsymbol{\Sigma}_k^{(j)}$ refer to the mean and covariance matrix of the $k$-th Gaussian in frame $j$. We can regard these Gaussian components in multiple frames as skill parameters that can be transferred following the change of task parameters. For instance, if a new situation is given by task parameters $\{\hat{\boldsymbol{A}}_j, \hat{\boldsymbol{b}}_j\}_{j=1}^{P}$, a new task-specific GMM can be generated by a Product of Experts (PoE):
$$\mathcal{N}\big(\hat{\boldsymbol{\mu}}_k, \hat{\boldsymbol{\Sigma}}_k\big) \propto \prod_{j=1}^{P} \mathcal{N}\big(\hat{\boldsymbol{\mu}}_k^{(j)}, \hat{\boldsymbol{\Sigma}}_k^{(j)}\big) \qquad (1)$$

where $\hat{\boldsymbol{\mu}}_k^{(j)} = \hat{\boldsymbol{A}}_j \boldsymbol{\mu}_k^{(j)} + \hat{\boldsymbol{b}}_j$ and $\hat{\boldsymbol{\Sigma}}_k^{(j)} = \hat{\boldsymbol{A}}_j \boldsymbol{\Sigma}_k^{(j)} \hat{\boldsymbol{A}}_j^{\top}$. The result of the Gaussian product is given analytically by

$$\hat{\boldsymbol{\Sigma}}_k = \Big(\sum_{j=1}^{P} \big(\hat{\boldsymbol{\Sigma}}_k^{(j)}\big)^{-1}\Big)^{-1}, \qquad \hat{\boldsymbol{\mu}}_k = \hat{\boldsymbol{\Sigma}}_k \sum_{j=1}^{P} \big(\hat{\boldsymbol{\Sigma}}_k^{(j)}\big)^{-1} \hat{\boldsymbol{\mu}}_k^{(j)} \qquad (2)$$
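The PoE step in Equ. 1-2 amounts to a few lines of linear algebra per Gaussian component. Below is a minimal numpy sketch of this fusion, not the released Rofunc implementation; the function names and the assumption that the per-frame Gaussians have already been fitted are illustrative.

```python
import numpy as np

def transform_component(mu, sigma, A, b):
    """Map a frame-local Gaussian into the global frame with new task parameters (A, b)."""
    return A @ mu + b, A @ sigma @ A.T

def product_of_gaussians(mus, sigmas):
    """Fuse one Gaussian component observed in several frames (Equ. 2).

    mus    : list of (d,) means, already transformed into the global frame.
    sigmas : list of (d, d) covariances, likewise transformed.
    """
    lambdas = [np.linalg.inv(s) for s in sigmas]            # per-frame precision matrices
    sigma_hat = np.linalg.inv(np.sum(lambdas, axis=0))      # fused covariance
    mu_hat = sigma_hat @ np.sum([lam @ mu for lam, mu in zip(lambdas, mus)], axis=0)
    return mu_hat, sigma_hat
```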
For generating robot motion from the GMM, optimal control methods such as the Linear Quadratic Regulator (LQR) and Linear Quadratic Tracking (LQT) can be used for planning and control. Here we give the classical form of LQT as follows:
$$c = \sum_{t=1}^{T} \Big[\big(\hat{\boldsymbol{\mu}}_t - \boldsymbol{x}_t\big)^{\top} \boldsymbol{Q}_t \big(\hat{\boldsymbol{\mu}}_t - \boldsymbol{x}_t\big) + \boldsymbol{u}_t^{\top} \boldsymbol{R}_t \boldsymbol{u}_t\Big] \qquad (3)$$

where $\hat{\boldsymbol{\mu}}_t$ is the reference mean at time step $t$ given by the task-specific GMM obtained from the previous PoE process.
Assume that the system evolution is linear,
$$\boldsymbol{x}_{t+1} = \boldsymbol{A}_t \boldsymbol{x}_t + \boldsymbol{B}_t \boldsymbol{u}_t \qquad (4)$$

where $\boldsymbol{A}_t$ and $\boldsymbol{B}_t$ are the coefficients of this system. Then, the relationship between the control commands and the robot states over the whole horizon can be described in matrix form as $\boldsymbol{x} = \boldsymbol{S}_{\boldsymbol{x}} \boldsymbol{x}_1 + \boldsymbol{S}_{\boldsymbol{u}} \boldsymbol{u}$, where $\boldsymbol{S}_{\boldsymbol{x}}$ and $\boldsymbol{S}_{\boldsymbol{u}}$ are matrix-form combinations of $\boldsymbol{A}_t$ and $\boldsymbol{B}_t$. More details can be found in the appendix of [12].
Here we only consider an open-loop controller, whose solution can be given analytically by

$$\hat{\boldsymbol{u}} = \big(\boldsymbol{S}_{\boldsymbol{u}}^{\top} \boldsymbol{Q} \boldsymbol{S}_{\boldsymbol{u}} + \boldsymbol{R}\big)^{-1} \boldsymbol{S}_{\boldsymbol{u}}^{\top} \boldsymbol{Q}\, \boldsymbol{e} \qquad (5)$$

with the residual $\boldsymbol{e} = \hat{\boldsymbol{\mu}} - \boldsymbol{S}_{\boldsymbol{x}} \boldsymbol{x}_1$, where $\hat{\boldsymbol{\mu}}$, $\boldsymbol{Q}$, and $\boldsymbol{R}$ denote the stacked reference means, the block-diagonal tracking precision, and the block-diagonal control cost over the horizon.
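As a concrete illustration of Equ. 4-5, the following sketch builds the transfer matrices $\boldsymbol{S}_{\boldsymbol{x}}$ and $\boldsymbol{S}_{\boldsymbol{u}}$ for a time-invariant linear system and solves the open-loop batch LQT problem. It is a minimal numpy example under the assumption of constant $\boldsymbol{A}$ and $\boldsymbol{B}$; the variable names are illustrative rather than those of the Rofunc toolbox.

```python
import numpy as np

def batch_lqt(mu_hat, Q, R, A, B, x1):
    """Open-loop batch LQT (Equ. 5): u = (Su^T Q Su + R)^-1 Su^T Q (mu_hat - Sx x1).

    mu_hat : (T*d,) stacked reference means from the task-specific GMM.
    Q, R   : stacked tracking precision and control cost matrices.
    A, B   : time-invariant system matrices of x_{t+1} = A x_t + B u_t.
    x1     : (d,) initial state.
    """
    d, du = A.shape[0], B.shape[1]
    T = mu_hat.shape[0] // d
    Sx = np.zeros((T * d, d))
    Su = np.zeros((T * d, (T - 1) * du))
    Sx[:d] = np.eye(d)
    for t in range(1, T):
        Sx[t*d:(t+1)*d] = A @ Sx[(t-1)*d:t*d]                          # A^t
        Su[t*d:(t+1)*d, :(t-1)*du] = A @ Su[(t-1)*d:t*d, :(t-1)*du]    # propagate previous blocks
        Su[t*d:(t+1)*d, (t-1)*du:t*du] = B                             # new control block
    u_hat = np.linalg.solve(Su.T @ Q @ Su + R, Su.T @ Q @ (mu_hat - Sx @ x1))
    return u_hat, Sx @ x1 + Su @ u_hat      # control commands and resulting state trajectory
```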
II-B Representation with Relative Parameterization
In the bimanual setting, coordination appears at the data level as characteristics of the relative motion of the arms. For instance, in a bimanual box-lifting task, this characteristic manifests itself as the arms moving from free motion to a fixed relative relationship and maintaining this relationship for a certain period. For a leader-follower task like stir-fry [9], the characteristic refers to the motion of the following arm (holding the spoon), whose periodicity is determined with reference to the leading arm (holding the pot). In this work, instead of pre-defining the roles of the arms (as leader or follower), we aim to describe the relative relationship between the arms in a more general way: we let the arms parameterize each other.
Formally, we define an additional frame that takes the trajectory of the other arm as dynamic task parameters $\{\boldsymbol{A}_t^{(c)}, \boldsymbol{b}_t^{(c)}\}$ and represents the relative relationship as a GMM as well. Different from the observation perspectives built from a fixed pose, these transformation matrices are dynamic and change with the motion of the other arm. The relative motion is described as $\boldsymbol{\xi}_t^{(c)} = \big(\boldsymbol{A}_t^{(c)}\big)^{-1}\big(\boldsymbol{\xi}_t - \boldsymbol{b}_t^{(c)}\big)$ and represented by $\{\pi_k^{(c)}, \boldsymbol{\mu}_k^{(c)}, \boldsymbol{\Sigma}_k^{(c)}\}_{k=1}^{K}$. For each arm $h \in \{l, r\}$, the task-specific GMM is obtained by PoE

$$\mathcal{N}\big(\hat{\boldsymbol{\mu}}_{t,k}^{h}, \hat{\boldsymbol{\Sigma}}_{t,k}^{h}\big) \propto \mathcal{N}\big(\hat{\boldsymbol{\mu}}_{t,k}^{(c)}, \hat{\boldsymbol{\Sigma}}_{t,k}^{(c)}\big) \prod_{j=1}^{P} \mathcal{N}\big(\hat{\boldsymbol{\mu}}_k^{(j)}, \hat{\boldsymbol{\Sigma}}_k^{(j)}\big) \qquad (6)$$

where $\hat{\boldsymbol{\mu}}_{t,k}^{(c)} = \hat{\boldsymbol{A}}_t^{(c)} \boldsymbol{\mu}_k^{(c)} + \hat{\boldsymbol{b}}_t^{(c)}$ and $\hat{\boldsymbol{\Sigma}}_{t,k}^{(c)} = \hat{\boldsymbol{A}}_t^{(c)} \boldsymbol{\Sigma}_k^{(c)} \big(\hat{\boldsymbol{A}}_t^{(c)}\big)^{\top}$ are the relative-frame Gaussians transformed by the dynamic task parameters built from the other arm's trajectory.
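To make the dynamic relative frame concrete, the sketch below shows one simple instantiation, assuming a pure-translation frame ($\boldsymbol{A}_t^{(c)} = \boldsymbol{I}$, $\boldsymbol{b}_t^{(c)}$ equal to the other arm's pose) and a single relative Gaussian per time step; the full coordination model uses all $K$ components of the relative GMM. Function names are illustrative.

```python
import numpy as np

def relative_frame_data(xi_self, xi_other):
    """Observe one arm's demonstration from the dynamic frame of the other arm.

    xi_self, xi_other : (T, d) end-effector trajectories of the two arms.
    With A_t = I and b_t = xi_other[t], the frame-local data reduce to the
    relative displacement, which is what the relative GMM is fitted on.
    """
    return xi_self - xi_other

def relative_references(rel_mu, rel_sigma, leader_traj):
    """Turn a learned relative Gaussian into per-step global-frame references
    for the other arm, given a (new) leader trajectory (dynamic frame in Equ. 6)."""
    mus = leader_traj + rel_mu                                 # b_t = leader pose, A_t = I
    sigmas = np.repeat(rel_sigma[None], leader_traj.shape[0], axis=0)
    return mus, sigmas     # to be fused with the static frames via the PoE of Equ. 6
```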
Such a relative parameterization entangles the representations of the two arms, letting them consider each other through time-varying mutual observation perspectives. This brings two useful functions:
- Generate the motion of one arm based on a given motion of the other arm, in a leader-follower manner.
- Generate bimanual motions that adapt to new situations simultaneously, in a synergistic manner.
For instance, if the left arm motion $\boldsymbol{\xi}^{l}$ is pre-defined or adjusted to new situations by other methods, such as the Dynamic Movement Primitive (DMP) in [9], a corresponding right arm motion that respects the spatio-temporal coordination implicit in the demonstration can be generated by obtaining the dynamic relative task parameters $\{\hat{\boldsymbol{A}}_t^{(c)}, \hat{\boldsymbol{b}}_t^{(c)}\}$ from $\boldsymbol{\xi}^{l}$. We can then obtain a task-and-coordination-specific GMM of the right arm for further motion generation and control.
For generating bimanual motions synergistically, the relative parameterization cannot be established at the outset because both motions are initially unknown. Thus, we first use the product of the GMMs in the other reference frames to generate an independent motion for each arm, and then use these motions as the relative frames of the opposite arms to embed the learned coordination iteratively, as sketched below.
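A minimal sketch of this iterative scheme, assuming two helper routines: one that generates a motion from the static frames only, and one that additionally fuses the dynamic relative frame built from a given trajectory of the other arm. Both helpers and the fixed number of refinement passes are illustrative assumptions, not the exact procedure of the released code.

```python
def synergistic_generation(gen_independent, gen_with_relative, n_iters=2):
    """Iteratively embed the learned coordination into both arms' motions.

    gen_independent(arm)               -> trajectory using only the static frames.
    gen_with_relative(arm, other_traj) -> trajectory that also fuses the dynamic
                                          relative frame built from other_traj.
    """
    left = gen_independent("left")
    right = gen_independent("right")
    for _ in range(n_iters):               # alternate until the pair settles
        left = gen_with_relative("left", right)
        right = gen_with_relative("right", left)
    return left, right
```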
II-C Control with Relative Parameterization
Coordination relationships can also be embedded when generating trajectories and the corresponding control commands from the GMM. Let the cost function of the vanilla LQT controller in Equ. 3 for arm $h$ be $c^{h}$. The composition cost function that takes coordination into account can then be written as

$$c = \sum_{h=1}^{H} c^{h} + \sum_{t=1}^{T} \big(\hat{\boldsymbol{\mu}}_t^{c} - \boldsymbol{x}_t^{c}\big)^{\top} \boldsymbol{Q}_t^{c} \big(\hat{\boldsymbol{\mu}}_t^{c} - \boldsymbol{x}_t^{c}\big) \qquad (7)$$

where $\boldsymbol{x}_t^{c}$ denotes the relative state between the arms, and $\hat{\boldsymbol{\mu}}_t^{c}$ and $\boldsymbol{Q}_t^{c}$ are the mean and precision of the learned relative motion.
By setting up a linear system similar to Equ. 4 for each arm, the composition cost function can be rewritten in batch form as

$$c = \sum_{h=1}^{H} \Big[\big(\hat{\boldsymbol{\mu}}^{h} - \boldsymbol{x}^{h}\big)^{\top} \boldsymbol{Q}^{h} \big(\hat{\boldsymbol{\mu}}^{h} - \boldsymbol{x}^{h}\big) + \boldsymbol{u}^{h\top} \boldsymbol{R}^{h} \boldsymbol{u}^{h}\Big] + \big(\hat{\boldsymbol{\mu}}^{c} - \boldsymbol{x}^{c}\big)^{\top} \boldsymbol{Q}^{c} \big(\hat{\boldsymbol{\mu}}^{c} - \boldsymbol{x}^{c}\big) \qquad (8)$$

where $\boldsymbol{x}^{h} = \boldsymbol{S}_{\boldsymbol{x}}^{h} \boldsymbol{x}_1^{h} + \boldsymbol{S}_{\boldsymbol{u}}^{h} \boldsymbol{u}^{h}$, and $\boldsymbol{Q}^{h}$ and $\boldsymbol{R}^{h}$ are the stacked precision and control cost matrices of arm $h$. $\boldsymbol{x}^{c}$, $\hat{\boldsymbol{\mu}}^{c}$, and $\boldsymbol{Q}^{c}$ share the similar stacked form.
Since multiple variables ($\boldsymbol{u}^{l}$ and $\boldsymbol{u}^{r}$) are involved, we cannot directly change this sum of quadratic error terms into a PoE. Thus, we set a unified vector $\boldsymbol{u} = \big[\boldsymbol{u}^{l\top}, \boldsymbol{u}^{r\top}\big]^{\top}$ representing the control command of the whole system, and a binary coordination matrix $\boldsymbol{C}$ such that $\boldsymbol{x}^{c} = \boldsymbol{C}\boldsymbol{x}$ with $\boldsymbol{x} = \big[\boldsymbol{x}^{l\top}, \boldsymbol{x}^{r\top}\big]^{\top}$. For convenience, we denote by $\hat{\boldsymbol{\mu}}$, $\boldsymbol{Q}$, $\boldsymbol{R}$, $\boldsymbol{S}_{\boldsymbol{x}}$, $\boldsymbol{S}_{\boldsymbol{u}}$ the corresponding block-diagonal stacked quantities of both arms, then we can continue to rewrite the cost function as

$$c = \big(\hat{\boldsymbol{\mu}} - \boldsymbol{x}\big)^{\top} \boldsymbol{Q} \big(\hat{\boldsymbol{\mu}} - \boldsymbol{x}\big) + \boldsymbol{u}^{\top} \boldsymbol{R} \boldsymbol{u} + \big(\hat{\boldsymbol{\mu}}^{c} - \boldsymbol{C}\boldsymbol{x}\big)^{\top} \boldsymbol{Q}^{c} \big(\hat{\boldsymbol{\mu}}^{c} - \boldsymbol{C}\boldsymbol{x}\big), \quad \boldsymbol{x} = \boldsymbol{S}_{\boldsymbol{x}}\boldsymbol{x}_1 + \boldsymbol{S}_{\boldsymbol{u}}\boldsymbol{u} \qquad (9)$$
Setting $\boldsymbol{e} = \hat{\boldsymbol{\mu}} - \boldsymbol{S}_{\boldsymbol{x}}\boldsymbol{x}_1$, $\boldsymbol{e}^{c} = \hat{\boldsymbol{\mu}}^{c} - \boldsymbol{C}\boldsymbol{S}_{\boldsymbol{x}}\boldsymbol{x}_1$, and $\boldsymbol{S}_{\boldsymbol{u}}^{c} = \boldsymbol{C}\boldsymbol{S}_{\boldsymbol{u}}$, the composition cost function is simplified as

$$c = \big(\boldsymbol{e} - \boldsymbol{S}_{\boldsymbol{u}}\boldsymbol{u}\big)^{\top} \boldsymbol{Q} \big(\boldsymbol{e} - \boldsymbol{S}_{\boldsymbol{u}}\boldsymbol{u}\big) + \boldsymbol{u}^{\top} \boldsymbol{R} \boldsymbol{u} + \big(\boldsymbol{e}^{c} - \boldsymbol{S}_{\boldsymbol{u}}^{c}\boldsymbol{u}\big)^{\top} \boldsymbol{Q}^{c} \big(\boldsymbol{e}^{c} - \boldsymbol{S}_{\boldsymbol{u}}^{c}\boldsymbol{u}\big) \qquad (10)$$
Then we can finally change this sum of quadratic error terms into a PoE over the unified control command

$$\mathcal{N}\big(\hat{\boldsymbol{u}}, \hat{\boldsymbol{\Sigma}}^{\boldsymbol{u}}\big) \propto \mathcal{N}\big(\hat{\boldsymbol{u}}_1, \hat{\boldsymbol{\Sigma}}_1\big)\, \mathcal{N}\big(\boldsymbol{0}, \boldsymbol{R}^{-1}\big)\, \mathcal{N}\big(\hat{\boldsymbol{u}}_c, \hat{\boldsymbol{\Sigma}}_c\big) \qquad (11)$$

where $\mathcal{N}(\hat{\boldsymbol{u}}_1, \hat{\boldsymbol{\Sigma}}_1)$ and $\mathcal{N}(\hat{\boldsymbol{u}}_c, \hat{\boldsymbol{\Sigma}}_c)$ are the Gaussians over $\boldsymbol{u}$ induced by the tracking and coordination terms, respectively.
The result can be written as

$$\hat{\boldsymbol{u}} = \underbrace{\big(\boldsymbol{S}_{\boldsymbol{u}}^{\top}\boldsymbol{Q}\boldsymbol{S}_{\boldsymbol{u}} + \boldsymbol{R} + \boldsymbol{S}_{\boldsymbol{u}}^{c\top}\boldsymbol{Q}^{c}\boldsymbol{S}_{\boldsymbol{u}}^{c}\big)^{-1}}_{\hat{\boldsymbol{\Sigma}}^{\boldsymbol{u}}}\big(\boldsymbol{S}_{\boldsymbol{u}}^{\top}\boldsymbol{Q}\,\boldsymbol{e} + \boldsymbol{S}_{\boldsymbol{u}}^{c\top}\boldsymbol{Q}^{c}\boldsymbol{e}^{c}\big) \qquad (12)$$
By using the binary coordination matrix $\boldsymbol{C}$ and the block structure of $\boldsymbol{u}$, we can extract the coordinated control commands and motions of each arm from $\hat{\boldsymbol{u}}$.
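The following sketch assembles the unified system of Equ. 9-12 for two arms and solves for the stacked control command, using a relative-position coordination matrix $\boldsymbol{C} = [\boldsymbol{I}, -\boldsymbol{I}]$ as one plausible choice. It is a numpy illustration under these assumptions, not the exact released implementation.

```python
import numpy as np

def bdiag(a, b):
    """Block-diagonal stacking of two matrices."""
    out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]))
    out[:a.shape[0], :a.shape[1]] = a
    out[a.shape[0]:, a.shape[1]:] = b
    return out

def coordinated_lqt(Sx_l, Su_l, Sx_r, Su_r, mu_l, Q_l, mu_r, Q_r, R_l, R_r,
                    mu_c, Q_c, x1_l, x1_r):
    """Solve the composed cost of Equ. 10-12 for the unified control command u."""
    Sx, Su = bdiag(Sx_l, Sx_r), bdiag(Su_l, Su_r)       # unified linear system
    Q, R = bdiag(Q_l, Q_r), bdiag(R_l, R_r)
    mu = np.concatenate([mu_l, mu_r])
    x1 = np.concatenate([x1_l, x1_r])
    n = Sx_l.shape[0]                                   # stacked state length per arm
    C = np.hstack([np.eye(n), -np.eye(n)])              # binary coordination matrix: x_l - x_r
    Su_c = C @ Su                                       # S_u^c = C S_u (Equ. 10)
    e, e_c = mu - Sx @ x1, mu_c - C @ Sx @ x1           # residuals
    u_hat = np.linalg.solve(Su.T @ Q @ Su + R + Su_c.T @ Q_c @ Su_c,
                            Su.T @ Q @ e + Su_c.T @ Q_c @ e_c)
    x = Sx @ x1 + Su @ u_hat                            # stacked left/right motions
    return u_hat, x[:n], x[n:]
```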
II-D Weighted Relative Parameterization
A feasible variant of the above methods is to introduce a weight coefficient $\alpha$ to adjust the influence of the coordination relationship in representation and control.
For the GMM representation,

$$\mathcal{N}\big(\hat{\boldsymbol{\mu}}_{t,k}^{h}, \hat{\boldsymbol{\Sigma}}_{t,k}^{h}\big) \propto \mathcal{N}\big(\hat{\boldsymbol{\mu}}_{t,k}^{(c)}, \tfrac{1}{\alpha}\hat{\boldsymbol{\Sigma}}_{t,k}^{(c)}\big) \prod_{j=1}^{P} \mathcal{N}\big(\hat{\boldsymbol{\mu}}_k^{(j)}, \hat{\boldsymbol{\Sigma}}_k^{(j)}\big) \qquad (13)$$
For the LQT controller,

$$\hat{\boldsymbol{u}} = \big(\boldsymbol{S}_{\boldsymbol{u}}^{\top}\boldsymbol{Q}\boldsymbol{S}_{\boldsymbol{u}} + \boldsymbol{R} + \alpha\,\boldsymbol{S}_{\boldsymbol{u}}^{c\top}\boldsymbol{Q}^{c}\boldsymbol{S}_{\boldsymbol{u}}^{c}\big)^{-1}\big(\boldsymbol{S}_{\boldsymbol{u}}^{\top}\boldsymbol{Q}\,\boldsymbol{e} + \alpha\,\boldsymbol{S}_{\boldsymbol{u}}^{c\top}\boldsymbol{Q}^{c}\boldsymbol{e}^{c}\big) \qquad (14)$$
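One way to realize the weighting of Equ. 13-14 in code is simply to rescale the coordination precision (equivalently, shrink the relative covariance) before the fusion and control steps; the sketch below assumes this scalar-weight form.

```python
def weight_coordination(sigma_rel, Q_c, alpha):
    """Scale the influence of the coordination terms by a weight alpha > 0.

    sigma_rel : covariance of the relative-frame Gaussian (GMM fusion, Equ. 13).
    Q_c       : coordination precision in the LQT cost (Equ. 14).
    A larger alpha makes the generated motion follow the learned coordination
    more tightly; alpha = 1 recovers the unweighted formulation.
    """
    return sigma_rel / alpha, alpha * Q_c
```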
III Experiments
III-A Setup
The effectiveness of the proposed method is illustrated by learning from both synthetic motions and real demonstration motions. The pre-designed coordinated motions show the coordination explicitly and are meant to demonstrate the performance of the method.
Synthetic motions: The synthetic motions were created via Bézier curves, where the two arms start at a distance from each other and meet at the same point. This kind of motion often occurs in daily activities that require both arms to grasp, carry, or pick up something simultaneously. We provide both two-dimensional and three-dimensional data to show the dimensional scalability, as shown in Fig. 3.
Real demonstration motions: We also provide demonstrations of two real tasks to show the effect in bimanual robot manipulation. The palletizing example shown in Fig. 4 represents a class of synergistic coordinated motions and tasks, while the pouring example shown in Fig. 2 is a typical bimanual coordination task in the leader-follower manner.
III-B Demonstration collection
The human demonstration data was collected via Optitrack. The demonstrator attached two groups of markers to his hands for detection by Optitrack. Each group contains four individual markers, which are required to determine the pose of each arm. These markers were detected by six Optitrack cameras to record the two end-effector trajectories with both position and orientation. We chose the poses of the centers of each marker group to reproduce the human bimanual demonstration motions. In addition, the box and the two cups each carry a set of four markers for recording object motions. The raw data were pre-processed by our open-source toolbox [13] to extract the useful information and visually separate it into multiple demonstrations. Each demonstration contains seven pose values for each marker group.
III-C Coordination learning performance analysis
The goal of the synthetic motions is that the two arms should meet at the same pose, whether in 2-dim or 3-dim. As shown in the left column of Fig. 3, we provide three bimanual motions as demonstrations for each synthetic example. These motions start and end at different positions but move in a similar style. The middle column, with multiple small figures, shows the process of applying the proposed relative parameterization method. We use three observation frames to parameterize the motion of each arm: from the start points, the end points, and a dynamic relative observation frame depending on the other arm. Through this parameterization, we can extract and construct coordination relationships from the demonstration data. The parameterized coordination is then used in motion generation and control in new situations with different task parameters. Keeping the same coordination relationship in these generalized motions is required to achieve certain bimanual tasks. The generalized motion generation results are shown in the right column of Fig. 3. In the 2-dim example, the bimanual motions are required to meet at a new position; in the 3-dim example, a new meeting point is likewise specified. The generated motions with learned coordination are shown in red and blue, while we also provide a comparison with generated motions without coordination (in light red and blue). By comparison, we find that simply regarding bimanual arms as a combination of two single arms is insufficient for bimanual tasks. It is necessary to parameterize the coordination relationship, whether in a leader-follower or synergistic manner; this is the key to achieving most bimanual tasks.
III-D Real robot experiment
We adopt the self-designed humanoid CURI robot for real robot experiments to perform the bimanual motions. Since this work focuses on learning and generalizing coordinated motion, task parameters such as start and end points and object poses are obtained through the Optitrack system. As shown in Fig. 4, we attach four markers to the box to be transported and to the destination box to facilitate obtaining their poses in the world coordinate system. Meanwhile, four fixed, connected markers are also placed on the back of the CURI robot. The coordinated human hand motions are learned by relative parameterization. Then we use this parameterized coordination model to generate motions that adapt to new object poses and destinations. It is worth mentioning that, unlike the observation frames used for the synthetic data, we set five observation frames for this palletizing task, namely from the start points, the end points, the center poses of the transported box, and the center pose of the destination box. This allows the robot to move from an initial pose with its arms outstretched to the sides of the box, carry the box and place it in the target position, and then release the box. Besides, the result of the pouring example can be found in Fig. 2. The execution on the CURI robot is supported by a self-designed impedance controller, and the trajectories are converted to joint space commands via its inverse kinematics model.
IV Discussion
This work still has some limitations. First, the proposed relative parameterization method is only applied to trajectories in Cartesian space without considering joint space coordination. Learning joint-space bimanual coordination, or even whole-body coordination, from human demonstrations remains an open problem; some previous work can be found in [14]. Besides, the method based on the Gaussian mixture model takes a certain amount of time when processing demonstration data sampled at high frequency, which might affect real-time usage. Improvements using tensor representations instead of large sparse matrices can be found in [15].
V Conclusion
In this work, we propose a method for parameterizing coordination in bimanual tasks by probabilistically encoding the relative motion relationship of the two arms from human demonstration and using it to guide robot motion generation in new situations. By embedding the relative motion relationship, bimanual motions can be generated in both a leader-follower manner and a synergistic manner. We provide a detailed derivation of the formulation and demonstrate the effectiveness of the proposed method in coordination learning on synthetic data with prominent coordination characteristics. We also deploy the method on a real humanoid robot to perform coordinated motions and show its generalization to new situations. We believe that this easy-to-use bimanual LfD method can be used as a robust demonstration data augmentation method for training large robot manipulation models [16], and we will investigate this potential in future work.
References
- [1] K. Yao, D. Sternad, and A. Billard, “Hand pose selection in a bimanual fine-manipulation task,” Journal of Neurophysiology, vol. 126, no. 1, pp. 195–212, 2021.
- [2] J. Lee and P. H. Chang, “Redundancy resolution for dual-arm robots inspired by human asymmetric bimanual action: Formulation and experiments,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 6058–6065, IEEE, 2015.
- [3] L. Shi, S. Kayastha, and J. Katupitiya, “Robust coordinated control of a dual-arm space robot,” Acta Astronautica, vol. 138, pp. 475–489, 2017.
- [4] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey of robot learning from demonstration,” Robotics and autonomous systems, vol. 57, no. 5, pp. 469–483, 2009.
- [5] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, “Learning and generalization of motor skills by learning from demonstration,” in 2009 IEEE International Conference on Robotics and Automation, pp. 763–768, IEEE, 2009.
- [6] L. P. Ureche and A. Billard, “Constraints extraction from asymmetrical bimanual tasks and their use in coordinated behavior,” Robotics and autonomous systems, vol. 103, pp. 222–235, 2018.
- [7] E. Gribovskaya and A. Billard, “Combining dynamical systems control and programming by demonstration for teaching discrete bimanual coordination tasks to a humanoid robot,” in Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction, pp. 33–40, 2008.
- [8] F. Krebs and T. Asfour, “A bimanual manipulation taxonomy,” IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 11031–11038, 2022.
- [9] J. Liu, Y. Chen, Z. Dong, S. Wang, S. Calinon, M. Li, and F. Chen, “Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 5159–5166, 2022.
- [10] Z. Sun, Z. Wang, J. Liu, M. Li, and F. Chen, “Mixline: A hybrid reinforcement learning framework for long-horizon bimanual coffee stirring task,” in International Conference on Intelligent Robotics and Applications, pp. 627–636, Springer, 2022.
- [11] S. Calinon, T. Alizadeh, and D. G. Caldwell, “On improving the extrapolation capability of task-parameterized movement models,” in 2013 IEEE/RSJ international conference on intelligent robots and systems, pp. 610–616, IEEE, 2013.
- [12] S. Calinon, “A tutorial on task-parameterized movement learning and retrieval,” Intelligent service robotics, vol. 9, no. 1, pp. 1–29, 2016.
- [13] J. Liu, C. Li, D. Delehelle, Z. Li, and F. Chen, “Rofunc: The full process python package for robot learning from demonstration and robot manipulation,” June 2023.
- [14] J. Silvério, S. Calinon, L. Rozo, and D. G. Caldwell, “Bimanual skill learning with pose and joint space constraints,” in 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), pp. 153–159, IEEE, 2018.
- [15] S. Shetty, J. Silvério, and S. Calinon, “Ergodic exploration using tensor train: Applications in insertion tasks,” IEEE Transactions on Robotics, vol. 38, no. 2, pp. 906–921, 2021.
- [16] J. Liu, Z. Li, S. Calinon, and F. Chen, “Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer,” arXiv preprint arXiv:2306.12677, 2023.