
Uncertainty-Aware Shared Autonomy System with
Hierarchical Conservative Skill Inference

Taewoo Kim, Donghyung Kim, Minsu Jang, and Jaehong Kim

Taewoo Kim is a Senior Researcher with the Social Robotics Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Republic of Korea (e-mail: [email protected]; [email protected]). Donghyung Kim is a Senior Researcher with the Field Robotics Research Section, ETRI, Daejeon, Republic of Korea (e-mail: [email protected]). Minsu Jang is a Principal Researcher with the Social Robotics Research Section, ETRI, Daejeon, Republic of Korea (e-mail: [email protected]). Jaehong Kim is a Principal Researcher and Director with the Social Robotics Research Section, ETRI, Daejeon, Republic of Korea (e-mail: [email protected]).
Abstract

Shared autonomy imitation learning, in which robots share the workspace with humans during learning, enables correct actions in unvisited states and effective resolution of compounding errors through expert corrections. However, it demands continuous human attention and supervision throughout the demonstrations and does not account for the risks associated with human judgment errors and delayed interventions. This can lead to high levels of demonstrator fatigue and additional errors. In this work, we propose an uncertainty-aware shared autonomy system that enables the robot to infer conservative task skills in light of environmental uncertainty while learning from expert demonstrations and corrections. To enhance generalization and scalability, we introduce a hierarchical skill uncertainty inference framework that operates at a more abstract level, and we apply it to robot motion to promote more stable interaction. Although shared autonomy systems have produced strong results in recent research and play a critical role, specific system design details have remained elusive. This paper provides a detailed design proposal for a shared autonomy system that accounts for various robot configurations. Furthermore, we experimentally demonstrate the system’s capability to learn manipulation skills, even in dynamic environments with interference, through pouring and pick-and-place tasks. Our code will be released soon.

I INTRODUCTION

Imitation learning has been widely applied across domains such as autonomous vehicles [1] and robotic tasks [2, 3] as an effective method for learning a target task under expert guidance. Recently, a shared autonomy process (SAP) [4] was proposed, building on imitation learning and HG-DAgger [5], in which experts and robotic agents share the workspace and experts correct the robot’s motions. Expanding on this, language-conditioned manipulation skills have been developed from a large-scale demonstration dataset [4]. Recent research trends, exemplified by large language models (LLMs) [6], likewise demand extensive datasets for training more generalized artificial intelligence. In robotics, however, the diversity of experimental environments has led to a scarcity of publicly available datasets that can be universally utilized. While datasets can be collected in simulation, setting up such environments is time-consuming and costly, and the domain gap between simulation and the real world still exists. Hence, collecting datasets through interaction with real-world robots is inevitable.

Figure 1: Control loop for a Hierarchical Conservative Skill Network in the Uncertainty-Aware Shared Autonomy Process.

State-of-the-art studies [7, 8] have shown outstanding results in learning robotic manipulation skills from large-scale demonstrations gathered with the SAP framework. SAP effectively mitigates incorrect policy actions in unvisited states and the associated compounding errors by allowing the user to intervene and take control whenever a robot task failure or hazardous situation is anticipated during skill demonstrations, followed by motion corrections (Fig. 1) [5, 4]. However, this approach requires prolonged interaction with the robot and continuous supervision, which not only leads to high operator fatigue [9] but also does not account for the possibility of expert errors [10]. In particular, safety must be considered when handling robots in real-world environments: delayed decisions about operator intervention can damage the robot or the environment, or injure the operator. Furthermore, robots deployed in real-world settings may encounter various dynamic environmental changes, so robot control under uncertainty must also be addressed. Recent SAP-based studies, however, did not specifically address these concerns in the dataset collection and application process [4, 7, 8].

Figure 2: The comprehensive structure of our SAP system and the configuration of the VR controller.

In this study, we propose an imitation learning approach that enables the robot agent to infer uncertainty and, consequently, perform manipulation skills more conservatively. This approach addresses human errors and the associated safety issues that may arise during SAP-based learning. Modeling robot skills and planning solely from the end-effector trajectory of a specific manipulation task lacks systematicity and scalability. We therefore introduce a hierarchical network structure inspired by SPiRL [11] to facilitate learning and inference at an abstracted skill level. Our hierarchical skill network is divided into a high-level and a low-level policy: the former generates abstracted skill embeddings from environmental inputs, and the latter decodes them into actual robot actions.

To facilitate uncertainty inference, we applied Monte-Carlo dropout [12], chosen for its simplicity, to the high-level policy network that maps environmental state information to the skill embedding. Through this approach, we infer uncertainty at the skill level and design a planning strategy whose conservatism scales with the degree of uncertainty. Additionally, we apply uncertainty-aware conservative action inference to the final robot action input (Fig. 1). Through the proposed hierarchical structure and conservative skill inference method, we experimentally demonstrate the stability of the SAP-based learning process. As a result, we propose an approach that increases the tolerance for human errors.

To validate the proposed method, we constructed a SAP-based manipulator teaching system from scratch, using virtual reality (VR) teleoperation. In the RT studies [7, 8], the SAP system played a crucial role, yet it was described in only a few lines and only the learning-related source code was released. In this paper, we provide a detailed description of the SAP system built with a VR device and a Universal Robots UR3, and we make all relevant source code publicly available. Furthermore, we present detailed system designs that support both the forward and downward configuration modes while accounting for the minimal operational stability required by the UR3’s configuration.

II Shared Autonomy System

II-A System Overview

Our system consists of a main processing unit (MPU) and a robot control unit (RCU), as depicted in Fig. 2. The MPU comprises several components, including VR and camera interfaces, teleoperation, a demonstration repository, skill learning, and real-time data exchange (RTDE). The VR interface captures the user’s motion together with synchronized scene images from the camera interface and passes this data to the teleoperation module. The teleoperation module converts the user’s input motion into robot motion commands, which are sent to the RCU via RTDE. After a task demonstration is completed, all teleoperation data, including VR motion, images, and proprioceptive robot states (e.g., joint angles), is stored in the demonstration repository. The skill training module then uses these demonstrations to acquire manipulation skills. The RCU communicates with the MPU through RTDE, processes user commands, and executes robot control.
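The concrete message flow between the MPU and the RCU is not prescribed here; the following is a minimal sketch of the command and feedback path, assuming the open-source ur_rtde Python bindings for the UR controller's RTDE channel. The robot IP address and the helper name are placeholders, not the system's actual code.

from rtde_control import RTDEControlInterface
from rtde_receive import RTDEReceiveInterface

ROBOT_IP = "192.168.0.10"  # placeholder address for the UR3 controller

rtde_c = RTDEControlInterface(ROBOT_IP)  # command channel toward the RCU side
rtde_r = RTDEReceiveInterface(ROBOT_IP)  # proprioceptive state feedback channel

def teleop_step(joint_velocity, dt=1.0 / 30.0):
    """Send one joint-velocity command and read back the robot state."""
    rtde_c.speedJ(list(joint_velocity), 1.0, dt)   # velocity command at 30 Hz
    return {
        "q": rtde_r.getActualQ(),                  # joint angles
        "tcp_pose": rtde_r.getActualTCPPose(),     # TCP position + axis-angle rotation
    }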

Figure 3: Visualization of the beta constraint, the forward and downward configurations, and the corresponding VR controller base postures.

II-B VR Teleoperation Interface

In the MPU, the VR interface processes various user commands, including controller motion and button events. We conducted teleoperated demonstrations using only the controller, without the head-mounted display (HMD), relying on the operator’s direct view of the scene. As shown in Fig. 2, four dedicated buttons are used for robot teleoperation: the menu button resets the current demonstration episode, the trigger button gives the operator control of the slave robot, the grip button opens and closes the slave robot’s gripper, and the trackpad button switches the robot’s configuration between forward and downward during teleoperation. Because the HTC VIVE uses a left-handed coordinate system, we converted it to a right-handed system to align with the local controller coordinate frame.
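As an illustration of the button mapping described above, the sketch below dispatches the four controller buttons to teleoperation commands. The event structure and handler names are hypothetical and do not reflect the actual VR SDK interface.

from dataclasses import dataclass

@dataclass
class VRButtonEvent:
    button: str    # "menu" | "trigger" | "grip" | "trackpad"
    pressed: bool

class TeleopCommandDispatcher:
    """Maps the four VIVE controller buttons to teleoperation commands."""
    def __init__(self):
        self.control_enabled = False   # trigger: operator holds control of the robot
        self.gripper_closed = False    # grip: toggles the gripper open/closed
        self.mode = "forward"          # trackpad: forward/downward configuration
        self.reset_requested = False   # menu: reset the current demonstration episode

    def handle(self, ev: VRButtonEvent) -> None:
        if not ev.pressed:
            if ev.button == "trigger":          # releasing the trigger returns control
                self.control_enabled = False
            return
        if ev.button == "menu":
            self.reset_requested = True
        elif ev.button == "trigger":
            self.control_enabled = True
        elif ev.button == "grip":
            self.gripper_closed = not self.gripper_closed
        elif ev.button == "trackpad":
            self.mode = "downward" if self.mode == "forward" else "forward"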

1   Initialize $\mathcal{D}$, robot, and demo count $N$
2   for $n \leftarrow 0$ to $N$ do
3       $t \leftarrow 0$
4       while true do
5           if $\mathrm{vr}_{\mathrm{reset}} == \mathrm{true}$ then
6               $\tau_T \leftarrow \{\boldsymbol{o}_T, \boldsymbol{s}_T, \boldsymbol{a}^{\mathrm{null}}_T, u^{\mathrm{true}}_T\}$
7               $\mathcal{T}_n.\mathrm{insert}(\boldsymbol{\tau})$
8               $\mathcal{D}.\mathrm{insert}(\mathcal{T}_n)$ and then $\mathcal{T}_n \leftarrow \mathrm{null}$
9               $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta}_{\mathrm{init}} + \epsilon$; $\dot{\boldsymbol{\theta}} \leftarrow 0$
10              break
11          $\dot{\boldsymbol{\theta}} \leftarrow 0$, $\boldsymbol{a}_t \leftarrow \mathrm{null}$
12          if $\mathrm{vr}_{\mathrm{trigger}} == \mathrm{true}$ then
13              $a^{\mathrm{mode}}_t \leftarrow \mathrm{vr}_{\mathrm{mode}}$; $a^{\mathrm{grip}}_t \leftarrow \mathrm{vr}_{\mathrm{grip}}$
14              if $a^{\mathrm{mode}}_t \neq \mathrm{mode}_t$ then
15                  $\boldsymbol{\theta} \leftarrow \mathrm{ModeToJoint}(a^{\mathrm{mode}}_t)$
16                  $\mathrm{RCU}(\boldsymbol{\theta})$
17              $\dot{\boldsymbol{p}}^{\mathrm{act}}_t \leftarrow \dot{\boldsymbol{p}}^{\mathrm{vr}}_t$; $\boldsymbol{q}^{\mathrm{act}}_t \leftarrow \boldsymbol{q}^{\mathrm{vr}}_t \otimes (\boldsymbol{q}^{\mathrm{tcp}}_t)^{*}$
18              $\boldsymbol{a}^{\mathrm{act}}_t = \{\dot{\boldsymbol{p}}^{\mathrm{act}}_t,\, \boldsymbol{q}^{\mathrm{act}}_t,\, a^{\mathrm{grip}}_t,\, a^{\mathrm{mode}}_t\}$
19              $\tau_t \leftarrow \{\boldsymbol{o}_t, \boldsymbol{s}_t, \boldsymbol{a}^{\mathrm{act}}_t, u^{\mathrm{false}}_t\}$
20              $\boldsymbol{p}'_{\mathrm{tcp}} \leftarrow \boldsymbol{p}^{\mathrm{tcp}}_t + \boldsymbol{a}^{\mathrm{act}}_t$; $\boldsymbol{q}'_{\mathrm{tcp}} \leftarrow \boldsymbol{q}^{\mathrm{vr}}_t$
21              $\boldsymbol{p}''_{\mathrm{tcp}},\, \boldsymbol{q}''_{\mathrm{tcp}} \leftarrow \delta(\boldsymbol{p}'_{\mathrm{tcp}},\, \boldsymbol{q}'_{\mathrm{tcp}})$
22              $\boldsymbol{\theta}_{\mathrm{goal}} \leftarrow \mathrm{IK}(\boldsymbol{p}''_{\mathrm{tcp}},\, \boldsymbol{q}''_{\mathrm{tcp}})$
23              $\dot{\boldsymbol{\theta}} \leftarrow (\boldsymbol{\theta}_{\mathrm{goal}} - \boldsymbol{\theta}^{\mathrm{tcp}}_t) \times \mathrm{scale}$
24              $t \leftarrow t + 1$
25          $\mathrm{RCU}(\dot{\boldsymbol{\theta}})$
Algorithm 1: Task Demonstration Dataset Collection

II-C Constrained Teleoperation

Conventional robot teleoperation methods inherently entail collision risks, as they directly transmit the master device’s motion to the slave robot without adequate safeguards. To address this concern, we developed a constrained teleoperation method designed to proactively prevent self-collisions and collisions with the floor during teleoperation. This method involves the imposition of virtual motion constraints on the slave robot, which are applied to the desired tool center point (TCP) pose derived from the VR controller motion:

$\boldsymbol{p}'_{\text{tcp}} = \boldsymbol{p}_{\text{tcp}} + \dot{\boldsymbol{p}}_{\text{vr}}$   (1)
$\boldsymbol{q}'_{\text{tcp}} = \boldsymbol{q}_{\text{vr}}$   (2)

where the desired TCP position, denoted as $\boldsymbol{p}'_{\text{tcp}}$, is determined by adding the linear velocity of the VR controller, $\dot{\boldsymbol{p}}_{\text{vr}} = \{\dot{x}_{\text{vr}}, \dot{y}_{\text{vr}}, \dot{z}_{\text{vr}}\}$, to the current TCP position $\boldsymbol{p}_{\text{tcp}} = \{x_{\text{tcp}}, y_{\text{tcp}}, z_{\text{tcp}}\}$. Similarly, the desired TCP orientation $\boldsymbol{q}'_{\text{tcp}}$ is simply defined to match the current orientation of the VR device, $\boldsymbol{q}_{\text{vr}} = \{x_{\text{vr}}, y_{\text{vr}}, z_{\text{vr}}, w_{\text{vr}}\}$, using quaternion notation in the practical implementation. In essence, the TCP’s positional movement is directly proportional to the master device’s positional speed, while its rotation remains synchronized with the master device. The positional constraints are calculated using a straightforward delta (clipping) function:

$\delta(x, \lambda_{\text{min}}, \lambda_{\text{max}}) = \max(\min(x, \lambda_{\text{max}}),\, \lambda_{\text{min}})$   (3)
$\bar{p}^{x}_{\text{tcp}} = \delta(p^{x}_{\text{tcp}},\, \lambda^{x}_{\text{min}},\, \lambda^{x}_{\text{max}})$   (4)

where $p^{x}_{\text{tcp}} \in \boldsymbol{p}_{\text{tcp}} = \{p^{x}_{\text{tcp}}, p^{y}_{\text{tcp}}, p^{z}_{\text{tcp}}\}$ represents the scalar component of the desired TCP position, and $\lambda^{x}_{\text{max}} \in \boldsymbol{\lambda}_{\text{max}} = \{\lambda^{x}_{\text{max}}, \lambda^{y}_{\text{max}}, \lambda^{z}_{\text{max}}\}$ and $\lambda^{x}_{\text{min}} \in \boldsymbol{\lambda}_{\text{min}} = \{\lambda^{x}_{\text{min}}, \lambda^{y}_{\text{min}}, \lambda^{z}_{\text{min}}\}$ correspond to the maximum and minimum thresholds, respectively.
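As a concrete illustration, the following sketch applies the per-axis clamp of Eqs. (3)-(4); the numeric bounds are taken from the forward-configuration row of Table I, and the function name is our own rather than the system's actual code.

import numpy as np

LAMBDA_MIN_FWD = np.array([0.38, -0.20, 0.07])  # x, y, z lower bounds [m], Table I
LAMBDA_MAX_FWD = np.array([0.53,  0.20, 0.30])  # x, y, z upper bounds [m], Table I

def clamp_tcp_position(p_tcp, lam_min=LAMBDA_MIN_FWD, lam_max=LAMBDA_MAX_FWD):
    """delta(x, lam_min, lam_max) = max(min(x, lam_max), lam_min), applied per axis."""
    return np.minimum(np.maximum(p_tcp, lam_min), lam_max)

# A desired TCP position outside the workspace is pulled back onto its boundary.
p_desired = np.array([0.60, -0.25, 0.05])
print(clamp_tcp_position(p_desired))  # -> [ 0.53 -0.2   0.07]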

While defining position constraints is straightforward, establishing rotation constraints demands a more intricate approach to ensure precise motion limitation. Specifically, we introduce plane-projected rotational constraints. To implement this, we first define projection matrices for the three fundamental planes, namely the $xy$, $xz$, and $yz$ planes (Fig. 3):

$P_{xy} = \hat{P}_{xy}(\hat{P}_{xy}^{\intercal}\hat{P}_{xy})^{-1}\hat{P}_{xy}^{\intercal}$   (5)

The rotation constraint is formulated with projection matrices for the three base planes, where $\hat{P}_{xy}$ is a $(3 \times 2)$ matrix whose columns are the unit vectors $\hat{\boldsymbol{x}}$ and $\hat{\boldsymbol{y}}$. The projection matrices for the $xz$ and $yz$ planes are constructed in the same manner.

To implement the rotation constraint, we projected the TCP’s coordinates onto these base planes using the respective projection matrices. This projection allowed us to compute the numerical angles corresponding to roll, pitch, and yaw, effectively constraining rotation. This process exhibits slight variations for each configuration mode, as outlined in equations (6) to (23).

Forward Configuration:

$\boldsymbol{v}_{xz} = P_{xz}\,\boldsymbol{v}^{z}_{\text{tcp}}$   (6)
$\beta_{f} = \arccos\!\left(\boldsymbol{v}_{xz} \cdot \hat{\boldsymbol{x}} \,/\, (\|\boldsymbol{v}_{xz}\| \cdot \|\hat{\boldsymbol{x}}\|)\right)$   (7)
$\beta_{f} = \delta\!\left(\beta_{f} \times \mathcal{C}(v^{z}_{xz} < 0),\, \lambda^{\beta_{f}}_{\text{min}},\, \lambda^{\beta_{f}}_{\text{max}}\right)$   (8)
$\boldsymbol{v}_{xy} = P_{xy}\,\boldsymbol{v}^{z}_{\text{tcp}}$   (9)
$\gamma_{f} = \arccos\!\left(\boldsymbol{v}_{xy} \cdot \hat{\boldsymbol{x}} \,/\, (\|\boldsymbol{v}_{xy}\| \cdot \|\hat{\boldsymbol{x}}\|)\right)$   (10)
$\gamma_{f} = \delta\!\left(\gamma_{f} \times \mathcal{C}(v^{y}_{xy} > 0),\, \lambda^{\gamma_{f}}_{\text{min}},\, \lambda^{\gamma_{f}}_{\text{max}}\right)$   (11)
$\boldsymbol{v}_{yz} = P_{yz}\,\boldsymbol{v}^{y}_{\text{tcp}}$   (12)
$\alpha_{f} = \arccos\!\left(\boldsymbol{v}_{yz} \cdot \hat{\boldsymbol{z}} \,/\, (\|\boldsymbol{v}_{yz}\| \cdot \|\hat{\boldsymbol{z}}\|)\right)$   (13)
$\alpha_{f} = \delta\!\left(\alpha_{f} \times \mathcal{C}(v^{y}_{yz} > 0),\, \lambda^{\alpha_{f}}_{\text{min}},\, \lambda^{\alpha_{f}}_{\text{max}}\right)$   (14)

Downward Configuration:

$\boldsymbol{v}_{xz} = P_{xz}\,\boldsymbol{v}^{y}_{\text{tcp}}$   (15)
$\beta_{d} = \arccos\!\left(\boldsymbol{v}_{xz} \cdot \hat{\boldsymbol{x}} \,/\, (\|\boldsymbol{v}_{xz}\| \cdot \|\hat{\boldsymbol{x}}\|)\right)$   (16)
$\beta_{d} = \delta\!\left(\beta_{d} \times \mathcal{C}(v^{z}_{xz} < 0),\, \lambda^{\beta_{d}}_{\text{min}},\, \lambda^{\beta_{d}}_{\text{max}}\right)$   (17)
$\boldsymbol{v}_{xy} = P_{xy}\,\boldsymbol{v}^{y}_{\text{tcp}}$   (18)
$\gamma_{d} = \arccos\!\left(\boldsymbol{v}_{xy} \cdot \hat{\boldsymbol{x}} \,/\, (\|\boldsymbol{v}_{xy}\| \cdot \|\hat{\boldsymbol{x}}\|)\right)$   (19)
$\gamma_{d} = \delta\!\left(\gamma_{d} \times \mathcal{C}(v^{y}_{xy} > 0),\, \lambda^{\gamma_{d}}_{\text{min}},\, \lambda^{\gamma_{d}}_{\text{max}}\right)$   (20)
$\boldsymbol{v}_{yz} = P_{yz}\,\boldsymbol{v}^{z}_{\text{tcp}}$   (21)
$\alpha_{d} = \arccos\!\left(\boldsymbol{v}_{yz} \cdot (-\hat{\boldsymbol{z}}) \,/\, (\|\boldsymbol{v}_{yz}\| \cdot \|\hat{\boldsymbol{z}}\|)\right)$   (22)
$\alpha_{d} = \delta\!\left(\alpha_{d} \times \mathcal{C}(v^{y}_{yz} < 0),\, \lambda^{\alpha_{d}}_{\text{min}},\, \lambda^{\alpha_{d}}_{\text{max}}\right)$   (23)
Figure 4: Overall architecture of the hierarchical skill network (HSN).

Subsequently, the rotation constraint is finalized by applying rotation clipping thresholds to the angle values obtained from the projected TCP coordinates. For instance, for $\beta_{f}$, the TCP’s $z$-axis direction vector $\boldsymbol{v}^{z}_{\text{tcp}}$ is projected onto the $xz$ plane (Eq. (6)). The angle is then calculated from the dot product of the projected vector $\boldsymbol{v}_{xz}$ and the basis vector $\hat{\boldsymbol{x}}$, followed by the arccos function (Eq. (7)). The sign of this angle is positive when $v^{z}_{xz}$ is less than zero and negative otherwise, via the conditional sign function $\mathcal{C}(x) = [(\mathds{1} \,|\, x=\text{T}) \text{ or } (-\mathds{1} \,|\, x=\text{F})]$, where $v^{z}_{xz} \in \boldsymbol{v}_{xz}$. Finally, $\beta_{f}$ is confined to the range defined by $\lambda^{\beta_{f}}_{\text{min}}$ and $\lambda^{\beta_{f}}_{\text{max}}$ (Eq. (8)). The actual threshold values for the pose constraints in the forward and downward configurations are listed in Table I.
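The sketch below walks through Eqs. (5)-(8) for $\beta_{f}$ in the forward configuration; the limits follow Table I, and the NumPy implementation is an illustrative assumption rather than the system's actual code.

import numpy as np

def plane_projection(basis):
    """P = B (B^T B)^{-1} B^T for a 3x2 basis matrix B (Eq. (5))."""
    return basis @ np.linalg.inv(basis.T @ basis) @ basis.T

x_hat, y_hat, z_hat = np.eye(3)
P_XZ = plane_projection(np.column_stack([x_hat, z_hat]))  # projector onto the xz plane

def constrained_beta_f(v_tcp_z, lam_min_deg=-5.0, lam_max_deg=20.0):
    """Project the TCP z axis onto the xz plane, measure its angle to x_hat,
    apply the conditional sign C(.), and clip to the forward-mode limits."""
    v_xz = P_XZ @ v_tcp_z                                        # Eq. (6)
    cos_angle = v_xz @ x_hat / (np.linalg.norm(v_xz) * np.linalg.norm(x_hat))
    beta = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # Eq. (7)
    beta *= 1.0 if v_xz[2] < 0 else -1.0                         # C(v_xz^z < 0)
    return float(np.clip(beta, lam_min_deg, lam_max_deg))        # Eq. (8)

# A TCP z axis tilted 30 degrees below the x axis is clipped to the 20-degree limit.
v = np.array([np.cos(np.radians(30.0)), 0.0, -np.sin(np.radians(30.0))])
print(constrained_beta_f(v))  # -> 20.0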

II-D Human Demonstration Dataset Collection

Human demonstrations for initial skill policy learning were recorded at 30 Hz and saved to disk at the end of each episode. Each episode contains TCP trajectories consisting of observations, robot states, actions, and an episode-end flag. This data forms the dataset for the target task, denoted as a rollout memory $\mathcal{D} = \{\mathcal{T}_{1}, \dots, \mathcal{T}_{N}\}$, where each trajectory $\mathcal{T}_{n}$ comprises packets $\tau_{t} = \{o_{t}, s_{t}, a_{t}, u_{t}\}$. A packet includes an RGB image as the observation, a 16-dimensional state vector (joint angles, TCP position with quaternion orientation, normalized gripper position, and configuration mode), a 9-dimensional action vector (position, rotation, gripper, and configuration mode), and an episode-end flag. The rotation action follows the VR controller’s orientation, with the recorded rotation action determined by the difference between the TCP and VR controller orientations:

$\boldsymbol{q}^{\text{vr}}_{t} = \boldsymbol{q}^{\text{act}}_{t} \otimes \boldsymbol{q}^{\text{tcp}}_{t}$   (24)

where the unknown rotation action $\boldsymbol{q}^{\text{act}}_{t}$ is obtained by multiplying both sides of the equation by the conjugate of the TCP orientation, $(\boldsymbol{q}^{\text{tcp}}_{t})^{*}$. Algorithm 1 outlines the demonstration dataset collection process.
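For illustration, the rotation action of Eq. (24) can be recovered as $\boldsymbol{q}^{\text{act}}_{t} = \boldsymbol{q}^{\text{vr}}_{t} \otimes (\boldsymbol{q}^{\text{tcp}}_{t})^{*}$. The sketch below uses a (w, x, y, z) quaternion convention, which is an assumption of this example (the recorded data stores x, y, z, w).

import numpy as np

def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotation_action(q_vr, q_tcp):
    """q_act = q_vr ⊗ (q_tcp)^*, the relative rotation stored in the dataset."""
    return quat_mul(q_vr, quat_conj(q_tcp))

# Sanity check: composing the recovered action back onto q_tcp reproduces q_vr.
q_tcp = np.array([np.cos(0.2), 0.0, np.sin(0.2), 0.0])  # small rotation about y
q_vr = np.array([np.cos(0.5), 0.0, 0.0, np.sin(0.5)])   # rotation about z
assert np.allclose(quat_mul(rotation_action(q_vr, q_tcp), q_tcp), q_vr)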

III Hierarchical Conservative Skill Inference

III-A Hierarchical Skill Network Framework

To learn manipulation skills from human demonstrations, we designed a hierarchical skill network (HSN) model (Fig. 4) inspired by the SPiRL architecture [11]. The HSN is a hierarchical structure that infers a robot skill embedding from observations of the environment and decodes it into actual robot actions, thereby controlling the real robot. In the training phase, the HSN learns a skill embedding space (pouring and pick-and-place skills in our case) using a recurrent skill encoder $q(z|\boldsymbol{a}_{i})$ and a skill decoder $p_{d}(\boldsymbol{a}_{i}|z)$, while the skill prior $p_{a}(z_{t}|o_{t},s_{t})$ is trained to predict the skill embedding distribution corresponding to the observations by minimizing the Kullback-Leibler divergence [13] between the predicted prior and the inferred skill posterior, $\mathbb{E}_{(s,a_{i})\sim\mathcal{D}}\, D_{KL}(q(z|a_{i}),\, p_{a}(z|s_{t}))$. To train a robust skill policy, preprocessing steps are applied to the input image, including random cropping, downsizing, and noise addition (Fig. 4). The skill prior then generates skill embedding actions from a concatenated vector of image features extracted by ResNet18 [14] and the robot state.
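A minimal PyTorch sketch of this training objective is given below; the layer sizes follow the description in this section, while the loss weight, dropout rate, and module names are assumptions of the example rather than the released implementation.

import torch
import torch.nn as nn
import torch.distributions as D

H, ACT_DIM, Z_DIM, STATE_DIM, FEAT_DIM = 10, 9, 12, 16, 512

class SkillEncoder(nn.Module):            # q(z | a_{t:t+H})
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(ACT_DIM, 256, batch_first=True)
        self.head = nn.Sequential(nn.Linear(256, 128), nn.LeakyReLU(),
                                  nn.Linear(128, 2 * Z_DIM))
    def forward(self, actions):            # actions: (batch, H, ACT_DIM)
        _, h = self.rnn(actions)
        mu, log_std = self.head(h[-1]).chunk(2, dim=-1)
        return D.Normal(mu, log_std.exp())

class SkillDecoder(nn.Module):            # p_d(a_{t:t+H} | z)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM, 256), nn.LeakyReLU(),
                                 nn.Linear(256, 256), nn.LeakyReLU(),
                                 nn.Linear(256, H * ACT_DIM))
    def forward(self, z):
        return self.net(z).view(-1, H, ACT_DIM)

class SkillPrior(nn.Module):              # p_a(z | o_t, s_t)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + STATE_DIM, 256), nn.LeakyReLU(),
                                 nn.Dropout(0.2),   # reused later for MC-dropout sampling
                                 nn.Linear(256, 256), nn.LeakyReLU(),
                                 nn.Linear(256, 2 * Z_DIM))
    def forward(self, obs_feat, state):
        mu, log_std = self.net(torch.cat([obs_feat, state], dim=-1)).chunk(2, dim=-1)
        return D.Normal(mu, log_std.exp())

def training_step(encoder, decoder, prior, batch, beta=1e-2):
    posterior = encoder(batch["actions"])                  # q(z | a_i)
    z = posterior.rsample()
    recon_loss = ((decoder(z) - batch["actions"]) ** 2).mean()
    prior_dist = prior(batch["obs_feat"], batch["state"])  # p_a(z | o_t, s_t)
    kl = D.kl_divergence(posterior, prior_dist).mean()     # D_KL(q || p_a)
    return recon_loss + beta * kl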

In the test phase, the skill encoder is not used. Instead, the skill prior infers a 12-dimensional skill action $z_{t} = p_{a}(z|o_{t},s_{t})$, which the skill decoder decodes into an $H$-step (set to 10) robot action trajectory to be applied to the physical robot. The decoded robot actions ($H \times 9$) comprise the relative positional difference $\{\partial x, \partial y, \partial z\}$ and quaternion rotational difference $\{x_{q}, y_{q}, z_{q}, w_{q}\}$ of the end-effector, the grip action $g_{a}$, and the configuration-change action $c_{a}$, which switches the joint configuration between the forward and downward base poses, as a button press on the VR controller does during teleoperation. As a result, a single skill action inference leads to the execution of a series of actual robot actions spanning $H$ steps, i.e., $p_{d}(\boldsymbol{a}_{i}|z_{t}) = [a_{t}, \dots, a_{t+h}, \dots, a_{t+H}]$ and $z_{t+1} \cong z_{t+H}$. The skill encoder is composed of a recurrent layer and two linear layers (256-dim, 128-dim), while the skill prior and skill decoder each consist of three linear layers (256-dim) with the leaky-ReLU [15] activation function.
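The corresponding test-time loop can be sketched as follows, assuming hypothetical helpers for observation acquisition, ResNet18 feature extraction, and robot command execution.

import torch

@torch.no_grad()
def run_skill_once(prior, decoder, get_observation, extract_features, execute_action):
    """One skill inference followed by H low-level robot actions."""
    obs_img, state = get_observation()    # RGB image + 16-dim robot state
    obs_feat = extract_features(obs_img)  # ResNet18 image features
    z_t = prior(obs_feat, state).mean     # 12-dim skill action z_t
    actions = decoder(z_t)[0]             # (H, 9) trajectory: deltas, rotation, grip, mode
    for a in actions:
        execute_action(a)                 # applied step by step on the robot
    return z_t                            # kept as z_{t-1} for the next inference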

III-B Uncertainty-Aware Conservative Skill Inference

To address the uncertainties in dynamic environments, we applied Monte-Carlo dropout [12] to the skill prior network of HSN. The skill uncertainty associated with the current observation is determined through the standard deviation of the determinants of K covariance matrices:

$\xi = \mathrm{std}(\mathbb{H})$   (25)

where $\mathbb{H} = \{|\Sigma_{1}|, \dots, |\Sigma_{k}|, \dots, |\Sigma_{K}|\}$, $K$ is the number of samples in the MC-dropout process, and the covariance matrices are derived from the sampled skill actions, which follow a multivariate Gaussian distribution $z_{t} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$. We then normalize the inferred skill uncertainty to a value between 0 and 1 as follows:

$\hat{\xi} = 1 - \exp(-\epsilon\,\xi)$   (26)

where $\epsilon$ is a tunable constant parameter (set to 2e-3). The normalized skill uncertainty is then used to modulate the level of conservatism in skill planning, affecting both skill actions and robot actions:

$\hat{z}_{t} = (1-\hat{\xi})\, z_{t} + \hat{\xi}\, z_{t-1}$   (27)
$\hat{a}_{t+h} = \left(\dfrac{1}{1+\hat{\xi}}\right) a_{t+h}$   (28)

In the skill embedding space, conservative skill inference enables more deliberate skill planning by weighting the inferred skill action toward the preceding skill when uncertainty is high. Similarly, according to Eq. (28), reducing the action execution speed by up to 50% of the maximum enhances the stability of robot operation in uncertain situations. The detailed uncertainty-aware shared autonomy process, including conservative skill inference, is described in Algorithm 2.
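A minimal sketch of Eqs. (25)-(28), reusing the skill prior and decoder sketches above with dropout kept active at test time, is shown below; K and $\epsilon$ follow the text, while the diagonal-covariance shortcut for $|\boldsymbol{\Sigma}_k|$ is an assumption of the example.

import torch

def conservative_skill_step(prior, decoder, obs_feat, state, z_prev, K=20, eps=2e-3):
    prior.train()                                      # keep dropout layers active (MC dropout)
    means, dets = [], []
    with torch.no_grad():
        for _ in range(K):
            dist = prior(obs_feat, state)              # z_k ~ N(mu_k, Sigma_k)
            means.append(dist.mean)
            dets.append(torch.prod(dist.scale ** 2))   # |Sigma_k| for a diagonal covariance
        xi = torch.stack(dets).std()                   # Eq. (25)
        xi_hat = 1.0 - torch.exp(-eps * xi)            # Eq. (26), normalized to [0, 1)
        z_t = torch.stack(means).mean(dim=0)
        z_hat = (1.0 - xi_hat) * z_t + xi_hat * z_prev # Eq. (27)
        # Eq. (28); the slowdown is shown on the whole action vector for brevity.
        actions = decoder(z_hat)[0] / (1.0 + xi_hat)
    return z_hat, actions, xi_hat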

TABLE I: Position and orientation constraints in each configuration mode (units: meters and degrees)
Configuration       | x    | y    | z    | α    | β   | γ
Forward (Fwd) min   | 0.38 | -0.2 | 0.07 | -135 | -5  | -45
Forward (Fwd) max   | 0.53 | 0.2  | 0.3  | 135  | 20  | 45
Downward (Dwd) min  | 0.2  | -0.2 | 0.04 | -20  | -40 | -90
Downward (Dwd) max  | 0.44 | 0.2  | 0.13 | 20   | 3   | 90
1   Load pretrained networks $p_d$ and $\pi_{N_1} \leftarrow p_a$
2   $\mathcal{D} \leftarrow \mathcal{D}_{BC}$, $\mathbb{H} \leftarrow [\,]$
3   for epoch $i = 1{:}L$ do
4       for rollout $j = 1{:}M$ do
5           for timestep $t \in T$ of rollout $j$ do
6               if expert has control then
7                   $\mathcal{D}_j \leftarrow \pi_E(x)$
8               else
9                   $\mathbb{H} \leftarrow [\,]$
10                  for drops $k = 1{:}K$ do
11                      $z_k = p_a(o_t, s_t) \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$
12                      append $|\boldsymbol{\Sigma}_k|$ of $z_k$ to $\mathbb{H}$
13                  $\hat{\xi} \leftarrow 1 - \exp(-\epsilon\,\mathrm{std}(\mathbb{H}))$
14                  $\hat{z}_t \leftarrow (1-\hat{\xi})\, z_t + \hat{\xi}\, z_{t-1}$
15                  $\hat{a}_{t+h} \leftarrow (1/(1+\hat{\xi}))\, a_{t+h}$
16          $\mathcal{D} \leftarrow \mathcal{D} \cup \mathcal{D}_j$
17      $\pi_{N_{i+1}} \leftarrow \mathrm{update}\; \pi_{N_i}$
Algorithm 2: Uncertainty-aware Shared Autonomy Process
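To make the outer loop of Algorithm 2 concrete, the sketch below combines expert takeover with the conservative skill step from the previous example; env, expert, dataset, and update_policy are hypothetical stand-ins for the corresponding system components rather than the released implementation.

import torch

def shared_autonomy_process(prior, decoder, env, expert, dataset,
                            epochs, rollouts_per_epoch, horizon):
    """Outer loop of the uncertainty-aware SAP (Algorithm 2), sketched."""
    z_prev = torch.zeros(1, 12)                       # neutral previous skill action
    for _ in range(epochs):
        for _ in range(rollouts_per_epoch):
            rollout = []
            for _ in range(horizon):
                obs_feat, state = env.observe()
                if expert.has_control():              # VR trigger held: expert correction
                    a = expert.action()
                    rollout.append((obs_feat, state, a))
                    env.execute(a)
                else:                                 # conservative autonomous execution
                    z_prev, actions, _ = conservative_skill_step(
                        prior, decoder, obs_feat, state, z_prev)
                    for a in actions:
                        env.execute(a)
            dataset.extend(rollout)                   # aggregate corrections into D
        update_policy(prior, decoder, dataset)        # hypothetical training routine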

IV EXPERIMENTS

IV-A Experimental Setup

In our setup (Fig. 2), we used a UR3 robot with a Robotiq 2F-85 gripper for manipulation, an HTC VIVE Pro 2 for teleoperation, and a RealSense D435 camera for observation. We adjusted the TCP origin by adding a 127 mm offset along its z-axis. Two cameras were placed: one in front for recording evaluation videos and one on the rear side (Fig. 4). Teleoperation and skill learning were conducted on a desktop system with an i7-Xeon processor and an RTX 3090 GPU. For the pick-and-place task we used a white basket with toy fruit, while the pouring task involved a green plastic cup and a transparent bottle containing red beads.

IV-B Rotation Constraints Validation

Since the position constraints yielded obvious results, we only conducted an evaluation of the orientation constraints. Fig. 5 illustrates the comparative results between the original and constrained rotation motions of the TCP along each orientation axis. The results demonstrate that the proposed rotation constraint algorithm effectively confines the input rotations to the specified limits as detailed in Table I.

Figure 5: Accumulated plot illustrating the input and constrained TCP coordinates for the alpha, beta, and gamma orientations.
Figure 6: Evaluation of adaptable manipulation skills in dynamic environments where humans manipulate target objects to simulate variations.

IV-C Learning Pouring and Pick and Place Skills

We trained the HSN for 10K epochs using 308 and 283 demonstrations for the pouring and pick-and-place tasks, respectively. Each demonstration began with target objects randomly positioned within the predefined manipulable regions (Fig. 4) for the respective task, and the initial robot configuration was also randomized (forward or downward) with uniform noise. The first half of each dataset was collected via VR teleoperation for initial skill policy training. For the remaining half, correction motion data were gathered through user interventions during SAP-based task execution whenever collisions or task failures (e.g., tipping over a water bottle) were anticipated. We trained the pouring and pick-and-place skills independently and achieved 90% and 80% success rates, respectively, over 10 disturbance-free trials. Failures resulted from minor spatial errors despite semantically correct actions (e.g., approaching the bottle correctly but failing due to small spatial errors). This indicates success in semantic learning, and we expect spatial errors to decrease with a larger and more varied dataset. For pick-and-place, occlusion caused by the robot hardware made the task more challenging, leading to lower performance than the relatively simpler pouring task. To address this, we plan to use multiple cameras for observation in future work.

IV-D Task Performance in Dynamic Environment

We assessed our skills in dynamic environments, where target objects were moved during tasks. Despite deliberate disruptions, the agent achieved success rates of 80% and 70% (Fig. 6, Table II). Our HSN demonstrated the ability to adapt to dynamic changes, even without specific disruption demonstrations in the initial dataset. However, adapting to pose variations, like recovering a fallen bottle, would require additional demonstrations.

IV-E Multi-Skill Learning and Task Transition

We evaluated the HSN’s ability to learn multi-configurable skills. Initially, we trained it for 5K epochs using demonstration datasets from the two tasks. We then collected 200 additional demonstrations with SAP and continued training for a total of 10K epochs. The results demonstrated the HSN’s success in performing pouring and pick-and-place tasks, achieving 80% and 90% success rates, respectively, over 10 trials. The agent smoothly transitioned configurations between tasks within a few seconds, achieving a 100% success rate in configuration transitions. The outcomes from both single- and multi-skill learning, detailed in Table II, suggest that this method can be extended to a broader range of skills and objects.

IV-F Conservative Skill Inference

The SAP based on Conservative Skill Inference (CSI) demonstrated a more stable learning process compared to existing methods. After collecting a dataset of 70 demonstrations and training for 1K epochs, the non-CSI policy diverged and caused collisions, whereas the CSI-based policy exhibited a more stable motion by reducing the execution speed based on inferred uncertainty (refer to the video). This contributes to the stability of robot motion not only during the learning process but also in out-of-distribution scenarios during deployment.

TABLE II: Evaluation results for pouring, pick-and-place tasks, and the multi-skill agent in both static and dynamic settings, along with task transition success rates (Unit: %)
Task            | Static | Dynamic | Task Transition
Pouring         | 90     | 80      | -
Pick and place  | 90     | 70      | -
Multi-Skill     | 80     | 70      | 100

V DISCUSSION and CONCLUSIONS

In this paper, we propose a learning method within the shared autonomy process, where skills are acquired through human demonstration and correction. This method is based on the uncertainty of manipulation skills, enabling conservative task execution to expand the permissible margin for human errors. Additionally, we implement a shared autonomy system for robot manipulation skill learning, a key component that has shown recent outstanding results but lacks specific public details. Through our proposed system, we experimentally demonstrate the learning of multi-configurable manipulation skills and the ability to perform skill replanning for task completion in dynamic environments with disturbances.

We introduce a hierarchical skill network to infer uncertainty in the current context at an abstract level. We also propose a technique for conservative skill inference using MC-dropout based uncertainty estimation for skill layers and terminal output actions. This approach provides more flexibility in assessing the timing of human intervention and mitigates the potential for errors, out-of-distribution scenarios, and risk-related task failures.

The proposed system enables more stable manipulation skill learning through conservative skill inference. However, because it uses features of the entire image as observations, uncertainty inference for partial scene changes can itself become unreliable. In future research, we plan to incorporate patch-level visual understanding methods, such as the Vision Transformer [16], to improve the quality of skill uncertainty estimation.

ACKNOWLEDGMENT

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00842, Development of Cloud Robot Intelligence for Continual Adaptation to User Reactions in Real Service Environments, 50%) and (No. 2022-0-00951, Development of Uncertainty-Aware Agents Learning by Asking Questions, 50%).

References

  • [1] O. Qureshi, M. N. Durrani, and S. A. Raza, “Imitation learning for autonomous driving cars,” in 2023 3rd International Conference on Artificial Intelligence (ICAI).   IEEE, 2023, pp. 58–63.
  • [2] Y. Wang, C. C. Beltran-Hernandez, W. Wan, and K. Harada, “An adaptive imitation learning framework for robotic complex contact-rich insertion tasks,” Frontiers in Robotics and AI, vol. 8, p. 777363, 2022.
  • [3] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. Ben Amor, “Language-conditioned imitation learning for robot manipulation tasks,” Advances in Neural Information Processing Systems, vol. 33, pp. 13139–13150, 2020.
  • [4] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn, “Bc-z: Zero-shot task generalization with robotic imitation learning,” in Conference on Robot Learning.   PMLR, 2022, pp. 991–1002.
  • [5] M. Kelly, C. Sidrane, K. Driggs-Campbell, and M. J. Kochenderfer, “Hg-dagger: Interactive imitation learning with human experts,” in 2019 International Conference on Robotics and Automation (ICRA).   IEEE, 2019, pp. 8077–8083.
  • [6] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, et al., “A survey of large language models,” arXiv preprint arXiv:2303.18223, 2023.
  • [7] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al., “Rt-1: Robotics transformer for real-world control at scale,” arXiv preprint arXiv:2212.06817, 2022.
  • [8] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, X. Chen, K. Choromanski, T. Ding, D. Driess, A. Dubey, C. Finn, et al., “Rt-2: Vision-language-action models transfer web knowledge to robotic control,” arXiv preprint arXiv:2307.15818, 2023.
  • [9] N. A. Bradbury, “Attention span during lectures: 8 seconds, 10 minutes, or more?” 2016.
  • [10] J. Reason, “Human error: models and management,” BMJ, vol. 320, no. 7237, pp. 768–770, 2000.
  • [11] K. Pertsch, Y. Lee, and J. Lim, “Accelerating reinforcement learning with learned skill priors,” in Conference on Robot Learning.   PMLR, 2021, pp. 188–204.
  • [12] Y. Gal and Z. Ghahramani, “Dropout as a bayesian approximation: Representing model uncertainty in deep learning,” in International Conference on Machine Learning.   PMLR, 2016, pp. 1050–1059.
  • [13] I. Csiszár, “I-divergence geometry of probability distributions and minimization problems,” The annals of probability, pp. 146–158, 1975.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [15] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv preprint arXiv:1505.00853, 2015.
  • [16] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.