Trajectory Planning for Teleoperated Space Manipulators Using Deep Reinforcement Learning
Abstract
Trajectory planning for teleoperated space manipulators involves challenges such as accurately modeling system dynamics, particularly in free-floating modes with non-holonomic constraints, and managing time delays that increase model uncertainty and affect control precision. Traditional teleoperation methods rely on precise dynamic models requiring complex parameter identification and calibration, while data-driven methods do not require prior knowledge but struggle with time delays. A novel framework utilizing deep reinforcement learning (DRL) is introduced to address these challenges. The framework incorporates three methods: Mapping, Prediction, and State Augmentation, to handle delays when delayed state information is received at the master end. The Soft Actor Critic (SAC) algorithm processes the state information to compute the next action, which is then sent to the remote manipulator for environmental interaction. Four environments are constructed using the MuJoCo simulation platform to account for variations in base and target fixation: fixed base and target, fixed base with rotated target, free-floating base with fixed target, and free-floating base with rotated target. Extensive experiments with both constant and random delays are conducted to evaluate the proposed methods. Results demonstrate that all three methods effectively address trajectory planning challenges, with State Augmentation showing superior efficiency and robustness.
Keywords: teleoperated space manipulator, time delay, deep reinforcement learning
Affiliations:
[inst1, inst2, inst4, inst6] Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
[inst3] Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
[inst5] Department of Automation, Tsinghua University, Beijing 100000, China
1 Introduction
With the aid of teleoperation technology, space robots significantly enhance astronauts’ capabilities in space operations, playing an increasingly vital role in On-Orbit Servicing (OOS) missions, including tasks such as capturing, refueling, repairing satellites, removing orbital debris, and assembling and maintaining large space infrastructure [1, 2, 3, 4]. The currently common teleoperation control methods include teleprogramming control, bilateral control, and virtual predictive control [5]. Teleprogramming control operates in a supervisory mode, where space robots receive operational instructions from the master end and interact with the environment at the slave end, forming a closed-loop system [6, 7]. However, this method relies on the intelligence level of the space robot. Both bilateral control and virtual predictive control fall under the category of direct control. The distinction is that bilateral control directly receives force feedback information from the remote environment and is suitable for scenarios with small delays. With the aid of appropriate control algorithms, such as passive control [8], robust control [9, 10, 11] and impedance control [12, 13, 14], the force and position information between the operator at the master end and the robot at the slave end is kept consistent. By contrast, virtual predictive control [15, 16] establishes a virtual model at the master end similar to the environment and robot at the slave end, mitigating the impact of large delays on system stability and operational characteristics. All model-based methods demand high model accuracy, while designing complex controllers entails deep domain expertise. However, space robots are intricate dynamic systems, presenting significant modeling challenges due to the complex coupling of dynamics between the base and the manipulator arm, as well as the nonlinearities stemming from factors such as friction and joint flexibility [17]. Furthermore, even with a physics-based model, many crucial parameters often remain unknown. These factors can have a substantial impact on the model and subsequently affect the performance of the controller.
Recently, data-driven model-free deep reinforcement learning (DRL) has demonstrated significant promise in various domains such as games[18, 19], industrial control[20, 21], and large language models[22]. This approach has also been widely employed by scholars in the field of space robotics, predominantly focusing on trajectory planning for robotic arms. For stationary targets, Yan et al.[23] achieved single/dual-arm grasping using Soft Q learning. Wang et al.[24] employed image inputs and the Soft Actor-Critic algorithm to decide joint angular velocities for controlling UR5 robotic arms. Lei et al.[25] employed the Proximal Policy Optimization algorithm to achieve active object tracking with dynamically updated rewards to ensure that the target remains as centered as possible in the camera’s field of view. Wu et al.[26] conducted trajectory tasks effectively with slight target movement utilizing the Deep Deterministic Policy Gradient algorithm on a space robot with freely-floating dual arms. To enhance the efficiency of DRL exploration and achieve more precise and stable results, Wang et al.[27] proposed an improved Proximal Policy Optimization algorithm, integrating methods for decision-making based on input states to obtain the final action. Cao et al.[28] introduced inverse kinematics of fixed-base robots as a priori strategies, guiding the intelligent agent toward the optimal strategy through a hybrid strategy construction process. To avoid the potential collisions among robot arms, bases, and obstacles, Li et al.[29] modified the reward function based on the shortest distances between robot arm links and restrictions on end-effector velocity. Wang et al.[30] divided the entire system into two layers: the high-level policy for collision-free trajectory planning of end-effectors, and the low-level policy for decomposing any given pose into position and orientation sub-tasks. Moreover, Wang et al.[31] addressed the issue of unknown motion states of non-cooperative target grasping points by predicting grasping points based on the object’s point cloud information. Jiang et al.[33] and Yang et al.[32] further investigated the end-to-end control of flexible arms on space robots. While previous studies have demonstrated the potential of DRL for space robotics, they typically assume a high level of autonomy on the remote robot, which is unrealistic given the current state of technology. In practice, the majority of intelligence resides on the master end, which introduces significant challenges for RL-based control due to communication delays and limited bandwidth.
A number of approaches have been proposed to address the issue of delay in reinforcement learning, which can be summarized into three main categories: information augmentation, model prediction, and others. Information augmentation methods primarily involve transforming the original delayed Markov decision process into a new delay-free Markov decision process based on an Information State composed of the most recently observed delayed states and action sequences. The feasibility of this approach was initially analyzed theoretically by Katsikopoulos et al. [34], while Nath et al. [35] provided empirical evidence. Bouteiller et al. [36] performed partial trajectory resampling in delayed environments to effectively convert offline sub-trajectories into current policy-based sub-trajectories. Furthermore, leveraging information from agents in delay-free environments, such as a small number of expert trajectories or previously learned expert policies, imitation learning can be employed to tackle tasks with delays [37, 38]. Model prediction methods generally involve two steps: predicting unseen states due to delays and then making final decisions based on the predicted states and standard reinforcement learning algorithms. Therefore, simulating environmental dynamics accurately is crucial for this category of methods. Initially dominated by traditional algorithms, Walsh et al. [39] viewed this process as a deterministic mapping, while Hester et al. [40] used random forests to predict states and rewards. Subsequent developments based on deep neural networks saw Firoiu et al. [41], Derman et al. [42], and Chen et al. [43] utilize recurrent neural networks, feed-forward models, and particle integration methods, respectively, to learn transitions. Additionally, using trajectories in delay-free environments, Xia et al. [44] learned a context representation encoder containing delay information, which was then used for encoding in delayed environments. Alternative approaches encompass two distinct categories: memoryless methods [45], which make decisions solely based on the latest observed state, ignoring any delay effects, and methods [46] that learn transition functions and Q-values in delay-free environments and select actions that may result in the maximum Q-values under predicted states during the decision-making process.
Motivated by the aforementioned methods, this study explores, for the first time, the application of DRL in trajectory planning for the remotely operated space manipulator. The control process consists of three key components: a delay information processing module, a DRL decision module, and a remote environment interaction module. The aim of the delay information processing module is to generate a state that facilitates current decision-making by considering both the current delayed state and historical action sequences. To achieve this, three methods, Mapping, Prediction, and State Augmentation, are designed accordingly. Following the completion of delay information processing, the decision module generates corresponding actions based on the obtained new state. SAC is chosen as the decision-making algorithm due to its robustness in policy generation through cumulative rewards and maximum entropy optimization. The remote environment interaction module operates in the MuJoCo simulation environment, including four single arm scenarios: fixed base and fixed target, fixed base and rotated target, free-floating base and fixed target, and free-floating base and rotated target. Upon receiving the torque information from the master end, the space manipulator at the slave end interacts with the environment, generating the next state and reward, which are then fed back to the master end.
The paper is organized into five sections. Section 2 introduces the research problem, including background knowledge such as the kinematics of space robots, the reinforcement learning setup, and models for both constant and random time delays in teleoperation. Section 3 provides a detailed description of the entire control process, with particular emphasis on the algorithmic design of the delay information processing module and the DRL decision module. Section 4 presents the simulation results under various delay scenarios across four distinct settings, along with an analysis of the outcomes. The paper is concluded in Section 5 with directions for future work.
2 Problem Statement
2.1 Kinematics of space robot
According to Figure 1, a space robotic system typically consists of a spacecraft (base) and a manipulator with $n$ degrees of freedom (DOF).

The vector of joint angles of the manipulator is denoted as $\theta \in \mathbb{R}^{n}$, and correspondingly, its velocity can be represented as $\dot{\theta}$. Additionally, $\dot{x}_b$ and $v_e$ represent the velocities of the base and of the manipulator's end-effector, respectively. Referring to [47], the following equation can be obtained:

$v_e = J_b\,\dot{x}_b + J_m\,\dot{\theta}$  (1)

where $J_b$ and $J_m$ are the Jacobian matrices of the base and the manipulator, respectively. Without any external forces or torques exerted, the space robot is in free-floating mode and its linear momentum $P$ and angular momentum $L$ are conserved. Therefore, it holds that:

$H_b\,\dot{x}_b + H_{bm}\,\dot{\theta} = \begin{bmatrix} P \\ L \end{bmatrix}$  (2)

where $H_b$ and $H_{bm}$ are the inertia matrix and coupling inertia matrix, respectively. Further, assuming the initial values of $P$ and $L$ are both zero, the velocity of the base can be expressed as:

$\dot{x}_b = -H_b^{-1} H_{bm}\,\dot{\theta} = J_{bm}\,\dot{\theta}$  (3)

where $J_{bm}$ is the Jacobian matrix of the base. Substituting Eq. (3) into Eq. (1) yields the following expression:

$v_e = \left(J_m + J_b J_{bm}\right)\dot{\theta} = J_g\,\dot{\theta}$  (4)

where $J_g$ is referred to as the generalized Jacobian matrix, which is related not only to kinematic parameters but also to dynamic parameters.
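As a concrete illustration of Eqs. (3)-(4), the following minimal NumPy sketch assembles the generalized Jacobian from the base and manipulator Jacobians and the inertia matrices. The function name, matrix shapes, and the assumption that all four matrices are already available from a dynamics model are ours, not the paper's.

```python
import numpy as np

def generalized_jacobian(J_b, J_m, H_b, H_bm):
    """Generalized Jacobian J_g of Eq. (4) for a free-floating space robot.

    J_b  : (6, 6)  base Jacobian
    J_m  : (6, n)  manipulator Jacobian
    H_b  : (6, 6)  base inertia matrix
    H_bm : (6, n)  base-manipulator coupling inertia matrix
    """
    J_bm = -np.linalg.solve(H_b, H_bm)   # Eq. (3): base motion induced by the joints
    return J_m + J_b @ J_bm              # Eq. (4): v_e = J_g @ theta_dot

# usage sketch: v_e = generalized_jacobian(J_b, J_m, H_b, H_bm) @ theta_dot
```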
2.2 Reinforcement learning
Reinforcement learning is a sequential decision-making process grounded in the theoretical framework of Markov decision processes (MDPs). Typically, an MDP is defined by a 6-element tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ represent the state space and action space, respectively; $P(s_{t+1} \mid s_t, a_t)$ is the state transition function and $r(s_t, a_t)$ denotes the reward function, while $\rho_0$ represents the initial distribution of states; $\gamma$ signifies the discount factor for future rewards. At time step $t$, the agent observes the environmental state $s_t$. Subsequently, based on its current policy $\pi(a_t \mid s_t)$, it selects an action $a_t$ to apply to the environment. The environment then returns a new state $s_{t+1}$ along with a reward $r_t$. The objective of reinforcement learning is to maximize the cumulative reward $\sum_{t=0}^{T} \gamma^{t} r_t$ to obtain an optimal policy $\pi^{*}$, where $T$ is the maximum number of steps the agent interacts with the environment in an episode.
The trajectory planning problem of space robots can be modeled using MDPs, described by a tuple of six elements. Specifically, the setup for the state space, action space, and reward function is as follows:
-
•
State .
Due to the crucial value of the information contained in the state for agent decision-making, it is imperative to consider as many relevant features influencing decisions as possible when designing the state. In the trajectory planning process of space robots, factors such as the joint angles , joint angular velocities , end-effector pose and velocity , target pose , and the distance between the end-effector and the target point all play pivotal roles. Particularly, when the space robot operates in a free-floating mode, the presence of non-holonomic constraints between the base and the manipulator can affect the manipulator’s planning. Therefore, the base pose and velocity should also be included in the state space design. Consequently, for the fixed-base mode, the state is designed as follows:
(5) By contrast, for the free-floating mode, the state is designed as follows:
(6) -
•
Action .
The action is designed as a set of torques applied to the manipulator joints, clipped to a certain range: and .
-
•
Reward function .
The reward function is defined as follows:
(7) where the episode is terminated when the distance is small than the threshold . Each term in Eq. 7 serves a distinct purpose. The first term incentivizes the robotic arm’s end effector to approach the target point as closely as possible. The second term addresses the scenario where the end effector lingers near the target point as the distance nears the threshold without progressing further, ensuring continual advancement towards the target. The third and fourth terms aim to mitigate excessive velocity fluctuations in the base, end-effector, and joints of the robotic arm during trajectory planning, promoting smooth transitions across these variables from a reward standpoint. Lastly, the fifth term functions as a terminal reward, activated only when the distance falls below or equals the threshold. The magnitude of this reward is directly proportional to the remaining steps at the end of an episode, . As a result, it is expected that as the agent completes tasks more swiftly, the reward value associated with this term will increase proportionally.
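A minimal sketch of how a reward of the form of Eq. (7) might be computed is given below. Only the five weights (listed in Section 4.1) and the role of each term come from the text; the exact functional form of every term (the indicator used for the stalling penalty, the norms used for the velocity penalties) is an assumption made for illustration.

```python
import numpy as np

# Weights of the five reward terms (Section 4.1): approach, stall penalty,
# base/end-effector velocity penalty, joint velocity penalty, terminal bonus.
W = (-0.01, -1.0, -0.1, -0.1, 1.5)

def reward(d, d_prev, v_base, v_ee, theta_dot, step, max_steps, d_th):
    r = W[0] * d                                                  # approach the target
    r += W[1] * float(d > d_th and d >= d_prev)                   # penalize stalling near the target (assumed form)
    r += W[2] * (np.linalg.norm(v_base) + np.linalg.norm(v_ee))   # smooth base/end-effector motion
    r += W[3] * np.linalg.norm(theta_dot)                         # smooth joint motion
    done = d <= d_th
    if done:
        r += W[4] * (max_steps - step)                            # terminal bonus proportional to remaining steps
    return r, done
```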

2.3 Time delay in teleoperation
In the context of space teleoperation, the time delay introduced by the data link disrupts the Markov property of the original decision-making process, significantly compromising the performance of reinforcement learning algorithms. Specifically, as shown in Figure 2, once an operator issues an instruction at the master end, there is a delay before this instruction reaches the slave end, which constitutes the action delay as referenced in [34]. Meanwhile, when the space robot at the slave end actually interacts with the environment, the feedback from the environment, containing new states and rewards, is not promptly relayed to the master end, resulting in the observation delay and reward delay discussed in [34]. Since observation delay and action delay exert equivalent influences on the agent’s decision-making process, as proven in [34], we focus solely on the observation delay, ensuring that its length aligns with that of the round-trip time delay (RTD).
Remark. At time $t$, the state generated by the slave end is denoted as $s_t$, while the state actually observed by the master end is denoted by $o_t$.
In ideal circumstances, the RTD remains constant, as it is solely determined by the transmission distance. Figure 3 shows a constant delay scenario with an RTD of one time step. The state of the robot at the slave end evolves over time, while at the master end, due to the presence of the RTD, the first observation is null and subsequent observations lag behind those at the slave end by one time step, expressed as $o_t = \delta(t)\,\varnothing + \left(1 - \delta(t)\right) s_{t-1}$, where $\varnothing$ indicates the null observation and $\delta(\cdot)$ is the Dirac delta function.
During transmission, network congestion and other environmental factors often introduce randomness into the delay. Building upon the RTD, we assume that the additional random delay follows a uniform distribution with parameter $n$, denoted as $\tau \sim U(0, n)$. Consequently, the total delay in the data link is $\mathrm{RTD} + \tau$. Figure 4 illustrates a scenario with random delay in which the RTD is again one time step. Whenever the random component postpones a state beyond its expected arrival time, the master end observes nothing at that step and the observation is null; the postponed state and a more recent state may then both arrive at a later step, in which case the older state is discarded and only the most recent one is taken as the observation. Note that, regardless of whether the delay is constant or random, the focus of this study is on making decisions at each time step based on the observations available at the master end. The following sections will delve into a detailed analysis of this aspect.
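The delay model above can be sketched as a simple message channel. The class below is illustrative only: it assumes integer time steps, a constant RTD plus an additional uniformly distributed integer delay, and it discards any state older than the newest one already delivered; the names and interface are not from the paper.

```python
import random
from collections import defaultdict

class DelayedChannel:
    """Simulates the downlink: state s_k sent at slave-end step k reaches the
    master end at step k + RTD + U(0, n). When several states arrive in the
    same step only the newest is kept; when none arrives, the observation is
    null (None)."""

    def __init__(self, rtd=1, n=1):
        self.rtd, self.n = rtd, n
        self.inbox = defaultdict(list)   # arrival step -> list of (k, s_k)
        self.newest = -1                 # index of the newest state delivered so far

    def send(self, k, s_k):
        arrival = k + self.rtd + random.randint(0, self.n)
        self.inbox[arrival].append((k, s_k))

    def receive(self, t):
        arrived = [(k, s) for k, s in self.inbox.pop(t, []) if k > self.newest]
        if not arrived:
            return None                  # null observation at the master end
        self.newest, state = max(arrived, key=lambda ks: ks[0])
        return state
```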


3 Method
The framework of space teleoperation based on DRL is illustrated in Figure 2, consisting of three components: the master end, the data link, and the slave end. The data link and slave end components operate in a manner akin to traditional teleoperation processes. The data link facilitates data transmission and processing, while the slave end primarily executes commands for the space robotic arm to interact with the environment and subsequently returns the state information and rewards generated by the environment. The design of the master end diverges from traditional methods, incorporating a Delay Information Processing (DIP) module and a DRL module. Within the DIP module, we introduce three distinct methods: Mapping, Prediction, and State Augmentation. The DRL module predominantly utilizes the SAC algorithm. The specific algorithms are outlined as follows.
Remark. The delay value at the master end is defined as follows:

$d_t = t - k_t$  (8)

where $k_t$ is the index of the most recent slave-end state $s_{k_t}$ that has been observed at the master end by time $t$.
3.1 The DIP module
3.1.1 Mapping
Mapping adopts a memoryless strategy that disregards delays and treats the most recently observed state as the environment’s true state for decision-making. When the observation $o_t$ differs from $\varnothing$, the master end uses it directly as the decision state; however, if $o_t$ equals $\varnothing$, indicating that no state is observed at the master end, it is substituted with the last observed state, and the instant reward is set to 0. Figure 5 provides a simple illustrative example based on the random delay specified in Figure 4.

At the time steps at which the delayed state arrives on schedule, the master end observes it directly and uses it as the decision state, with the instant reward calculated according to Eq. 7. By contrast, at the time steps at which no new state arrives, the normal observation is unavailable, i.e., $o_t = \varnothing$; the decision state is then replaced with the last observed state and the corresponding instant reward is set to 0. The algorithm is outlined in Algorithm 1.
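A minimal sketch of the Mapping rule as described above (not the paper's exact Algorithm 1) is:

```python
class Mapping:
    """Memoryless DIP step: use the delayed observation when it is available,
    otherwise reuse the last observed state and set the instant reward to zero."""

    def __init__(self, s0):
        self.last_state = s0

    def process(self, obs, reward):
        if obs is None:                  # nothing arrived from the slave end
            return self.last_state, 0.0
        self.last_state = obs            # normal (delayed) observation
        return obs, reward
```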

3.1.2 Prediction
Prediction involves training a forward model using historical trajectory data. To achieve this, we leverage the tuples $(s_t, a_t, r_t, s_{t+1})$ stored in the Replay Buffer to train a nonlinear function $f_\phi$ through supervised learning. Here, the inputs are $s_t$ and $a_t$, while the outputs consist of $s_{t+1}$ and $r_t$, namely $f_\phi : (s_t, a_t) \mapsto (s_{t+1}, r_t)$, effectively modeling the dynamics of the environment. Once the forward model is trained, we compute the unseen states $\hat{s}_{t-d_t+1}, \dots, \hat{s}_t$ iteratively using the last observed state $s_{t-d_t}$ and the historical action sequence $\left(a_{t-d_t}, \dots, a_{t-1}\right)$, as shown in Figure 6. The final decision is then made based on the predicted $\hat{s}_t$. The algorithm operates as Algorithm 2.
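A minimal sketch of the prediction rollout is given below. It assumes a trained forward model that maps a concatenated (state, action) vector to a single vector holding the predicted next state followed by the predicted reward; this interface is our assumption, while Table 4 only specifies an MLP architecture.

```python
import torch

@torch.no_grad()
def rollout_to_present(forward_model, s_delayed, action_buffer):
    """Roll the learned dynamics forward over the buffered actions
    (Algorithm 2 sketch)."""
    s = torch.as_tensor(s_delayed, dtype=torch.float32)
    for a in action_buffer:                        # a_{t-d}, ..., a_{t-1}
        a = torch.as_tensor(a, dtype=torch.float32)
        pred = forward_model(torch.cat([s, a]))    # [next_state..., reward]
        s = pred[:-1]                              # keep the predicted next state
    return s                                       # estimate of the current state s_t
```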
3.1.3 State augmentation
State augmentation constructs an Information State consisting of delayed state information and historical action sequences to transform the delayed MDP into a delay-free MDP, which is defined as follows:
• State $I_t$.

For a delay of $d_t$ steps, the new state is defined as follows:

$I_t = \left(\, s_{t-d_t},\ a_{t-d_t},\ a_{t-d_t+1},\ \dots,\ a_{t-1} \,\right)$  (9)

• Action $a_t$.

It is the same as the definition in Section 2.2.

• Reward $\tilde{r}_t$.

Based on Eq. 7, the new reward function is as below:

$\tilde{r}\!\left(I_t, a_t\right) = \mathbb{E}\!\left[\, r\!\left(s_t, a_t\right) \mid I_t \,\right]$  (10)

• Initial state distribution $\tilde{\rho}_0$.

The new initial state distribution is as below:

$\tilde{\rho}_0\!\left(I_0\right) = \rho_0\!\left(s_0\right) \prod_{i=0}^{d_0 - 1} \pi_{\mathrm{rand}}\!\left(a_i\right)$  (11)

where $\rho_0$ is the original initial state distribution and $\pi_{\mathrm{rand}}$ refers to the random actions of the operator taken before any state from the remote end has been received.
The algorithm of delay information processing with state augmentation is outlined in Algorithm 3.
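A minimal sketch of how the information state of Eq. (9) might be assembled is given below; the fixed maximum delay, the zero-padding at episode start, and the class interface are assumptions for illustration.

```python
import numpy as np
from collections import deque

class StateAugmentation:
    """Builds the information state I_t = (s_{t-d}, a_{t-d}, ..., a_{t-1}) of
    Eq. (9). A fixed maximum delay d is assumed; at episode start the buffer
    is padded with zero actions in place of the operator's initial actions."""

    def __init__(self, d, action_dim=7):
        self.actions = deque([np.zeros(action_dim) for _ in range(d)], maxlen=d)

    def augment(self, delayed_state):
        # resulting dimension: 33 + 7d (fixed base) or 45 + 7d (free-floating base)
        return np.concatenate([np.asarray(delayed_state), *self.actions])

    def push(self, action):
        # call after every decision so the buffer always holds the last d actions
        self.actions.append(np.asarray(action))
```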
3.2 The decision module
The objective function at the master end is designed as follows:
$J(\pi) = \sum_{t} \mathbb{E}_{(\tilde{s}_t, a_t) \sim \rho_\pi}\!\left[\, r\!\left(\tilde{s}_t, a_t\right) \,\right]$  (12)

where $\tilde{s}_t$ is the delayed state after processing in Section 3.1 and $\rho_\pi$ is the distribution of previously sampled states and actions, or a replay buffer. The optimal policy is obtained by maximizing the objective function. However, end-to-end trajectory tracking for space robots poses a significant challenge due to its complex nature and the expansive state-space dimension. For example, for Algorithm 1 and Algorithm 2, the dimension is 33 or 45, while it extends to $33 + 7d$ or $45 + 7d$ for Algorithm 3, where $d$ is the delay. Furthermore, optimizing Eq. 12 yields a deterministic policy and often leads to local optima, making it difficult to find policy parameters that achieve high cumulative rewards. To address this, Eq. 12 is modified as follows:
$J(\pi) = \sum_{t} \mathbb{E}_{(\tilde{s}_t, a_t) \sim \rho_\pi}\!\left[\, r\!\left(\tilde{s}_t, a_t\right) + \alpha\,\mathcal{H}\!\left(\pi\!\left(\cdot \mid \tilde{s}_t\right)\right) \,\right]$  (13)

where $\mathcal{H}\!\left(\pi\!\left(\cdot \mid \tilde{s}_t\right)\right)$ represents the entropy of the policy, and $\alpha$ determines the relative importance of the entropy term against the reward. Eq. 13 not only encourages the agent to explore more effectively but also controls the stochasticity of the optimal policy.
In this study, Eq. 13 is optimized based on the SAC algorithm. Similar to the traditional SAC algorithm, three neural networks $Q_\theta$, $V_\psi$, and $\pi_\phi$ are designed to approximate the state-action value function, the state value function, and the policy function, respectively. The parameters of the Q-function are updated by minimizing the soft Bellman residual:

$J_Q(\theta) = \mathbb{E}_{(\tilde{s}_t, a_t) \sim \mathcal{D}}\!\left[ \tfrac{1}{2}\left( Q_\theta\!\left(\tilde{s}_t, a_t\right) - r\!\left(\tilde{s}_t, a_t\right) - \gamma\,\mathbb{E}_{\tilde{s}_{t+1}}\!\left[ V_{\bar{\psi}}\!\left(\tilde{s}_{t+1}\right) \right] \right)^{2} \right]$  (14)

where $V_{\bar{\psi}}$ represents the target value network for $V_\psi$ and is updated by the exponential moving average of the value-network weights. The parameters of the soft value function are updated by minimizing the squared residual error:

$J_V(\psi) = \mathbb{E}_{\tilde{s}_t \sim \mathcal{D}}\!\left[ \tfrac{1}{2}\left( V_\psi\!\left(\tilde{s}_t\right) - \mathbb{E}_{a_t \sim \pi_\phi}\!\left[ Q_\theta\!\left(\tilde{s}_t, a_t\right) - \alpha \log \pi_\phi\!\left(a_t \mid \tilde{s}_t\right) \right] \right)^{2} \right]$  (15)

The parameters of the policy network are updated by minimizing the KL-divergence:

$J_\pi(\phi) = \mathbb{E}_{\tilde{s}_t \sim \mathcal{D}}\!\left[ D_{\mathrm{KL}}\!\left( \pi_\phi\!\left(\cdot \mid \tilde{s}_t\right) \,\middle\|\, \frac{\exp\!\left(Q_\theta\!\left(\tilde{s}_t, \cdot\right)\right)}{Z_\theta\!\left(\tilde{s}_t\right)} \right) \right]$  (16)

where $Z_\theta\!\left(\tilde{s}_t\right)$ is the partition function and can be ignored. We utilize the reparameterization trick to minimize Eq. 16 and construct the policy network:

$a_t = f_\phi\!\left(\epsilon_t; \tilde{s}_t\right)$  (17)

where $\epsilon_t$ is a noise vector. Consequently, Eq. 16 can be rewritten as:

$J_\pi(\phi) = \mathbb{E}_{\tilde{s}_t \sim \mathcal{D},\, \epsilon_t \sim \mathcal{N}}\!\left[ \alpha \log \pi_\phi\!\left( f_\phi\!\left(\epsilon_t; \tilde{s}_t\right) \mid \tilde{s}_t \right) - Q_\theta\!\left( \tilde{s}_t, f_\phi\!\left(\epsilon_t; \tilde{s}_t\right) \right) \right]$  (18)
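For illustration, Eq. (18) corresponds to a policy loss of the following form in PyTorch; the policy and Q-network interfaces are assumptions, not the paper's implementation.

```python
import torch

def policy_loss(policy, q_net, aug_states, alpha=0.2):
    """Reparameterized policy objective of Eq. (18): minimize
    alpha * log pi(a|s) - Q(s, a), with a = f_phi(eps; s).

    `policy(aug_states)` is assumed to return a differentiable action sample
    and its log-probability (e.g. from a squashed-Gaussian head)."""
    actions, log_pi = policy(aug_states)     # a_t = f_phi(eps_t; s_t)
    q_values = q_net(aug_states, actions)    # Q_theta(s_t, a_t)
    return (alpha * log_pi - q_values).mean()
```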
The only difference from the traditional SAC lies in the structure of the replay buffer. In traditional methods, a quadruple $\left(s_t, a_t, r_t, s_{t+1}\right)$ is stored in the replay buffer after each step. However, in teleoperation, the master end may not have access to the true state. To ensure the reliability of the data stored in the replay buffer, such data cannot be included.
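One way this gating could be implemented is sketched below: a transition is held as pending and only committed to the buffer once the true next state has been received from the slave end. This is an illustrative sketch, not the paper's exact bookkeeping.

```python
class DelayAwareReplayBuffer:
    """Keeps a transition 'pending' until the true next state has arrived from
    the slave end; only completed quadruples (s, a, r, s') are stored, so no
    surrogate states enter the buffer."""

    def __init__(self):
        self.pending = {}    # slave-end step index -> (state, action)
        self.storage = []    # completed (s, a, r, s') tuples used for SAC updates

    def start(self, t, state, action):
        self.pending[t] = (state, action)

    def complete(self, t, reward, next_state):
        # called once the delayed feedback for step t reaches the master end
        if t in self.pending:
            s, a = self.pending.pop(t)
            self.storage.append((s, a, reward, next_state))
```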
The pseudocode of the complete procedure is specified in Algorithm 4.
4 Experiments
4.1 Simulation settings
We introduce the simulation settings from three aspects: environment setup, neural network architecture, and operating platform.
Environment setup. Based on the 7-DOF redundant robotic arm model [26], illustrated in Figure 7, we construct a single-arm space robot model within the MuJoCo environment, as shown in Figure 8. The kinematic and dynamic parameters are detailed in Table 1. In the constructed simulation environment, the length of each simulation time step is set to 0.01 s, with an action executed every 4 time steps. Throughout the experiments, the maximum number of environment steps per episode is $T$, resulting in a maximum simulation duration of $0.04\,T$ seconds. This experiment involves a random grasping task performed in teleoperation mode, aiming to position the end effector of the space manipulator within 5 mm of the target point. Given the end effector’s radius of 0.02 m, we set the distance threshold $d_{\mathrm{th}}$ as outlined in Table 1.
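As an illustration of this stepping scheme (0.01 s physics steps with one agent action every 4 steps), a hedged mujoco_py-style sketch follows; the site name, the threshold value, and the function signature are placeholders rather than the paper's code.

```python
import numpy as np

FRAME_SKIP = 4      # one agent action spans 4 simulation steps of 0.01 s, i.e. 0.04 s

def env_step(sim, torque, target_pos, d_th):
    """Apply one agent action with frame skip and check task success.
    `sim` is a mujoco_py MjSim handle; "end_effector" is an assumed site name."""
    sim.data.ctrl[:] = torque            # joint torques received from the master end
    for _ in range(FRAME_SKIP):
        sim.step()                       # advance the physics by 0.01 s
    ee_pos = sim.data.get_site_xpos("end_effector")
    d = float(np.linalg.norm(ee_pos - target_pos))
    return d, d <= d_th                  # distance to target and success flag
```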


Table 1: Kinematic and dynamic parameters of the space robot model.
Link No. | Shape | Inertial properties
0(Base) | Box, Edge length 0.7m | kg |
1 | Cylinder, | kg |
2 | Cylinder, | kg |
3 | Cylinder, | kg |
4 | Cylinder, | kg |
5 | Cylinder, | kg |
6 | Cylinder, | kg |
7 | Cylinder, | kg |
Table 2: The four simulation environments.
Env | Base | Target
SAFBFT | Fixed | Fixed
SAFBRT | Fixed | Rotated
SAUBFT | Free-floating | Fixed
SAUBRT | Free-floating | Rotated
Based on the model in Figure 8, we built four distinct simulation environments, each corresponding to a specific scenario, as provided in Table 2.
To enhance the model’s generalization, we conduct uniform sampling within a cube in the workspace of the robotic arm and treat the sampled points as the positions of the target during training. Additionally, we introduce noise to the initial joint angles and angular velocities of the robotic arm. Specifically, in the world coordinate system, the position of the base is

(19)

and the position of the target star is

(20)

The initial joint angles of the robotic arm without noise, $\theta_0$, are

(21)

and the noise added to the initial angles, $\Delta\theta$, is

(22)

Similarly, the initial angular velocities of the robotic arm without noise, $\dot{\theta}_0$, are

(23)

and the noise added to the initial velocities, $\Delta\dot{\theta}$, is

(24)

Consequently, the initial joint angles of the robotic arm are given by

$\theta_{\mathrm{init}} = \theta_0 + \Delta\theta$  (25)

and the initial angular velocities are given by

$\dot{\theta}_{\mathrm{init}} = \dot{\theta}_0 + \Delta\dot{\theta}$  (26)
The termination condition for each episode is either reaching the maximum number of simulation steps or the distance between the end effector and the tracking target point being less than or equal to the threshold $d_{\mathrm{th}}$.
The hyperparameters in Eq. 7, i.e., the weights of its five terms, are set to -0.01, -1, -0.1, -0.1, and 1.5, respectively.
Neural network architecture. The network architectures and hyperparameters of the generic SAC algorithm used in this paper are presented in Table 3.
Table 3: Network architecture and hyperparameters of the SAC algorithm.
Structure of SAC | Item | Value
Actor Network | dim of state input | Alg. 1 or 2: 33 or 45; Alg. 3: 33 + 7d or 45 + 7d
 | dim of action | 7
 | dim of hidden size | 256
 | activation function | ReLU
 | number of layers | 2
 | dim of output | 7
Critic Network | dim of state input | Alg. 1 or 2: 33 or 45; Alg. 3: 33 + 7d or 45 + 7d
 | dim of action input | 7
 | dim of hidden size | 256
 | activation function | ReLU
 | number of layers | 2
 | dim of output | 1
Hyperparameters | discount factor γ | 0.99
 | learning rate | 0.0003
 | entropy coefficient α | 0.2
 | batch size | 256
 | total training steps | 500000
 | replay buffer size | 500000
 | optimizer | Adam
 | target smoothing coefficient τ | 0.005
Here d denotes the delay length.
Table 4: Network architecture and hyperparameters of the forward model.
Forward Model | Item | Value
Network | dim of state input | 33 or 45
 | dim of action input | 7
 | dim of hidden size | 200
 | activation function | ReLU
 | number of layers | 3
 | dim of output | 33 or 45
Hyperparameters | capacity of training data | 50000
 | optimizer | Adam
 | learning rate | 0.001
When using Algorithm 2, the network architecture and parameters of the forward model are presented in Table 4.
Operating platform. All experiments are conducted on an NVIDIA GeForce RTX 3090 graphics card. The versions of gym, mujoco, and PyTorch used in the experiments are 0.21.0, 2.0.2.8, and 1.11, respectively.
4.2 Simulation result and analysis

For the four environments in Table 2, we conduct experiments for each algorithm with fixed delays and the corresponding random delays. The state augmentation method based on SAC is named SACAS. We abbreviate constant_Mapping as CM, and similarly constant_Prediction, constant_SACAS, random_Mapping, random_Prediction, and random_SACAS as CP, CS, RM, RP, and RS, respectively. Each data point in Figures 9 and 11–16, and every entry in Tables 5–9, is obtained from experiments with three different random seeds.
4.2.1 Investigation of the training process
Observations from Figure 9 include: 1) In SAFBFT, the curves of CS and RS exhibit consistent patterns, as do those of CM and RM, indicating that whether the delay is random has minimal impact on the algorithms, with CS and RS consistently outperforming CM and RM. The curves of CP and RP decrease significantly with increasing delays. 2) In SAFBRT, CS and RS exhibit the highest rankings in performance, although the effectiveness of SACAS experiences fluctuations due to random delays, as evident in the RS curve. Following closely are the CM and RM curves, with RM slightly edging out CM, particularly at a delay of 6. Bringing up the rear are CP and RP. 3) In SAUBFT, CS demonstrates performance superior to the others, with the RS curve maintaining a higher standing than the rest, despite a slight but discernible variation with increasing delay. CM, RM, and CP initially perform comparably at delay 1, but their effectiveness notably declines with increasing delay. In contrast, RP noticeably lags behind the other curves in terms of performance. 4) In SAUBRT, CS and RS continue to maintain their lead, although with a declining trend. The other curves show significant decline, except for CM, which performs far better than in other environments at delay 2, possibly attributable to its decision state.
Based on the above observations, we can draw the following conclusions:
• In the four environments, the performance of all three algorithms deteriorates as the delay increases, regardless of whether it is fixed or random. A fixed delay yields better results than a random delay with the same maximum value.
• SACAS consistently demonstrates the highest and most stable performance, except in the most challenging scenario, SAUBRT, where its performance under random delays is less remarkable than in other environments but is still superior overall.
• Mapping is the second-best performer, showing commendable performance in both fixed-base scenarios and being capable of handling situations where the base is free-floating and the delay is small.
• Prediction performs worst across all environments, exhibiting the most severe performance decline as the delay increases. This can possibly be attributed to the substantial discrepancies between the predicted states produced by the forward model over successive iterations and the actual states upon which the decisions are based.
Figures 11–16 illustrate the training records of 50,000 environment steps under various delay conditions for the three algorithms. It is evident that the curves of SACAS exhibit greater density, with turning points leaning further to the upper left, indicating faster convergence and smoother target tracking. Furthermore, SACAS performs better than the other two methods in scenarios with a free-floating base, rotating targets, and random delays.
4.2.2 Investigation of the evaluation process
After training, we evaluate the trained models for 100 test runs within their respective training scenarios, and record the success rates achieved and the total steps taken upon success, as shown in Tables 5–9.
From Tables 5 and 6, it is evident that when confronted with low delays and operating within a fixed-base space robot environment, irrespective of target rotation or random delays, all three algorithms exhibit comparable success rates. However, in scenarios such as SAUBFT with a floating base, Prediction demonstrates notably lower performance compared with the others. Similarly, in SAUBRT, Mapping exhibits less favorable outcomes than SACAS and Prediction, while Prediction requires more steps to achieve success. Table 7 reveals that when moderate delays are introduced, in the SAFBFT scenario, the success rates of the three algorithms are comparable, but Prediction requires more steps than the others. In the SAFBRT scenario, both Mapping and SACAS demonstrate similar task completion efficiency, while Prediction initially lags behind. In SAUBFT and SAUBRT, with fixed delays, both Mapping and Prediction achieve less than half of SACAS’s success rate, while SACAS maintains a success rate of over 90%. Under random delays, the situation worsens, but SACAS still achieves a success rate close to 70%. Tables 8 and 9 further show that when confronted with large delays and operating within a fixed-base space robot framework, both Prediction and Mapping achieve a minimum success rate of no less than 60%. However, under floating-base conditions, both algorithms may exhibit poor performance, potentially even reaching a success rate of 0%. For SACAS, under fixed-base conditions the success rate surpasses 96%, while under floating-base conditions it remains consistently above 50% when the target is fixed and does not drop below 33% when the target rotates.
In summary, the SACAS method demonstrates broad applicability across various scenarios, while Mapping is effective particularly in fixed-base scenarios. However, owing to its need to forecast delayed states, Prediction tends to accumulate errors over iterative predictions, leading to notable deviations from the true values that impact decision-making; hence it is primarily suitable for scenarios characterized by low delays.
Table 5: Success rates and episode steps with delay 1.
Env | Algorithm | constant: success rate | constant: episode steps | random: success rate | random: episode steps
SAFBFT | Mapping | 1.0 ± 0.0 | 13.65 ± 2.91 | 1.0 ± 0.0 | 15.13 ± 3.66 |
Prediction | 1.0 ± 0.0 | 14.5 ± 4.04 | 1.0 ± 0.0 | 15.71 ± 5.78 | |
SACAS | 1.0 ± 0.0 | 13.58 ± 2.83 | 1.0 ± 0.0 | 14.45 ± 2.97 | |
SAFBRT | Mapping | 1.0 ± 0.0 | 14.53 ± 4.02 | 1.0 ± 0.0 | 16.25 ± 4.35 |
Prediction | 0.99 ± 0.0 | 15.12 ± 4.43 | 0.99 ± 0.0 | 17.16 ± 5.95 | |
SACAS | 1.0 ± 0.0 | 13.44 ± 2.79 | 1.0 ± 0.0 | 15.57 ± 4.73 | |
SAUBFT | Mapping | 1.0 ± 0.0 | 14.75 ± 3.55 | 0.96 ± 0.03 | 21.02 ± 7.86 |
Prediction | 0.96 ± 0.05 | 17.51 ± 6.05 | 0.88 ± 0.11 | 19.88 ± 11.01 | |
SACAS | 0.99 ± 0.01 | 14.86 ± 4.61 | 0.97 ± 0.01 | 17.6 ± 4.72 | |
SAUBRT | Mapping | 0.55 ± 0.12 | 23.89 ± 17.27 | 0.89 ± 0.04 | 23.68 ± 9.85 |
Prediction | 0.96 ± 0.31 | 34.36 ± 29.69 | 0.91 ± 0.04 | 23.03 ± 10.76 | |
SACAS | 0.94 ± 0.08 | 16.11 ± 5.42 | 0.94 ± 0.06 | 19.24 ± 7.38 |
Table 6: Success rates and episode steps with delay 2.
Env | Algorithm | constant: success rate | constant: episode steps | random: success rate | random: episode steps
SAFBFT | Mapping | 1.0 ± 0.0 | 16.88 ± 4.24 | 0.99 ± 0.0 | 18.86 ± 6.0 |
Prediction | 0.99 ± 0.01 | 22.87 ± 13.57 | 1.0 ± 0.0 | 18.86 ± 6.74 | |
SACAS | 0.99 ± 0.01 | 15.33 ± 2.97 | 1.0 ± 0.0 | 16.94 ± 4.43 | |
SAFBRT | Mapping | 1.0 ± 0.0 | 17.7 ± 5.36 | 1.0 ± 0.0 | 22.44 ± 9.67 |
Prediction | 0.99 ± 0.01 | 25.37 ± 19.99 | 0.99 ± 0.01 | 24.38 ± 13.79 | |
SACAS | 1.0 ± 0.0 | 16.19 ± 4.41 | 1.0 ± 0.0 | 17.88 ± 4.21 | |
SAUBFT | Mapping | 0.97 ± 0.03 | 21.27 ± 8.2 | 0.85 ± 0.05 | 25.54 ± 9.94 |
Prediction | 0.77 ± 0.14 | 28.01 ± 13.76 | 0.73 ± 0.09 | 25.45 ± 8.53 | |
SACAS | 0.94 ± 0.04 | 18.9 ± 6.66 | 0.96 ± 0.04 | 19.52 ± 5.72 | |
SAUBRT | Mapping | 0.93 ± 0.04 | 22.82 ± 7.56 | 0.8 ± 0.11 | 26.88 ± 10.09 |
Prediction | 0.95 ± 0.18 | 30.54 ± 22.99 | 0.82 ± 0.05 | 32.64 ± 14.84 | |
SACAS | 0.92 ± 0.11 | 22.03 ± 14.09 | 0.9 ± 0.05 | 21.46 ± 11.27 |
Table 7: Success rates and episode steps with delay 4.
Env | Algorithm | constant: success rate | constant: episode steps | random: success rate | random: episode steps
SAFBFT | Mapping | 1.0 ± 0.0 | 24.68 ± 8.78 | 1.0 ± 0.0 | 30.92 ± 16.09 |
Prediction | 1.0 ± 0.0 | 39.24 ± 28.78 | 0.98 ± 0.03 | 43.57 ± 29.38 | |
SACAS | 1.0 ± 0.0 | 19.63 ± 2.96 | 1.0 ± 0.0 | 23.28 ± 8.31 | |
SAFBRT | Mapping | 0.99 ± 0.01 | 28.9 ± 14.51 | 1.0 ± 0.0 | 35.06 ± 22.51 |
Prediction | 0.83 ± 0.1 | 57.1 ± 46.38 | 0.95 ± 0.02 | 43.3 ± 31.74 | |
SACAS | 1.0 ± 0.0 | 20.44 ± 4.68 | 0.98 ± 0.01 | 29.22 ± 23.53 | |
SAUBFT | Mapping | 0.45 ± 0.19 | 27.57 ± 11.2 | 0.45 ± 0.22 | 33.4 ± 17.18 |
Prediction | 0.42 ± 0.09 | 39.18 ± 22.88 | 0.28 ± 0.08 | 32.76 ± 12.41 | |
SACAS | 0.98 ± 0.01 | 22.55 ± 5.71 | 0.65 ± 0.2 | 26.39 ± 6.24 | |
SAUBRT | Mapping | 0.41 ± 0.11 | 34.2 ± 23.2 | 0.26 ± 0.07 | 39.06 ± 24.25 |
Prediction | 0.48 ± 0.05 | 48.68 ± 26.42 | 0.33 ± 0.12 | 40.48 ± 22.39 | |
SACAS | 0.92 ± 0.07 | 27.91 ± 9.24 | 0.7 ± 0.06 | 33.53 ± 15.37 |
Table 8: Success rates and episode steps with delay 6.
Env | Algorithm | constant: success rate | constant: episode steps | random: success rate | random: episode steps
SAFBFT | Mapping | 0.99 ± 0.0 | 44.97 ± 29.48 | 0.99 ± 0.01 | 48.31 ± 33.55 |
Prediction | 0.87 ± 0.01 | 76.61 ± 53.1 | 0.87 ± 0.05 | 58.48 ± 44.69 | |
SACAS | 1.0 ± 0.0 | 24.04 ± 3.28 | 1.0 ± 0.0 | 28.3 ± 10.27 | |
SAFBRT | Mapping | 0.97 ± 0.0 | 48.97 ± 29.99 | 0.95 ± 0.03 | 57.43 ± 39.0 |
Prediction | 0.68 ± 0.11 | 83.5 ± 62.9 | 0.82 ± 0.02 | 70.2 ± 49.65 | |
SACAS | 1.0 ± 0.0 | 24.35 ± 3.91 | 0.99 ± 0.01 | 34.24 ± 21.69 | |
SAUBFT | Mapping | 0.42 ± 0.11 | 39.95 ± 19.44 | 0.39 ± 0.17 | 34.45 ± 10.58 |
Prediction | 0.43 ± 0.09 | 51.14 ± 32.13 | 0.35 ± 0.1 | 66.68 ± 47.01 | |
SACAS | 0.90 ± 0.42 | 29.16 ± 8.62 | 0.57 ± 0.11 | 31.89 ± 7.64 | |
SAUBRT | Mapping | 0.2 ± 0.15 | 36.3 ± 17.67 | 0.22 ± 0.12 | 39.15 ± 17.13 |
Prediction | 0.21 ± 0.06 | 49.04 ± 19.84 | 0.13 ± 0.06 | 46.95 ± 27.22 | |
SACAS | 0.72 ± 0.05 | 37.52 ± 14.88 | 0.42 ± 0.1 | 36.71 ± 15.56 |
Table 9: Success rates and episode steps with delay 8.
Env | Algorithm | constant: success rate | constant: episode steps | random: success rate | random: episode steps
SAFBFT | Mapping | 0.98 ± 0.01 | 61.97 ± 41.78 | 0.93 ± 0.0 | 69.96 ± 47.04 |
Prediction | 0.56 ± 0.13 | 99.4 ± 60.81 | 0.47 ± 0.27 | 94.89 ± 69.96 | |
SACAS | 1.0 ± 0.0 | 28.5 ± 4.32 | 1.0 ± 0.0 | 33.36 ± 12.15 | |
SAFBRT | Mapping | 0.87 ± 0.02 | 70.59 ± 49.27 | 0.78 ± 0.02 | 76.88 ± 53.64 |
Prediction | 0.81 ± 0.09 | 81.2 ± 52.83 | 0.6 ± 0.2 | 79.91 ± 51.07 | |
SACAS | 0.99 ± 0.0 | 29.71 ± 8.42 | 0.96 ± 0.02 | 43.52 ± 25.03 | |
SAUBFT | Mapping | 0.17 ± 0.09 | 43.87 ± 20.37 | 0.12 ± 0.1 | 56.49 ± 29.58 |
Prediction | 0.16 ± 0.07 | 56.81 ± 18.75 | 0.17 ± 0.0 | 48.66 ± 21.69 | |
SACAS | 0.95 ± 0.03 | 35.4 ± 8.53 | 0.5 ± 0.16 | 37.48 ± 9.13 | |
SAUBRT | Mapping | 0.13 ± 0.09 | 57.32 ± 40.68 | 0.04 ± 0.02 | 50.92 ± 20.69 |
Prediction | 0.15 ± 0.06 | 70.9 ± 46.8 | 0.03 ± 0.02 | 56.4 ± 41.99 | |
SACAS | 0.74 ± 0.11 | 38.29 ± 18.25 | 0.38 ± 0.05 | 36.17 ± 10.8 |
4.2.3 Performance comparison
We record the time and memory consumption of the different algorithms across the various environments during training and calculate the mean and variance of time and memory consumption for each algorithm, as shown in Figure 10. Figure 10 illustrates that Mapping stands out as the most efficient option in terms of both time and space utilization. SACAS, due to its expanded state space, necessitates larger network structures compared with Mapping, resulting in higher memory consumption and longer computation time. Although Prediction maintains a consistent state-space dimension, its periodic updates to the forward dynamics model significantly extend both time and memory consumption. Consequently, from the aforementioned analysis, it becomes evident that SACAS emerges as the optimal algorithm, effectively balancing effectiveness and performance.

5 Conclusion
In this study, we endeavor to tackle the intricate challenge of trajectory planning for teleoperated space manipulators. Our approach integrates deep reinforcement learning into the conventional telecontrol framework, representing the first instance of such innovative attempts in this field. Our method focuses on the utilization of delayed state information and historical action buffers. To achieve this, we introduce three innovative techniques: Mapping, Prediction, and State Augmentation. These methods are designed to capitalize on delayed state information and historical actions to augment the decision-making capabilities of the agent, ensuring resilience even in environments characterized by inherent delays. Through experimentation conducted across four distinct environments within the MuJoCo simulation environment, wherein we varied the fixity of both the base and target, we validated the effectiveness of these algorithms. Furthermore, our results highlight the superior efficiency and robustness of the State Augmentation method.
Future work could be pursued along three avenues: 1) The Prediction method, commonly employed in engineering, could be further enhanced through ensemble techniques to predict multi-step trajectories, thereby reducing cumulative errors, akin to the approach described in [48]. 2) The scope of investigation could be broadened to encompass more complex systems, such as multi-arm space manipulator models or flexible space arms, building upon the foundation of the single-arm rigid-body model. 3) The research could tackle more challenging problems, such as spatial activities involving force interactions, to further push the boundaries of inquiry and advance our understanding of teleoperated space manipulators.
Appendix A More details on training process
[Figures 11–16: training records of the three algorithms under various delay conditions in the four environments.]
References
- [1] Q. Gao, J. Li, Y. Zhu, S. Wang, J. Liufu, J. Liu, Hand gesture teleoperation for dexterous manipulators in space station by using monocular hand motion capture, Acta Astronautica 204 (2023) 630–639.
- [2] W. Pryor, B. P. Vagvolgyi, A. Deguet, S. Leonard, L. L. Whitcomb, P. Kazanzides, Interactive planning and supervised execution for high-risk, high-latency teleoperation, in: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, pp. 1857–1864.
- [3] W. Zhang, F. Li, J. Li, Q. Cheng, Review of on-orbit robotic arm active debris capture removal methods, Aerospace 10 (1) (2022) 13.
- [4] W. Doggett, Robotic assembly of truss structures for space systems and future research plans, in: Proceedings, IEEE Aerospace Conference, Vol. 7, IEEE, 2002, pp. 7–7.
- [5] X. Wang, W. Xu, B. Liang, C. Li, General scheme of teleoperation for space robot, in: 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, IEEE, 2008, pp. 341–346.
- [6] X. Zhang, J. Liu, Autonomous trajectory planner for space telerobots capturing space debris under the teleprogramming framework, Advances in Mechanical Engineering 9 (9) (2017) 1687814017723298.
- [7] J. Funda, T. S. Lindsay, R. P. Paul, Teleprogramming: Toward delay-invariant remote manipulation, Presence: Teleoperators & Virtual Environments 1 (1) (1992) 29–44.
- [8] E. Nuño, L. Basañez, R. Ortega, Passivity-based control for bilateral teleoperation: A tutorial, Automatica 47 (3) (2011) 485–495.
- [9] Z. Wang, Z. Chen, B. Liang, B. Zhang, A novel adaptive finite time controller for bilateral teleoperation system, Acta Astronautica 144 (2018) 263–270.
- [10] D.-H. Zhai, Y. Xia, Adaptive finite-time control for nonlinear teleoperation systems with asymmetric time-varying delays, International Journal of Robust and Nonlinear Control 26 (12) (2016) 2586–2607.
- [11] Z. Chen, Y.-J. Pan, J. Gu, Adaptive robust control of bilateral teleoperation systems with unmeasurable environmental force and arbitrary time delays, IET Control Theory & Applications 8 (15) (2014) 1456–1464.
- [12] Z. Chen, B. Liang, T. Zhang, A self-adjusting compliant bilateral control scheme for time-delay teleoperation in constrained environment, Acta Astronautica 122 (2016) 185–195.
- [13] A. Y. Mersha, S. Stramigioli, R. Carloni, On bilateral teleoperation of aerial robots, IEEE Transactions on Robotics 30 (1) (2013) 258–274.
- [14] M. Sharifi, H. Salarieh, S. Behzadipour, M. Tavakoli, Impedance control of non-linear multi-dof teleoperation systems with time delay: absolute stability, IET Control Theory & Applications 12 (12) (2018) 1722–1729.
- [15] A. Kheddar, E.-S. Neo, R. Tadakuma, K. Yokoi, Enhanced teleoperation through virtual reality techniques, Advances in telerobotics (2007) 139–159.
- [16] Z. Deng, M. Jagersand, Predictive display system for tele-manipulation using image-based modeling and rendering, in: Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003)(Cat. No. 03CH37453), Vol. 3, IEEE, 2003, pp. 2797–2802.
- [17] X. Liu, H. Li, J. Wang, G. Cai, Dynamics analysis of flexible space robot with joint friction, Aerospace science and technology 47 (2015) 164–176.
- [18] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller, Playing atari with deep reinforcement learning, arXiv preprint arXiv:1312.5602 (2013).
- [19] O. Vinyals, T. Ewalds, S. Bartunov, P. Georgiev, A. S. Vezhnevets, M. Yeo, A. Makhzani, H. Küttler, J. Agapiou, J. Schrittwieser, et al., Starcraft ii: A new challenge for reinforcement learning, arXiv preprint arXiv:1708.04782 (2017).
- [20] J. Park, T. Kim, S. Seong, S. Koo, Control automation in the heat-up mode of a nuclear power plant using reinforcement learning, Progress in Nuclear Energy 145 (2022) 104107.
- [21] R. Nian, J. Liu, B. Huang, A review on reinforcement learning: Introduction and applications in industrial process control, Computers & Chemical Engineering 139 (2020) 106886.
- [22] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al., Training language models to follow instructions with human feedback, Advances in Neural Information Processing Systems 35 (2022) 27730–27744.
- [23] C. Yan, Q. Zhang, Z. Liu, X. Wang, B. Liang, Control of free-floating space robots to capture targets using soft q-learning, in: 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2018, pp. 654–660.
- [24] S. Wang, Y. Cao, X. Zheng, T. Zhang, An end-to-end trajectory planning strategy for free-floating space robots, in: 2021 40th Chinese Control Conference (CCC), IEEE, 2021, pp. 4236–4241.
- [25] W. Lei, H. Fu, G. Sun, Active object tracking of free floating space manipulators based on deep reinforcement learning, Advances in Space Research 70 (11) (2022) 3506–3519.
- [26] Y.-H. Wu, Z.-C. Yu, C.-Y. Li, M.-J. He, B. Hua, Z.-M. Chen, Reinforcement learning in dual-arm trajectory planning for a free-floating space robot, Aerospace Science and Technology 98 (2020) 105657.
- [27] S. Wang, X. Zheng, Y. Cao, T. Zhang, A multi-target trajectory planning of a 6-dof free-floating space robot via reinforcement learning, in: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2021, pp. 3724–3730.
- [28] Y. Cao, S. Wang, X. Zheng, W. Ma, X. Xie, L. Liu, Reinforcement learning with prior policy guidance for motion planning of dual-arm free-floating space robot, Aerospace Science and Technology 136 (2023) 108098.
- [29] Y. Li, X. Hao, Y. She, S. Li, M. Yu, Constrained motion planning of free-float dual-arm space manipulator via deep reinforcement learning, Aerospace Science and Technology 109 (2021) 106446.
- [30] S. Wang, Y. Cao, X. Zheng, T. Zhang, Collision-free trajectory planning for a 6-dof free-floating space robot via hierarchical decoupling optimization, IEEE Robotics and Automation Letters 7 (2) (2022) 4953–4960.
- [31] S. Wang, Y. Cao, X. Zheng, T. Zhang, A learning system for motion planning of free-float dual-arm space manipulator towards non-cooperative object, Aerospace Science and Technology 131 (2022) 107980.
- [32] C. Yang, J. Yang, X. Wang, B. Liang, Control of space flexible manipulator using soft actor-critic and random network distillation, in: 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2019, pp. 3019–3024.
- [33] D. Jiang, Z. Cai, H. Peng, Z. Wu, Coordinated control based on reinforcement learning for dual-arm continuum manipulators in space capture missions, Journal of Aerospace Engineering 34 (6) (2021) 04021087.
- [34] K. V. Katsikopoulos, S. E. Engelbrecht, Markov decision processes with delays and asynchronous cost collection, IEEE transactions on automatic control 48 (4) (2003) 568–574.
- [35] S. Nath, M. Baranwal, H. Khadilkar, Revisiting state augmentation methods for reinforcement learning with stochastic delays, in: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021, pp. 1346–1355.
- [36] Y. Bouteiller, S. Ramstedt, G. Beltrame, C. Pal, J. Binas, Reinforcement learning with random delays, in: International conference on learning representations, 2021.
- [37] M. Xie, B. Xia, Y. Yu, X. Wang, Y. Chang, Addressing delays in reinforcement learning via delayed adversarial imitation learning, in: International Conference on Artificial Neural Networks, Springer, 2023, pp. 271–282.
- [38] P. Liotet, D. Maran, L. Bisi, M. Restelli, Delayed reinforcement learning by imitation, in: International Conference on Machine Learning, PMLR, 2022, pp. 13528–13556.
- [39] T. J. Walsh, A. Nouri, L. Li, M. L. Littman, Learning and planning in environments with delayed feedback, Autonomous Agents and Multi-Agent Systems 18 (2009) 83–105.
- [40] T. Hester, P. Stone, Texplore: real-time sample-efficient reinforcement learning for robots, Machine learning 90 (2013) 385–429.
- [41] V. Firoiu, T. Ju, J. Tenenbaum, At human speed: Deep reinforcement learning with action delay, arXiv preprint arXiv:1810.07286 (2018).
- [42] E. Derman, G. Dalal, S. Mannor, Acting in delayed environments with non-stationary markov policies, in: International Conference on Learning Representations, 2020.
- [43] B. Chen, M. Xu, L. Li, D. Zhao, Delay-aware model-based reinforcement learning for continuous control, Neurocomputing 450 (2021) 119–128.
- [44] B. Xia, Y. Kong, Y. Chang, B. Yuan, Z. Li, X. Wang, B. Liang, Deer: A delay-resilient framework for reinforcement learning with variable delays, arXiv preprint arXiv:2406.03102 (2024).
- [45] E. Schuitema, L. Buşoniu, R. Babuška, P. Jonker, Control delay in reinforcement learning for real-time dynamic systems: A memoryless approach, in: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2010, pp. 3226–3231.
- [46] M. Agarwal, V. Aggarwal, Blind decision making: Reinforcement learning with delayed observations, Pattern Recognition Letters 150 (2021) 176–182.
- [47] Y. Umetani, K. Yoshida, Resolved motion rate control of space manipulators with generalized jacobian matrix, Ph.D. thesis, Tohoku University (1989).
- [48] Y. Liu, X. Wang, Z. Tang, N. Qi, Probabilistic ensemble neural network model for long-term dynamic behavior prediction of free-floating space manipulators, Aerospace Science and Technology 119 (2021) 107138.