MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning
Abstract
Visual deep reinforcement learning (RL) enables robots to acquire skills from visual input for unstructured tasks. However, current algorithms suffer from low sample efficiency, limiting their practical applicability. In this work, we present MENTOR, a method that improves both the architecture and optimization of RL agents. Specifically, MENTOR replaces the standard multi-layer perceptron (MLP) with a mixture-of-experts (MoE) backbone, enhancing the agent’s ability to handle complex tasks by leveraging modular expert learning to avoid gradient conflicts. Furthermore, MENTOR introduces a task-oriented perturbation mechanism, which heuristically samples perturbation candidates containing task-relevant information, leading to more targeted and effective optimization. MENTOR outperforms state-of-the-art methods across three simulation domains—DeepMind Control Suite, Meta-World, and Adroit. Additionally, MENTOR achieves an average success rate of 83% on three challenging real-world robotic manipulation tasks, including Peg Insertion, Cable Routing, and Tabletop Golf, significantly surpassing the 32% success rate of the current strongest model-free visual RL algorithm. These results underscore the importance of sample efficiency in advancing visual RL for real-world robotics. Experimental videos are available at mentor.

1 Introduction
Visual deep reinforcement learning (RL) focuses on agents that perceive their environment through high-dimensional image data, closely aligning with robot control scenarios where vision is the primary modality. Despite substantial progress in this field (Kostrikov et al., 2020; Yarats et al., 2021; Schwarzer et al., 2020; Stooke et al., 2021; Laskin et al., 2020a), these methods still suffer from low sample efficiency. As a result, most visual RL pipelines have to be first trained in the simulator and then deployed to the real world, inevitably leading to the problem of sim-to-real gap (Zhao et al., 2020; Salvato et al., 2021).
To bypass this difficulty, one approach is to train visual RL agents from scratch on physical robots, which is known as real-world RL (Dulac-Arnold et al., 2019; Luo et al., 2024; Zhu et al., 2020). Given the numerous challenges of real-world RL, we argue that the fundamental solution lies not in task-specific tweaks, but in developing substantially more sample-efficient RL algorithms. In this paper, we introduce MENTOR: Mixture-of-Experts Network with Task-Oriented perturbation for visual Reinforcement learning, which significantly boosts the sample efficiency of visual RL through improvements in both agent network architecture and optimization.
In terms of architecture, visual RL agents typically use convolutional neural networks (CNNs) for feature extraction from high-dimensional images, followed by multi-layer perceptrons (MLPs) for action output (Yarats et al., 2021; Zheng et al., 2023; Cetin et al., 2022; Xu et al., 2023). However, the learning efficiency of standard MLPs is hindered by intrinsic gradient conflicts in challenging robotic tasks (Yu et al., 2020a; Liu et al., 2023; Zhou et al., 2022; Liu et al., 2021), where the gradient directions for optimizing neural parameters across different stages of the task trajectory or between tasks may conflict. In this work, we propose to alleviate gradient conflicts by integrating mixture-of-experts (MoE) architectures (Jacobs et al., 1991; Shazeer et al., 2017; Masoudnia & Ebrahimpour, 2014) as the backbone to the visual RL framework. Intuitively, MoE architectures can alleviate gradient conflicts due to their ability to dynamically allocate gradients to specialized experts for each input through the sparse routing mechanism (Akbari et al., 2023; Yang et al., 2024).
In terms of optimization, visual RL agents often struggle with local minima due to the unstructured nature of robotic tasks. Recent works have shown that periodically perturbing the agent’s weights with random noise can help escape local minima (Nikishin et al., 2022; Sokar et al., 2023; Xu et al., 2023; Ji et al., 2024). However, the choice of perturbation candidates (i.e., the network weights used to perturb the current agent’s weights) has not been thoroughly explored. Building on this idea, we propose a task-oriented perturbation mechanism. Instead of sampling from a fixed distribution, we maintain a heuristically shifted distribution based on the top-performing agents from the RL history. We sample from this heuristic distribution to perturb the current agent’s weights during training. The intuition is that the distribution gradually formed by the weights of previous top-performing agents may accumulate task-relevant information, leading to more promising optimization directions than purely random noise.
Empirically, we find MENTOR outperforms current state-of-the-art methods (Xu et al., 2023; Yarats et al., 2021; Cetin et al., 2022; Zheng et al., 2023) across all tested scenarios in DeepMind Control Suite (Tassa et al., 2018), Meta-World (Yu et al., 2020b), and Adroit (Rajeswaran et al., 2017). Furthermore, we present three challenging real-world robotic manipulation tasks, shown in Figure 1: Peg Insertion – inserting three kinds of pegs into the corresponding sockets; Cable Routing – maneuvering one end of a rope to make it fit into two non-parallel slots; and Tabletop Golf – striking a golf ball into the target hole while avoiding getting stuck in the trap. In these experiments, MENTOR demonstrates significantly higher learning efficiency, achieving an average success rate of 83%, compared to 32% for the state-of-the-art counterpart (Xu et al., 2023) within the same training time. This confirms the effectiveness of our approach and underscores the importance of improving sample efficiency for making RL algorithms more practical in robotics applications.
Our key contributions are threefold. First, we introduce the MoE architecture to replace the MLP as the agent backbone in model-free visual RL, improving the agent’s learning ability to handle complex robotic environments and reducing gradient conflicts. Second, we propose a task-oriented perturbation mechanism which samples candidates from a heuristically updated distribution, making network perturbation a more efficient and targeted optimization process compared to the random parameter exploration used in previous RL perturbation methods. Third, we achieve state-of-the-art performance in both simulated environments and three challenging real-world tasks, highlighting the sample efficiency and practical value of MENTOR.
2 Preliminary
Mixture-of-Experts (MoE).
The concept of mixture-of-experts (MoE) was first introduced by Jacobs et al. (1991) and Jordan & Jacobs (1994), proposing a simple yet powerful framework where different parts of a model, called experts, specialize in different tasks or different aspects of a task. A sparse MoE layer consists of multiple experts and a router. The router predicts a probability distribution over the experts for a given input. Based on this distribution, only the top-$k$ experts are activated for processing the input (Shazeer et al., 2017). Assuming there are $N$ experts, each of which is a feed-forward network (FFN), the final output of the MoE can be written as

$$y = \sum_{i=1}^{N} G_i(x)\, E_i(x), \qquad (1)$$

$$G(x) = \mathrm{Softmax}\big(\mathrm{TopK}(R(x), k)\big), \qquad (2)$$

where $G_i(x)$ is the gating function determining the utilization of the $i$-th expert $E_i$ for input $x$, $R(x)$ is the router’s output, producing logits for expert selection, $\mathrm{TopK}(\cdot, k)$ selects the top $k$ logits, and $\mathrm{Softmax}$ normalizes these top-$k$ values into probabilities.
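For concreteness, below is a minimal PyTorch sketch of Equations 1 and 2, assuming each expert is a small feed-forward network; the class name, dimensions, and the per-expert dispatch loop are illustrative rather than the exact implementation used in MENTOR.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal sparse mixture-of-experts layer (Eqs. 1-2): a router scores all
    experts, only the top-k are evaluated, and their outputs are mixed with
    softmax-normalized gate weights."""

    def __init__(self, in_dim, out_dim, num_experts=16, top_k=4, hidden=256):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(in_dim, num_experts)        # R(x): expert logits
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, out_dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                   # x: (batch, in_dim)
        logits = self.router(x)                             # (batch, num_experts)
        top_val, top_idx = logits.topk(self.top_k, dim=-1)  # TopK selection
        gates = F.softmax(top_val, dim=-1)                  # normalize top-k logits
        out_dim = self.experts[0][-1].out_features
        out = torch.zeros(x.size(0), out_dim, device=x.device)
        for slot in range(self.top_k):                      # explicit loop for clarity
            idx = top_idx[:, slot]                          # chosen expert per sample
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += gates[mask, slot:slot + 1] * expert(x[mask])
        return out
```

A production implementation would typically batch inputs per expert rather than looping over experts, but the loop keeps the sparse routing logic explicit.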
Visual Reinforcement Learning.
We employ visual reinforcement learning (RL) to train policies for robotic systems, modeled as a Partially Observable Markov Decision Process (POMDP) defined by the tuple $(\mathcal{S}, \mathcal{O}, \mathcal{A}, \mathcal{P}, r, \gamma)$. Here, $\mathcal{S}$ is the true state space, $\mathcal{O}$ represents visual observations (a stack of three image frames), $\mathcal{A}$ is the robot’s action space, $\mathcal{P}$ defines the transition dynamics, $r$ specifies the reward, and $\gamma$ is the discount factor. The goal is to learn an optimal policy $\pi$ that maximizes the expected cumulative reward $\mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t} r_t\right]$.
Dormant-Ratio-based Perturbation in RL.
The concept of dormant neurons, introduced by Sokar et al. (2023), refers to neurons that have become nearly inactive. It is formally defined as follows:
Definition 1.
Consider a fully connected layer $\ell$ with $N^{\ell}$ neurons. Let $h_i^{\ell}(x)$ denote the output of neuron $i$ in layer $\ell$ for an input $x$ drawn from the input distribution $D$. The score of neuron $i$ is given by:

$$s_i^{\ell} = \frac{\mathbb{E}_{x \sim D}\,\lvert h_i^{\ell}(x) \rvert}{\frac{1}{N^{\ell}} \sum_{j \in \ell} \mathbb{E}_{x \sim D}\,\lvert h_j^{\ell}(x) \rvert}. \qquad (3)$$

A neuron $i$ in layer $\ell$ is considered $\tau$-dormant if its score satisfies $s_i^{\ell} \le \tau$.

Definition 2.
In layer $\ell$, the total number of $\tau$-dormant neurons is denoted by $N_{\tau}^{\ell}$. The $\tau$-dormant ratio of a neural network $\phi$ is defined as:

$$\beta_{\tau} = \frac{\sum_{\ell \in \phi} N_{\tau}^{\ell}}{\sum_{\ell \in \phi} N^{\ell}}. \qquad (4)$$
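The dormant ratio can be estimated from a batch of activations. The sketch below, which assumes a PyTorch model whose layers of interest are `nn.Linear` modules, registers forward hooks to collect per-neuron outputs and then applies Equations 3 and 4; the hook bookkeeping and the threshold value are illustrative.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def dormant_ratio(model, inputs, tau=0.025):
    """Estimate the tau-dormant ratio (Eqs. 3-4): a neuron is tau-dormant if its
    mean absolute activation, normalized by the layer average, is <= tau."""
    acts, hooks = {}, []
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            hooks.append(module.register_forward_hook(
                lambda m, inp, out, key=name: acts.__setitem__(key, out.detach())))
    model(inputs)                                    # one forward pass to collect activations
    for h in hooks:
        h.remove()

    dormant, total = 0, 0
    for out in acts.values():                        # out: (batch, num_neurons)
        mean_abs = out.abs().mean(dim=0)             # E_x |h_i(x)| per neuron
        score = mean_abs / (mean_abs.mean() + 1e-9)  # normalize by the layer average
        dormant += (score <= tau).sum().item()
        total += score.numel()
    return dormant / max(total, 1)
```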
As shown by Xu et al. (2023); Ji et al. (2024), the dormant ratio is a critical indicator of neural network behavior and can be leveraged in RL algorithms as an effective metric to improve learning efficiency through parameter perturbation. This process periodically resets the network weights by softly interpolating between the current parameters and randomly initialized values (Ash & Adams, 2020; D’Oro et al., 2022):
$$\theta' = \alpha\,\theta + (1 - \alpha)\,\phi \qquad (5)$$

Here, $\alpha$ is the perturbation factor, $\theta$ and $\theta'$ are the network weights before and after the reset, respectively, and $\phi$ represents randomly initialized weights (typically drawn from Gaussian noise). The value of $\alpha$ is dynamically adjusted based on the dormant ratio $\beta_{\tau}$ as $\alpha = \mathrm{clip}\big(1 - \rho\,\beta_{\tau},\ \alpha_{\min},\ \alpha_{\max}\big)$, where $\rho$ is the hyperparameter called the perturbation rate.
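The sketch below illustrates this soft reset; it assumes a DrM-style schedule in which the retained fraction $\alpha$ is obtained by clipping $1 - \rho\,\beta_{\tau}$, and the helper is illustrative rather than the exact published implementation.

```python
import copy
import torch

def dormant_guided_perturb(network, dormant_ratio, perturb_rate=2.0,
                           alpha_min=0.2, alpha_max=0.9):
    """Soft reset (Eq. 5): theta' = alpha * theta + (1 - alpha) * phi, where phi is a
    freshly initialized copy and alpha shrinks as the dormant ratio grows."""
    alpha = max(alpha_min, min(alpha_max, 1.0 - perturb_rate * dormant_ratio))
    fresh = copy.deepcopy(network)
    for m in fresh.modules():                 # phi: re-initialized weights
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()
    with torch.no_grad():
        for p, q in zip(network.parameters(), fresh.parameters()):
            p.mul_(alpha).add_((1.0 - alpha) * q)
    return alpha
```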
3 Method
In this section, we introduce MENTOR, which includes two key enhancements to the architecture and optimization of agents, aimed at improving sample efficiency and overall performance in visual RL tasks. The first enhancement addresses the issue of low sample efficiency caused by gradient conflicts in challenging scenarios, achieved by adopting an MoE structure in place of the traditional MLP as the agent backbone, as detailed in Section 3.1. The second enhancement introduces a task-oriented perturbation mechanism that optimizes the agent’s training through targeted perturbations, effectively balancing exploration and exploitation, as outlined in Section 3.2. The framework of our method is illustrated in Figure 2.

3.1 Architecture: Mixture-of-Experts as the Policy Backbone
In challenging robotic learning tasks, RL agents are often assigned different tasks or subgoals, each associated with a loss function $\mathcal{L}_i(\theta)$, where $\theta$ denotes the agent's weights. The goal is to find optimal weights $\theta^{*}$ that minimize the losses across all objectives. In practice, a common approach is to minimize the average loss over all $M$ objectives, $\min_{\theta} \frac{1}{M}\sum_{i=1}^{M} \mathcal{L}_i(\theta)$. If the agent uses a shared set of parameters (e.g., an MLP), meaning all parameters must be simultaneously active to function, optimization by gradient descent may compromise the optimization of individual losses. This issue, known as conflicting gradients (Yu et al., 2020a; Liu et al., 2021), hinders the agent’s ability to effectively optimize its behavior in complex scenarios.
To address this issue, we propose to utilize an MoE architecture as a substitute for the MLP backbone in RL agents. The MoE structure is characterized by its composition of modular experts, $\{E_i\}_{i=1}^{N}$, which allows the agent to flexibly activate different experts via a dynamic routing mechanism. This enables gradients from different tasks or subgoals to correspond to different sets of parameters. Specifically, the parameters of a given expert are updated only by gradients from similar task scenarios, thereby effectively alleviating the gradient conflict problem.
As illustrated in Figure 2, the MoE agent first processes visual inputs using a CNN-based encoder, transforming them into a latent vector $z$. The router computes a probability distribution over the experts for a given latent vector $z$. The top-$k$ experts are selected based on this distribution, and their softmax weights are computed. The outputs of these top-$k$ experts are combined, weighted by these probabilities, to produce the final output, as shown in Equations 1 and 2. This MoE structure enables the agent to route input visual features to specialized experts based on specific objectives, optimizing its performance in challenging scenarios such as multi-tasking or multi-stage processes.
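To make this wiring concrete, the sketch below builds an actor that encodes stacked frames with a small CNN and passes the latent vector through the `SparseMoE` layer sketched in Section 2; the encoder shape, latent size, and action head are illustrative rather than the exact architecture used in MENTOR.

```python
import torch
import torch.nn as nn

class MoEActor(nn.Module):
    """Illustrative actor: CNN encoder -> latent z -> sparse MoE backbone -> action."""

    def __init__(self, action_dim, latent_dim=50, num_experts=16, top_k=4):
        super().__init__()
        self.encoder = nn.Sequential(                 # stack of 3 RGB frames = 9 channels
            nn.Conv2d(9, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim), nn.LayerNorm(latent_dim), nn.Tanh())
        self.backbone = SparseMoE(latent_dim, 256,    # SparseMoE from the earlier sketch
                                  num_experts=num_experts, top_k=top_k)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(256, action_dim), nn.Tanh())

    def forward(self, obs):                           # obs: (batch, 9, 84, 84), pixels in [0, 1]
        z = self.encoder(obs)                         # latent feature routed by the MoE
        return self.head(self.backbone(z))            # action in [-1, 1]
```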
To better illustrate the important role of dynamic modular expert learning for RL agents, we conduct a multi-task experiment (MT5) in Meta-World (MW) (Yu et al., 2020b), where the agent is trained to acquire five opposing skills: Open tasks (Door-Open, Drawer-Open, Window-Open) and Close tasks (Drawer-Close, Window-Close). As shown in Figure 3(a), in addition to sharing some experts that handle common knowledge, the Open and Close tasks have their own dedicated experts (Experts 3 and 7 for Open; Experts 9 and 10 for Close). We evaluate the cosine similarities of task gradients (Yu et al., 2020a) for both MLP and MoE agents, as shown in Figure 3(b). The MLP’s gradients show significant conflicts between opposing tasks, resulting in a performance gap (100% success for Close tasks, 82% for Open tasks). In contrast, the MoE model demonstrates higher gradient compatibility, achieving 100% success on both task types.


This structural advantage also extends to challenging single tasks, as the dynamic routing mechanism automatically activates different experts to adjust the agent’s behavior throughout the task, alleviating the burden on shared parameters. We illustrate this by training an MoE agent with the same structure on a single, highly challenging Assembly task from MW. Figure 4 shows the engagement of the most active experts during task execution, with Expert 15 serving as the shared module throughout the entire policy execution. The other experts vary and automatically divide the task into four distinct stages: Expert 9 handles gripper control for grasping and releasing; Expert 13 manages arm movement while maneuvering the ring; and Expert 14 oversees the assembly process as the ring approaches its fitting location.

3.2 Optimization: Task-oriented Perturbation Mechanism
Neural network perturbation is employed to enhance the exploration capabilities in RL. Two key factors influence the effectiveness of this process: the perturbation factor $\alpha$, which controls the mix between the current agent's weights and the perturbation candidate's weights, and the perturbation candidate $\phi$, which is sampled from a distribution $\rho$ that is typically a fixed random-initialization distribution (e.g., Gaussian noise). Previous works (Sokar et al., 2023; Xu et al., 2023; Ji et al., 2024) have investigated the use of the dormant ratio to determine $\alpha$, resulting in improved exploration efficiency (see Section 2). However, the selection of perturbation candidates has not been thoroughly examined. In this work, we propose sampling from a heuristically updated distribution $\rho_{\mathrm{task}}$, generated from past high-performing agents, to provide more task-oriented candidates that better facilitate optimization.
We define $\rho_{\mathrm{task}}$ as a distribution from which the weights of high-performing agents can be sampled. To obtain this distribution, we dynamically maintain a fixed-size set $\mathcal{B} = \{(\theta_i, R_i)\}_{i=1}^{n}$, where $\theta_i$ denotes the weights of an agent that achieves episode reward $R_i$. The desired distribution is approximated by a Gaussian, $\rho_{\mathrm{task}} = \mathcal{N}(\mu_{\mathcal{B}}, \sigma_{\mathcal{B}}^2)$, where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ denote the element-wise mean and standard deviation of the weights in $\mathcal{B}$. As shown in Figure 2, the set $\mathcal{B}$ is updated during training: at episode $t$, if the agent with weights $\theta_t$ achieves a reward $R_t$ higher than the lowest reward in $\mathcal{B}$, the tuple $(\theta_t, R_t)$ replaces the entry with the lowest reward. This update ensures that $\rho_{\mathrm{task}}$ accurately reflects the current set of high-performing agents, thus providing improved perturbation candidates for future iterations. The pseudocode is shown in Algorithm 1.
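The sketch below illustrates this bookkeeping for a PyTorch-style agent: a fixed-size buffer of the best (weights, reward) pairs, an element-wise Gaussian fitted to the stored weights, and the blend of Equation 5 applied with a sampled candidate. Class and function names are illustrative; the actual procedure is given in Algorithm 1.

```python
import copy
import torch

class TaskOrientedPerturber:
    """Keep the top-n (reward, weights) pairs seen during training and sample
    perturbation candidates from a Gaussian fitted to those weights."""

    def __init__(self, size=10):
        self.size = size
        self.buffer = []                                  # list of (reward, state_dict)

    def update(self, agent, episode_reward):
        snapshot = copy.deepcopy(agent.state_dict())
        if len(self.buffer) < self.size:
            self.buffer.append((episode_reward, snapshot))
        elif episode_reward > min(r for r, _ in self.buffer):
            worst = min(range(len(self.buffer)), key=lambda i: self.buffer[i][0])
            self.buffer[worst] = (episode_reward, snapshot)

    def sample_candidate(self):
        """Draw phi ~ N(mu_B, sigma_B^2), computed element-wise over the buffer."""
        candidate = {}
        for key in self.buffer[0][1]:
            stacked = torch.stack([sd[key].float() for _, sd in self.buffer])
            mu = stacked.mean(dim=0)
            sigma = stacked.std(dim=0) if stacked.size(0) > 1 else torch.zeros_like(mu)
            candidate[key] = mu + sigma * torch.randn_like(mu)
        return candidate

def task_oriented_perturb(agent, perturber, alpha):
    """Blend the agent's weights with a sampled task-oriented candidate (Eq. 5)."""
    phi = perturber.sample_candidate()
    with torch.no_grad():
        for name, p in agent.state_dict().items():
            p.copy_(alpha * p + (1.0 - alpha) * phi[name].to(p.dtype))
    return agent
```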
For illustration, we conduct experiments on the Hopper Hop task from the DeepMind Control Suite (DMC), comparing our task-oriented perturbation approach to leading model-free visual RL baselines (DrM (Xu et al., 2023) and DrQ-v2 (Yarats et al., 2021)). Our approach solely replaces DrM’s perturbation mechanism with task-oriented perturbation. Both our method and DrM outperform DrQ-v2 thanks to dormant-ratio-based perturbation, but our method achieves faster skill acquisition and maintains a lower, smoother dormant ratio throughout training (Figures 5a and 5b). By directly deploying perturbation candidates as agents in the task (Figure 5c), we observe that candidates sampled from $\rho_{\mathrm{task}}$ steadily improve throughout training, sometimes even surpassing the performance of the agent they perturb. This demonstrates that $\rho_{\mathrm{task}}$ progressively captures the optimal weight distribution, rather than simply interpolating between past agents, leading to more targeted optimization. In contrast, perturbation candidates from DrM (drawn from Gaussian-noise initialization) consistently yield zero reward, indicating a lack of task-relevant information.

4 Experiments
In this section, we present a comprehensive empirical evaluation of MENTOR. Our experimental setup consists of two parts. In Section 4.1, we demonstrate the effectiveness of our method across three simulation benchmarks: DeepMind Control Suite (DMC) (Tassa et al., 2018), Meta-World (MW) (Yu et al., 2020b), and Adroit (Rajeswaran et al., 2017). These benchmarks feature rich visual features and complex dynamics, demanding fine-grained control. Our method consistently outperforms leading visual RL algorithms across these domains. Moreover, one critical limitation in visual RL research is the over-reliance on simulated environments, which raises concerns about the practical applicability of such methods. To mitigate this gap, in Section 4.2, we go beyond simulations and validate the effectiveness of MENTOR in real-world settings on three challenging robotic learning tasks, highlighting the importance of real-world testing.
4.1 Simulation Experiments
Baselines: We compare MENTOR against four leading model-free visual RL methods: DrM (Xu et al., 2023), ALIX (Cetin et al., 2022), TACO (Zheng et al., 2023), and DrQ-v2 (Yarats et al., 2021). DrM, ALIX, and TACO all use DrQ-v2 as their backbone. DrM periodically perturbs the agent’s weights with random noise based on the proportion of dormant neurons in the neural network; ALIX adds regularization to the encoder gradients to mitigate overfitting; and TACO employs contrastive learning to improve latent state and action representations.
Experimental Settings: We evaluate MENTOR on a diverse set of tasks across three simulation environments with complex dynamics and even sparse reward. The DMC includes challenging tasks like Dog Stand, Dog Walk, Manipulator Bring Ball, and Acrobot Swingup (Sparse), focusing on long-horizon continuous locomotion and manipulation challenges. The MW environment provides a suite of robotic tasks including Assembly, Disassemble, Pick Place, Coffee Push (Sparse), Soccer (Sparse), and Hammer (Sparse), which test the agent’s manipulation abilities and require sequential reasoning. The Adroit environment includes complex robotic manipulation tasks such as Door and Hammer, which involve controlling dexterous hands to interact with articulated objects. Notably, DMC tasks are evaluated using episode reward, while tasks in MW and Adroit are assessed based on success rate. For each method on each task, we conducted experiments using four random seeds; detailed hyperparameters and training settings are provided in Appendix B.
Results: Figure 6 presents performance comparisons between MENTOR and the baselines. In the DMC tasks, Dog Stand and Dog Walk feature high action dimensionality, with a 38-dimensional action space representing joint controls for the dog model. These tasks also have complex kinematics involving intricate joint coordination, muscle dynamics, and collision handling, making them challenging to optimize. Our method outperforms the top baseline, achieving approximately 17% and 10% higher episode rewards, respectively. In the MW tasks, the Hammer (Sparse) task stands out. It requires a robotic arm to hammer a nail into a wall, with highly sparse rewards: success yields significantly larger rewards than merely touching or missing the nail. In fact, the reward for failure is only one-thousandth of the success reward, making the task extremely sparse. Nevertheless, our task-oriented perturbation effectively captures these sparse rewards, reducing the required training frames by 70% compared to the best baseline. In the Adroit tasks, our method achieves nearly 100% success with significantly less training time, while the most competitive counterpart (DrM) requires more frames, and the other baselines fail to match this performance even after 6 million frames. A key highlight is the Door task, which involves multiple stages of dexterous hand manipulation—grasping, turning, and opening the door. Leveraging the MoE architecture, our method reduces the training time needed to exceed an 80% success rate by approximately 23% compared to the best baseline. In summary, MENTOR demonstrates superior efficiency and performance compared to the strongest existing model-free visual RL baselines across all 12 tasks.

4.2 Real-World Experiments
Our real-world RL experiments evaluate the practical applicability of MENTOR in robotic manipulation tasks. We design three tasks to highlight key challenges in real-world robotics: multi-task learning, multi-stage deformable object manipulation, and dynamic skill acquisition.
Experimental Settings: All tasks use a Franka Panda arm for execution and RealSense D435 cameras for RGB visual observations, which include both overall and close-up views to capture global and local information. The reward functions are based on the absolute distance between the current and desired states. To prevent trajectory overfitting, the end-effector’s initial position is randomly sampled from a predefined region at the start of each episode. Tasks are described below and shown in Figure 7. Further details can be found in Appendix C.
Peg Insertion: This task simulates an assembly-line scenario where fine-grained insertion of various objects is required. The agent needs to develop multi-task learning skills to insert pegs of three different shapes (Star, Triangle, and Arrow) into the corresponding sockets. Training such agents in simulators is difficult due to the complexities of contact-rich interactions, making this task ideal for real-world reinforcement learning and evaluation.
Cable Routing: Manipulating deformable cables presents significant challenges due to the complexities of modeling and simulating their physical dynamics, making this task ideal for direct, model-free visual RL training in real-world environments. In this scenario, the robot must guide a cable into two parallel slots. Since both slots cannot be filled simultaneously, the agent must perform the task sequentially, requiring long-horizon, multi-stage planning to successfully accomplish the task.
Tabletop Golf: In this task, the robot uses a golf club to strike a ball on a grass-like surface, aiming to land it in a target hole. An automated reset system retrieves the ball when it reaches the hole, enters a mock water hazard, or rolls out of bounds, and randomly repositions it. Through real-world interaction, the agent must learn to approach the ball and to control the club’s striking force and direction so as to guide the ball toward the hole while avoiding obstacles.

Results: Our policies demonstrate robust performance during evaluation, as shown in Figure 7. In Peg Insertion, the agent randomly selects a peg from the shelf and inserts it from varying initial positions. It gradually learns to align the peg shape with the corresponding hole and to adjust the angle for accurate insertion. During one execution, as the peg nears the hole, we manually disturb the execution by significantly altering the robot arm’s pose. Despite this interference, the agent successfully completes the task relying solely on visual observations. In Cable Routing, where the cable cannot be placed into both slots simultaneously, the agent learns to route it into the farther slot first and then into the closer one. This second step requires careful handling to avoid dislodging the cable from the first slot. During execution, if the cable is randomly removed from the slot, the agent visually detects this and re-routes it back into position. In Tabletop Golf, the agent must master two key skills: striking the ball with the correct direction and force, and repositioning the club to follow the ball after the strike. Due to a "water hazard", the ball cannot be struck directly toward the target hole from its starting position. The agent learns to angle its shots to bypass the hazard and guide the ball into the hole. No interference is applied during this task, as the ball’s rolling on the grass-like surface introduces sufficient variability.
Table 1: Success rates on the three real-world tasks.

| Method | Peg Insertion (Star) | Peg Insertion (Triangle) | Peg Insertion (Arrow) | Cable Routing | Tabletop Golf |
| --- | --- | --- | --- | --- | --- |
| MENTOR w/ pretrained encoder | 1.0 | 1.0 | 1.0 | 0.9 | 0.8 |
| MENTOR | 1.0 | 1.0 | 1.0 | 0.8 | 0.7 |
| MENTOR w/o MoE | 1.0 | 0.7 | 0.6 | 0.45 | 0.55 |
| DrM | 0.5 | 0.2 | 0.1 | 0.2 | 0.5 |
Ablation Study: We conduct a detailed ablation study to demonstrate the effectiveness of MENTOR in improving sample efficiency and performance, as shown in Table 1.
The first two rows reveal that utilizing the pretrained visual encoder (Lin et al., 2024) instead of a CNN trained from scratch results in an average performance improvement of 9%. However, no significant performance gain is observed in simulation benchmarks with this substitution. This discrepancy may arise from the gap between simulation and real-world environments, where real scenes offer richer textures more aligned with the pretraining domain.
Furthermore, the results confirm the effectiveness of our technical contributions. When the MoE structure is removed from the agent (i.e., replaced with an MLP, as in MENTOR w/o MoE), overall performance drops by nearly 30%. Additionally, further switching the task-oriented perturbation mechanism to basic random perturbation (as in DrM) leads to an additional performance decline of approximately 30%. We further extend the training process of the DrM baseline to reach the same performance level as MENTOR, with the training time comparison shown in Figure 8, which demonstrates an average 37% improvement in time efficiency for our method. These findings underscore the importance of each component in achieving superior results.

5 Related work
Visual reinforcement learning.
Visual reinforcement learning (RL), which operates on pixel observations rather than ground-truth state vectors, faces significant challenges in decision-making due to the high-dimensional nature of visual inputs and the difficulty of extracting meaningful features for policy optimization (Ma et al., 2022; Choi et al., 2023). Despite these challenges, there has been considerable progress in this area. Methods such as Hafner et al. (2019; 2020; 2023); Hansen et al. (2022) improve visual RL by building world models. Other approaches (Yarats et al., 2021; Kostrikov et al., 2020; Laskin et al., 2020b) use data augmentation to enhance learning robustness from pixel inputs. Contrastive learning, as in Laskin et al. (2020a); Zheng et al. (2023), aids in learning more informative state and action representations. Additionally, Cetin et al. (2022) applies regularization to prevent catastrophic self-overfitting, while DrM (Xu et al., 2023) enhances exploration by periodically perturbing the agent’s parameters. Despite recent progress, these methods still suffer from low sample efficiency in complex robotic tasks. In this paper, we propose enhancing the agent’s learning capability by replacing the standard MLP backbone with an MoE architecture. This dynamic expert learning mechanism helps mitigate gradient conflicts in complex scenarios.
Neural network perturbation in RL.
Perturbation theory has been explored in machine learning to escape local minima during gradient descent (Jin et al., 2017; Neelakantan et al., 2015). In deep RL, agents often overfit and lose expressiveness during training (Song et al., 2019; Zhang et al., 2018; Schilling, 2021). To address this issue, Sokar et al. (2023) identified a correlation where improved learning capability is often accompanied by a decline in the dormant neural ratio in agent networks. Building on this insight, Xu et al. (2023); Ji et al. (2024) introduced parameter perturbation mechanisms that softly blend randomly initialized perturbation candidates with the current ones, aiming to reduce the agent’s dormant ratio and encourage exploration. However, previous works have not fully explored the choice of perturbation candidates. In this work, we uncover the potential of targeted perturbation for more efficient policy optimization by introducing a simple yet effective task-oriented perturbation mechanism. This mechanism samples perturbation candidates from a time-variant distribution formed by the top-performing agents collected throughout RL history.
6 Conclusion
In this paper, we present MENTOR, a state-of-the-art model-free visual RL framework that achieves superior performance in challenging robotic control tasks. MENTOR enhances learning efficiency through two key improvements in both agent network architecture and optimization. Replacing the traditional multi-layer perceptron (MLP) backbone with a mixture-of-experts (MoE) structure enables the agent to dynamically allocate learning gradients to modular experts via a sparse routing mechanism, mitigating gradient conflicts in complex scenarios. Additionally, the introduction of a task-oriented perturbation mechanism helps refine the agent’s weights toward optimal solutions by sampling perturbation candidates from a heuristically updated distribution formed by the top-performing agents during RL training. MENTOR consistently outperforms the strongest baselines across 12 tasks in three simulation benchmark environments. Furthermore, we extend our evaluation beyond simulations, demonstrating the effectiveness of MENTOR in real-world settings on three challenging self-designed robotic manipulation tasks, which highlights its sample efficiency and practical value. We believe MENTOR is a capable visual RL algorithm with the potential to push the boundaries of RL applications in real-world robotic tasks.
References
- Akbari et al. (2023) Hassan Akbari, Dan Kondratyuk, Yin Cui, Rachel Hornung, Huisheng Wang, and Hartwig Adam. Alternating gradient descent and mixture-of-experts for integrated multimodal perception. Advances in Neural Information Processing Systems, 36:79142–79154, 2023.
- Ash & Adams (2020) Jordan Ash and Ryan P Adams. On warm-starting neural network training. Advances in neural information processing systems, 33:3884–3894, 2020.
- Cetin et al. (2022) Edoardo Cetin, Philip J Ball, Steve Roberts, and Oya Celiktutan. Stabilizing off-policy deep reinforcement learning from pixels. arXiv preprint arXiv:2207.00986, 2022.
- Chen et al. (2023) Zitian Chen, Yikang Shen, Mingyu Ding, Zhenfang Chen, Hengshuang Zhao, Erik G Learned-Miller, and Chuang Gan. Mod-squad: Designing mixtures of experts as modular multi-task learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11828–11837, 2023.
- Choi et al. (2023) Hyesong Choi, Hunsang Lee, Seongwon Jeong, and Dongbo Min. Environment agnostic representation for visual reinforcement learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 263–273, 2023.
- D’Oro et al. (2022) Pierluca D’Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, and Aaron Courville. Sample-efficient reinforcement learning by breaking the replay ratio barrier. In Deep Reinforcement Learning Workshop NeurIPS 2022, 2022.
- Dulac-Arnold et al. (2019) Gabriel Dulac-Arnold, Daniel Mankowitz, and Todd Hester. Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901, 2019.
- Fedus et al. (2022) William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39, 2022.
- Hafner et al. (2019) Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
- Hafner et al. (2020) Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020.
- Hafner et al. (2023) Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023.
- Hansen et al. (2022) Nicklas Hansen, Xiaolong Wang, and Hao Su. Temporal difference learning for model predictive control. arXiv preprint arXiv:2203.04955, 2022.
- Jacobs et al. (1991) Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79–87, 1991.
- Ji et al. (2024) Tianying Ji, Yongyuan Liang, Yan Zeng, Yu Luo, Guowei Xu, Jiawei Guo, Ruijie Zheng, Furong Huang, Fuchun Sun, and Huazhe Xu. Ace: Off-policy actor-critic with causality-aware entropy regularization. arXiv preprint arXiv:2402.14528, 2024.
- Jin et al. (2017) Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In International conference on machine learning, pp. 1724–1732. PMLR, 2017.
- Jordan & Jacobs (1994) Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181–214, 1994.
- Kostrikov et al. (2020) Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. arXiv preprint arXiv:2004.13649, 2020.
- Laskin et al. (2020a) Michael Laskin, Aravind Srinivas, and Pieter Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning, pp. 5639–5650. PMLR, 2020a.
- Laskin et al. (2020b) Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. Advances in neural information processing systems, 33:19884–19895, 2020b.
- Lepikhin et al. (2020) Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
- Lin et al. (2024) Xingyu Lin, John So, Sashwat Mahalingam, Fangchen Liu, and Pieter Abbeel. Spawnnet: Learning generalizable visuomotor skills from pre-trained network. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 4781–4787. IEEE, 2024.
- Liu et al. (2021) Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. Advances in Neural Information Processing Systems, 34:18878–18890, 2021.
- Liu et al. (2023) Siao Liu, Zhaoyu Chen, Yang Liu, Yuzheng Wang, Dingkang Yang, Zhile Zhao, Ziqing Zhou, Xie Yi, Wei Li, Wenqiang Zhang, et al. Improving generalization in visual reinforcement learning via conflict-aware gradient agreement augmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 23436–23446, 2023.
- Luo et al. (2024) Jianlan Luo, Zheyuan Hu, Charles Xu, You Liang Tan, Jacob Berg, Archit Sharma, Stefan Schaal, Chelsea Finn, Abhishek Gupta, and Sergey Levine. Serl: A software suite for sample-efficient robotic reinforcement learning. arXiv preprint arXiv:2401.16013, 2024.
- Ma et al. (2022) Guozheng Ma, Zhen Wang, Zhecheng Yuan, Xueqian Wang, Bo Yuan, and Dacheng Tao. A comprehensive survey of data augmentation in visual reinforcement learning. arXiv preprint arXiv:2210.04561, 2022.
- Masoudnia & Ebrahimpour (2014) Saeed Masoudnia and Reza Ebrahimpour. Mixture of experts: a literature survey. Artificial Intelligence Review, 42:275–293, 2014.
- Neelakantan et al. (2015) Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
- Nikishin et al. (2022) Evgenii Nikishin, Max Schwarzer, Pierluca D’Oro, Pierre-Luc Bacon, and Aaron Courville. The primacy bias in deep reinforcement learning. In International conference on machine learning, pp. 16828–16847. PMLR, 2022.
- Rajeswaran et al. (2017) Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
- Salvato et al. (2021) Erica Salvato, Gianfranco Fenu, Eric Medvet, and Felice Andrea Pellegrino. Crossing the reality gap: A survey on sim-to-real transferability of robot controllers in reinforcement learning. IEEE Access, 9:153171–153187, 2021.
- Schilling (2021) Malte Schilling. Avoid overfitting in deep reinforcement learning: Increasing robustness through decentralized control. In Artificial Neural Networks and Machine Learning–ICANN 2021: 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, September 14–17, 2021, Proceedings, Part IV 30, pp. 638–649. Springer, 2021.
- Schwarzer et al. (2020) Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron Courville, and Philip Bachman. Data-efficient reinforcement learning with self-predictive representations. arXiv preprint arXiv:2007.05929, 2020.
- Shazeer et al. (2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
- Shen et al. (2023) Yikang Shen, Zheyu Zhang, Tianyou Cao, Shawn Tan, Zhenfang Chen, and Chuang Gan. Moduleformer: Learning modular large language models from uncurated data. arXiv preprint arXiv:2306.04640, 2023.
- Sokar et al. (2023) Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, and Utku Evci. The dormant neuron phenomenon in deep reinforcement learning. arXiv preprint arXiv:2302.12902, 2023.
- Song et al. (2019) Xingyou Song, Yiding Jiang, Stephen Tu, Yilun Du, and Behnam Neyshabur. Observational overfitting in reinforcement learning. arXiv preprint arXiv:1912.02975, 2019.
- Stooke et al. (2021) Adam Stooke, Kimin Lee, Pieter Abbeel, and Michael Laskin. Decoupling representation learning from reinforcement learning. In International conference on machine learning, pp. 9870–9879. PMLR, 2021.
- Tassa et al. (2018) Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.
- Xu et al. (2023) Guowei Xu, Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Zhecheng Yuan, Tianying Ji, Yu Luo, Xiaoyu Liu, Jiaxin Yuan, Pu Hua, et al. Drm: Mastering visual reinforcement learning through dormant ratio minimization. arXiv preprint arXiv:2310.19668, 2023.
- Yang et al. (2024) Longrong Yang, Dong Sheng, Chaoxiang Cai, Fan Yang, Size Li, Di Zhang, and Xi Li. Solving token gradient conflict in mixture-of-experts for large vision-language model. arXiv preprint arXiv:2406.19905, 2024.
- Yarats et al. (2021) Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021.
- Yu et al. (2020a) Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33:5824–5836, 2020a.
- Yu et al. (2020b) Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on robot learning, pp. 1094–1100. PMLR, 2020b.
- Zhang et al. (2018) Chiyuan Zhang, Oriol Vinyals, Remi Munos, and Samy Bengio. A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893, 2018.
- Zhao et al. (2020) Wenshuai Zhao, Jorge Peña Queralta, and Tomi Westerlund. Sim-to-real transfer in deep reinforcement learning for robotics: a survey. In 2020 IEEE symposium series on computational intelligence (SSCI), pp. 737–744. IEEE, 2020.
- Zheng et al. (2023) Ruijie Zheng, Xiyao Wang, Yanchao Sun, Shuang Ma, Jieyu Zhao, Huazhe Xu, Hal Daumé III, and Furong Huang. Taco: Temporal latent action-driven contrastive loss for visual reinforcement learning. arXiv preprint arXiv:2306.13229, 2023.
- Zhou et al. (2022) Shiji Zhou, Wenpeng Zhang, Jiyan Jiang, Wenliang Zhong, Jinjie Gu, and Wenwu Zhu. On the convergence of stochastic multi-objective gradient manipulation and beyond. Advances in Neural Information Processing Systems, 35:38103–38115, 2022.
- Zhu et al. (2020) Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, and Sergey Levine. The ingredients of real-world robotic reinforcement learning. arXiv preprint arXiv:2004.12570, 2020.
Appendix
Appendix A Algorithm Details
We illustrate the overall framework of MENTOR in Section 3, where we introduce two enhancements to the agent's structure and optimization: substituting the MLP backbone with an MoE to alleviate gradient conflicts when learning complex tasks, and implementing a task-oriented perturbation mechanism that updates the agent’s weights in a more targeted direction by sampling from a distribution formed by the top-performing agents in the training history. The detailed implementation of task-oriented perturbation is shown in Algorithm 1, and the use of MoE as the policy backbone is described below.
Algorithm 2 illustrates how MENTOR employs the MoE architecture as the backbone of its policy network. In addition to the regular training process, using an MoE as the policy backbone requires an additional loss to prevent MoE degradation during training, where a fixed subset of experts is consistently activated. The MoE layer computes the output action while simultaneously calculating an auxiliary loss for load balancing (Lepikhin et al., 2020; Fedus et al., 2022). Specifically, we extract the distribution over experts produced by the router for each input. By averaging these distributions over a large batch, we obtain an overall expert distribution, which we aim to keep uniform across all experts. To achieve this, we introduce an auxiliary loss term, the negative entropy of the overall expert distribution (Chen et al., 2023; Shen et al., 2023). This loss reaches its minimum value of $-\log N$, where $N$ is the number of experts in the MoE, when all experts are equally utilized, thus preventing degradation. This auxiliary loss is added to the actor loss and used to update the actor during the RL process.
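A minimal sketch of this auxiliary term is given below, assuming the router logits for a batch are available; the 0.002 weight from Table 2 is shown only as an illustrative usage.

```python
import torch
import torch.nn.functional as F

def load_balance_loss(router_logits):
    """Negative entropy of the batch-averaged expert distribution; it attains its
    minimum of -log(N) when all N experts are utilized equally."""
    probs = F.softmax(router_logits, dim=-1)        # (batch, num_experts)
    avg = probs.mean(dim=0)                         # overall expert distribution
    return (avg * torch.log(avg + 1e-9)).sum()      # negative entropy

# Illustrative use: add to the actor loss with a small weight (see Table 2).
# actor_loss = policy_loss + 0.002 * load_balance_loss(router_logits)
```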
Appendix B Simulation Experimental Settings
The hyperparameters employed in our experiments are detailed in Table 2. In alignment with previous work, we predominantly followed the hyperparameters utilized in DrM (Xu et al., 2023).
Table 2: Hyperparameters used in our experiments.

| Category | Parameter | Setting |
| --- | --- | --- |
| Architecture | Features dimension | 100 (Dog), 50 (Others) |
| | Hidden dimension | 1024 |
| | Number of MoE experts | 4, 16, or 32 |
| | Activated MoE experts (top-k) | 2 or 4 |
| | MoE experts hidden dimension | 256 |
| Optimization | Optimizer | Adam |
| | Learning rate | (DMC); (MW & Adroit) |
| | Learning rate of policy network | lr or lr |
| | Agent update frequency | 2 |
| | Soft update rate | 0.01 |
| | MoE load balancing loss weight | 0.002 |
| Perturb | Minimum perturb factor | 0.2 |
| | Maximum perturb factor | 0.6 (Dog, Coffee Push & Soccer), 0.9 (Others) |
| | Perturb rate | 2 |
| | Perturb frames | 200000 |
| | Task-oriented perturb buffer size | 10 |
| Replay Buffer | Replay buffer capacity | |
| | Action repeat | 2 |
| | Seed frames | 4000 |
| | n-step returns | 3 |
| | Mini-batch size | 256 |
| | Discount | 0.99 |
| Exploration | Exploration steps | 2000 |
| | Linear exploration stddev. clip | 0.3 |
| | Linear exploration stddev. schedule | linear(1.0, 0.1, 2000000) (DMC); linear(1.0, 0.1, 3000000) (MW & Adroit) |
| | Awaken exploration temperature | 0.1 |
| | Target exploitation parameter | 0.6 |
| | Exploitation temperature | 0.02 |
| | Exploitation expectile | 0.9 |
Appendix C Real-World Experimental Settings
The training and testing videos are available at mentor. The hyperparameters for the real-world experiments are the same as those used in the simulator, as shown in Table 2. We use 16 experts, with the top 4 experts activated.
C.1 Observation Space
The observation space for all real-world tasks is constructed solely from images provided by several cameras. Each camera delivers three 84x84x3 images (3-channel RGB at a resolution of 84x84), captured at the beginning, midpoint, and end of the previous action.
For the Peg Insertion and Tabletop Golf tasks, the observation space is provided by two cameras: a wrist camera and a side camera. As shown in Figure 9, these two cameras in Tabletop Golf offer different perspectives. The wrist camera is attached to the robot arm’s wrist, capturing close-up images of the end-effector, while the side camera provides a more global view. As previously mentioned, each camera provides three images, resulting in a total of six 3-channel 84x84 images.
In the Cable Routing task, the observation space is constructed using three cameras: a side camera for an overview, and two dedicated cameras for each slot to capture detailed views of the spatial relationship between the slots and the cable. This setup results in a total of nine 3-channel 84x84 images.

C.2 Action Space
The policy outputs an end-effector delta pose relative to the current pose, which is tracked by the robot arm's low-level controller. Typically, the end-effector of a robotic arm has six degrees of freedom (DOF); however, in our tasks, the action space is constrained to fewer. The reason for this restriction is specific to our setting: we train model-free visual reinforcement learning algorithms directly in the real-world environment from scratch, without any initial demonstrations or prior knowledge of the tasks. As a result, the exploration process is highly random, and limiting the degrees of freedom is crucial for safeguarding both the robotic arm and the experimental equipment. For instance, in the Peg Insertion task, the use of rigid 3D-printed materials means that allowing the end-effector to attempt insertion at arbitrary angles could easily cause damage. Similarly, in the Cable Routing task, an unrestricted end-effector might collide with the slot, posing a risk to the equipment.
Peg Insertion: The end-effector in this task has four degrees of freedom: x, y, z, and θ. Here, x and y represent the planar coordinates, z represents the height, and θ denotes the rotation around the z-axis. The x, y, and z dimensions are normalized based on the environment’s size, ranging from -1 to 1, while θ is normalized over a feasible rotation range.

The action space is a 4-dimensional continuous space of increments (Δx, Δy, Δz, Δθ), where each action updates the end-effector’s state as (x, y, z, θ) ← (x + Δx, y + Δy, z + Δz, θ + Δθ).
Cable Routing: In this task, the end-effector is constrained to two degrees of freedom: y and z. The y-axis controls movement almost perpendicular to the cable, while the z-axis controls the height. Both dimensions are normalized based on the environment’s size, with values ranging from -1 to 1. Although we restrict the action space to two dimensions, this task remains extremely challenging for the RL agent to master, as it requires inserting the cable into both slots sequentially, making it the most time-consuming task among the three, as shown in Figure 8. The difficulty stems largely from the structure and parallel configuration of the two slots: the agent cannot route the cable into both slots simultaneously and must insert one first. However, as shown in Figure 10b, without a hook-like structure to secure the cable in the slot, the cable easily slips out when the agent attempts to route it into the second slot. This task therefore requires highly precise movements, forcing the agent to learn the complex dynamics of soft cables.

The action space is a 2-dimensional continuous space of increments (Δy, Δz), where each action updates the end-effector’s position as (y, z) ← (y + Δy, z + Δz).
Tabletop Golf: The end-effector in this task has three degrees of freedom: x, y, and θ. Here, x and y represent the planar coordinates, and θ denotes the angle around the normal vector of the plane. The x and y dimensions are normalized based on the environment’s size, ranging from -1 to 1, while θ is normalized over a feasible rotation range.

The action space has four dimensions: three spatial dimensions (Δx, Δy, Δθ) and a strike dimension, with all values ranging from -1 to 1. The end-effector’s state is updated as (x, y, θ) ← (x + Δx, y + Δy, θ + Δθ), and if the strike value exceeds a threshold, the end-effector performs a swing with strength proportional to the strike value.
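The sketch below illustrates how such an action could be applied; the per-step limits, the strike threshold, and the low-level swing call are hypothetical placeholders rather than the values or interfaces used on the real robot.

```python
import numpy as np

# Hypothetical per-step limits for (dx, dy, dtheta) and strike threshold.
STEP = np.array([0.02, 0.02, np.deg2rad(5.0)])
STRIKE_THRESHOLD = 0.0

def apply_golf_action(pose, action):
    """Apply a 4-D Tabletop Golf action (dx, dy, dtheta, strike): move the club by
    clipped increments and trigger a swing when the strike value exceeds a threshold."""
    delta = np.clip(action[:3], -1.0, 1.0) * STEP
    new_pose = pose + delta                         # (x, y, theta) of the end-effector
    strike = float(action[3])
    swing_strength = strike if strike > STRIKE_THRESHOLD else None
    # if swing_strength is not None: robot.swing(new_pose, swing_strength)  # hypothetical call
    return new_pose, swing_strength
```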
C.3 Reward Design
In this section, we describe the reward functions for the three real-world robotic tasks used in our work: Peg Insertion, Cable Routing, and Tabletop Golf. The basic principle behind these functions is to measure the distance between the current state and the target state. These reward functions are designed to provide continuous feedback—though they can be extremely sparse, as seen in Cable Routing—based on the task’s progress, enabling the agent to learn efficient strategies to achieve the goal. Notably, we trained two visual classifiers for the Cable Routing task to determine the relationship between the cable and the slots for reward calculation. Other positional information is obtained through feedback from the robot arm or image processing algorithms. The lower and upper bounds of each dimension of the pose are normalized to -1 and 1, respectively. The coefficients used in the reward functions are listed in Table 3.
Peg Insertion: The reward is computed as the negative absolute difference between the current robot arm pose and the target insertion pose, which varies for each peg.
$$r = -\big(w_1\,\lVert p_{\mathrm{goal}} - p \rVert_2 + w_2\,\lvert z - z_{\mathrm{goal}} \rvert + w_3\,\lvert \theta - \theta_{\mathrm{goal}} \rvert\big) \qquad (6)$$

Where:
- $p_{\mathrm{goal}}$ and $p$: the goal and current positions of the robot's end-effector in the x-y plane.
- $\lVert p_{\mathrm{goal}} - p \rVert_2$: the Euclidean distance between the goal and current positions of the end-effector.
- $\lvert z - z_{\mathrm{goal}} \rvert$: the height difference between the current and target z positions.
- $\theta$ and $\theta_{\mathrm{goal}}$: the current and goal angles of the end-effector, respectively.
- $w_1$, $w_2$, $w_3$: per-term coefficients, listed in Table 3.
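As an illustration, the reward of Equation 6 can be computed from pose feedback as in the sketch below; the weights shown are placeholders standing in for the coefficients of Table 3.

```python
import numpy as np

# Placeholder weights; the actual per-term coefficients are listed in Table 3.
W_XY, W_Z, W_THETA = 1.0, 1.0, 1.0

def peg_insertion_reward(pos_xy, pos_z, theta, goal_xy, goal_z, goal_theta):
    """Reward (Eq. 6): negative weighted distance between the current end-effector
    pose and the target insertion pose (x-y distance, height gap, angle gap)."""
    d_xy = np.linalg.norm(np.asarray(goal_xy) - np.asarray(pos_xy))
    d_z = abs(pos_z - goal_z)
    d_theta = abs(theta - goal_theta)
    return -(W_XY * d_xy + W_Z * d_z + W_THETA * d_theta)
```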
Cable Routing: To provide continuous reward feedback, we trained a simple CNN classifier to detect whether the cable is correctly positioned in the slot, awarding full reward when the cable is in the slot and zero when it is far outside. The CNN classifier was trained by labeling images to classify the spatial relationship between the cable and the slot into several categories, with different rewards assigned based on the classification. However, when the cable remains in a particular category without progressing to different stages, the agent receives constant rewards, making it difficult for the agent to learn more refined cable manipulation skills.
$$r = r_1 + \mathbb{1}_{\mathrm{slot1}} \cdot r_2 \qquad (7)$$

Where:
- $r_1$: the reward for the first slot, determined by the position of the cable relative to the slot, with the reward increasing across the categories: outside the slot, on the side of the slot, above the slot, and inside the slot.
- $r_2$: the reward for the second slot, with more detailed classifications, again increasing toward completion: outside the slot, on the side of the slot, partially above the slot, above the slot and at the edge, above the slot and close to the middle, partially inside the slot, and fully inside the slot.
- $\mathbb{1}_{\mathrm{slot1}}$: an indicator function that activates only if the cable is correctly inserted in the first slot, allowing the agent to receive the reward for the second slot.
Tabletop Golf: The reward consists of two components: the negative absolute distance between the robot arm and the ball, and the negative absolute distance between the ball and the target hole. This encourages the agent to learn how to move the robot arm toward the ball and to control the striking force and direction so that the ball is guided toward the hole while avoiding obstacles. Additional rewards include a bonus if the ball reaches the hole and a penalty if the ball goes out of bounds. In this experiment, we deploy two cameras at the middle of two adjacent sides of the golf court. The pixel locations of the ball in both cameras are used to roughly estimate its position for the reward computation. Despite this approximate estimation, MENTOR still quickly learns to follow the ball and strike it with the appropriate angle and force, demonstrating the effectiveness of our proposed method.
$$r = -\big(w_4\,d_{\mathrm{club,ball}} + w_5\,d_{\mathrm{ball,hole}} + w_6\,\lvert \theta^{*} - \theta \rvert\big) - w_7\,\mathbb{1}_{\mathrm{strike}} - w_8\,\mathbb{1}\,[\,y_{\mathrm{club}} < y_{\mathrm{ball}}\,] \qquad (8)$$

Where:
- $p_{\mathrm{club}}$ and $p_{\mathrm{ball}}$: the positions of the robot's golf club and the ball, respectively.
- $p_{\mathrm{hole}}$: the position of the target hole.
- $d_{\mathrm{club,ball}}$: the distance between the club and the ball.
- $d_{\mathrm{ball,hole}}$: the distance between the ball and the hole.
- $\theta^{*}$ and $\theta$: the best calculated angle and the current angle of the robot's arm for optimal striking.
- $\mathbb{1}_{\mathrm{strike}}$: an indicator function that penalizes unnecessary strikes.
- $y_{\mathrm{club}}$ and $y_{\mathrm{ball}}$: the y-axis runs along the long side of the golf course; the ball should be hit from the positive toward the negative y direction, so the club should always stay on the positive-y side of the ball.
- $w_4, \dots, w_8$: per-term coefficients, listed in Table 3.
Table 3: Coefficients used in the reward functions.

| Symbol | Value |
| --- | --- |
| | 16 |
| | 6 |
| | 8 |
| | 17 |
| | 3 |
| | 20 |
| | 5 |
| | 4 |
| | 8 |
| | 2 |
| | 10 |
C.4 Auto-Reset Mechanisms
One major challenge in real-world RL is the burden of frequent manual resets during training. To address this, we designed auto-reset mechanisms to make the training process more feasible and efficient.
In the Peg Insertion task, the robot arm is set to frequently switch among different pegs to help the agent acquire multi-tasking skills. To facilitate this, we design a shelf to hold spare pegs while the robot arm is handling one. With the fixed position of the shelf, we pre-programmed a peg-switching routine, eliminating the need for manual peg replacement. After switching, the robot arm automatically moves the peg to the workspace and randomizes its initial position for training.
In the Cable Routing task, manual resets are unnecessary, as the robot arm can auto-reset the cable by simply moving back to its initial position with added randomness.
In the Tabletop Golf task, we design an auto-collection mechanism to reset the task. As shown in Figure 10c, the tabletop golf device has two layers: the top golf court surface and a lower inclined floor. When the ball is hit into the hole or out of bounds, it rolls down to the corner of the lower layer, where a light sensor triggers a motor to return the ball to the court. The variability in the ball’s initial velocity during reset introduces randomness to its starting position.

Appendix D Time Efficiency of MENTOR
We run all simulation and real-world experiments on an Nvidia RTX 3090 GPU and assess the speed of the algorithms compared to baselines. Frames per second (FPS) is used as the evaluation metric for time efficiency.
For simulation, we use the Hopper Hop task to compare time efficiency, as shown in Table 4. While MENTOR demonstrates significant sample efficiency, its time efficiency is relatively lower. This is primarily because this work uses a plain MoE implementation, in which input feature vectors are passed to all experts and only the top-k outputs are weighted and combined to generate the final output. In most tasks, the active expert ratio (i.e., top-k / total number of experts) is at or below 25%. More efficient implementations of MoE could significantly improve time efficiency, which we leave for future exploration.
Table 4: Time efficiency (FPS) on the Hopper Hop task.

| Task Name | MENTOR | DrM | DrQ-v2 | ALIX | TACO |
| --- | --- | --- | --- | --- | --- |
| Hopper | 37 | 55 | 78 | 49 | 23 |
We also evaluate time efficiency on three real-world tasks, as shown in Table 5. In real-world applications, the primary bottlenecks in improving time efficiency are data collection efficiency and reset speed. Additionally, the sample efficiency of the RL algorithm plays a crucial role. If the algorithm has low sample efficiency, it may take many poor actions over a long training period, leading to frequent auto-resets and ultimately lowering the overall FPS.
As a result, MENTOR and DrM achieve similar levels of efficiency. However, due to its superior learning capability, MENTOR quickly acquires skills and transitions out of the initial frequent-reset phase faster than DrM, leading to slightly better overall time efficiency during training.
Table 5: Time efficiency (FPS) on the real-world tasks.

| Task Name | MENTOR | DrM |
| --- | --- | --- |
| Peg Insertion | 0.46 | 0.40 |
| Cable Routing | 0.67 | 0.62 |
| Tabletop Golf | 0.52 | 0.47 |
Appendix E MENTOR in Real-World Multi-Tasking Process
Figure 11 shows the utilization of experts in the Peg Insertion task for the various peg shapes. Each shape is handled by a set of specialized experts, which aids multi-task learning. This specialization helps mitigate gradient conflict by directing gradients from different tasks to specific experts, improving learning efficiency, as discussed in the main text.
