RLEMMO: Evolutionary Multimodal Optimization Assisted
By Deep Reinforcement Learning
Abstract.
Solving multimodal optimization problems (MMOPs) requires finding all optimal solutions, which is challenging within a limited budget of function evaluations. Although existing works strike a balance between exploration and exploitation through hand-crafted adaptive strategies, they require certain expert knowledge and are hence inflexible when dealing with MMOPs with different properties. In this paper, we propose RLEMMO, a Meta-Black-Box Optimization framework, which maintains a population of solutions and incorporates a reinforcement learning agent for flexibly adjusting individual-level searching strategies to match the up-to-date optimization status, hence boosting the search performance on MMOPs. Concretely, we encode landscape properties and evolution path information into each individual and then leverage attention networks to advance population information sharing. With a novel reward mechanism that encourages both quality and diversity, RLEMMO can be effectively trained using a policy gradient algorithm. The experimental results on the CEC2013 MMOP benchmark underscore the competitive optimization performance of RLEMMO against several strong baselines.
1. Introduction
Multimodal optimization problems (MMOPs) involve multiple optimal solutions and are frequently encountered in real-world tasks such as truss structure design (Deb and Gulati, 2001; Luh and Lin, 2011), job scheduling (Pérez et al., 2012), and electromagnetic machine design (Yoo et al., 2015). One of the challenging tasks in solving MMOPs is to locate as many optimal solutions as possible within a limited number of function evaluations, which has been extensively discussed in recent literature.
Evolutionary algorithms (EAs) have demonstrated robustness and effectiveness in incorporating meta-heuristics to efficiently locate global optima (Storn and Price, 1997; Shi, 2004). However, classic EAs face challenges when addressing MMOPs due to diversity loss caused by inherent genetic drift and selection pressure. To overcome these bottlenecks, a significant amount of research has focused on enhancing classic EAs to solve MMOPs. Three major branches of study are widely discussed in the literature: 1) Incorporating niching methods (Zhang et al., 2019; Qu et al., 2012) with classic EAs to reinforce selection flexibility (Thomsen, 2004) and maintain diversity (Goldberg et al., 1987; Li et al., 2002) within the population. 2) Exploiting historical search experiences to assist population reproduction through informative sub-space sampling (Huang et al., 2020), reproduction redundancy checking (Epitropakis et al., 2013), and optimum backtracking (Liao et al., 2023). 3) Transforming MMOPs into multi-objective optimization problems (Wang et al., 2014; Cheng et al., 2017) by introducing two or more conflicting objective functions to balance solution diversity and quality. While these enhancements enable classic EAs to adaptively coordinate exploitation and exploration by balancing quality and diversity during the optimization process, they often require expert knowledge to hand-craft these adaptive mechanisms. This can make the approach labor-intensive and inflexible when dealing with MMOPs with different properties.
Recently, works in Meta-Black-Box Optimization (MetaBBO) have successfully utilized neural networks to meta-learn configurations of traditional black-box optimizers, thereby reducing the reliance on deep expert knowledge in existing MMOP algorithms (Ma et al., 2023). However, most existing MetaBBO works are developed for other optimization scenarios, and only a few have been specifically designed for addressing MMOPs. One recent work in this domain is RLEA-SSC (Xia et al., 2021), which meta-learns an adaptive selection pressure mechanism to enhance population diversity as the evolution progresses. However, RLEA-SSC requires re-training for each unseen problem instance, significantly limiting its generalization ability. To develop a generalizable MetaBBO method tailored for MMOPs, in this paper we propose Evolutionary Multimodal Optimization Assisted by Deep Reinforcement Learning (RLEMMO), a MetaBBO framework that incorporates an RL agent at the meta level to flexibly control the search behaviors of the population in the lower-level optimization process. The RL agent is trained at the meta level to maximize both the quality and the diversity of the low-level optimization process, effectively addressing the challenges in MMOPs. Specifically, we construct a comprehensive state representation based on fitness landscape analysis (Pitzer and Affenzeller, 2012) to capture quality and diversity information at both the population and individual levels during the optimization process. We develop an attention-based network structure for efficient feature embedding and search behavior control. Additionally, a clustering-based reward scheme is proposed to effectively meta-train the RL agent and enhance optimization performance. We train RLEMMO using policy gradient methods on a group of MMOPs and then generalize the trained model to unseen problem instances. Experimental results demonstrate that RLEMMO achieves competitive optimization performance against several strong baselines tailored for MMOPs.
We now summarize the contributions in this paper:
• Introduction of RLEMMO, a pioneering MetaBBO framework that effectively solves MMOPs through a meta-learned policy, which enriches the quality and diversity of the lower-level optimization process.
• A comprehensive state representation based on fitness landscape analysis that helps RLEMMO generalize to unseen problems, along with a novel clustering-based reward scheme that assists the RL agent in learning effective policies.
• We demonstrate the effectiveness of RLEMMO on well-known MMOP benchmark problems. The results show that RLEMMO is competitive with several strong methods. Furthermore, we provide ablation studies that give an in-depth analysis of the core designs in RLEMMO.
2. Related Works
2.1. Evolutionary Multimodal Optimization
Several works have made significant contributions to solving MMOPs by enhancing evolutionary computation from various perspectives.
The first aspect of improvement involves integrating Evolutionary Computation (EC) with niching methods to enhance the diversity of the population during optimization. Thomsen (Thomsen, 2004) proposed Crowding DE, which combines differential evolution with the crowding method: each offspring replaces its nearest individual if it has better fitness. Qu et al. (Qu et al., 2012) introduced a locally informed particle swarm optimizer to enhance niching PSO by combining several local best solutions in the vicinity of each particle to guide the velocity update. Li (Li, 2009) proposed a ring-topology PSO that eliminates the need for an extra niching parameter by dividing the swarm with a ring topology. Zhang et al. (Zhang et al., 2023) proposed a method for choosing parents based on proximity ranking, where the probabilities of selecting neighbors as parents are determined by their distance from the individual. To adaptively adjust niching radii to align with the dynamic optimization process, Zhang et al. (Zhang et al., 2019) proposed VNCDE, which first constructs a Voronoi neighborhood for each individual and then adaptively assigns suitable searching strategies within each neighborhood. Luo et al. (Luo et al., 2020) proposed an adaptive nearest-better neighbor clustering method for characterizing basins of attraction, which not only significantly enhances diversity during the optimization process but also provides flexible convergence.
The second aspect of improvement leans on utilizing an archive to assist optimization with historical experience. In particular, the dADE method (Epitropakis et al., 2013) identifies redundant individuals for reinitialization using an archive that stores promising solutions. Huang et al. (Huang et al., 2020) proposed storing historical information in an enhanced binary space partitioning tree and dividing the space with the tree to aid optimization. Liao et al. (Liao et al., 2023) developed a history-archive-assisted niching differential evolution with variable neighborhood, using the information from the archive to control the size of the neighborhood and change it during the evolution process. Wang et al. (Wang et al., 2020) proposed a parameter-free niching method based on adaptive distribution estimation, where each individual estimates the distribution of its environment based on nearby neighbors and decides the size of its niche.
As for the third aspect of improvement, several recent works solve MMOPs by transforming them into multi-objective optimization problems (MOPs). These works commonly consider two objectives: the solution quality and the solution diversity of the population (Deb and Saha, 2012; Bandaru and Deb, 2013). Wang et al. (Wang et al., 2014) highlighted the significance of conflicts between the two objectives and proposed a novel method to efficiently construct conflicting objectives for the problem transformation. Cheng et al. (Cheng et al., 2017) transformed MMOPs into multi-objective problems, used multi-objective EAs to obtain an approximate fitness landscape, and then detected peaks based on the stored historical samples; a local search method was finally applied to refine the detected peaks toward the global optima.
Though promising results have been achieved by the above evolutionary multimodal optimization methods, they often require intricate manual tuning with deep expertise to secure desirable performance on unseen tasks. The labour-intensive design process of these human-crafted methods hinders both the efficiency and effectiveness of solving MMOPs.
2.2. Meta-Black-Box Optimization
Recent Meta-Black-Box Optimization (MetaBBO) approaches (Ma et al., 2023; Chen et al., 2024; Guo et al., 2024) facilitate a bi-level optimization scheme to meta-learn a parameterized control policy in a data-driven way, which configures the low-level black-box optimizer during the optimization process to mitigate the design efforts required for unseen tasks. The parameterized control policy at the meta level can be optimized in various ways, e.g., supervised learning (Chen et al., 2017; TV et al., 2019; Gomes et al., 2021), reinforcement learning (Sharma et al., 2019; Tan and Li, 2021; Sun et al., 2021), and self-referential searching (Gomes et al., 2021; Lange et al., 2023a, b). Taking MetaBBO with reinforcement learning (MetaBBO-RL) as an example, it models the fine-tuning of the optimizer as a Markov Decision Process (MDP) and trains an agent to make decisions regarding algorithmic configurations at the meta level. At each step $t$, the agent queries the optimization status $s_t$, and the policy $\pi$ takes $s_t$ as input to suggest an action $a_t$ for configuring the optimizer at the lower level. The environment then executes $a_t$ to obtain a new status $s_{t+1}$. A reward $r_t$ is generated to measure the meta-performance improvement of the optimizer at the lower level. Subsequently, the agent at the meta level can be trained following the rules of reinforcement learning. The action space of MetaBBO-RL can involve dynamically selecting evolutionary algorithms/operators (Guo et al., 2024; Sharma et al., 2019; Tan and Li, 2021), controlling hyper-parameters (Sun et al., 2021; Wu and Wang, 2022), or generating interpretable update rules (Chen et al., 2024). This paper shares a similar ambition with the MetaBBO-RL works that select algorithms/operators. Along this research branch, Sharma et al. (Sharma et al., 2019) trained a Deep Q-Network (DQN) (Mnih et al., 2015) to dynamically select the mutation strategy of the backbone DE optimizer for each individual based on the timely optimization status. Guo et al. (Guo et al., 2024), on the other hand, proposed leveraging Proximal Policy Optimization (PPO) (Schulman et al., 2017) to train an RL agent that selects a suitable optimizer for different optimization phases, enhancing the final optimization performance. To the best of our knowledge, in the domain of using MetaBBO-RL to solve MMOPs, Xia et al. (Xia et al., 2021) proposed using Q-Learning to select solutions within the approximated basin of attraction, which balances the quality and diversity of the solutions during the optimization. However, this method needs re-training for unseen problems and therefore lacks the ability to generalize.
3. Preliminary
3.1. K-nearest Neighbors (KNN)
KNN (Cover and Hart, 1967) identifies the class of a point $x$ based on its neighbors. First, the Euclidean distance between $x$ and the other points is calculated as a measure of similarity. The $k$ points with the shortest distances are considered the k-nearest neighbors of $x$, and the class of $x$ is determined based on these neighbors. In this paper, the KNN method is used to construct the k-nearest neighborhood of each individual. Features of these neighborhoods are then calculated to provide local information to each individual, as described in Section 4.2.
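For illustration, the following sketch (not the paper's implementation) builds such k-nearest neighborhoods with scikit-learn's NearestNeighbors; the population array and the value of k are placeholder choices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_neighborhoods(population: np.ndarray, k: int) -> np.ndarray:
    """Return an (NP, k) array of neighbor indices for each individual.

    population: (NP, D) array of candidate solutions.
    k: number of nearest neighbors (excluding the individual itself).
    """
    # k + 1 because each query point is always its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1, metric="euclidean").fit(population)
    _, indices = nn.kneighbors(population)
    return indices[:, 1:]  # drop the self-index in column 0

# Toy usage with a random 2-D population (illustrative values only).
pop = np.random.rand(20, 2)
neighbors = knn_neighborhoods(pop, k=5)
print(neighbors.shape)  # (20, 5)
```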

3.2. DBSCAN
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al., 1996) is an unsupervised clustering algorithm. It is a non-parametric method based on density. For a set of points, the points that are closely packed together are grouped as clusters, while the points that lie alone in low-density regions are regarded as noise points. In this paper, RLEMMO utilizes DBSCAN to cluster the population in each generation, estimating the number of clusters and the members within each cluster. A novel reward mechanism based on the number of clusters and the best fitness within each cluster is designed to encourage the diversity and quality of the population during the optimization, as described in detail in Section 4.4.
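As a minimal illustration of this clustering step, the sketch below uses scikit-learn's DBSCAN to estimate the number of clusters in a population; the eps and min_samples values are illustrative placeholders, not the settings used in RLEMMO.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_population(population: np.ndarray, eps: float, min_samples: int):
    """Cluster the population and return (labels, number_of_clusters).

    Points labeled -1 by DBSCAN are noise and are not counted as clusters.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(population)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    return labels, n_clusters

pop = np.random.rand(50, 2)                      # toy population
labels, m = cluster_population(pop, eps=0.1, min_samples=3)
print(m, "clusters found")
```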
3.3. Policy Gradients
Policy gradient methods are used to solve a Markov Decision Process (MDP) (Sutton and Barto, 1999). Given an MDP of the form $\langle S, A, P, R, \gamma \rangle$, at each step $t$ the agent observes a state $s_t \in S$ and selects an action $a_t \in A$. The state transitions to $s_{t+1}$ according to the dynamics of the environment $P$, and the agent receives a reward $r_t$ from the environment. The objective of RL is to find a policy $\pi_\theta$, parameterized by the model weights $\theta$, that maximizes the expected cumulative discounted reward, denoted as $J(\theta)$. Policy gradient methods maximize this expected value through gradient ascent. The gradient of $J(\theta)$ with respect to $\theta$, denoted as $\nabla_\theta J(\theta)$, can be calculated as follows:

(1) $\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\Big[\sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R_t\Big]$

where $R_t$ denotes the discounted return collected from step $t$ onward.
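For concreteness, here is a PyTorch sketch of the vanilla REINFORCE estimator of this gradient for a small discrete-action policy; the network size, action count, and rollout data are toy placeholders.

```python
import torch
import torch.nn as nn

# A toy categorical policy mapping a 4-dimensional state to 5 action logits.
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 5))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    """One REINFORCE step: ascend E[sum_t log pi(a_t|s_t) * R_t]."""
    logits = policy(states)                                        # (T, n_actions)
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)  # log pi(a_t|s_t)
    loss = -(chosen * returns).mean()            # negated for gradient *ascent*
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy rollout data (random placeholders).
T = 8
states = torch.randn(T, 4)
actions = torch.randint(0, 5, (T,))
returns = torch.randn(T)                         # discounted returns R_t
reinforce_update(states, actions, returns)
```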
3.4. Attention Mechanism
The attention mechanism (Vaswani et al., 2017) is calculated as follows:

(2) $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{\top}}{\sqrt{d_k}}\right) V$

where $Q$, $K$, and $V$ are vectors representing the query, key, and value, respectively. These vectors are computed by weighting the input sequence with $W^Q$, $W^K$, and $W^V$. The value $d_k$ is the dimension of the key vector and is used for normalization. In the case of the self-attention mechanism, $Q$, $K$, and $V$ are derived from the same source sequence.
The multi-head attention mechanism maps the vectors to different subspaces to enhance representation. Each attention head independently computes its own $Q$, $K$, and $V$. The outputs of all heads are then concatenated to produce the final result of the multi-head attention module:

(3) $\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\, W^O, \quad \mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$
In our work, we utilize an attention module to enhance information sharing among the individuals in RLEMMO.
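To make Eq. (2) and (3) concrete, below is a compact PyTorch sketch of scaled dot-product attention and a multi-head self-attention module; the dimensions are illustrative, and in practice torch.nn.MultiheadAttention offers equivalent functionality.

```python
import math
import torch
import torch.nn as nn

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)   # output projection W^O

    def forward(self, x):                        # x: (batch, seq, d_model)
        B, S, _ = x.shape
        split = lambda t: t.view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        Q, K, V = split(self.W_q(x)), split(self.W_k(x)), split(self.W_v(x))
        heads = scaled_dot_product_attention(Q, K, V)       # (B, heads, S, d_head)
        concat = heads.transpose(1, 2).reshape(B, S, -1)    # concatenate all heads
        return self.W_o(concat)

x = torch.randn(2, 10, 64)                       # toy input sequence
print(MultiHeadSelfAttention(64, 4)(x).shape)    # torch.Size([2, 10, 64])
```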
Table 1. The optimization state representation in RLEMMO: 22 features, numbered 1-22 and grouped by class (population-level, neighborhood-level, and individual-level), each with a defining formula and a short description. The per-feature exegesis is given in Appendix A.
4. Methodology
4.1. Overview
Our RLEMMO is a Meta-Black-Box Optimization framework with Reinforcement Learning (MetaBBO-RL), which follows a bi-level optimization approach. For a given MMOP, the bi-level optimization is carried out as follows. At the lower-level MDP, starting with an initial solution population $X^0$, an RL agent (a policy network) dynamically switches among diverse searching strategies for each individual in the population at each time step $t$ of the lower-level optimization process. The solution population is advanced from $X^t$ to $X^{t+1}$ using the strategies sampled from the RL agent, and a reward $r^t$ is emitted back to the RL agent. At the meta level, the RL agent is trained to maximize the expectation of the accumulated reward (the meta objective) over an MMOP distribution. The blueprint of RLEMMO is illustrated in Figure 1. The components of the low-level MDP, including the optimization state representation, the solution-level optimization strategies (action), and the reward design, are detailed in Sections 4.2, 4.3, and 4.4, respectively. A step-by-step derivation of RLEMMO's workflow is provided in Section 4.5.
4.2. State Representation
In RLEMMO, we incorporate Fitness Landscape Analysis (FLA) (Pitzer and Affenzeller, 2012) and Exploratory Landscape Analysis (ELA) (Mersmann et al., 2011) to profile the optimization status of the solution population during the optimization process, which are commonly adopted in recent MetaBBO approaches (Sharma et al., 2019; TV et al., 2019). Specifically, we utilize an individual-level feature $s^i$ to analyze the status of each individual, as well as an additional neighbor-aware population feature consisting of two parts: $s^p$ profiles the distributional features of the entire population, and $s^n$ describes the local information around each individual. The neighbor-aware population feature complements $s^i$ and helps inform the meta-level RL agent to coordinate exploration and exploitation in the lower-level optimization. Note that we also inject evolution-path information, such as stagnation and historical best solutions, into the optimization status to reflect the experience of the evolutionary history. The detailed formulas and descriptions of the state representation are presented in Table 1. We provide the per-feature exegesis in Appendix A.

4.3. Action
We provide five optional searching strategies that vary in the extent of exploration and exploitation to enrich the behavioral diversity of the low-level optimization process. Given the optimization states $s^p$, $s^n$, and $s^i$ of the individuals in the current solution population, we allow the RL agent to choose an appropriate strategy for each individual to match the up-to-date optimization status. We use $a^t$ to denote the action (selected strategies) for all individuals, and $A1$ to $A5$ to denote the five strategies, respectively. These strategies are briefly introduced as follows:
• A1: Inspired by VNCDE (Zhang et al., 2019), the first strategy is a Gaussian-based local search strategy, which perturbs the individual to search for a local optimum within a narrow range. Its formulation for the $i$-th individual is shown below:

(4) $v_i = x_i + \mathcal{N}(0, \sigma)$

where $\mathcal{N}(0, \sigma)$ represents a random vector with values sampled from a Gaussian distribution with a mean of 0 and a standard deviation of $\sigma$. The setting for $\sigma$ follows the original VNCDE paper. $A1$ can aid the convergence of the solution population when the solutions are approaching the global optima.

• A2: We propose a KNN-based neighborhood mutation strategy to enhance the exploitation of local optimization information. It is formulated as follows:

(5) $v_i = x_{nbest_i} + F \cdot (x_{r1} - x_{r2})$

where $x_{nbest_i}$ represents the best individual in the KNN-based neighborhood of $x_i$ (denoted as $\Omega_i$), while $x_{r1}$ and $x_{r2}$ are two distinct random individuals selected from $\Omega_i$. $F$ denotes the scaling factor.

• A3: Additionally, we propose a KNN-based neighborhood mutation strategy to enhance the exploration of local optimization information. It is formulated as follows:

(6) $v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})$

where $x_{r1}$, $x_{r2}$, and $x_{r3}$ represent distinct random individuals selected from $\Omega_i$. $A3$ serves as a complement to $A2$ to ensure comprehensive exploration within the local neighborhood area.

• A4: We propose a global exploratory mutation strategy that reinforces information sharing across different KNN-based neighborhoods:

(7) $v_i = x_{nbest_j} + F \cdot (x_{r1} - x_{r2})$

Using the best individual $x_{nbest_j}$ of a randomly chosen neighborhood $\Omega_j$ as the base vector, this strategy allows $x_i$ to jump out of its current area and chase another optimum. The inclusion of two randomly selected individuals, $x_{r1}$ and $x_{r2}$, makes the process highly exploratory.

• A5: We adopt the well-known "DE/rand/1" mutation strategy to thoroughly explore the entire search space:

(8) $v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})$

where $x_{r1}$, $x_{r2}$, and $x_{r3}$ represent randomly selected individuals from the population. We note that $A4$ and $A5$ facilitate an individual's ability to escape from both local optima and locally best neighbors.
The aforementioned searching strategies exhibit diverse behaviors when applied to the individuals in the solution population. In RLEMMO, the search behaviour of the population is controlled by the RL agent in a fine-grained way, i.e., at the individual level, which presents an opportunity for RLEMMO to enhance low-level optimization performance by meta-learning a flexible policy.
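A numpy sketch of the five strategies as mutation operators is given below; the maximization convention, the uniform handling of neighborhoods, and the values of F and sigma are simplifying assumptions for illustration rather than the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(X, fit, neigh, i, action, F=0.5, sigma=0.1):
    """Produce a mutant vector for individual i under strategy A1..A5.

    X: (NP, D) population, fit: (NP,) objective values (maximization),
    neigh: (NP, k) KNN neighborhood indices, action: int in {1,...,5}.
    F and sigma are illustrative values for the scaling factor / Gaussian std.
    """
    NP, D = X.shape
    omega = neigh[i]                                 # KNN neighborhood of x_i
    if action == 1:                                  # A1: Gaussian local search
        return X[i] + rng.normal(0.0, sigma, size=D)
    if action == 2:                                  # A2: neighborhood best + difference
        nbest = omega[np.argmax(fit[omega])]
        r1, r2 = rng.choice(omega, size=2, replace=False)
        return X[nbest] + F * (X[r1] - X[r2])
    if action == 3:                                  # A3: rand/1 inside the neighborhood
        r1, r2, r3 = rng.choice(omega, size=3, replace=False)
        return X[r1] + F * (X[r2] - X[r3])
    if action == 4:                                  # A4: best of a random neighborhood
        j = rng.integers(NP)
        nbest = neigh[j][np.argmax(fit[neigh[j]])]
        r1, r2 = rng.choice(NP, size=2, replace=False)
        return X[nbest] + F * (X[r1] - X[r2])
    # A5: classic DE/rand/1 over the whole population
    r1, r2, r3 = rng.choice(NP, size=3, replace=False)
    return X[r1] + F * (X[r2] - X[r3])

# Toy usage: random population, toy objective, KNN indices from pairwise distances.
X = rng.uniform(-5, 5, size=(10, 2))
fit = -np.sum(X ** 2, axis=1)
neigh = np.argsort(((X[:, None] - X[None]) ** 2).sum(-1), axis=1)[:, 1:6]
print(mutate(X, fit, neigh, i=0, action=2))
```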
4.4. Reward Design
In the case of MMOPs, providing feedback to the meta-level RL agent during the low-level optimization process poses a challenge. In RLEMMO, the reward feedback should not only inform the RL agent about the potential optimization performance gain achieved by adopting a specific searching strategy for the next optimization step (as in, e.g., DE-DDQN (Sharma et al., 2019) and LDE (Sun et al., 2021)), but also make the RL agent aware of the diversity within the current solution population, thereby facilitating the discovery of more global optima later on. To address this, we propose a novel reward scheme, denoted as $r^t$, which synthetically reflects the potential performance gain in both solution quality and diversity based on a clustering estimation. Specifically, when the solution population advances from time $t$ to $t+1$, we first cluster the solutions in $X^{t+1}$ using the DBSCAN (Ester et al., 1996) method. Let $C_1, \dots, C_m$ represent the resulting clusters. We then evaluate the solutions in each cluster and denote the objective value of the best-performing solution in the $j$-th cluster as $f_j^{*}$. The clustering-based reward at time step $t$ is computed from the number of clusters $m$ and the per-cluster best objective values $f_j^{*}$, so that discovering more clusters and improving the best solution within each cluster both increase the reward. The reward $r^t$ strikes a balance between the quality and diversity of solutions: if the meta-level RL agent in RLEMMO seeks higher rewards throughout the MDP episode, it should learn a flexible policy that locates all global optima as quickly and comprehensively as possible. We note that the proposed clustering-based reward scheme aids the efficient convergence of the meta-level RL agent while enhancing the population diversity within the low-level optimization process.
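Under the assumption that the reward aggregates the per-cluster best objective values (one plausible instantiation, not necessarily the paper's exact formula), the computation can be sketched as follows; eps and min_samples are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def clustering_reward(X, fit, eps=0.1, min_samples=3):
    """Reward that grows with both the number of clusters (diversity)
    and the best objective value inside each cluster (quality).

    X: (NP, D) population, fit: (NP,) objective values (maximization assumed).
    The aggregation below (sum of per-cluster bests) is one plausible choice.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    cluster_ids = [c for c in set(labels) if c != -1]   # ignore DBSCAN noise points
    if not cluster_ids:
        return 0.0
    best_per_cluster = [fit[labels == c].max() for c in cluster_ids]
    return float(np.sum(best_per_cluster))

X = np.random.rand(50, 2)
fit = np.sin(5 * X[:, 0]) + np.cos(5 * X[:, 1])         # toy multimodal objective
print(clustering_reward(X, fit))
```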
4.5. Workflow of RLEMMO
We now outline the step-by-step workflow of our RLEMMO approach. First, we introduce the network architecture used in RLEMMO, as illustrated in Figure 2. The network parameters $\theta$ consist of four parts, namely $\theta = \{\theta_1, \theta_2, \theta_3, \theta_4\}$, where: 1) $\theta_1$ and $\theta_2$ represent the MLP embedders for the population features and the individual features, respectively. 2) $\theta_3$ denotes the strategy decoder (the actor in RL) responsible for strategy sampling; it includes a multi-head self-attention block and an MLP layer. Note that we introduce the self-attention mechanism to enhance communication among individuals. 3) $\theta_4$ represents the MLP critic network used for estimating return values in RL. For the sake of clarity, we omit the time stamp $t$ in the following derivation.
At each time step of the low-level optimization process, we start by aggregating the population features ($s^{p}$, $s^{n}$) and the individual features $s^{i}$ of all individuals in the current solution population $X$. We then pass these two groups of features through separate embedders to obtain the corresponding feature embeddings:

(9) $e^{p} = \mathrm{MLP}_{\theta_1}\big([s^{p}, s^{n}]\big), \qquad e^{i} = \mathrm{MLP}_{\theta_2}\big(s^{i}\big)$

We then obtain the decision embeddings $e$ for all individuals by concatenating their respective population feature embeddings $e^{p}$ and individual feature embeddings $e^{i}$:

(10) $e = [e^{p}, e^{i}]$

We feed the decision embeddings $e$ into the actor network and sample the strategies $a$ for all individuals from the soft distributions over its outputs, as defined in Section 4.3:

(11) $a \sim \mathrm{Softmax}\big(\mathrm{Actor}_{\theta_3}(e)\big)$

The expectation of the return values $\hat{v}$ is obtained by feeding the decision embeddings $e$ into the critic network:

(12) $\hat{v} = \mathrm{Critic}_{\theta_4}(e)$
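A self-contained PyTorch sketch of this forward pass is shown below; the feature dimensions, embedding size, number of heads, and single attention layer are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RLEMMOPolicySketch(nn.Module):
    """Illustrative actor-critic: embeds population/individual features, shares
    information across individuals with self-attention, then outputs a categorical
    distribution over the 5 strategies and a value estimate per individual."""

    def __init__(self, d_pop_feat, d_ind_feat, d_embed=64, n_heads=4, n_actions=5):
        super().__init__()
        self.pop_embed = nn.Sequential(nn.Linear(d_pop_feat, d_embed), nn.ReLU())
        self.ind_embed = nn.Sequential(nn.Linear(d_ind_feat, d_embed), nn.ReLU())
        self.attn = nn.MultiheadAttention(2 * d_embed, n_heads, batch_first=True)
        self.actor = nn.Linear(2 * d_embed, n_actions)
        self.critic = nn.Linear(2 * d_embed, 1)

    def forward(self, s_pop, s_ind):
        # s_pop: (NP, d_pop_feat), s_ind: (NP, d_ind_feat) for one population.
        e = torch.cat([self.pop_embed(s_pop), self.ind_embed(s_ind)], dim=-1)
        h, _ = self.attn(e.unsqueeze(0), e.unsqueeze(0), e.unsqueeze(0))  # self-attention
        h = h.squeeze(0)
        dist = torch.distributions.Categorical(logits=self.actor(h))
        actions = dist.sample()                 # one strategy index per individual
        value = self.critic(h).squeeze(-1)      # per-individual value estimate
        return actions, dist.log_prob(actions), value

# Toy usage: 100 individuals, placeholder feature sizes.
policy = RLEMMOPolicySketch(d_pop_feat=10, d_ind_feat=12)
a, logp, v = policy(torch.randn(100, 10), torch.randn(100, 12))
print(a.shape, v.shape)   # torch.Size([100]) torch.Size([100])
```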
We present the pseudo code of the training workflow in our RLEMMO in Algorithm 1. In this work, we utilize the Proximal Policy Optimization (PPO) method (Schulman et al., 2017) to meta-train the networks. Once the training process is complete, the trained RLEMMO model can be employed to solve unseen problems. Note that during training, the strategy is sampled from the output distribution, while during inference, the strategy is selected greedily based on the maximum likelihood.
5. Experimental results
| Functions | RLEMMO PR | RLEMMO SR | VNCDE PR | VNCDE SR | PNF-PSO PR | PNF-PSO SR | EMO-MMO PR | EMO-MMO SR | NBNC-PSO-ES PR | NBNC-PSO-ES SR |
|---|---|---|---|---|---|---|---|---|---|---|
| Training | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| | 0.871 | 0.060 | 0.577 | 0 | 0.652 | 0 | 0.827 | 0.140 | 0.692 | 0 |
| | 0.124 | 0 | 0.059 | 0 | 0.153 | 0 | 0.074 | 0 | 0.165 | 0 |
| | 0.211 | 0 | 0.177 | 0 | 0.073 | 0 | 0.309 | 0 | 0.083 | 0 |
| | 1 | 1 | 1 | 1 | 0.903 | 0.280 | 1 | 1 | 0.987 | 0.840 |
| | 0.895 | 0.360 | 0.740 | 0 | 0.698 | 0.040 | 0.705 | 0 | 0.778 | 0.040 |
| | 0.670 | 0 | 0.667 | 0 | 0.643 | 0 | 0.800 | 0.100 | 0.753 | 0.020 |
| | 0.263 | 0 | 0.265 | 0 | 0.145 | 0 | 0.143 | 0 | 0.465 | 0 |
| | 0.341 | 0 | 0.015 | 0 | 0 | 0 | 0.093 | 0 | 0.335 | 0 |
| | 0.155 | 0 | 0 | 0 | 0 | 0 | 0.005 | 0 | 0.225 | 0 |
| Average | 0.628 | 0.368 | 0.542 | 0.333 | 0.522 | 0.277 | 0.580 | 0.353 | 0.624 | 0.325 |
| Rank | 1.667 | | 2.833 | | 3.500 | | 2.250 | | 1.917 | |
| Testing | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| | 0.703 | 0 | 0.644 | 0 | 0.324 | 0 | 0.943 | 0.200 | 0.420 | 0 |
| | 0.863 | 0.320 | 0.720 | 0 | 0.707 | 0 | 0.963 | 0.820 | 0.853 | 0.180 |
| | 0.667 | 0 | 0.667 | 0 | 0.623 | 0 | 0.640 | 0 | 0.683 | 0 |
| | 0.518 | 0 | 0.440 | 0 | 0.285 | 0 | 0.275 | 0 | 0.585 | 0 |
| | 0.640 | 0 | 0.667 | 0 | 0.190 | 0 | 0.377 | 0 | 0.653 | 0 |
| | 0.150 | 0 | 0.347 | 0 | 0.013 | 0 | 0.107 | 0 | 0.410 | 0 |
| Average | 0.693 | 0.290 | 0.686 | 0.250 | 0.518 | 0.250 | 0.663 | 0.378 | 0.701 | 0.273 |
| Rank | 2.000 | | 2.125 | | 3.875 | | 2.625 | | 1.750 | |
5.1. Experiment Setup
5.1.1. Dataset
We evaluate our RLEMMO on the well-known CEC 2013 MMOP benchmark (Li et al., 2013), which consists of 20 multimodal problems with various dimensions and challenging landscape properties, where the number of global optima varies from 1 to 216. For detailed definitions of the problems, please refer to Appendix B. We divide the problems into two groups: one for training and the other for testing. We carefully select problems with various dimensions for training, aiming to balance the distribution of problem dimensions across the two datasets, so as to ensure both the learning effectiveness and the generalization performance of our RLEMMO agent. The train-test split is detailed in Table 4, Appendix B.
5.1.2. Metrics
Peak Ratio (PR) and Success Rate (SR) are two metrics commonly used to evaluate the performance of an MMOP algorithm (Li et al., 2013). Given a specified accuracy level $\epsilon$, an optimum of an MMOP is considered found when a solution approaches that optimum within the given accuracy. PR is the percentage of found optima out of the total number of true optima, averaged over all runs. SR is calculated by defining a successful run as one in which all true optima have been found, and then taking the ratio of the number of successful runs to the total number of runs. The two metrics are computed as follows:

(13) $\mathrm{PR} = \dfrac{\sum_{i=1}^{NR} NPF_i}{NKP \times NR}, \qquad \mathrm{SR} = \dfrac{NSR}{NR}$

where $NR$ is the number of runs, $NPF_i$ is the number of satisfactory solutions found in the $i$-th run, $NKP$ is the number of known global optima, and $NSR$ is the number of successful runs.
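A small sketch of these two metrics as defined above; the per-run counts in the usage example are toy values.

```python
def peak_ratio_and_success_rate(found_per_run, n_known_optima):
    """PR = sum of found optima over all runs / (NKP * NR); SR = NSR / NR."""
    nr = len(found_per_run)
    pr = sum(found_per_run) / (n_known_optima * nr)
    sr = sum(1 for n in found_per_run if n == n_known_optima) / nr
    return pr, sr

# Toy example: 4 runs on a problem with 8 known global optima.
print(peak_ratio_and_success_rate([8, 6, 8, 5], n_known_optima=8))  # (0.84375, 0.5)
```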
5.1.3. Baselines
We include several strong MMOP algorithms as baselines and compare our RLEMMO with them on the CEC 2013 benchmark using the PR/SR metrics. Specifically, we adopt VNCDE (Zhang et al., 2019) and NBNC-PSO-ES (Luo et al., 2020) as baselines that incorporate adaptive niching strategies to solve MMOPs. We also include PNF-PSO (Huang et al., 2020) as a strong baseline that addresses MMOPs through efficient spatial sampling. Additionally, EMO-MMO (Cheng et al., 2017) serves as a baseline that solves MMOPs by treating the global optima as independent objectives. We note that although RLEA-SSC (Xia et al., 2021) is a MetaBBO framework that incorporates RL at the meta level to control the selection operator during reproduction, we do not include it as a baseline due to its limited generalization: RLEA-SSC requires re-training on unseen problems, while our RLEMMO, once trained, can be directly used to solve unseen problems without additional training.
5.1.4. Settings
For our RLEMMO, the values of and for all actions in the designed action set are set to and respectively. The population size in the experiments is set to . The value of for KNN is set to , while the values of and for DBSCAN are set to and respectively. We train RLEMMO for epochs, with a batch size of (i.e., initializing four populations for a sampled problem in a batch). The learning rate () starts at and linearly decays to . The RL settings follow the original PPO paper (Schulman et al., 2017). To ensure a fair and efficient evaluation, the population size () is set to , and the maximum function evaluations () is set to for all baselines, including our RLEMMO. The algorithmic configurations of the baselines follow their original papers. We evaluate each method for independent runs. The experiments were conducted on a computer with an Intel Gold 6254 CPU, an RTX 4090 GPU, and 64GB of RAM.



Algorithms | RLEMMO | VNCDE | PNF-PSO | EMO-MMO | NBNC-PSO-ES |
Running Time(s) | 8.57 | 16.46 | 4.13 | 0.59 | 17.70 |
5.2. Performance Analysis
We report the optimization performance (averaged over independent runs) and the runtime complexity of our RLEMMO and the considered baselines in Table 2 and Table 3. The results demonstrate that:
1. Our RLEMMO demonstrates its effectiveness in solving MMOPs. For the problems in the training set, RLEMMO achieves superior optimization performance compared to the baselines: on average, it locates more global optima (PR) and has more successful runs (SR). The optimization performance of RLEMMO on the high-dimensional training problems, which are challenging due to their large number of optima, demonstrates that the learned policy, which conditions on the up-to-date optimization status to flexibly switch the searching strategy, may be more effective in solving high-dimensional MMOPs than the human-crafted rules used in the baselines.
2. The results on the test set highlight the potential generalization ability of RLEMMO. Once trained, RLEMMO is used to solve the problems in the test set without further fine-tuning. Although our algorithm does not rank first in this comparison, it demonstrates an acceptable level of generalization performance. The problems in the test set have different problem structures and landscape features. Despite this challenging setting, RLEMMO maintains competitive performance by achieving a PR similar to that of the best-performing NBNC-PSO-ES algorithm (0.693 versus 0.701), while demonstrating a more robust searching performance in terms of SR (0.290 versus 0.273). Furthermore, the generalization ability of RLEMMO could be further improved by training on a problem distribution that includes more various MMOPs, which is one of our future works.
3. RLEMMO demonstrates a more robust optimization performance across different landscape properties. We observe that some of the baselines, e.g., VNCDE and PNF-PSO, encounter a significant performance drop when moving between problems that have different structures or that share the same structure but differ in dimension. In contrast, RLEMMO exhibits a relatively small performance drop in these situations. This may reveal that the human-crafted rules in some baselines fail to generalize across certain problem classes, whereas RLEMMO meta-learns to adaptively alter its searching pattern to fit unseen problems.
4. From the results in Table 3, it is evident that RLEMMO exhibits an acceptable running complexity compared to other algorithms.
5.3. Ablation Study
To validate the effectiveness of the core designs in our work, we conducted several ablation studies from different aspects, including the state representation, the action set, and the design of the reward. Figure 3 shows the average PR (blue bars) and SR (orange bars) on the test set under the considered accuracy level.
5.3.1. State Representation
To quantify the importance of the neighbor-aware population feature, we conducted ablation experiments by removing $s^p$, $s^n$, or both from the state and trained three models under these settings. In Figure 3(a), we present the performance of these three models as well as the original model. The results indicate that: 1) our proposed neighbor-aware population feature plays a key role in RLEMMO, enabling the meta-level RL agent to be fully informed about both the global and local landscape status; and 2) the local information $s^n$ may contribute relatively more than the global distributional feature $s^p$, as it biases the optimization status to provide fine-grained state representations.
5.3.2. Action
To quantify the importance of maintaining diverse searching strategies ($A1$ to $A5$) for the final optimization performance of RLEMMO, we conducted ablation experiments by removing the global strategies, the niching strategies, or both from the action set, and trained three RLEMMO models under these settings. Figure 3(b) presents the performance of these three models as well as the original model. The results demonstrate that: 1) an action set containing both global strategies and niching strategies contributes to locating more optima by promoting optimization while maintaining diversity; and 2) the niching strategies are relatively more important than the global strategies in solving MMOPs, underscoring the necessity of maintaining population diversity.
5.3.3. Reward
The reward mechanism in RLEMMO encourages two aspects of optimization: increasing the number of clusters and improving the objective values within those clusters. This motivates the policy to maintain population diversity while striving for optimal solutions. To assess the importance of these two aspects, we designed two new rewards that focus on each objective individually: the first reward, which encourages quality only, is calculated from the best objective value in the population at generation $t$; the second reward, which considers only diversity, is computed from the number of clusters formed by the population at generation $t$.
The results of the experiments with the two new rewards and the original reward are shown in Figure 3(c). The results indicate that: 1) a reward related to both diversity and quality is crucial for locating multiple optima, as it encourages the population to maintain diversity while optimizing for optimal solutions; and 2) the reward related to diversity has a relatively greater influence than the one related to quality, enhancing the ability to discover more optimal solutions within a single run.
6. Conclusion
In this paper, we propose RLEMMO, the first generalizable Meta-Black-Box Optimization framework for solving multimodal optimization problems with reinforcement learning. By incorporating a state representation based on fitness landscape analysis, which accurately informs the agent of the local and global optimization status, the RL agent can be efficiently meta-learned through a novel clustering-based reward scheme on a problem distribution. Once trained, RLEMMO can be directly used to solve unseen problems, achieving competitive optimization performance against several strong MMOP solvers in both quality and diversity. As a pioneering work, RLEMMO still leaves room for improvement; enriching the strategy diversity and designing a more fine-grained state representation are our immediate future works.
Acknowledgements.
This work was supported in part by the National Natural Science Foundation of China under Grant 62276100, in part by the Guangdong Natural Science Funds for Distinguished Young Scholars under Grant 2022B1515020049, in part by the Guangdong Regional Joint Funds for Basic and Applied Research under Grant 2021B1515120078, and in part by the TCL Young Scholars Program.

References
- Bandaru and Deb (2013) Sunith Bandaru and Kalyanmoy Deb. 2013. A parameterless-niching-assisted bi-objective approach to multimodal optimization. In 2013 IEEE congress on evolutionary computation.
- Chen et al. (2024) Jiacheng Chen, Zeyuan Ma, Hongshu Guo, Yining Ma, Jie Zhang, and Yue-Jiao Gong. 2024. SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning. In The Twelfth International Conference on Learning Representations.
- Chen et al. (2017) Yutian Chen, Matthew W Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P Lillicrap, Matt Botvinick, and Nando de Freitas. 2017. Learning to learn without gradient descent by gradient descent. In International Conference on Machine Learning.
- Cheng et al. (2017) Ran Cheng, Miqing Li, Ke Li, and Xin Yao. 2017. Evolutionary multiobjective optimization-based multimodal optimization: Fitness landscape approximation and peak detection. IEEE Transactions on Evolutionary Computation (2017).
- Cover and Hart (1967) Thomas Cover and Peter Hart. 1967. Nearest neighbor pattern classification. IEEE transactions on information theory (1967).
- Deb and Gulati (2001) Kalyanmoy Deb and Surendra Gulati. 2001. Design of truss-structures for minimum weight using genetic algorithms. Finite elements in analysis and design (2001).
- Deb and Saha (2012) Kalyanmoy Deb and Amit Saha. 2012. Multimodal optimization using a bi-objective evolutionary algorithm. Evolutionary computation (2012).
- Epitropakis et al. (2013) Michael G Epitropakis, Xiaodong Li, and Edmund K Burke. 2013. A dynamic archive niching differential evolution algorithm for multimodal optimization. In 2013 IEEE Congress on Evolutionary Computation.
- Ester et al. (1996) Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In kdd.
- Goldberg et al. (1987) David E Goldberg, Jon Richardson, et al. 1987. Genetic algorithms with sharing for multimodal function optimization. In Genetic algorithms and their applications: Proceedings of the Second International Conference on Genetic Algorithms.
- Gomes et al. (2021) Hugo Siqueira Gomes, Benjamin Léger, and Christian Gagné. 2021. Meta learning black-box population-based optimizers. arXiv preprint arXiv:2103.03526 (2021).
- Guo et al. (2024) Hongshu Guo, Yining Ma, Zeyuan Ma, Jiacheng Chen, Xinglin Zhang, Zhiguang Cao, Jun Zhang, and Yue-Jiao Gong. 2024. Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution. arXiv preprint arXiv:2403.02131 (2024).
- Huang et al. (2020) Ting Huang, Yue-Jiao Gong, Wei-Neng Chen, Hua Wang, and Jun Zhang. 2020. A probabilistic niching evolutionary computation framework based on binary space partitioning. IEEE transactions on cybernetics (2020).
- Lange et al. (2023a) Robert Lange, Tom Schaul, Yutian Chen, Chris Lu, Tom Zahavy, Valentin Dalibard, and Sebastian Flennerhag. 2023a. Discovering attention-based genetic algorithms via meta-black-box optimization. In Proceedings of the Genetic and Evolutionary Computation Conference.
- Lange et al. (2023b) Robert Lange, Tom Schaul, Yutian Chen, Tom Zahavy, Valentin Dalibard, Chris Lu, Satinder Singh, and Sebastian Flennerhag. 2023b. Discovering evolution strategies via meta-black-box optimization. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation.
- Li et al. (2002) Jian-Ping Li, Marton E Balazs, Geoffrey T Parks, and P John Clarkson. 2002. A species conserving genetic algorithm for multimodal function optimization. Evolutionary computation (2002).
- Li (2009) Xiaodong Li. 2009. Niching without niching parameters: particle swarm optimization using a ring topology. IEEE Transactions on Evolutionary Computation (2009).
- Li et al. (2013) Xiaodong Li, Andries Engelbrecht, and Michael G Epitropakis. 2013. Benchmark functions for CEC’2013 special session and competition on niching methods for multimodal function optimization. RMIT University, Evolutionary Computation and Machine Learning Group, Australia, Tech. Rep (2013).
- Liao et al. (2023) Zuowen Liao, Xianyan Mi, Qishuo Pang, and Yu Sun. 2023. History archive assisted niching differential evolution with variable neighborhood for multimodal optimization. Swarm and Evolutionary Computation (2023).
- Luh and Lin (2011) Guan-Chun Luh and Chun-Yi Lin. 2011. Optimal design of truss-structures using particle swarm optimization. Computers & structures (2011).
- Luo et al. (2020) Wenjian Luo, Yingying Qiao, Xin Lin, Peilan Xu, and Mike Preuss. 2020. Hybridizing niching, particle swarm optimization, and evolution strategy for multimodal optimization. IEEE Transactions on Cybernetics (2020).
- Ma et al. (2023) Zeyuan Ma, Hongshu Guo, Jiacheng Chen, Zhenrui Li, Guojun Peng, Yue-Jiao Gong, Yining Ma, and Zhiguang Cao. 2023. MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning. In Advances in Neural Information Processing Systems.
- Mersmann et al. (2011) Olaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and Günter Rudolph. 2011. Exploratory landscape analysis. In Proceedings of the 13th annual conference on Genetic and evolutionary computation.
- Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. nature (2015).
- Pérez et al. (2012) Elena Pérez, Marta Posada, and Francisco Herrera. 2012. Analysis of new niching genetic algorithms for finding multiple solutions in the job shop scheduling. Journal of Intelligent manufacturing (2012).
- Pitzer and Affenzeller (2012) Erik Pitzer and Michael Affenzeller. 2012. A comprehensive survey on fitness landscape analysis. Recent advances in intelligent engineering systems (2012).
- Qu et al. (2012) Bo-Yang Qu, Ponnuthurai Nagaratnam Suganthan, and Swagatam Das. 2012. A distance-based locally informed particle swarm model for multimodal optimization. IEEE Transactions on Evolutionary Computation (2012).
- Schulman et al. (2015) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. 2015. Trust region policy optimization. In International conference on machine learning.
- Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
- Sharma et al. (2019) Mudita Sharma, Alexandros Komninos, Manuel López-Ibáñez, and Dimitar Kazakov. 2019. Deep reinforcement learning based parameter control in differential evolution. In Proceedings of the Genetic and Evolutionary Computation Conference.
- Shi (2004) Yuhui Shi. 2004. Particle swarm optimization. IEEE connections (2004).
- Storn and Price (1997) Rainer Storn and Kenneth Price. 1997. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization (1997).
- Sun et al. (2021) Jianyong Sun, Xin Liu, Thomas Bäck, and Zongben Xu. 2021. Learning adaptive differential evolution algorithm from optimization experiences by policy gradient. IEEE Transactions on Evolutionary Computation (2021).
- Sutton and Barto (1999) Richard S Sutton and Andrew G Barto. 1999. Reinforcement learning: An introduction. Robotica (1999).
- Tan and Li (2021) Zhiping Tan and Kangshun Li. 2021. Differential evolution with mixed mutation strategy based on deep reinforcement learning. Applied Soft Computing (2021).
- Thomsen (2004) Rene Thomsen. 2004. Multimodal optimization using crowding-based differential evolution. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753).
- TV et al. (2019) Vishnu TV, Pankaj Malhotra, Jyoti Narwariya, Lovekesh Vig, and Gautam Shroff. 2019. Meta-learning for black-box optimization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems (2017).
- Wang et al. (2014) Yong Wang, Han-Xiong Li, Gary G Yen, and Wu Song. 2014. MOMMOP: Multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems. IEEE transactions on cybernetics (2014).
- Wang et al. (2020) Zi-Jia Wang, Yu-Ren Zhou, and Jun Zhang. 2020. Adaptive estimation distribution distributed differential evolution for multimodal optimization problems. IEEE transactions on cybernetics (2020).
- Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning (1992).
- Wu and Wang (2022) Di Wu and G Gary Wang. 2022. Employing reinforcement learning to enhance particle swarm optimization methods. Engineering Optimization (2022).
- Xia et al. (2021) Hai Xia, Changhe Li, Sanyou Zeng, Qingshan Tan, Junchen Wang, and Shengxiang Yang. 2021. A reinforcement-learning-based evolutionary algorithm using solution space clustering for multimodal optimization problems. In 2021 IEEE Congress on Evolutionary Computation (CEC).
- Yoo et al. (2015) Chung-Hee Yoo, Dong-Kuk Lim, and Hyun-Kyo Jung. 2015. A novel multimodal optimization algorithm for the design of electromagnetic machines. IEEE Transactions on Magnetics (2015).
- Zhang et al. (2023) Junna Zhang, Degang Chen, Qiang Yang, Yiqiao Wang, Dong Liu, Sang-Woon Jeon, and Jun Zhang. 2023. Proximity ranking-based multimodal differential evolution. Swarm and Evolutionary Computation (2023).
- Zhang et al. (2019) Yu-Hui Zhang, Yue-Jiao Gong, Ying Gao, Hua Wang, and Jun Zhang. 2019. Parameter-free voronoi neighborhood for evolutionary multimodal optimization. IEEE Transactions on Evolutionary Computation (2019).
Appendix A Per-feature Exegesis
The state representation of the optimization status in RLEMMO consists of a total of 22 features. It covers two aspects: the individual-level feature $s^i$, which describes the environment of each individual itself, and the neighbor-aware population feature, which describes the landscape of the entire population ($s^p$) and the neighborhood around each individual ($s^n$). Next, we explain the features in these three groups respectively.
• For $s^p$, we introduce five simple yet informative features to provide the RL agent with a global view of the population. The first describes the average distance between each pair of individuals in the population, and the second describes the standard deviation of the objective values; these two features represent the global distribution and convergence status. The third provides the remaining portion of generations in the optimization budget. The fourth is the stagnation of the global best solution, reflecting the potential for exploitation. The fifth is the average objective value of the population, representing its overall optimization level. Note that since $s^p$ is a global state, it is the same for all individuals in the population. From these features, the agent can learn knowledge for global optimization.
• The neighborhood feature $s^n$ of each individual is composed of information within its neighborhood $\Omega$, which is constructed by KNN (Cover and Hart, 1967), providing the agent with informative local information around each individual. One feature profiles the average distance between each pair of individuals within $\Omega$, and another is the standard deviation of the objective values within $\Omega$; these two features represent the local distribution status around each individual. A further feature is the stagnation of the best objective value within $\Omega$, reflecting the potential for local exploitation, while another describes the local optimization progress through the average objective value within $\Omega$. An additional feature profiles the performance of $\Omega$ among the whole population by ranking all neighborhoods based on the best fitness in each neighborhood. These features provide knowledge about the landscape for decisions about local search.
• The individual-level feature $s^i$ describes the information about each individual itself. Firstly, we utilize features related to the distance and objective-value difference from the best solutions to coordinate the progress in exploitation. To be specific, two features are calculated from the distance and the objective-value difference between the individual and the best solution in the current population, while two further features provide the same information with respect to the historical best solution; these four features guide the global exploitation of the individual. Two more features capture the relationship between the individual and the best solution in its neighborhood $\Omega$, guiding local exploitation. Secondly, to provide the agent with a perspective on exploration, we utilize pairwise information within the population: the average objective-value difference and the average distance between the individual and its neighbors within $\Omega$ describe its local environment, while their counterparts computed over the entire population reflect its global environment. These features provide the necessary knowledge for the agent to balance exploitation and exploration at both the local and global levels. Finally, the stagnation and the absolute performance of the individual are also included.
The aforementioned features provide comprehensive knowledge of each individual about the current landscape and evolution path, assisting the agent in selecting appropriate search strategies to match the optimization status.
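To make the feature groups above more concrete, the numpy sketch below computes a handful of representative features (average pairwise distance, objective spread, remaining budget, and per-individual gaps to the population best); the selection and normalization are illustrative and do not reproduce the full 22-feature set.

```python
import numpy as np

def example_state_features(X, fit, t, max_t):
    """Return a few population-level and individual-level features.

    X: (NP, D) population, fit: (NP,) objective values (maximization),
    t/max_t: current and maximum generation. Normalization choices are
    illustrative only.
    """
    dists = np.linalg.norm(X[:, None] - X[None], axis=-1)   # pairwise distances
    pop_features = np.array([
        dists.mean(),                 # average pairwise distance (distribution)
        fit.std(),                    # spread of objective values (convergence)
        1.0 - t / max_t,              # remaining portion of the budget
        fit.mean(),                   # average objective value
    ])
    best = np.argmax(fit)
    ind_features = np.stack([
        np.linalg.norm(X - X[best], axis=1),   # distance to current best
        fit[best] - fit,                       # objective gap to current best
        dists.mean(axis=1),                    # average distance to others
    ], axis=1)                                  # (NP, 3)
    return pop_features, ind_features

X = np.random.rand(20, 2)
fit = -np.sum((X - 0.5) ** 2, axis=1)           # toy objective
p, ind = example_state_features(X, fit, t=10, max_t=100)
print(p.shape, ind.shape)   # (4,) (20, 3)
```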
Appendix B Benchmark
| Problem Number | Function | r | Peak height | No. global optima |
|---|---|---|---|---|
| Training | (1D) | 0.01 | 200.0 | 2 |
| | (1D) | 0.01 | 1.0 | 1 |
| | (2D) | 0.01 | 200.0 | 4 |
| | (2D) | 0.5 | 186.731 | 18 |
| | (3D) | 0.5 | 2709.0935 | 81 |
| | (3D) | 0.2 | 1.0 | 216 |
| | (2D) | 0.01 | -2.0 | 12 |
| | (2D) | 0.01 | 0 | 8 |
| | (2D) | 0.01 | 0 | 6 |
| | (5D) | 0.01 | 0 | 8 |
| | (10D) | 0.01 | 0 | 8 |
| | (20D) | 0.01 | 0 | 8 |
| Testing | (1D) | 0.01 | 1.0 | 5 |
| | (2D) | 0.5 | 1.03163 | 2 |
| | (2D) | 0.2 | 1.0 | 36 |
| | (2D) | 0.01 | 0 | 6 |
| | (3D) | 0.01 | 0 | 6 |
| | (3D) | 0.01 | 0 | 8 |
| | (5D) | 0.01 | 0 | 6 |
| | (10D) | 0.01 | 0 | 6 |
The CEC2013 MMOP benchmark consists of 20 problems, each generated by a specified function and dimension size. Twelve functions are used to construct the problem set: the first eight are simple and well-known functions, while the remaining four are complex composition functions. Each composition function is formulated based on the following equation:
(15) $CF(\vec{x}) = \sum_{i=1}^{n} w_i \cdot \Big( \hat{f}_i\big( (\vec{x} - \vec{o}_i)/\lambda_i \cdot \mathbf{M}_i \big) + bias_i \Big) + f_{bias}$

where $n$ is the number of basic functions, $\hat{f}_i$ denotes the normalization of the $i$-th basic function $f_i$, and $w_i$ is the corresponding weight. $\vec{o}_i$, $\mathbf{M}_i$, and $\lambda_i$ are the new shifted optimum, the linear transformation matrix, and a parameter used to stretch or compress the landscape for each $f_i$, respectively. Additionally, $bias_i$ and $f_{bias}$ are two bias parameters. The pool of basic functions used to construct the composition functions is listed as follows:
(1) Sphere function:

(16) $f_1(\vec{x}) = \sum_{i=1}^{D} x_i^2$

(2) Griewank's function:

(17) $f_2(\vec{x}) = 1 + \dfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\!\left(\dfrac{x_i}{\sqrt{i}}\right)$

(3) Rastrigin's function:

(18) $f_3(\vec{x}) = \sum_{i=1}^{D} \big( x_i^2 - 10\cos(2\pi x_i) + 10 \big)$

(4) Weierstrass function:

(19) $f_4(\vec{x}) = \sum_{i=1}^{D} \left( \sum_{k=0}^{k_{max}} a^k \cos\big(2\pi b^k (x_i + 0.5)\big) \right) - D \sum_{k=0}^{k_{max}} a^k \cos(\pi b^k)$

where $a = 0.5$, $b = 3$, and $k_{max} = 20$.

(5) Expanded Griewank's plus Rosenbrock's function (EF8F2):

(20) $f_5(\vec{x}) = F8\big(F2(x_1, x_2)\big) + F8\big(F2(x_2, x_3)\big) + \dots + F8\big(F2(x_{D-1}, x_D)\big) + F8\big(F2(x_D, x_1)\big)$

(21) $F8(x) = \dfrac{x^2}{4000} - \cos(x) + 1$

(22) $F2(x, y) = 100\,(x^2 - y)^2 + (x - 1)^2$
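For reference, here is a numpy sketch of three of these basic functions in their standard (minimization) form; the exact shifts, rotations, and normalizations used in the benchmark follow (Li et al., 2013).

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

x = np.zeros(5)
print(sphere(x), griewank(x), rastrigin(x))   # all three evaluate to 0 at the origin
```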
Using these basic functions, several composition functions have been constructed for the benchmark; brief information is listed below, following the formulations of the simple functions in the benchmark. More details about the parameters and characteristics of each function can be found in (Li et al., 2013).
• Five-Uneven-Peak Trap
• Equal Maxima
• Uneven Decreasing Maxima
• Himmelblau
• Six-Hump Camel Back
• Shubert
• Vincent
• Modified Rastrigin - All Global Optima
• Composition Function 1 (CF1): constructed from Griewank's, Weierstrass, and Sphere functions; the transformation matrices are identity matrices.
• Composition Function 2 (CF2): constructed from Rastrigin's, Weierstrass, Griewank's, and Sphere functions; the transformation matrices are identity matrices.
• Composition Function 3 (CF3): constructed from the EF8F2, Weierstrass, and Griewank's functions; the transformation matrices are different linear transformation matrices with condition number one.
• Composition Function 4 (CF4): constructed from Rastrigin's, EF8F2, Weierstrass, and Griewank's functions; the transformation matrices are different linear transformation matrices with condition number one.
Given these functions, the problems in the benchmark can be constructed with an assigned dimension size. The parameters for each problem are shown in Table 4. Note that we divide the problems into two groups according to the training and testing split used in our experiments on RLEMMO.