
An Efficient Reconstructed Differential Evolution Variant by Some of the Current State-of-the-art Strategies for Solving Single Objective Bound Constrained Problems

1st Sichen Tao Faculty of Engineering
University of Toyama
Toyama, Japan
[email protected]
[email protected]
   2nd Ruihan Zhao School of Mechanical Engineering
Tongji University
Shanghai, China
[email protected]
   3rd Kaiyu Wang Faculty of Engineering
University of Toyama
Toyama, Japan
[email protected]
   4th Shangce Gao Faculty of Engineering
University of Toyama
Toyama, Japan
[email protected]
Abstract

Complex single-objective bounded problems are often difficult to solve. In evolutionary computation, the differential evolution algorithm has been widely studied and developed since its proposal in 1997, owing to its simplicity and efficiency. These developments include various adaptive strategies, operator improvements, and the introduction of other search methods. Since 2014, research building on LSHADE has also been pursued extensively. However, although recently proposed improvement strategies have shown superiority over their predecessors, simply stacking all new strategies together does not necessarily yield the strongest performance. We therefore recombine a number of effective advances from recent advanced differential evolution variants and identify an effective combination scheme that further promotes the performance of differential evolution. In this paper, we propose a strategy-recombined and reconstructed differential evolution algorithm, called reconstructed differential evolution (RDE), to solve single-objective bounded optimization problems. We tested RDE and several other advanced differential evolution variants on the benchmark suite of the 2024 IEEE Congress on Evolutionary Computation (CEC2024). The experimental results show that RDE achieves superior performance in solving complex optimization problems.

Index Terms:
Evolutionary Computation, Evolutionary Algorithm, Meta-heuristics, Differential Evolution, Optimization Problems

I Introduction

Optimization has become a fundamental tool in applied mathematics, engineering, economics, medical science and other scientific fields, with applications in energy systems, bioinformatics, finance, manufacturing and production, transportation and logistics, telecommunication networks, etc. [1, 2]. Real-world optimization problems usually involve one or more of the following: nonlinear complex constraints and multimodal, multidimensional, non-differentiable, or noisy search spaces. Traditional deterministic methods often perform poorly on complex optimization problems. In contrast, over the past 40 years of research, evolutionary computation methods have come to be regarded as an excellent alternative to traditional optimization techniques [3]. They avoid local optima as far as possible within reasonable computing time and find at least sufficiently good solutions [4, 5]. Additionally, evolutionary algorithms are simple and easy to use. They are usually based on simple biological or physical concepts and do not require differentiating the problem functions. This generality allows them to be applied to various types of optimization problems.

Several classic optimization methods based on evolutionary ideas have been proposed, such as the genetic algorithm [6], particle swarm optimization [7], and differential evolution (DE) [8]. Their various advanced variants have also been extensively developed, tested, and studied over the past 20 years. At the IEEE Congress on Evolutionary Computation (IEEE CEC) in recent decades, various types of optimization problems have been widely tested and contested. Since SHADE was proposed in 2013 [9], new variants of differential evolution have consistently ranked high in the annual competitions. These algorithms are not very complex to implement, yet they exhibit excellent performance on relatively complex optimization problems and continue to achieve performance improvements. A notable improvement path has emerged from this research. In 2014, LSHADE [10] raised the performance ceiling of DE variants on the CEC2014 benchmark suite by introducing linear population size reduction. In 2018, LSHADE-RSP [11] achieved second place in the competition by introducing a rank-based selective pressure strategy. In 2021, an improved LSHADE-RSP (iLSHADE-RSP) using Cauchy perturbation was shown to further raise the performance ceiling of this series of DEs [12]. Additionally, in 2019, an enhanced mutation strategy called the EB mutation strategy [13] was reported to effectively improve the performance of LSHADE and its predecessors as well.

In this study, building on recent work that has brought effective performance improvements to advanced DE variants, we construct and propose a new DE variant called Reconstructed Differential Evolution (RDE). We studied multiple parameter adaptation strategies and extended the RSP strategy. In addition, the EB mutation strategy was introduced to participate in the optimization process alongside the LSHADE-series mutation strategy, and adaptive control of the population resources of the two operators was achieved through an evaluation method based on fitness progress. We tested RDE on the benchmark suite of the 2024 IEEE Congress on Evolutionary Computation (CEC2024) and compared it with other advanced evolutionary algorithms. The experimental results show that RDE has excellent performance and development potential for solving single-objective bounded optimization problems.

II Reconstructed Differential Evolution

II-A The Basic DE of RDE

In this section, we describe one of the classic variants of DE and use it as the basis of our research in this paper. In the following sections, we develop our algorithm step by step on top of this basic DE.

Like other evolutionary algorithms, DE typically generates an initial population of N individuals according to the following formula.

x_{i,j}=rand(x_{L,j},x_{U,j}) (1)

where the function rand(a,b) returns a random number within the range [a,b]. x_{L,j} and x_{U,j} represent the lower and upper bounds of dimension j, respectively.
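
As a minimal sketch, the initialization of Eq. (1) can be written as follows; the NumPy-based function name and interface are ours, not part of the original algorithm:

```python
import numpy as np

def init_population(n, lower, upper, rng=None):
    """Eq. (1): uniform random initialization inside the box bounds."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # One row per individual, one column per problem dimension.
    return lower + rng.random((n, lower.size)) * (upper - lower)
```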

The core of differential evolution is the mutation operator. Since the original differential evolution (DE) was proposed in 1997 [14], various improved mutation operators have been introduced, including DE/rand/1, DE/rand/2, DE/best/1, DE/best/2, and DE/current-to-best/1. The most famous one is the DE/current-to-pbest/1 mutation strategy introduced by JADE in 2009 [15]. This strategy is also the most commonly used efficient strategy for LSHADE and its descendant variants. Therefore, we use it as the basis of our research in this paper. Its expression is as follows:

v_{i,j}^{(k+1)}=x_{i,j}^{(k)}+F\left(x_{p,j}^{(k)}-x_{i,j}^{(k)}\right)+F\left(x_{r1,j}^{(k)}-x_{r2,j}^{(k)}\right) (2)

where k is the iteration index and x_{i,j} is the value of the j-th dimension of the i-th individual. r1 and r2 are two distinct random indices used to select individuals from the population. The index p is randomly selected from the top 100p% of individuals in the population sorted by fitness, where p ∈ (0,1]. F is a scale factor used to control the magnitude of the difference vectors, with range [0,1].
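
A compact sketch of this mutation operator for a minimization problem is given below; the function name is ours, and for simplicity it ignores the external archive and the rank-based selection introduced later:

```python
import numpy as np

def mutate_current_to_pbest(pop, fitness, i, F, p, rng):
    """Eq. (2): v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2)."""
    n = len(pop)
    # pbest: a random index among the best ceil(p*n) individuals by fitness.
    top = np.argsort(fitness)[: max(1, int(np.ceil(p * n)))]
    pbest = int(rng.choice(top))
    # r1, r2: two distinct random indices, both different from i.
    pool = [j for j in range(n) if j != i]
    r1, r2 = rng.choice(pool, size=2, replace=False)
    return pop[i] + F * (pop[pbest] - pop[i]) + F * (pop[r1] - pop[r2])
```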

Another major part of LSHADE is crossover, which is controlled by a crossover probability Cr ranging from 0 to 1. For each dimension of every new trial solution generated after mutation, there is a probability of (1-Cr) that it will be replaced by the value of the same dimension from its parent. This component is inherited from earlier genetic algorithms in evolutionary computation [16]. The expression is as follows:

u_{i,j}^{(k)}=\begin{cases}v_{i,j}^{(k)}, & j=j_{rand}\;\text{or}\;rand<Cr\\ x_{i,j}^{(k)}, & \text{otherwise}\end{cases} (3)

where u_{i,j} is the new individual generated by executing the crossover strategy between v_{i,j} and x_{i,j}. j_{rand} is a random integer within the range [1,D] (where D is the dimension of the optimization problem); its purpose is to ensure that at least one dimension adopts the value generated by the mutation strategy.
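
The binomial crossover of Eq. (3) can be sketched as follows (function name and vectorized form are our own):

```python
import numpy as np

def binomial_crossover(x, v, cr, rng):
    """Eq. (3): take v_j with probability Cr; dimension j_rand always comes from v."""
    mask = rng.random(x.size) < cr
    mask[rng.integers(x.size)] = True  # j_rand guarantees at least one mutated gene
    return np.where(mask, v, x)
```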

Finally, there is the selection part. LSHADE compares the fitness values of offspring individuals with their corresponding parent individuals and keeps the one with better fitness value. The algorithm will perform this selection operation on each individual in the population separately. Its expression is as follows.

x_{i}^{(k+1)}=\begin{cases}u_{i}^{(k)}, & f(u_{i}^{(k)})\leq f(x_{i}^{(k)})\\ x_{i}^{(k)}, & \text{otherwise}\end{cases} (4)

where f(u_{i}^{(k)}) and f(x_{i}^{(k)}) denote the fitness values of the individuals u_{i}^{(k)} and x_{i}^{(k)}, respectively.
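
The greedy selection of Eq. (4) reduces to a one-line comparison (minimization assumed; the helper name is ours):

```python
def greedy_select(x, u, fx, fu):
    """Eq. (4): keep the trial u when it is no worse than its parent (minimization)."""
    return (u, fu) if fu <= fx else (x, fx)
```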

II-B The Reconstructed Differential Evolution

In this section, we combine the basic DE with different strategies from various advanced DE variants proposed in recent years, and develop the reconstructed DE (RDE).

II-B1 External Archive

In the mutation strategy of the basic DE framework we adopt, the random index r2 is used to select individuals from a joint population consisting of the current population and an external archive. This approach further balances the algorithm's exploitation and exploration and enhances its stability. The external archive preserves parent individuals from history that have already been replaced in the current population. The use of an external archive has been adopted by a series of advanced DE algorithms starting with SHADE. The size of this archive is defined as Ar·N, where N represents the number of individuals in the population.
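
The archive bookkeeping can be sketched as below. The text only fixes the archive size Ar·N, so the random-eviction policy when the archive overflows is an assumption borrowed from common JADE/SHADE practice:

```python
import random

def update_archive(archive, replaced_parent, max_size, rng=None):
    """Append a replaced parent; evict random entries once the Ar*N limit is exceeded."""
    rng = rng or random.Random()
    archive.append(replaced_parent)
    while len(archive) > max_size:
        archive.pop(rng.randrange(len(archive)))  # random eviction, as in SHADE
    return archive
```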

II-B2 Introducing the DE/current-to-order-pbest/1 Mutation Strategy

In a study conducted in 2019, a new mutation strategy called DE/current-to-order-pbest/1 was proposed [13]. It sorts the three individuals occupying the first, third, and fourth positions of the two difference groups in DE/current-to-pbest/1 according to their fitness values, arranges them in order as the better, median, and worse solutions, and generates the trial solution accordingly. The expression is as follows:

v_{i,j}^{(k+1)}=x_{i,j}^{(k)}+F\left(x_{ord\_pbest,j}^{(k)}-x_{i,j}^{(k)}\right)+F\left(x_{ord\_median,j}^{(k)}-x_{ord\_worst,j}^{(k)}\right) (5)

where x_{ord\_pbest,j}^{(k)}, x_{ord\_median,j}^{(k)} and x_{ord\_worst,j}^{(k)} represent the individuals at the corresponding positions after sorting, i.e., the ones with the better, median, and worse fitness values among the three.
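
A sketch of Eq. (5), assuming the indices pbest, r1 and r2 have already been drawn as in DE/current-to-pbest/1 (the function name is ours):

```python
import numpy as np

def mutate_current_to_order_pbest(pop, fitness, i, pbest, r1, r2, F):
    """Eq. (5): sort {pbest, r1, r2} by fitness into best/median/worst, then combine."""
    best, median, worst = sorted((pbest, r1, r2), key=lambda j: fitness[j])
    return pop[i] + F * (pop[best] - pop[i]) + F * (pop[median] - pop[worst])
```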

II-B3 Hybridization of Two Mutation Strategies

We use both effective mutation strategies mentioned above to update individuals in the population. Here, we propose a simple adaptive strategy to dynamically control the allocation ratio of population resources between the two strategies. We use γ_1 ∈ [0,1] to represent the resource allocation ratio of the DE/current-to-order-pbest/1 strategy, and the resource allocation ratio for DE/current-to-pbest/1 is γ_2 = (1-γ_1). We use ω_{m1} and ω_{m2} to represent the average fitness improvement caused by the two mutation strategies, respectively, and determine the allocation ratio of the next generation based on their proportion. The expression is as follows:

\omega_{m1}^{(k)}=\frac{\sum_{i=1}^{N_{1}}(f_{m1,i}^{(k-1)}-f_{m1,i}^{(k)})}{N_{1}} (6)
\omega_{m2}^{(k)}=\frac{\sum_{i=1}^{N_{2}}(f_{m2,i}^{(k-1)}-f_{m2,i}^{(k)})}{N_{2}} (7)
\gamma_{1}^{(k+1)}=\begin{cases}0.5, & \omega_{m1}^{(k)}=\omega_{m2}^{(k)}=0\;\text{or}\;k=1\\ \frac{\omega_{m1}^{(k)}}{\omega_{m1}^{(k)}+\omega_{m2}^{(k)}}, & \text{otherwise}\end{cases} (8)

where N_1 and N_2 denote the numbers of individuals assigned to the two strategies. When the average fitness improvement caused by both strategies is zero, we set γ_1 = γ_2 = 0.5. At population initialization, γ_1^{(1)} = γ_2^{(1)} = 0.5.
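
The ratio update of Eq. (8) can be sketched as follows, using the average fitness improvements of both operators (the function name is ours):

```python
def next_gamma1(w1, w2, k):
    """Eq. (8): resource share of DE/current-to-order-pbest/1 for the next generation."""
    if (w1 == 0.0 and w2 == 0.0) or k == 1:
        return 0.5  # no progress from either operator, or first generation: split evenly
    return w1 / (w1 + w2)
```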

II-B4 Extended Rank-based Selective Pressure Strategy

In LSHADE-RSP[11], a fitness-based rank pressure method[17] is proposed to correct the probability of different individuals being selected in DE, with rank indicators assigned as follows:

Rank_{i}=k_{r}\cdot(N-i)+1 (9)

where i indicates the original rank calculated from the fitness values of the individuals in the population, and Rank_i indicates the corrected rank value of the individual ranked i-th. In this equation, the scaling factor k_r controls the greediness of the rank selection.

Then, the probability of selecting corresponding individuals is no longer equal and is modified by the following formula:

pr_{i}=Rank_{i}/(Rank_{1}+Rank_{2}+\dots+Rank_{N}) (10)

where pr_i represents the probability that the individual ranked i is selected in each random selection of the mutation strategy.

In LSHADE-RSP, the strategy only biases the selection probability of the two random indices in the difference terms, namely r_1 and r_2 in Eq. (2). In the new variant proposed in this paper, we extend it to all three random selections in the two difference terms, i.e., p, r_1, and r_2 in Eq. (2). Correspondingly, in the DE/current-to-order-pbest/1 mutation strategy, the pre-sorting selection is also based on this extended rank-based selective pressure strategy. Note that in LSHADE-RSP the selection from the historical archive did not apply RSP; here we also correct the selection probabilities of the historical archive using the same method.
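
Eqs. (9) and (10) can be sketched as one function that maps fitness values to selection probabilities (minimization assumed; the function name is ours):

```python
import numpy as np

def rsp_probabilities(fitness, kr=3.0):
    """Eqs. (9)-(10): rank-based selection probabilities; the best individual
    gets the largest corrected rank and therefore the highest probability."""
    n = len(fitness)
    order = np.argsort(fitness)                         # order[0] = best individual
    ranks = np.empty(n)
    ranks[order] = kr * (n - np.arange(1, n + 1)) + 1   # Rank_i = kr*(N - i) + 1
    return ranks / ranks.sum()
```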

II-B5 Success Historical Memory based Parameter Adaptive Strategy

In order to adaptively adjust the parameters F and Cr, the success-history-based parameter adaptation strategy of jSO [18] is used in RDE. First, two memories with H entries each are used to store reference values for F and Cr (H_F = H_Cr = H), and they are updated in real time. The values stored at position h, denoted μ_{F,h} and μ_{Cr,h} (h ∈ [1,H]), are used to determine the values of F_i and Cr_i for each individual in the population in each generation, using the following formulae:

F_{i}^{(k)}=CauchyRand(\mu_{F,h},0.1) (11)
Cr_{i}^{(k)}=NormalRand(\mu_{Cr,h},0.1) (12)
h=mod(\frac{k}{H}) (13)

Among them, CauchyRand(a,b) returns a random number drawn from a Cauchy distribution with median a and scale factor b. NormalRand(a,b) returns a random number drawn from a normal distribution with mean a and variance b. mod(\frac{a}{b}) returns the remainder obtained when dividing a by b.
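
A sketch of the sampling step, following the jSO-family convention (F from a Cauchy around μ_F, Cr from a normal around μ_Cr). The resampling of non-positive F, the truncation of F to 1, and the clipping of Cr to [0,1] are assumptions borrowed from the SHADE literature, since the text above only specifies the distributions:

```python
import numpy as np

def sample_f_cr(mu_f, mu_cr, rng):
    """Eqs. (11)-(12): draw per-individual F and Cr from the selected memory slot."""
    f = 0.0
    while f <= 0.0:                       # non-positive F values are resampled
        f = mu_f + 0.1 * rng.standard_cauchy()
    f = min(f, 1.0)                       # values above 1 are truncated to 1
    cr = float(np.clip(rng.normal(mu_cr, 0.1), 0.0, 1.0))
    return f, cr
```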

Then, the used archive slot is immediately updated, expressed as follows:

\mu_{F,h}^{(k)}=\frac{\sum_{n=1}^{|S_{F}^{(k)}|}\omega_{n}^{(k)}{s_{F,n}^{(k)}}^{2}}{\sum_{n=1}^{|S_{F}^{(k)}|}\omega_{n}^{(k)}s_{F,n}^{(k)}} (14)
\mu_{Cr,h}^{(k)}=\frac{\sum_{n=1}^{|S_{Cr}^{(k)}|}\omega_{n}^{(k)}{s_{Cr,n}^{(k)}}^{2}}{\sum_{n=1}^{|S_{Cr}^{(k)}|}\omega_{n}^{(k)}s_{Cr,n}^{(k)}} (15)

where s_{F,n}^{(k)} and s_{Cr,n}^{(k)} represent the n-th values of F and Cr among the successful trials in the k-th iteration. |S_{F}^{(k)}| and |S_{Cr}^{(k)}| represent the lengths of the success-history vectors S_{F}^{(k)} and S_{Cr}^{(k)} in the k-th iteration. ω_{n}^{(k)} is calculated as follows:

\omega_{n}^{(k)}=\frac{f_{n}^{(k-1)}-f_{n}^{(k)}}{\sum_{g=1}^{|S^{(k)}|}(f_{g}^{(k-1)}-f_{g}^{(k)})} (16)

where ω_n represents the fitness-improvement weight of the n-th successful trial, used in Eq. (14) and Eq. (15).

In addition, a strategy used in iLSHADE [19], jSO [18] and LSHADE-RSP [11] is introduced: the last entries of the F and Cr memories are fixed during the optimization process, with their values set to 0.9, as follows:

\mu_{F,H}=\mu_{Cr,H}=0.9 (17)

In addition, some related constraints are adopted: in the early stages, excessively large F values and excessively small Cr values are not allowed. The formulas are expressed as follows:

F_{i}^{(k)}=0.7,\;\;\text{if}\;nfes<0.6\cdot max\_nfes\;\text{and}\;F_{i}^{(k)}>0.7 (18)
Cr_{i}^{(k)}=\begin{cases}0.7, & \text{if}\;nfes<0.25\cdot max\_nfes\;\text{and}\;Cr_{i}^{(k)}<0.7\\ 0.6, & \text{else if}\;nfes<0.5\cdot max\_nfes\;\text{and}\;Cr_{i}^{(k)}<0.6\end{cases} (19)

where nfes and max\_nfes represent the current and the maximum number of fitness evaluations, respectively.

II-B6 Linear p Value Reduction

After each iteration k, the value of p for the "pbest" selection strategy in Eq. (2) and Eq. (5) is linearly reduced. The formula is expressed as follows:

p^{(k)}=p_{max}\cdot(1-0.5\cdot\frac{nfes}{max\_nfes}) (20)

where p_{max} represents the initial p value, and p^{(k)} denotes the p value of the "pbest" selection strategy at the k-th iteration.
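
Eq. (20) is a one-liner (the function name is ours): p shrinks linearly from p_max at the start of the run to p_max/2 when the evaluation budget is exhausted.

```python
def p_value(p_max, nfes, max_nfes):
    """Eq. (20): p decays linearly from p_max to p_max/2 over the whole run."""
    return p_max * (1.0 - 0.5 * nfes / max_nfes)
```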

II-B7 Linear Population Size Reduction

We introduce the linear population size reduction strategy from LSHADE:

N^{(k+1)}=round\left(\frac{N_{min}-N_{max}}{max\_nfes}\cdot nfes+N_{max}\right) (21)

where round() returns the rounded integer. N_{max} and N_{min} denote the initial and final population sizes, respectively. The value of N_{min} is determined by the minimum number of individuals required by the differential mutation operator.
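
Eq. (21) can be sketched directly (the function name is ours); with the paper's settings N_max = 18·30 = 540 and N_min = 4 for D = 30:

```python
def next_population_size(n_min, n_max, nfes, max_nfes):
    """Eq. (21): population size decays linearly from N_max down to N_min."""
    return round((n_min - n_max) / max_nfes * nfes + n_max)
```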

II-B8 Cauchy Perturbation

A Cauchy perturbation strategy, proposed in iLSHADE-RSP in 2021, replaces genes that would otherwise be inherited from the parent during crossover with a probability p_r. We introduce this strategy into RDE to appropriately enhance population diversity during the optimization process. The strategy is expressed as follows:

u_{i,j}^{(k)}=\begin{cases}v_{i,j}^{(k)}, & j=j_{rand}\;\text{or}\;rand<Cr_{i}\\ CauchyRand(x_{i,j}^{(k)},0.1), & \text{else if}\;rand<p_{r}\\ x_{i,j}^{(k)}, & \text{otherwise}\end{cases} (22)

where rand denotes a random value from 0 to 1, and p_r is the parameter that controls the Cauchy perturbation strategy.
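
The perturbed crossover of Eq. (22) can be sketched as follows (function name ours; each branch mirrors one case of the equation):

```python
import numpy as np

def crossover_with_cauchy(x, v, cr, pr, rng):
    """Eq. (22): binomial crossover where a gene kept from the parent may instead
    be replaced by a Cauchy perturbation around the parent value, with prob. pr."""
    u = x.astype(float).copy()
    j_rand = rng.integers(x.size)
    for j in range(x.size):
        if j == j_rand or rng.random() < cr:
            u[j] = v[j]                                # inherit the mutant gene
        elif rng.random() < pr:
            u[j] = x[j] + 0.1 * rng.standard_cauchy()  # CauchyRand(x_ij, 0.1)
    return u
```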

Input: Parameters max_nfes, D, N_max = 18D, H = 5, p_max = 0.25, Ar = 1, μ_F = 0.3, μ_Cr = 0.8, γ_1 = γ_2 = 0.5, k_r = 3 and p_r = 0.2
Output: The obtained best solution
1  Initialization: randomly generate a population {x_1, x_2, ..., x_N} by Eq. (1); set nfes = 0
2  while nfes < max_nfes do
3      k = k + 1
4      for i = 1 to N do
5          Calculate the p value by Eq. (20)
6          Calculate F_i^{(k)} and Cr_i^{(k)} by Eqs. (11)-(13), (18) and (19)
7          Calculate the rank-based selective pressure random indices by Eqs. (9) and (10)
8          Generate the mutant vector v_i by Eq. (2) or Eq. (5), according to γ_1 and γ_2
9          Generate the candidate solution u_i by Eq. (22)
10         Constrain u_i to the search bounds
11         Evaluate and record the fitness value of u_i
12         nfes = nfes + 1
13         Select the offspring individual x_i by Eq. (4)
14     end for
15     Update the external archive
16     Update the memories μ_F and μ_Cr by Eqs. (14)-(17)
17     Update the sub-population ratios γ_1^{(k+1)} and γ_2^{(k+1)} by Eqs. (6)-(8)
18     Update the population size N^{(k+1)} by Eq. (21)
19 end while
Algorithm 1 RDE

II-B9 The pseudo-code of RDE

The pseudo-code of RDE is given in Algorithm 1.

III Experimental Results

III-A Benchmark Functions

The performance of the proposed algorithm RDE is tested on the CEC 2024 Competition for Single-Objective Real Parameter Numerical Optimization [20]. The benchmark suite, CEC2024, comprises 29 test functions (with D = 30) with diverse features, including unimodal functions (f_{1}-f_{2}), multimodal functions (f_{3}-f_{9}), hybrid functions (f_{10}-f_{19}), and composition functions (f_{20}-f_{29}). The variables in hybrid functions are randomly divided into subcomponents, and a different basic function is used for each subcomponent. The composition functions utilize basic benchmark functions to create problems with a randomly located global optimum and several randomly located deep local optima, making them more challenging to solve.

III-B Parameter Settings

In this study, the algorithm parameter settings are set as follows:

  1) maximum and minimum population sizes: N_max = 18·D and N_min = 4;
  2) success-history memory size: H = 5;
  3) rank greediness factor: k_r = 3;
  4) external archive rate: Ar = 1;
  5) differential mutation p value: p_max = 0.25;
  6) initialized sub-population resource ratios: γ_1 = γ_2 = 0.5;
  7) initialized scale factor and crossover rate in the memory archive: μ_F = 0.3 and μ_Cr = 0.8;
  8) Cauchy perturbation rate: p_r = 0.2.

TABLE I: Experimental comparison results between RDE and other competitors on the 29 benchmark functions in CEC2024.
Problem RDE LSHADE-RSP iLSHADE-RSP HSES EBOwithCMAR LSHADE
Mean SD Mean SD W Mean SD W Mean SD W Mean SD W Mean SD W
1 0.00E+00 0.00E+00 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 =
2 0.00E+00 0.00E+00 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 =
3 2.16E+01 1.53E+00 5.86E+01 0.00E+00 + 2.23E+01 8.92E-01 + 2.66E+00 1.90E+00 - 5.65E+01 1.11E+01 + 5.86E+01 3.41E-14 +
4 7.65E+00 1.85E+00 8.43E+00 3.48E+00 = 6.89E+00 1.62E+00 - 8.84E+00 3.76E+00 = 2.78E+00 1.74E+00 - 6.70E+00 1.40E+00 -
5 0.00E+00 0.00E+00 0.00E+00 0.00E+00 = 1.34E-08 4.11E-08 + 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 2.68E-09 1.92E-08 =
6 3.86E+01 1.73E+00 4.12E+01 3.83E+00 + 3.89E+01 1.69E+00 = 3.91E+01 3.20E+00 = 3.35E+01 8.37E-01 - 3.73E+01 1.37E+00 -
7 8.37E+00 1.92E+00 9.29E+00 4.36E+00 = 7.74E+00 1.77E+00 = 7.65E+00 2.74E+00 - 2.02E+00 1.32E+00 - 6.94E+00 1.75E+00 -
8 0.00E+00 0.00E+00 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 = 0.00E+00 0.00E+00 =
9 1.43E+03 2.43E+02 2.87E+03 8.64E+02 + 1.68E+03 2.70E+02 + 9.63E+02 3.37E+02 - 1.41E+03 2.15E+02 = 1.41E+03 2.35E+02 =
10 3.93E+00 3.38E+00 3.13E+00 8.50E+00 - 3.30E+00 5.56E+00 = 1.79E+01 2.59E+01 + 4.49E+00 8.77E+00 = 3.20E+01 2.89E+01 +
11 2.50E+02 1.66E+02 9.66E+01 7.48E+01 - 1.19E+02 8.37E+01 - 3.84E+01 1.05E+02 - 4.63E+02 2.63E+02 + 1.13E+03 3.17E+02 +
12 1.44E+01 5.80E+00 1.71E+01 3.61E+00 + 1.81E+01 4.54E+00 + 2.77E+01 1.29E+01 + 1.49E+01 6.25E+00 = 1.51E+01 5.46E+00 =
13 2.02E+01 5.98E+00 2.15E+01 1.15E+00 = 2.16E+01 1.14E+00 = 1.35E+01 9.87E+00 - 2.19E+01 3.84E+00 + 2.07E+01 4.89E+00 =
14 1.46E+00 8.86E-01 8.25E-01 5.36E-01 - 1.05E+00 6.39E-01 = 4.72E+00 4.74E+00 + 3.69E+00 2.15E+00 + 3.05E+00 1.39E+00 +
15 2.72E+01 4.49E+01 1.44E+01 5.46E+00 = 1.65E+01 5.25E+00 + 2.51E+02 2.16E+02 + 4.26E+01 5.69E+01 + 4.87E+01 4.35E+01 +
16 2.94E+01 1.10E+01 3.80E+01 9.81E+00 + 3.55E+01 5.04E+00 + 2.46E+01 3.53E+01 - 2.98E+01 7.50E+00 = 3.32E+01 5.36E+00 =
17 2.03E+01 2.86E+00 2.08E+01 2.32E-01 + 2.00E+01 3.88E+00 + 1.96E+01 4.96E+00 + 2.21E+01 1.09E+00 + 2.19E+01 1.15E+00 +
18 3.35E+00 9.36E-01 3.21E+00 7.47E-01 = 3.34E+00 7.47E-01 = 4.07E+00 2.00E+00 = 8.04E+00 2.28E+00 + 4.92E+00 1.61E+00 +
19 2.57E+01 6.66E+00 2.65E+01 7.16E+00 = 3.22E+01 5.81E+00 + 1.44E+02 3.17E+01 + 3.57E+01 7.50E+00 + 3.20E+01 5.97E+00 +
20 2.08E+02 2.05E+00 2.09E+02 4.07E+00 + 2.08E+02 1.97E+00 = 2.08E+02 3.74E+00 = 1.99E+02 2.02E+01 - 2.07E+02 1.30E+00 =
21 1.00E+02 0.00E+00 1.00E+02 0.00E+00 = 1.00E+02 0.00E+00 = 1.00E+02 0.00E+00 = 1.00E+02 0.00E+00 = 1.00E+02 1.00E-13 +
22 3.46E+02 3.38E+00 3.52E+02 4.14E+00 + 3.51E+02 3.04E+00 + 3.52E+02 8.59E+00 + 3.51E+02 3.51E+00 + 3.50E+02 2.92E+00 +
23 4.23E+02 2.38E+00 4.27E+02 2.68E+00 + 4.26E+02 1.87E+00 + 4.21E+02 3.39E+00 - 4.18E+02 4.55E+01 + 4.26E+02 1.70E+00 +
24 3.79E+02 1.56E-01 3.87E+02 8.02E-03 + 3.79E+02 2.29E-01 + 3.87E+02 2.65E-02 + 3.87E+02 7.56E-01 + 3.87E+02 2.28E-02 +
25 8.94E+02 4.27E+01 9.01E+02 4.03E+01 = 9.24E+02 3.89E+01 + 8.77E+02 2.00E+02 = 5.37E+02 3.06E+02 - 9.17E+02 3.30E+01 +
26 4.73E+02 6.62E+00 4.96E+02 8.00E+00 + 4.77E+02 6.21E+00 + 5.20E+02 9.26E+00 + 5.02E+02 4.03E+00 + 5.03E+02 6.46E+00 +
27 3.18E+02 4.14E+01 3.02E+02 1.60E+01 - 3.04E+02 2.23E+01 - 3.18E+02 3.98E+01 = 3.08E+02 2.88E+01 = 3.22E+02 4.63E+01 +
28 3.88E+02 2.66E+01 4.38E+02 1.84E+01 + 3.92E+02 2.08E+01 + 4.55E+02 5.40E+01 + 4.33E+02 1.13E+01 + 4.32E+02 6.13E+00 +
29 8.48E+02 3.99E+02 1.97E+03 1.04E+01 + 1.01E+03 3.35E+02 + 2.06E+03 5.12E+01 + 1.99E+03 4.21E+01 + 1.98E+03 3.79E+01 +
W/T/L -/-/- 13/13/4 15/12/3 11/12/7 14/11/5 17/10/3

III-C Experimental Settings

According to the guidelines of CEC2024, 25 independent runs are implemented. The maximum evaluation number is set to max_nfes = 10000·D, where D is the dimensionality of the optimization problem. The search range is set to [-100, 100]^D. Fitness errors smaller than 1×10^{-8} are treated as 0. The experiments were performed on the following system: CPU: Intel Core i7-12700KF 3.6 GHz, 32 GB DDR4-3600 RAM; programming language: C++; operating system: Windows 10.

TABLE II: Algorithm Complexity of RDE
  | T_0 | T_1 | \hat{T}_2 | (\hat{T}_2 - T_1)/T_0
D = 30 | 27 | 351 | 484.6 | 4.95

III-D Statistical Results

The Wilcoxon rank-sum test is utilized to determine whether there is a statistically significant difference between two algorithms in solving a problem, with a significance level of α = 0.05. Each problem is tested 51 times independently. The symbol "+" denotes the superior algorithm when there is a significant difference between the pair, while "-" indicates the relatively inferior one. "=" signifies that no significant difference exists between the pair of algorithms.

The performance of RDE was compared with other competitors that have shown excellent performance on previous CEC benchmark suites, including:

  1) LSHADE-RSP [11]: LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems;
  2) iLSHADE-RSP [12]: an improved LSHADE-RSP algorithm with the Cauchy perturbation;
  3) HSES [21]: hybrid sampling evolution strategy for solving single objective bound constrained problems;
  4) EBOwithCMAR [22]: effective butterfly optimizer using covariance matrix adapted retreat phase;
  5) LSHADE [10]: improving the search performance of SHADE using linear population size reduction.

As shown in Table I, RDE demonstrates superiority over the competing algorithms on CEC2024. In addition, RDE also performs well on the relatively more complex composition functions (f_{20}-f_{29}), indicating the effectiveness of the recombination approach. Compared with LSHADE-RSP, iLSHADE-RSP, HSES, EBOwithCMAR and LSHADE, RDE achieves better solutions on 13, 15, 11, 14 and 17 problems, respectively, and fails to beat its opponents on 4, 3, 7, 5 and 3 problems, respectively. Although RDE shows significant advantages over the advanced variants of the DE series, its margin over HSES is not as wide.

III-E Algorithm Complexity

Table II shows that the algorithm complexity of the proposed RDE is 4.95. T_0 is the time of running the assessment test program in [20]. T_1 is the time cost of evaluating the 18th function, and \hat{T}_2 is the time cost of the algorithm on f_{18}, averaged over 5 runs. Both are measured with 200000 evaluations.

IV Conclusion

In this study, we conducted a recombination study of recent advances in advanced DE variants and proposed an effective combination scheme that further promotes the performance of DE on complex optimization problems. On the CEC2024 benchmark suite, RDE showed significant performance advantages over several advanced variants of DE. Although the ES variant HSES achieved similar performance in 2018, DE, owing to its simplicity, efficiency and stable performance, appears to have received more attention and development from researchers. We also attempted to combine other recent DE improvement strategies besides those presented in this paper, but obtained no significant gains. Therefore, the sub-strategies described in this paper may be worth prioritizing in the future development of DE. It is worth noting that, compared with algorithms such as iLSHADE-RSP, RDE adopts fewer strategies yet achieves better performance.

Acknowledgment

This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.

References

  • [1] S. Gao, Y. Wang, J. Cheng, Y. Inazumi, and Z. Tang, “Ant colony optimization with clustering for solving the dynamic location routing problem,” Applied Mathematics and Computation, vol. 285, pp. 149–173, 2016.
  • [2] S. Gao, K. Wang, S. Tao, T. Jin, H. Dai, and J. Cheng, “A state-of-the-art differential evolution algorithm for parameter estimation of solar photovoltaic models,” Energy Conversion and Management, vol. 230, p. 113784, 2021.
  • [3] S. Gao, M. Zhou, Y. Wang, J. Cheng, H. Yachi, and J. Wang, “Dendritic neuron model with effective learning algorithms for classification, approximation, and prediction,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 2, pp. 601–614, 2019.
  • [4] S. Gao, Y. Yu, Y. Wang, J. Wang, J. Cheng, and M. Zhou, “Chaotic local search-based differential evolution algorithms for optimization,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 6, pp. 3954–3967, 2021.
  • [5] K. Wang, Y. Wang, S. Tao, Z. Cai, Z. Lei, and S. Gao, “Spherical search algorithm with adaptive population control for global continuous optimization problems,” Applied Soft Computing, vol. 132, p. 109845, 2023.
  • [6] A. Sohail, “Genetic algorithms in the fields of artificial intelligence and data sciences,” Annals of Data Science, vol. 10, no. 4, pp. 1007–1018, 2023.
  • [7] S. Yarat, S. Senan, and Z. Orman, “A comparative study on pso with other metaheuristic methods,” Applying Particle Swarm Optimization: New Solutions and Cases for Optimized Portfolios, pp. 49–72, 2021.
  • [8] M. F. Ahmad, N. A. M. Isa, W. H. Lim, and K. M. Ang, “Differential evolution: A recent review based on state-of-the-art works,” Alexandria Engineering Journal, vol. 61, no. 5, pp. 3831–3872, 2022.
  • [9] R. Tanabe and A. Fukunaga, “Success-history based parameter adaptation for differential evolution,” in 2013 IEEE Congress on Evolutionary Computation.   IEEE, 2013, pp. 71–78.
  • [10] R. Tanabe and A. S. Fukunaga, “Improving the search performance of SHADE using linear population size reduction,” in 2014 IEEE Congress on Evolutionary Computation (CEC).   IEEE, 2014, pp. 1658–1665.
  • [11] V. Stanovov, S. Akhmedova, and E. Semenkin, “LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems,” in 2018 IEEE Congress on Evolutionary Computation (CEC).   IEEE, 2018, pp. 1–8.
  • [12] T. J. Choi and C. W. Ahn, “An improved lshade-rsp algorithm with the cauchy perturbation: ilshade-rsp,” Knowledge-Based Systems, vol. 215, p. 106628, 2021.
  • [13] A. W. Mohamed, A. A. Hadi, and K. M. Jambi, “Novel mutation strategy for enhancing shade and lshade algorithms for global numerical optimization,” Swarm and Evolutionary Computation, vol. 50, p. 100455, 2019.
  • [14] R. Storn and K. Price, “Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, pp. 341–359, 1997.
  • [15] J. Zhang and A. C. Sanderson, “Jade: adaptive differential evolution with optional external archive,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009.
  • [16] J. H. Holland, “Genetic algorithms,” Scientific American, vol. 267, no. 1, pp. 66–73, 1992.
  • [17] K. Jebari, M. Madiafi et al., “Selection methods for genetic algorithms,” International Journal of Emerging Sciences, vol. 3, no. 4, pp. 333–344, 2013.
  • [18] J. Brest, M. S. Maučec, and B. Bošković, “Single objective real-parameter optimization: algorithm jSO,” in 2017 IEEE Congress on Evolutionary Computation (CEC).   IEEE, 2017, pp. 1311–1318.
  • [19] J. Brest, M. S. Maučec, and B. Bošković, “il-shade: Improved l-shade algorithm for single objective real-parameter optimization,” in 2016 IEEE Congress on Evolutionary Computation (CEC).   IEEE, 2016, pp. 1188–1195.
  • [20] G. Wu, R. Mallipeddi, and P. N. Suganthan, “Problem definitions and evaluation criteria for the cec 2017 competition on constrained real-parameter optimization,” National University of Defense Technology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report, 2017.
  • [21] G. Zhang and Y. Shi, “Hybrid sampling evolution strategy for solving single objective bound constrained problems,” in 2018 IEEE Congress on Evolutionary Computation (CEC).   IEEE, 2018, pp. 1–7.
  • [22] A. Kumar, R. K. Misra, and D. Singh, “Improving the local search capability of effective butterfly optimizer using covariance matrix adapted retreat phase,” in 2017 IEEE Congress on Evolutionary Computation (CEC).   IEEE, 2017, pp. 1835–1842.