An Actor-Critic Method for Simulation-Based Optimization
Abstract
We focus on a simulation-based optimization problem of choosing the best design from the feasible space. Although the simulation model can be queried with finite samples, its internal processing rule cannot be utilized in the optimization process. We formulate the sampling process as a policy searching problem and give a solution from the perspective of Reinforcement Learning (RL). Concretely, the Actor-Critic (AC) framework is applied, where the critic serves as a surrogate model to predict the performance of unknown designs, whereas the actor encodes the sampling policy to be optimized. We design the updating rules and propose two algorithms for the cases where the feasible space is continuous and discrete, respectively. Some experiments are designed to validate the effectiveness of the proposed algorithms, including two toy examples, which intuitively explain the algorithms, and two more complex tasks, i.e., an adversarial attack task and an RL task, which validate their effectiveness on large-scale problems. The results show that the proposed algorithms can successfully deal with these problems. In particular, in the RL task, our methods give a new perspective on robot control by treating the task as a simulation model and solving it by optimizing the policy-generating process, while existing works commonly optimize the policy itself directly.
I INTRODUCTION
Simulation-based optimization problems arise in various disciplines such as computer science and economics. In general, a simulation-based optimization problem consists of maximizing (minimizing) a score (cost) function by choosing appropriate designs from an allowed set, and can be typically formulated as
$$\begin{aligned}
\max_{x}\quad & f(x) \\
\text{s.t.}\quad & g_i(x) \le 0, \quad i = 1, \dots, m,
\end{aligned} \tag{1}$$
where $x$ is a design from the allowed set $\mathcal{X}$ defined by the constraints $g_i$, and $f$ is the objective function, which is usually computationally expensive and can be queried only a limited number of times. The design $x$ can be either a scalar or a vector depending on the problem.
There are various works dealing with different cases of (1). For example, convex optimization deals with the case where the problem is convex, i.e., $-f$ and all $g_i$ are convex functions [1], and deep learning deals with the case where $f$ is totally or partially computed by Neural Networks (NNs). However, in both cases, the internal operation rules of $f$ are designed and utilized during the optimization process: convex optimization utilizes the gradient or Hessian of $f$, and deep learning requires the gradients with respect to all parameters of the NNs. Thus these methods are not suitable for simulation-based optimization, where gradient information cannot be obtained explicitly.
In order to address the issues brought by such "incomplete" objective functions, researchers propose to "reconstruct" them with a controllable surrogate model, which is usually a predictive model constructed by fitting the existing input-output pairs. Since a certain deviation between the surrogate model and the true simulation model cannot be avoided, related methods are usually designed to alternate between updating the surrogate model and the sampling distribution. For example, in Bayesian Optimization (BO) [2], a Gaussian Process (GP) [3] serves as the surrogate model, and the updating process alternates between proposing the best design with an acquisition function constructed from the surrogate model and updating the surrogate model with the latest input-output pair. BO has the advantage of high efficiency, but there are some technical difficulties; e.g., finding the design with the highest score on the acquisition function is non-trivial, especially in high-dimensional cases. Moreover, the acquisition function is a predictive model, which can only predict the performance of any design $x$, rather than generating the best design $x^\star$, which is essential in many real tasks, such as identifying the best parameters for controlling robots. In these cases, beyond the surrogate model, a generative model must be equipped to identify the best design $x^\star$.
The generative model generates a distribution over $\mathcal{X}$, which controls the sampling process. Therefore, the design of the sampling distribution can be bridged to a stochastic control problem [4], which adjusts the sampling distribution based on previous information to reach a higher expected score on the simulation model $f$. The generative model is commonly used in AC-based RL, e.g., DDPG [5], TD3 [6] and SAC [7]. It is also termed the "actor" in the AC framework, which generates a distribution over the action space (similar to the design space in simulation-based optimization) for each system state (more details can be found in the papers mentioned above).
In this work, we also consider the simulation-based optimization problem, where the operation rule of the simulation model $f$ cannot be utilized in the optimization process. Similar to BO, a surrogate model will be used to predict the scores of untested designs. In order to overcome the difficulty of finding the global optimum of the acquisition function, which is usually highly non-linear, we also introduce a generative model into our framework. Therefore, our methods inherit the AC framework from the RL area. The framework here is especially similar to that of SAC, which is a state-of-the-art RL algorithm. The predictive model is termed the critic, which predicts $f(x)$ for each $x$, and the generative model is termed the actor, which will be updated to reach a higher expected score on $f$. Each iteration starts with generating designs by the actor. These designs will be scored by the simulation model, and the obtained input-output pairs will be used to update the critic. The final step is updating the actor with the latest critic.
The contributions of this paper are mainly as follows:
1) Two AC-based algorithms are designed to solve the simulation-based optimization problem, where the design space can be either continuous or discrete;
2) We take two high-dimensional tasks, adversarial attack and RL, as examples to show the application to real tasks, and the results show that the proposed algorithms can successfully find satisfactory solutions, even in high-dimensional cases;
3) Compared with existing RL methods, e.g., Dynamic Programming (DP) [8], Q-learning [9], DDPG, and SAC, which are motivated by optimizing the policy itself, the proposed algorithms offer a new perspective on robot control: reinforcing the policy by optimizing the policy-generating process. Besides, the induced methods can be deployed in an off-policy fashion and avoid the problem of sparse and delayed rewards.
II PRELIMINARY
Let us begin by introducing the problem of interest and the Actor-Critic (AC) framework extensively used in Reinforcement Learning (RL).
II-A Optimization Model
The simulation model $f$ takes a scalar (for the one-dimensional case) or a vector (for the multi-dimensional case) $x$ as input and outputs the corresponding score $f(x)$, and we aim to find the best design, i.e.,

$$x^\star = \arg\max_{x \in \mathcal{X}} f(x), \tag{2}$$

where $\mathcal{X}$ is the allowed set.
In real implementations, $f$ could be any objective function. For example, the problem can be a convex optimization problem if $f$ is a concave function and $\mathcal{X}$ is a convex set. $f$ can also be a complex nonlinear function, e.g., one computed by Neural Networks (NNs), and the problem becomes finding a special design which receives the maximal response. For another example, in RL tasks, the agent usually intends to find a parameterized policy which can gather higher accumulated rewards by interacting with the environment. In this case, the environment is the simulation model $f$, and the policy is a design $x$. Then it also comes to the standard form of (2). In Section IV, we will demonstrate the applications in these tasks.
II-B Actor-Critic Framework
The AC framework is widely used in the RL area and consists of two parts: the actor, which generates actions, and the critic, which evaluates each action. In the RL area, both the actor and critic are typically constructed with NNs, where the actor network encodes a mapping from state $s$ to action $a$, and the critic network maps each state-action pair $(s, a)$ to its predicted accumulated reward. Related work can be found in DDPG [5], TD3 [6] and SAC [7].
Typically, the critic is trained by minimizing the residual error of the Bellman equation:

$$\delta = Q_\phi(s_t, a_t) - \big( r_t + \gamma\, Q_\phi(s_{t+1}, a_{t+1}) \big), \tag{3}$$

where $Q_\phi$ is the critic parameterized by $\phi$, $r_t$ is the instantaneous reward, $s_{t+1}$ is the succeeding state and $a_{t+1}$ is the next action drawn from the current policy. Besides, $\gamma$ is the discount factor for stabilizing the training process. $\phi$ can be updated by minimizing the squared residual $\delta^2$.
The actor is updated by maximizing the expected accumulated reward:

$$\max_{\theta}\ \mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}\big[ Q_\phi(s, a) \big], \tag{4}$$

where $\pi_\theta$ is the current policy parameterized by $\theta$. Notice that in this work, we choose a stochastic policy instead of a deterministic one for its excellent performance in balancing exploitation and exploration.
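To make the preceding background concrete, the following PyTorch sketch (our illustration, not code from the papers cited above) shows a minimal critic loss based on the Bellman residual (3) and a Monte-Carlo estimate of the actor objective (4). The network sizes, the discount factor, and the assumption that the policy returns a torch distribution supporting reparameterized sampling are illustrative choices.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Q_phi(s, a): maps a state-action pair to a predicted accumulated reward."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def critic_loss(Q, batch, gamma=0.99):
    """Mean squared Bellman residual of (3) over a transition batch."""
    s, a, r, s_next, a_next = batch          # a_next is drawn from the current policy
    with torch.no_grad():                    # bootstrap target r + gamma * Q(s', a')
        target = r + gamma * Q(s_next, a_next)
    return ((Q(s, a) - target) ** 2).mean()

def actor_objective(Q, policy, s):
    """Monte-Carlo estimate of (4); `policy(s)` is assumed to return a torch
    distribution that supports reparameterized sampling."""
    a = policy(s).rsample()
    return Q(s, a).mean()                    # maximized with respect to the policy parameters
```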
III MAIN METHODS
Let us introduce the methodology of solving (2) with the AC framework. The cases where $\mathcal{X}$ is continuous and where it is discrete will be considered separately.
[Fig. 1: Network structures of the actor and critic — (a) the continuous-design actor with a squashed Gaussian output; (b) the discrete-design actor with one output channel per feasible design.]
We use NNs to construct the actor and critic (Fig. 1). The actor takes a random noise (or a constant) as input to capture multi-modal (or single-modal) distributions, and the critic takes the design $x$ as input and outputs a predicted score.
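A possible realization of these networks is sketched below in PyTorch. Unlike the state-action critic of Section II-B, the critic here takes only the design $x$; the layer widths, activations, and the clamping range of the log standard deviation are assumptions made for illustration rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Design-space critic: maps a design x to a predicted score."""
    def __init__(self, x_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

class ContinuousActor(nn.Module):
    """Maps a noise vector (or constant) to the mean and std of a squashed Gaussian."""
    def __init__(self, noise_dim, x_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, x_dim)        # mean of the pre-squash Gaussian
        self.log_sigma = nn.Linear(hidden, x_dim) # log standard deviation

    def forward(self, z):
        h = self.body(z)
        return self.mu(h), self.log_sigma(h).clamp(-5, 2).exp()
```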
The actor and critic are updated iteratively. Each iteration starts with generating a feasible design $x$ and getting its score $f(x)$ by querying the simulation model. The pair $(x, f(x))$ will be stored in a replay buffer $\mathcal{D}$. Then a mini-batch $\mathcal{B}$ will be sampled from $\mathcal{D}$ to update the critic by minimizing the loss function:
$$L(\phi) = \frac{1}{|\mathcal{B}|} \sum_{(x,\, f(x)) \in \mathcal{B}} \big( Q_\phi(x) - f(x) \big)^2 \tag{5}$$

with one step of gradient descent:

$$\phi \leftarrow \phi - \lambda_Q\, \nabla_\phi L(\phi), \tag{6}$$

where $Q_\phi(x)$ denotes the critic's predicted score of the design $x$ and $\lambda_Q$ is the learning rate of the critic.
Finally, the actor will be updated with the latest critic.
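The critic update in (5)-(6) can be sketched as follows; the replay buffer is assumed to be a plain list of $(x, f(x))$ pairs, and the batch size and optimizer are illustrative choices.

```python
import random
import torch

def update_critic(critic, optimizer, buffer, batch_size=64):
    """One gradient step on the mean squared error between Q_phi(x) and f(x)."""
    batch = random.sample(buffer, min(batch_size, len(buffer)))   # mini-batch B from D
    xs = torch.stack([x for x, _ in batch])
    ys = torch.tensor([y for _, y in batch], dtype=torch.float32)
    loss = ((critic(xs) - ys) ** 2).mean()                        # loss (5)
    optimizer.zero_grad()
    loss.backward()                                               # gradient step (6)
    optimizer.step()
    return loss.item()
```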
A stochastic actor is used to generate designs, and in each iteration the sampling distribution will be updated to improve the expected score. In order to reduce the risk of falling into local optima, we use entropy regularization to reach a trade-off between exploitation and exploration. The updating rule of the actor will be studied for continuous and discrete design spaces respectively.
III-A Continuous Design Space
We first introduce the case where $\mathcal{X}$ is continuous. For ease of illustration, this paper focuses on the bounded space $\mathcal{X} = (-1, 1)^n$. However, we note that the results can be easily extended to any bounded continuous space through a linear mapping.
A squashed Gaussian policy (similar to SAC) is used to generate designs, as shown in Fig. 1(a). First, the actor network outputs two vectors, $\mu_\theta$ and $\sigma_\theta$, where $\theta$ is the parameter of the actor network. Then the design $x$ is generated by

$$x = \tanh\!\big( \mu_\theta + \sigma_\theta \odot \epsilon \big), \tag{7}$$

where $\odot$ represents element-wise multiplication. $\mu_\theta$, $\sigma_\theta$ and $\epsilon$ have the same dimension as $x$, and $\epsilon$ is a random noise drawn from the multi-dimensional standard Gaussian distribution $\mathcal{N}(0, I)$. The $\tanh$ function converts $u = \mu_\theta + \sigma_\theta \odot \epsilon$ to another distribution whose value range is $(-1, 1)$, and its probability density function can be calculated by

$$\pi_\theta(x) = p_u\!\big( \tanh^{-1}(x) \big) \prod_i \big( 1 - x_i^2 \big)^{-1} \tag{8}$$

at $x = \tanh(u)$, where $p_u$ is the probability density function of $u$, and $\tanh^{-1}$ is the inverse function of $\tanh$.
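The sampling step (7) and the density correction implied by (8) can be implemented as below. This is a sketch in the style of common SAC implementations, using the log-density form $\log p_u(u) - \sum_i \log(1 - \tanh(u_i)^2)$ with a small constant added for numerical stability.

```python
import torch

def sample_design(mu, sigma):
    """Draw a design via the squashed Gaussian (7) and return its log-density per (8)."""
    eps = torch.randn_like(mu)                 # eps ~ N(0, I)
    u = mu + sigma * eps                       # pre-squash Gaussian sample
    x = torch.tanh(u)                          # design in (-1, 1), Eq. (7)
    normal = torch.distributions.Normal(mu, sigma)
    # log pi(x) = log p_u(u) - sum_i log(1 - tanh(u_i)^2), the change of variables in (8)
    log_pi = normal.log_prob(u).sum(-1) - torch.log(1 - x.pow(2) + 1e-6).sum(-1)
    return x, log_pi
```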
The sampling distribution $\pi_\theta$ will be updated to reach a higher expected score

$$J(\theta) = \mathbb{E}_{x \sim \pi_\theta}\big[ Q_\phi(x) \big] + \alpha\, \mathcal{H}(\pi_\theta), \tag{9}$$

where $\mathcal{H}(\pi_\theta)$ is the entropy regularization for balancing exploitation and exploration, $\alpha > 0$ controls its strength, and the entropy can be calculated by

$$\mathcal{H}(\pi_\theta) = -\,\mathbb{E}_{x \sim \pi_\theta}\big[ \log \pi_\theta(x) \big]. \tag{10}$$

Therefore, $J(\theta)$ can be rewritten as

$$J(\theta) = \mathbb{E}_{x \sim \pi_\theta}\big[ Q_\phi(x) - \alpha \log \pi_\theta(x) \big] \tag{11}$$

by substituting (10) into (9), and its gradient with respect to $\theta$ can be derived by

$$\nabla_\theta J(\theta) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\Big[ \nabla_\theta \big( Q_\phi\big(x_\theta(\epsilon)\big) - \alpha \log \pi_\theta\big(x_\theta(\epsilon)\big) \big) \Big], \tag{12}$$

where $x_\theta(\epsilon) = \tanh(\mu_\theta + \sigma_\theta \odot \epsilon)$ is the reparameterized design in (7). The gradient can be estimated by a batch of samples $\{\epsilon_k\}_{k=1}^{K}$ taken from $\mathcal{N}(0, I)$:

$$\nabla_\theta J(\theta) \approx \frac{1}{K} \sum_{k=1}^{K} \nabla_\theta \Big( Q_\phi\big(x_\theta(\epsilon_k)\big) - \alpha \log \pi_\theta\big(x_\theta(\epsilon_k)\big) \Big), \tag{13}$$

where $K$ is the size of the batch. Finally, $\theta$ can be updated with (13) through one step of gradient ascent.
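One actor update step corresponding to (9)-(13) might look as follows, reusing `sample_design` from the sketch above; the batch size and optimizer settings are assumptions for illustration.

```python
import torch

def update_actor(actor, critic, optimizer, alpha, noise_dim, batch_size=64):
    """One gradient ascent step on the entropy-regularized objective (11)."""
    z = torch.randn(batch_size, noise_dim)          # noise batch for the estimate (13)
    mu, sigma = actor(z)
    x, log_pi = sample_design(mu, sigma)            # reparameterized designs, Eq. (7)-(8)
    objective = (critic(x) - alpha * log_pi).mean() # sample estimate of J(theta) in (11)
    optimizer.zero_grad()
    (-objective).backward()                         # ascent on J == descent on -J
    optimizer.step()
    return objective.item()
```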
We summarize the optimization process in Algorithm 1. In each iteration, only a single design $x$ is taken to be validated with the simulation model, and the data pair $(x, f(x))$ is stored for future use. In the end, the best design in the replay buffer $\mathcal{D}$ will be chosen as the final result:
$$x^\star = \arg\max_{(x,\, f(x)) \in \mathcal{D}} f(x). \tag{14}$$
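Putting the pieces together, a condensed sketch of the loop described by Algorithm 1, under the same illustrative assumptions as the previous snippets and with a user-supplied schedule for $\alpha$, is:

```python
import torch

def algorithm_1(f, actor, critic, alpha_schedule, noise_dim, iters=1000):
    """Iteratively query f, update the critic (5)-(6) and the actor (13),
    and return the best design stored in the replay buffer, Eq. (14)."""
    buffer = []
    opt_q = torch.optim.Adam(critic.parameters(), lr=1e-3)
    opt_pi = torch.optim.Adam(actor.parameters(), lr=1e-3)
    for t in range(iters):
        with torch.no_grad():                           # draw one design from the actor
            mu, sigma = actor(torch.randn(1, noise_dim))
            x, _ = sample_design(mu, sigma)
        y = f(x.squeeze(0))                             # query the simulation model
        buffer.append((x.squeeze(0), float(y)))         # store (x, f(x)) in D
        update_critic(critic, opt_q, buffer)            # Eq. (5)-(6)
        update_actor(actor, critic, opt_pi, alpha_schedule(t), noise_dim)  # Eq. (13)
    return max(buffer, key=lambda pair: pair[1])[0]     # best design in D, Eq. (14)
```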
III-B Discrete Design Space
Here we consider the case where there are only finitely many feasible designs in $\mathcal{X}$. In this case, the network structure of the actor is slightly modified to output a discrete distribution over the design space. As illustrated in Fig. 1(b), the number of output channels equals the size of $\mathcal{X}$, and each channel corresponds to a feasible design. After applying the softmax function, a distribution over $\mathcal{X}$ is obtained:

$$\pi_\theta(x) = \frac{\exp\!\big( h_\theta(x) \big)}{\sum_{x' \in \mathcal{X}} \exp\!\big( h_\theta(x') \big)}, \tag{15}$$

where, by a slight abuse of notation, $h_\theta(x)$ denotes the output of the channel corresponding to $x$. Then we update the actor by maximizing the expected score:

$$J(\theta) = \sum_{x \in \mathcal{X}} \pi_\theta(x)\, Q_\phi(x) + \alpha\, \mathcal{H}(\pi_\theta) = \sum_{x \in \mathcal{X}} \pi_\theta(x) \big( Q_\phi(x) - \alpha \log \pi_\theta(x) \big), \tag{16}$$

where the entropy regularization is introduced as in Section III-A. Then, the gradient of $J(\theta)$ with respect to $\theta$ can be calculated exactly by

$$\nabla_\theta J(\theta) = \sum_{x \in \mathcal{X}} \nabla_\theta \pi_\theta(x) \big( Q_\phi(x) - \alpha \log \pi_\theta(x) - \alpha \big), \tag{17}$$
and the actor can be updated with one step of stochastic gradient ascent.
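A sketch of the corresponding actor update for the discrete case, using (15)-(17) with the expectation computed exactly over the finite design space, is given below; `logits_fn` and the detached critic scores are illustrative interfaces, not the paper's code.

```python
import torch

def update_discrete_actor(logits_fn, critic_scores, optimizer, alpha):
    """One ascent step on (16); `logits_fn()` returns one logit per feasible design,
    `critic_scores` holds Q_phi(x) for every x in X."""
    logits = logits_fn()
    pi = torch.softmax(logits, dim=-1)                       # distribution (15)
    log_pi = torch.log_softmax(logits, dim=-1)
    q = critic_scores.detach()                               # do not back-propagate into the critic
    objective = (pi * (q - alpha * log_pi)).sum()            # J(theta) in (16)
    optimizer.zero_grad()
    (-objective).backward()                                  # one ascent step, cf. (17)
    optimizer.step()
    return objective.item()
```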
We summarize the optimization process in Algorithm 2. It is worth noting that in each update of the actor, we attempt to maximize the objective function (16), whose optimal solution is the energy-based policy [10]:
$$\pi^\star(x) = \frac{\exp\!\big( Q_\phi(x)/\alpha \big)}{\sum_{x' \in \mathcal{X}} \exp\!\big( Q_\phi(x')/\alpha \big)}. \tag{19}$$
There is a simple proof of (19), obtained by constructing the following optimization problem:

$$\begin{aligned}
\max_{\pi}\quad & \sum_{x \in \mathcal{X}} \pi(x) \big( Q_\phi(x) - \alpha \log \pi(x) \big) \\
\text{s.t.}\quad & \sum_{x \in \mathcal{X}} \pi(x) = 1, \\
& \pi(x) \ge 0, \quad \forall x \in \mathcal{X}.
\end{aligned} \tag{20}$$
It is easy to verify that (20) is a convex problem and satisfies the Slater condition [1], which guarantees that $\pi^\star$ is the unique solution of the KKT conditions:
$$\begin{aligned}
& Q_\phi(x) - \alpha \log \pi^\star(x) - \alpha + \nu^\star + \mu^\star(x) = 0, && \forall x \in \mathcal{X}, && \text{(21a)} \\
& \textstyle\sum_{x \in \mathcal{X}} \pi^\star(x) = 1, && && \text{(21b)} \\
& \pi^\star(x) \ge 0, && \forall x \in \mathcal{X}, && \text{(21c)} \\
& \mu^\star(x) \ge 0, && \forall x \in \mathcal{X}, && \text{(21d)} \\
& \mu^\star(x)\, \pi^\star(x) = 0, && \forall x \in \mathcal{X}, && \text{(21e)}
\end{aligned}$$
where $\nu^\star$ and $\mu^\star(x)$ are the optimal values of the dual variables. From (21a) we obtain $\pi^\star(x) = \exp\!\big( (Q_\phi(x) + \nu^\star + \mu^\star(x))/\alpha - 1 \big) > 0$, so (21c) holds strictly; combining this with the complementary slackness condition (21e) and the dual feasibility condition (21d), we get $\mu^\star(x) = 0$ for all $x$. Then, substituting it back into (21a), we get

$$\pi^\star(x) = C \exp\!\big( Q_\phi(x)/\alpha \big), \tag{22}$$

where $C = \exp\!\big( \nu^\star/\alpha - 1 \big)$. By substituting (22) into (21b), we have

$$C = \frac{1}{\sum_{x' \in \mathcal{X}} \exp\!\big( Q_\phi(x')/\alpha \big)}. \tag{23}$$

Finally, according to (22) and (23), the optimal solution (19) can be obtained. In Section IV, we will show that the learned distribution $\pi_\theta$ is very close to the optimal distribution $\pi^\star$.
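The closed form (19) can also be checked numerically: for an arbitrary score vector, the softmax of $Q_\phi/\alpha$ should attain a value of the objective (16) at least as large as that of any other distribution over the same designs. The snippet below performs this sanity check on random perturbations (our illustration).

```python
import torch

def objective(pi, q, alpha):
    """The entropy-regularized objective (16) for a distribution pi over finite X."""
    return (pi * (q - alpha * torch.log(pi + 1e-12))).sum()

q = torch.randn(10)                        # illustrative scores over a 10-design space
alpha = 0.5
pi_star = torch.softmax(q / alpha, dim=0)  # candidate optimum from Eq. (19)
for _ in range(100):                       # compare against randomly perturbed distributions
    pi = torch.softmax(q / alpha + 0.1 * torch.randn(10), dim=0)
    assert objective(pi_star, q, alpha) >= objective(pi, q, alpha) - 1e-6
```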
So far, we have introduced the methods to solve the simulation-based optimization problem (2) within the AC framework, and proposed algorithms for continuous and discrete design spaces respectively. In the next section, we will design some experiments to validate them.
IV APPLICATIONS
In this section, we will start from a toy example, with which some intuitive interpretations of Algorithms 1 and 2 will be given. Then we design more complex tasks, e.g., adversarial attack and reinforcement learning, to show the effectiveness of the proposed algorithms in large-scale systems.
Firstly, we choose the probability density function of a Gaussian Mixture Model (GMM) as the toy example:
$$f(x) = \sum_{i} w_i\, \mathcal{N}\!\big( x;\, \mu_i,\, \sigma_i^2 \big) = \sum_{i} \frac{w_i}{\sqrt{2\pi}\,\sigma_i} \exp\!\left( -\frac{(x - \mu_i)^2}{2\sigma_i^2} \right), \tag{24}$$

where the weights $w_i$, means $\mu_i$ and standard deviations $\sigma_i$ are set to fixed values.
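As a black-box simulation model, the GMM density (24) can be coded as below. The weights, means, and standard deviations are placeholder values chosen for illustration, since the exact parameters used in the experiment are not reproduced here.

```python
import math

WEIGHTS = [0.5, 0.5]      # w_i    (illustrative placeholders, not the paper's values)
MEANS   = [-0.5, 0.6]     # mu_i   (illustrative placeholders)
STDS    = [0.3, 0.1]      # sigma_i (illustrative placeholders)

def f(x: float) -> float:
    """Probability density of the Gaussian mixture (24) at the design x."""
    return sum(w / (math.sqrt(2 * math.pi) * s) * math.exp(-(x - m) ** 2 / (2 * s ** 2))
               for w, m, s in zip(WEIGHTS, MEANS, STDS))
```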
[Fig. 2: Results of Algorithm 1 on the continuous toy example (24) — (a)–(c) the stochastic actor under decreasing entropy-regularization strengths $\alpha$; (d) a deterministic actor.]
We launch Algorithm 1 and the results are shown in Fig. 2. It can be found that the sampling distribution gradually converges to the optimal area as $\alpha$ decreases, which means that by setting an appropriate $\alpha$ to control the strength of the entropy regularization, satisfactory designs can be drawn from the final actor. To analyze the effect of the entropy regularization, $\alpha$ is set to three decreasing values. Compared with the smaller $\alpha$ in Fig. 2(b) and Fig. 2(c), the actor in Fig. 2(a) generates an almost uniform distribution, which means that the entropy regularization has a greater impact on the objective in (9). Besides, since the samples are distributed more evenly over the allowed set, the predicted scores are close to the ground truth almost everywhere. As $\alpha$ decreases, the sampling distribution narrows to the optimal area (Fig. 2(b) and Fig. 2(c)), which means that the actor can generate satisfactory designs. However, the samples concentrate in some areas, which leads to worse performance of the critic in other areas, as illustrated in Fig. 2(c). We also test the performance of a deterministic actor, as in DDPG [5], where the actor only generates a deterministic design rather than a distribution over the design space. The final design falls into a local optimum more easily, as illustrated in Fig. 2(d). In real implementations, we usually set a larger $\alpha$ in the initial phase and decrease it step by step. In this way, the sampling distribution can converge to the neighborhood of the global optimum while avoiding falling into local optima.
[Fig. 3: Results of Algorithm 2 on the discretized toy example — (a)–(d) the learned distributions under decreasing entropy-regularization strengths $\alpha$.]
The simulation model (24) is modified to test Algorithm 2 by converting the design space to a discrete set. The results are shown in Fig. 3. It can be found that by decreasing the strength of the entropy regularization, the sampling distribution gradually converges to the optimal area. We also compare the effect of the entropy regularization by setting $\alpha$ to several decreasing values. When a larger $\alpha$ is chosen, e.g., in Fig. 3(a), the actor generates a nearly uniform distribution. When decreasing it, the learned distribution gradually concentrates on designs with higher scores, as in Fig. 3(b) and Fig. 3(c). However, if $\alpha$ is set too small, the actor may fall into a local optimum, as illustrated in Fig. 3(d). Therefore, in real implementations, we also set a larger $\alpha$ in the initial phase and decrease it step by step, as in the continuous case. Furthermore, it can be found that the distribution generated by the actor is very close to the optimal one $\pi^\star$ in (19), which validates the theoretical analysis.
After the toy examples, we design more complex tasks to validate the proposed algorithms.
The first task is abstracted from adversarial attack tasks. In these tasks, we need to generate samples to "cheat" some NNs which are trained to recognize the objects in input images. In this experiment, we train a classifier to recognize handwritten digits with the public data set MNIST. The classifier takes an image as input and generates a distribution over the ten digits $\{0, 1, \dots, 9\}$, which denotes the confidence of classifying the input image as the corresponding value. For the convenience of later presentation, the distribution is denoted as $C(\cdot)$, and $C_j(I)$ denotes the confidence of classifying the image $I$ as digit $j$, where, by a slight abuse of notation, $h$ and $w$ denote the height and width of the images ($28 \times 28$ in MNIST). In the adversarial attack tasks, we need to generate some fake images to enhance the data set and train a more robust classifier.
[Fig. 4: (a) the learning curve of the average confidence; (b) an image generated by the final actor.]
In order to generate the fake images, we construct the following simulation model:
$$f(x) = C_j\!\left( R\!\left( \frac{x + 1}{2} \right) \right), \tag{25}$$

where $x \in (-1, 1)^{h \times w}$ and $R(\cdot)$ is the resizing operator converting the input vector to the appropriate image shape. The linear mapping $(x + 1)/2$ converts the value range from $(-1, 1)$ to $(0, 1)$, which matches the images in the data set. Then, by solving the optimization problem (2), the actor can generate fake images which mislead the classifier into believing they show digit $j$; we choose $j = 1$ in our experiments. The learning curve is illustrated in Fig. 4(a), where we can find that within a moderate number of episodes (only one sample is evaluated by $f$ in each episode), the average confidence reaches a high level. In the early episodes, the average confidence stays close to zero, which may be caused by the low accuracy of the critic. Afterwards, the score increases gradually, which means both the actor and the critic are improved. One of the images generated by the final actor is shown in Fig. 4(b). It is just noise from the viewpoint of humans, but the classifier tends to predict it as the digit "1".
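A sketch of the simulation model (25) is given below, assuming a pretrained MNIST classifier that returns logits (an assumption, since the paper's classifier interface is not reproduced here); the function rescales the design to $[0, 1]$, reshapes it into a $28 \times 28$ image, and returns the confidence of the target digit.

```python
import torch

def f_fake_image(x, classifier, target_digit=1):
    """Simulation model (25): confidence C_j of the classifier on the generated image."""
    img = ((x + 1.0) / 2.0).reshape(1, 1, 28, 28)        # linear map and resize R(.)
    with torch.no_grad():
        probs = torch.softmax(classifier(img), dim=-1)   # distribution over the 10 digits
    return probs[0, target_digit].item()                 # confidence for the target class j
```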
Another type of fake image is also of interest: one that retains the features of the original digit but misleads the classifier into making a wrong decision after some special noise is added. In order to generate these images, we construct the following simulation model:
$$f(x) = C_j\!\Big( \mathrm{clip}\big( I_0 + \eta\, R(x),\ 0,\ 1 \big) \Big), \tag{26}$$

where the $\mathrm{clip}$ operator clips the input variables to $[0, 1]$, and $\eta$ is a small constant controlling the noise level, which is fixed in our experiment. Besides, $I_0$ is an image from the initial data set, whose label is $i$. In this case, the simulation-based optimization problem (2) tends to find special noise such that, after adding it, the classifier wrongly classifies the image from class $i$ to class $j$.
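The perturbation model (26) can be sketched similarly; the noise level and the tensor shapes are assumptions for illustration, not the values used in the paper.

```python
import torch

ETA = 0.1  # assumed noise level, not the paper's constant

def f_noisy_image(x, classifier, I0, target_digit):
    """Simulation model (26): confidence for the target class j on a clipped,
    perturbed copy of the data-set image I0 (true label i)."""
    img = torch.clamp(I0 + ETA * x.reshape_as(I0), 0.0, 1.0)   # clip to [0, 1]
    with torch.no_grad():
        probs = torch.softmax(classifier(img.unsqueeze(0)), dim=-1)
    return probs[0, target_digit].item()
```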
The result is shown in Fig. 5. After a number of episodes, the actor can generate noise "special" enough to mislead the classifier. We present the noisy images at several episodes during training. All of these images retain the features of the digit "6" from the viewpoint of humans, but the classifier is "cheated" into making wrong decisions.
[Fig. 5: noisy images generated at several training episodes. Fig. 6: (a) the "CartPole-v0" environment; (b) the score of the generated policy during training.]
Finally, we apply Algorithm 1 to a more complex task, i.e., an RL task. In typical policy-based and AC-based RL algorithms, the policy is parameterized by $\theta$ and generates an action corresponding to each state. In existing RL methods, the parameter $\theta$ is adjusted by Policy Gradient (PG) algorithms, as in TRPO [11] and PPO [12], or Deterministic Policy Gradient (DPG) algorithms, as in DDPG [5] and TD3 [6], to reach a higher accumulated reward. In this work, we instead treat the whole task as a simulation model, where the policy parameter $\theta$ is a concrete design $x$. We take the "CartPole-v0" environment [13] in Fig. 6(a) as an example and deploy Algorithm 1 on it. The result is shown in Fig. 6(b), where we can find that the policy (the generated design) is successfully reinforced to reach a higher score. Although existing methods in the RL area have better performance on these RL tasks, our method offers a new perspective on robot control, which may be helpful for solving the problem of sparse and delayed rewards widely existing in RL problems.
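As an illustration of how the RL task can be wrapped as a simulation model, the sketch below scores a design by the return of a "CartPole-v0" episode; the linear threshold policy and the classic Gym interface are our assumptions, whereas the paper treats the full policy parameters $\theta$ as the design.

```python
import gym
import numpy as np

def f_cartpole(x, episodes=1):
    """Simulation model for the RL task: the design x parameterizes a small
    linear policy, and f(x) is the average episode return."""
    env = gym.make("CartPole-v0")
    total = 0.0
    w = np.asarray(x, dtype=np.float64).reshape(4)       # linear policy weights (assumed form)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            a = int(np.dot(w, s) > 0.0)                  # threshold policy over the 2 actions
            s, r, done, _ = env.step(a)
            total += r
    env.close()
    return total / episodes
```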
We design these examples to show the application of the proposed methods, as well as to validate their performance on large-scale problems. The results are further summarized in the next section.
V CONCLUSIONS
In this work, we consider the simulation-based optimization problem of selecting the best design with a computationally expensive evaluation model. The sampling process is bridged to a policy optimization problem and solved under the AC framework, where the critic serves as the surrogate model predicting the scores of untested designs, and the actor encodes the sampling distribution. We propose algorithms to update the actor and critic for continuous and discrete design spaces respectively, and design experiments to validate their effectiveness as well as to illustrate their applications. The results show that our methods can successfully find satisfactory designs and avoid falling into local optima to a certain extent. We note that in the RL experiment, we offer a new perspective on robot control: optimizing the policy-generating process rather than the policy itself. This may be helpful for solving the problem of sparse and delayed rewards in existing works, and will be studied further in future work. Another direction to be researched in the future is adjusting the hyper-parameter $\alpha$ automatically, as in [14].
References
- [1] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
- [2] M. Pelikan, D. E. Goldberg, E. Cantú-Paz, et al., "BOA: The Bayesian optimization algorithm," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-99), vol. 1. Citeseer, 1999, pp. 525–532.
- [3] R. M. Dudley, "Sample functions of the Gaussian process," Selected Works of R. M. Dudley, pp. 187–224, 2010.
- [4] Y. Peng, E. K. Chong, C.-H. Chen, and M. C. Fu, “Ranking and selection as stochastic control,” IEEE Transactions on Automatic Control, vol. 63, no. 8, pp. 2359–2373, 2018.
- [5] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971, 2015.
- [6] S. Fujimoto, H. Hoof, and D. Meger, “Addressing function approximation error in actor-critic methods,” in International Conference on Machine Learning. PMLR, 2018, pp. 1587–1596.
- [7] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” in International Conference on Machine Learning. PMLR, 2018, pp. 1861–1870.
- [8] R. Bellman, “Dynamic programming,” Science, vol. 153, no. 3731, pp. 34–37, 1966.
- [9] C. J. Watkins and P. Dayan, “Q-learning,” Machine learning, vol. 8, no. 3-4, pp. 279–292, 1992.
- [10] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine, “Reinforcement learning with deep energy-based policies,” in International Conference on Machine Learning. PMLR, 2017, pp. 1352–1361.
- [11] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in International conference on machine learning. PMLR, 2015, pp. 1889–1897.
- [12] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017.
- [13] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “Openai gym,” 2016.
- [14] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, et al., “Soft actor-critic algorithms and applications,” arXiv preprint arXiv:1812.05905, 2018.