Learning-Based Adaptive Dynamic Routing with Stability Guarantee for a Single-Origin-Single-Destination Network

UM Joint Institute, Shanghai Jiao Tong University, Shanghai, China; School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Electronics, Peking University, Beijing, China
Abstract
We consider learning-based adaptive dynamic routing for a single-origin-single-destination queuing network with stability guarantees. Specifically, we study a class of generalized shortest path policies that can be parameterized by only two constants via a piecewise-linear function. Using the Foster-Lyapunov stability theory, we develop a criterion on the parameters to ensure mean boundedness of the traffic state. Then, we develop a policy iteration algorithm that learns the parameters from realized sample paths. Importantly, the piecewise-linear function is both integrated into the Lyapunov function for stability analysis and used as a proxy of the value function for policy iteration; hence, stability is inherently ensured for the learned policy. Finally, we demonstrate via a numerical example that the proposed algorithm learns a near-optimal routing policy with an acceptable optimality gap but significantly higher computational efficiency compared with a standard neural network-based algorithm.
Keywords: dynamic routing, queuing networks, reinforcement learning, stability

1 Introduction
Dynamic routing is a control scheme with extensive applications in network systems including communications, manufacturing, and transportation [1, 2, 3, 4]. A primary objective of dynamic routing is to ensure stability, i.e., the boundedness of the long-time average of the traffic state. The typical tool for this class of problems is the Lyapunov function method [5]. On top of this, a secondary objective is to minimize the system time experienced by each job. Queuing theorists have made efforts to characterize the limiting distribution of the traffic state, but such results are usually complicated and non-scalable [6, 7, 8]. Recently, learning-based approaches have been proposed for dynamic routing. However, such approaches are usually purely numerical, without explicit theoretical guarantees on traffic stability [9].
Motivated by the above challenge, we consider a dynamic routing scheme for single-origin-single-destination queuing networks that integrates Lyapunov function methods [10] and approximate dynamic programming [11]. We use a class of piecewise-linear (PL) functions that are increasing in the traffic state to measure the travel cost of a path. An incoming job is routed to the path with the smallest cost, so the routing scheme can be viewed as a generalized shortest-path (GSP) strategy. We use the PL functions to construct a Lyapunov function for stability analysis, and we show that the resultant routing scheme is throughput-optimal. We also use the PL functions as proxies for action-value functions to obtain a policy iteration algorithm. This algorithm computes a suboptimal routing scheme that is guaranteed to be stabilizing. We also show that our scheme gives comparable performance (i.e., average system time) to neural network-based methods with a significant reduction in computational workload. This paper focuses on the bridge network in Fig. 1,

but our modeling, analysis, and computation approaches also provide hints for general networks.
Extensive results have been developed for traffic stabilization and throughput maximization [5, 12, 13, 14], which provide a basis for this paper. In terms of travel cost minimization, previous results are typically either restricted to simple scenarios (e.g., parallel queues), based on numerical simulation [15, 16], or based on optimization methods without traffic stability analysis [17, 18]. In particular, reinforcement learning methods have been proposed to compute optimal or suboptimal solutions to dynamic routing problems [19, 20]. Although learning methods usually give excellent performance, they focus on algorithm design and do not emphasize theoretical guarantees for stability or optimality. In addition, generic learning methods may be computationally inefficient for dynamic routing. Liu et al. proposed a mixed scheme that uses learning over a bounded set of states and a known stabilizing policy for the other states [21]; this integration of learning and stability guarantees is aligned with the idea of this paper, although we perform the integration in a different manner.
We formulate our problem on the single-origin-single-destination queuing network in Fig. 1. Jobs arrive at the source node according to a Poisson process. Each server has exponentially distributed service times. When a job arrives, the system operator assigns a path to the job, which is not changed afterwards. Hence, every job belongs to a class that corresponds to its assigned path. The key element of our routing scheme is a set of PL functions that approximate the expected travel times on each path. Such PL functions were introduced by Down and Meyn [2] to show positive Harris recurrence of the traffic state. Our previous work proposed an explicit construction of the parameters of the PL functions according to the network topology, without solving model-data-dependent inequalities [22]. In this paper, we further utilize the PL functions to bound the mean of the traffic state and to obtain our GSP routing policy.
We show that our GSP policy maximizes network throughput in the sense that it is stabilizing as long as the demand at the source is less than the min-cut capacity of the network (Theorem 1). The proof utilizes a critical insight about the PL functions: each linear regime corresponds to a particular configuration of bottlenecks. We construct a piecewise-quadratic Lyapunov function using the PL functions and show that the mean drift of the Lyapunov function is negative in heavy-traffic regimes. The drift condition also leads to a set of constraints on the PL parameters.
We then develop a policy iteration algorithm that learns the PL parameters (and thus the GSP policy). The policy iteration algorithm uses the PL functions as a proxy (as opposed to an immediate approximation) of the action-value functions; this connects the learning-based routing decisions with the above-mentioned theoretical properties. In each iteration, the current policy is evaluated using the PL function after a Monte-Carlo episode and improved by greedifying over the PL functions; the evaluation is done by an algorithm specifically designed for the PL functions. We compare our policy iteration algorithm with a standard neural-network dynamic programming algorithm. The implementation results indicate that our approach can attain a 1.9%-6.7% optimality gap with respect to the neural-network method with a 97.8%-99.8% reduction in training time.
In summary, our contributions are:
1. A simple, insightful, and easy-to-implement dynamic routing scheme, i.e., the GSP policy.
2. Theoretical guarantee on traffic stabilization and throughput maximization of the GSP policy over the bridge network.
3. An efficient policy iteration algorithm for the GSP policy that attains comparable control performance but significantly reduces training time compared with neural network-based methods.
2 Modeling and formulation
Consider the single-origin-single-destination (SOSD) network of queuing servers with infinite buffer sizes in Fig. 1. Let $\mathcal{S}$ denote the set of servers. Each server $s\in\mathcal{S}$ has an exponential service rate $\mu_s$ and holds $x_s(t)$ jobs at time $t\ge 0$. Jobs arrive at the origin according to a Poisson process of rate $\lambda$. There are three paths from the origin to the destination, which we collect in the set $\mathcal{P}$. We write $s\in p$ if server $s$ is on path $p$.
When a job arrives, it will go on one of the three paths according to a routing policy. The path of the job cannot be changed after it is decided at the origin. Thus, we divide the jobs into three classes according to their assigned paths. We use $x_{s,p}(t)$ to denote the number of jobs queuing at server $s$ and going on path $p$ at time $t$. For each server $s$, we have $x_s(t)=\sum_{p\ni s}x_{s,p}(t)$. Thus we can define the state of the network as the vector $x(t)=[x_{s,p}(t)]_{s\in p,\,p\in\mathcal{P}}$, and the state space $\mathcal{X}$ is the set of all such non-negative integer vectors.
We consider two sets of actions, viz. routing and scheduling. Routing refers to determining the path of each job upon its arrival, which occurs only at the origin. The routing action can be formulated as a vector indicating the selected path.
Scheduling refers to selecting a job from the queue to serve, which occurs only at Servers 1 and 5; preemptive priority is assumed at both servers. The scheduling action can also be formulated as a vector, whose dimension at each of these servers equals the number of paths through that server.
This paper focuses on a particular class of control policies, which we call the generalized shortest-path (GSP) policy. The policy is based on a piecewise-linear (PL) function of the traffic state for each path, whose shape is controlled by a design parameter. We then define the notion of bottlenecks as follows:
Definition 1 (Bottleneck)
Given a state, a server is a bottleneck of a path if
1. the PL cost of that path is strictly increasing in the server's queue length, in the sense of the right derivative, and
2. the server either has no downstream server on the path, or its immediately downstream server does not satisfy condition 1.
Intuitively, a bottleneck is a server that immediately contributes to the PL cost of its path, while the immediately downstream server, if it exists, does not. A combination of bottlenecks assigns one bottleneck to each path. The set of possible combinations has 12 elements, each corresponding to a subset of the state space over which the PL costs of all paths are linear.
The GSP policy essentially sends an incoming job to the path with the smallest PL cost. However, the PL functions only capture the traffic state and do not account for the service rates, especially the service rate of the current bottleneck. To account for this, we introduce a second design parameter that allows the GSP policy to prioritize the path with a higher bottleneck service rate. Specifically, for a given combination of bottlenecks, we first sort the paths according to their bottleneck service rates; the weight of each path is then defined as a function of its position in this ordering and of the second design parameter, as given in (1).
Now we are ready to formulate the GSP policy, with a slight abuse of notation, as follows: when there are no ties, an incoming job is routed to the path with the smallest weighted PL cost; ties are broken randomly.
The above joint routing-scheduling policy is called GSP because the PL function is a generalized measurement of the "temporal length" of each path. Under this policy, a server prioritizes the jobs of the class whose path it is the bottleneck of; if a server is the bottleneck of multiple paths, the GSP policy further prioritizes the jobs from the path with the larger PL cost.
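For intuition, the following is a minimal Python sketch of the routing part of a GSP-type rule: route the arriving job to the path that minimizes a weighted PL cost, with random tie-breaking. The function `pl_cost` and the dictionary `weights` are hypothetical stand-ins for the PL functions and the weights in (1), not the exact forms used in the paper.

```python
import random

def gsp_route(x, paths, pl_cost, weights):
    """Route an arriving job to the path with the smallest weighted PL cost.

    x:       current traffic state (e.g., per-server, per-class queue lengths)
    paths:   list of candidate paths (e.g., tuples of server indices)
    pl_cost: hypothetical function pl_cost(x, p) -> PL cost of path p at state x
    weights: hypothetical dict of path weights, in the spirit of (1)
    """
    costs = {p: weights[p] * pl_cost(x, p) for p in paths}
    best = min(costs.values())
    # Ties are broken randomly, as in the GSP policy.
    candidates = [p for p, c in costs.items() if c == best]
    return random.choice(candidates)
```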
We say that the traffic in the network is stable if there exists a constant $M<\infty$ such that, for any initial condition, the long-time average of the expected total number of jobs in the network is no greater than $M$. We say the network is stabilizable if the arrival rate $\lambda$ is strictly less than the min-cut capacity of the network; this condition ensures the existence of a stabilizing Bernoulli routing policy.
3 Stability guarantee for GSP policy
In this section, we derive the stability condition for the bridge network under the GSP policy.
To state the main result, consider the set of cuts of the network and define a constant from the cut capacities and the arrival rate; one can show that this constant is positive if the network is stabilizable. For each server, let its depth be the number of links on the longest path from the origin to that server. Using these quantities, together with the sign and positive-part functions of the rate differences, two further constants are defined; they enter the parameter condition of the main result below.
Then, we can state the main result as follows:
Theorem 1
The bridge network is stable under the GSP policy if the network is stabilizable and the two design parameters satisfy condition (2), which constrains them in terms of the constants defined above.
The above result includes two key points. First, the GSP policy is stabilizing if the queuing network is stabilizable; i.e., the GSP policy maximizes the throughput of the network. Second, constraints on the parameters for the GSP policy are given, which determine the feasible set for the subsequent learning-based optimization (Section 4).
The constants defined above capture the relationships between the service rates and the arrival rate, while the two design parameters characterize the GSP policy's "preference" among paths. As the network approaches saturation, each server and each path is "equally emphasized" by the GSP policy. When the path with the stronger service ability has its bottleneck closer to the origin, the degree of "preference" should decrease and the constraints on the design parameters become stricter: even though the other paths have weaker service ability, their bottlenecks are closer to the destination, so jobs on those paths reach the destination more quickly once they pass the bottleneck.
The proof of Theorem 1 uses a piecewise-quadratic Lyapunov function $V(x)$, constructed from the PL functions and given in (3).
The key technique is to utilize the connection between the locations of the bottlenecks and the GSP policy to argue that the mean drift of the Lyapunov function is negative. In particular, we show that there exist constants $c>0$ and $d<\infty$ such that for all $x\in\mathcal{X}$,
\[
\mathcal{L}V(x)\;\le\;-c\,\lVert x\rVert_{1}+d, \tag{4}
\]
where $\mathcal{L}$ is the infinitesimal generator of the network model under the GSP policy. According to the Foster-Lyapunov criterion [10], (4) implies the boundedness of the long-time average of the 1-norm of the traffic state and thus the stability of the network.
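For reference, the Foster-Lyapunov implication used here can be stated as follows (a standard form of the criterion in [10], written in the notation assumed above):
\[
\mathcal{L}V(x)\;\le\;-c\,\lVert x\rVert_{1}+d\quad\forall x\in\mathcal{X}
\qquad\Longrightarrow\qquad
\limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathbb{E}\big[\lVert X(t)\rVert_{1}\big]\,dt\;\le\;\frac{d}{c},
\]
so the drift condition (4) directly yields a bound on the long-time average of the total queue length.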
Proof of Theorem 1. First, we consider the case where some path is empty, i.e., all servers on that path hold no jobs. According to the GSP policy, incoming jobs are then allocated to the empty path, so arrivals make only a finite positive contribution to the drift, while the non-empty paths keep serving jobs and make a negative contribution to the drift. Thus, the Lyapunov drift can only be negative or bounded by a finite positive constant when there are empty paths in the network; the remaining empty-path configurations are covered analogously.
The rest of this proof is devoted to the case where all paths are non-empty. Specifically, we consider five qualitatively different cases:
Case 1.
In this case, the PL cost of each path takes a single linear form, and we verify (4) for each possible routing decision:
i) the arriving job is routed to the first path, and the drift can be bounded directly;
ii) similarly, the job goes to path (1,3,5) in the corresponding regime, and the drift can be bounded;
iii) the job is routed to the remaining path, and the same bound applies.
Since the resulting coefficient of the traffic state is negative, the drift satisfies (4) in each sub-case. When there are ties, the cases can be proved analogously, since a job can only take a single path.
Case 2.
In this case, the largest possible coefficient of the traffic state in the drift occurs for a particular routing and scheduling decision. Since the two relevant servers belong to the same cut, the corresponding coefficient is negative, which makes (4) hold. The proof for the other forms of the PL costs is analogous to Case 1.
Case 3.
We only show one sub-case; the others can be proved analogously. For the various possible routing decisions, the drift takes a common form. According to the GSP policy, the path weights satisfy a scaled ratio, so the largest possible coefficient of the traffic state occurs for a specific decision. Evaluating the drift at this decision and using the parameter condition (2), the coefficient is negative, and (4) follows. The remaining sub-case can be proved analogously.
Case 4.
When the largest possible coefficient of the traffic state occurs, the drift can be evaluated directly. Using the parameter condition (2), the coefficient is negative, so (4) holds. The cases with other forms of the PL costs can be proved analogously.
Case 5.
When the largest possible coefficient of the traffic state occurs, the drift can again be bounded; under the parameter condition (2), the coefficient is negative, so (4) holds. The other forms of the PL costs in this case can be proved analogously, and the proof is the same for the remaining bottleneck combinations.
In conclusion, we show that the mean drift always satisfies (4), which ensures the stability of the network.
4 Iterative computation of GSP parameters
In this section, we develop a policy iteration (PI) algorithm that computes the parameters of the GSP policy. In particular, Theorem 1 gives a set of parameter values that ensure stability; the goal of the PI algorithm is to search this set for a specific combination that attains near-optimal traffic efficiency. In addition, the PI algorithm does not require a priori knowledge of the arrival and service rates. We also compare the performance of the GSP PI algorithm with three benchmark policies, viz. neural network (NN)-based PI, simple shortest path, and optimized Bernoulli.
We formulate the joint routing-scheduling problem as a Markov decision process (MDP) over the state space defined in Section 2. Since routing and scheduling decisions are made only when an arrival or a departure occurs (i.e., at transition epochs [23, p. 72]), the problem can be formulated as a discrete-time (DT) MDP. We denote the state and action of the DT MDP by $x_k$ and $a_k$, respectively; with a slight abuse of notation, $x_k$ is the continuous-time state observed at the $k$-th transition epoch of the continuous-time process. Similarly, the action $a_k$ collects the routing and scheduling decisions at that epoch, which induce the route-specific arrival and service rates.
As indicated in Section 2, both the routing and the scheduling decisions can be parameterized via the two design variables; by Theorem 1, these variables should satisfy (2). Thus, the action is given by the GSP policy induced by the current parameter values.
The transition probabilities of the DT process can be derived from the network model defined in Section 2 in a straightforward manner. The one-step cost of the MDP is the cost accrued between consecutive transition epochs, and the total cost over one episode is the sum of the one-step costs over the episode, where the length of an episode is its number of transition epochs.
4.1 GSP PI algorithm
A closed-form solution for the action-value function is not easy to obtain. Hence, we use the PL function defined by (3) as a proxy for it. A key characteristic of our formulation is that the same pair of design parameters parametrizes both the policy and the PL functions. Although this may lead to a gap with respect to the ground-truth optimal solution, it significantly simplifies the computation and, importantly, guarantees traffic stability.
The GSP PI algorithm has two tasks in parallel: calibration of the model data (the arrival and service rates) and optimization of the control parameters. The latter depends on the former, since the feasible set in (2) depends on the model data. Let the rate estimates, the corresponding feasible set, and the control parameters after $n$ iterations (i.e., episodes) be given. Then, the specific steps of the GSP PI algorithm can be summarized as follows (a schematic sketch of this loop is given after the list):
1. Initialize the arrival-rate and service-rate estimates and the control parameters.
2. Policy evaluation: simulate a Monte-Carlo episode of the network model with the current policy; update the rate estimates and then the control parameters (see below for how the updates are done).
3. Policy improvement: use the updated parameters to obtain a new policy.
4. Terminate when the changes in the estimates and in the parameters are smaller than some thresholds.
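The following Python sketch summarizes this loop under our notational assumptions; `simulate_episode` and `fit_parameters` are hypothetical placeholders for the simulation and the least-squares updates of Sections 4.1.1-4.1.3, and the initial values mirror those used in Section 4.3.

```python
import numpy as np

def gsp_policy_iteration(simulate_episode, fit_parameters, n_iters=20, tol=1e-3):
    """Schematic GSP policy iteration; all callables are hypothetical placeholders."""
    mu_hat = np.full(5, 0.5)   # initial service-rate estimates, one per server
    lam_hat = 0.1              # initial arrival-rate estimate
    alpha, beta = 1.0, 0.0     # initial GSP design parameters (hypothetical names)

    for _ in range(n_iters):
        # Policy evaluation: one Monte-Carlo episode under the current GSP policy.
        # samples: (state, path, system time) tuples; inter_arrivals / services: observed times.
        samples, inter_arrivals, services = simulate_episode(alpha, beta)

        # Calibrate model data from the realized inter-arrival and service times.
        lam_new = 1.0 / np.mean(inter_arrivals)
        mu_new = np.array([1.0 / np.mean(s) for s in services])

        # Policy improvement: refit the design parameters by least squares (7),
        # restricted to the feasible set implied by condition (2).
        alpha_new, beta_new = fit_parameters(samples, lam_new, mu_new)

        converged = (abs(lam_new - lam_hat) < tol
                     and np.max(np.abs(mu_new - mu_hat)) < tol
                     and abs(alpha_new - alpha) < tol
                     and abs(beta_new - beta) < tol)
        lam_hat, mu_hat, alpha, beta = lam_new, mu_new, alpha_new, beta_new
        if converged:
            break
    return alpha, beta, lam_hat, mu_hat
```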
The rest of this subsection is devoted to the update of the design parameters. Suppose that a job arrives at epoch $k$ and takes the path $p_k$ selected by the current policy. Then, we track the system time $T_k$ of this particular job (this notation causes no confusion, since at most one Poisson arrival can occur at an epoch), and use the squared error between $T_k$ and the PL cost of the selected path to update the parameters. Note that $T_k$ records the time that the job spends in the network in the continuous-time setting; we set $T_k=0$ if no arrival occurs at epoch $k$, and let $\delta_k\in\{0,1\}$ indicate whether there is an arrival at epoch $k$. The parameters are obtained by minimizing the sum-of-squares error
\[
\sum_{k}\delta_k\,\big(T_k-\hat V_{p_k}(x_k)\big)^2, \tag{7}
\]
where $\hat V_{p_k}(x_k)$ denotes the PL cost of the path $p_k$ selected at epoch $k$, evaluated at the state $x_k$.
Note that the PL function actually approximates the job-specific system time $T_k$ rather than the total system time accumulated over the episode; the two are related because the total time spent by all jobs in the network equals the sum of the individual system times.
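To make this relationship concrete, it can be written as the standard conservation identity below (our notation, with boundary effects at the start and end of the episode ignored):
\[
\int_{0}^{T_{\mathrm{ep}}}\lVert X(t)\rVert_{1}\,dt \;=\; \sum_{k:\,\delta_k=1} T_k,
\]
i.e., the time integral of the number of jobs in the network over an episode of duration $T_{\mathrm{ep}}$ equals the sum of the system times of the jobs admitted during that episode.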
4.1.1 Estimation of the arrival and service rates
In the $n$-th iteration, we run a Monte-Carlo simulation of the network model. The simulation has a discrete time step that is sufficiently small to approximate the continuous-time process; i.e., we use Bernoulli processes with a sufficiently small time increment to approximate the Poisson processes. We track every inter-arrival time at the origin and compute the average to obtain the arrival-rate estimate, and we track every service time at each server and compute the average to obtain the service-rate estimates.
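As an illustration, the following minimal Python snippet shows the Bernoulli approximation of the arrival process and the resulting rate estimate; the step size matches the 0.1 sec used in Section 4.3, while the arrival rate shown is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1          # discrete time step (sec)
lam_true = 0.8    # illustrative arrival rate, not the paper's value
horizon = 10_000  # number of simulation steps

# Bernoulli approximation of a Poisson process: at each step an arrival
# occurs with probability lam_true * dt.
arrivals = rng.random(horizon) < lam_true * dt
arrival_times = np.flatnonzero(arrivals) * dt

# Estimate the rate as the reciprocal of the mean inter-arrival time.
inter_arrival = np.diff(arrival_times)
lam_hat = 1.0 / inter_arrival.mean()
print(f"estimated arrival rate: {lam_hat:.3f}")
```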
4.1.2 Update of the first design parameter
In the $n$-th iteration, we record the pair of state and realized system time whenever an arrival occurs, and collect these samples in a table. We determine the first design parameter by a method similar to the least-squares partition algorithm in [24]. We first partition the samples according to the boundaries of the PL functions under the current parameter value. Then we use the L-BFGS-B algorithm [25] to calculate a better value that minimizes the sum-of-squares error defined in (7), with the second design parameter fixed at its previous value. This leads to a new partition, and we repeat the procedure until the partition stabilizes or a maximum number of rounds is reached; the last value is taken as the parameter estimate of this episode.
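The following Python sketch illustrates this alternating procedure; `pl_cost` and `partition` are hypothetical placeholders for the PL cost and the regime-assignment step, while the call to SciPy's L-BFGS-B solver is the actual routine cited above.

```python
import numpy as np
from scipy.optimize import minimize

def fit_first_parameter(samples, pl_cost, partition, alpha0, beta_fixed, max_rounds=10):
    """Least-squares-partition-style fit of the first design parameter (sketch).

    samples:   list of (state, path, system_time) tuples from one episode
    pl_cost:   hypothetical function pl_cost(state, path, alpha, beta) -> float
    partition: hypothetical function mapping (samples, alpha) to regime labels
    """
    alpha = alpha0
    labels = partition(samples, alpha)
    for _ in range(max_rounds):
        # Sum-of-squares error (7): squared gap between realized system times
        # and the PL cost of the chosen path.
        def sse(a):
            return sum((t - pl_cost(x, p, a[0], beta_fixed)) ** 2
                       for (x, p, t) in samples)

        res = minimize(sse, x0=[alpha], method="L-BFGS-B")
        alpha = float(res.x[0])

        new_labels = partition(samples, alpha)
        if np.array_equal(new_labels, labels):  # partition has stabilized
            break
        labels = new_labels
    return alpha
```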
4.1.3 Update of the second design parameter
After the first parameter is obtained, we determine, for each sample in the table, the combination of bottlenecks at the recorded state based on the updated PL functions; the corresponding path weights are then given by (1). Then, we again use the L-BFGS-B algorithm to calculate the second parameter that minimizes the sum-of-squares error defined in (7), with the first parameter fixed at its new value. When this update is completed, the parameter pair of this iteration is set accordingly. The $n$-th iteration of these updates is summarized in Algorithm 1.
4.2 Benchmark policies and algorithms
To evaluate the performance of the GSP PI algorithm, we consider three benchmarks:
4.2.1 Neural network (NN) PI
We consider the action-value function and use an NN to approximate it. After the first Monte-Carlo episode, we obtain a sequence of state-action-reward samples and compute discounted returns, where a discount factor weights future rewards; the value estimates are then updated from iteration to iteration. The current policy is evaluated by a designed NN with two convolutional layers, two max-pooling layers, and two fully connected layers. We adopt the rectified linear unit (ReLU) activation function at each layer to avoid gradient explosion and vanishing. The weights of the NN are updated by the adaptive moment estimation (Adam) algorithm [26] to narrow the gap between the predicted and computed state-action values. Finally, we use the NN approximation of the action-value function for policy improvement via the $\epsilon$-greedy algorithm. Since an exact optimal policy of the original MDP is not readily available, the policy computed by NN PI is used as an approximate optimal policy.
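For concreteness, the following PyTorch sketch shows an action-value network with the listed layer types (two convolutional, two max-pooling, and two fully connected layers with ReLU activations) trained with Adam; the input dimension, layer widths, and dummy data are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Illustrative Q-network: 2 conv layers, 2 max-pool layers, 2 FC layers, ReLU."""
    def __init__(self, state_dim=8, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, state_dim)).numel()
        self.head = nn.Sequential(
            nn.Linear(n_flat, 32), nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, x):                    # x: (batch, state_dim)
        z = self.features(x.unsqueeze(1))    # add a channel dimension
        return self.head(z.flatten(1))       # Q-value per routing action

# Training step sketch with Adam, as in the benchmark description.
net = QNetwork()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
states = torch.rand(64, 8)                  # dummy batch of traffic states
targets = torch.rand(64, 3)                 # dummy Monte-Carlo return targets
loss = nn.functional.mse_loss(net(states), targets)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```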
4.2.2 Simple shortest-path (SSP) policy
For routing and scheduling decisions under the SSP policy, we simply calculate the cost of each path from its current traffic state and select the path with the smallest value.
4.2.3 Optimal Bernoulli (OB) policy
A Bernoulli policy routes each incoming job to a path with a time-invariant probability, chosen such that the resulting flows can stabilize the network. The routing probabilities are then optimized using a simple numerical search scheme.
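One possible realization of such a numerical search is a grid search over the routing probabilities, sketched below; `simulate_avg_system_time` is a hypothetical simulator handle and the grid resolution is illustrative.

```python
import itertools
import numpy as np

def optimize_bernoulli(simulate_avg_system_time, step=0.05):
    """Grid search over routing probabilities (p1, p2, p3) on the simplex."""
    best_p, best_cost = None, np.inf
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for p1, p2 in itertools.product(grid, grid):
        p3 = 1.0 - p1 - p2
        if p3 < -1e-9:
            continue  # outside the probability simplex
        cost = simulate_avg_system_time((p1, p2, max(p3, 0.0)))
        if cost < best_cost:
            best_p, best_cost = (p1, p2, max(p3, 0.0)), cost
    return best_p, best_cost
```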
4.3 Evaluation and comparison
We compare our GSP policy with the three benchmarks according to the following performance metrics:
1) Computational efficiency: training time and number of iterations (if applicable).
2) Implementation efficiency: the average system time experienced by all jobs that go through the network.
Consider the network in Fig. 1 with fixed arrival and service rates, all in units of sec$^{-1}$; this set of rates satisfies the stabilizability condition. For GSP PI, we initialize the service-rate estimates with 0.5 and the arrival-rate estimate with 0.1. The OB policy is obtained by the numerical search described above. For simulation, a discrete time step of 0.1 sec is used. All experiments were implemented in Google Colab [27], using an Intel(R) Xeon(R) CPU with 12.7 GB of memory; NN PI was trained on an NVIDIA GeForce RTX 2080 Ti with 40 GB of memory.
The computational efficiency of our GSP policy is significantly better than that of the general NN algorithm, with a 97.8%-99.8% reduction in training time, as demonstrated in Fig. 2(a).


Our GSP PI algorithm converges within a few seconds, while NN PI takes more than twelve hours to converge. This is expected, since the PL functions have simple structures, while the NN model requires sophisticated training.
Table 1 lists the normalized average system times (NAST) under the various methods; the results are generated from test simulations of fixed duration. Although NN PI performs better after long training, as expected, GSP PI performs better within just a few iterations, as demonstrated in Fig. 2(b), and attains a 1.9%-6.7% optimality gap relative to NN PI.
Table 1: Normalized average system time (NAST) under each algorithm.
Algorithm | NAST
Neural network (NN) | 1.00
Generalized shortest-path (GSP) | 1.03
Simple shortest-path (SSP) | 1.12
Optimal Bernoulli (OB) | 1.25
Jobs spend more time traversing the queuing network under the SSP and OB policies, as reflected by their higher normalized average system times. Although the implementation efficiency of GSP PI is slightly worse than that of NN PI, GSP PI gives the best trade-off between computational efficiency and implementation efficiency: the average system times of the NN and GSP methods differ by about 3%, an adequate optimality gap.
5 Concluding remarks
In this article, we focus on a class of control policies for a single-origin-single-destination network, which we call the generalized shortest-path policy. We design a set of piecewise-linear functions as a proxy for the travel cost on each path of the network. We use the piecewise-linear functions to construct a Lyapunov function and derive a theoretical guarantee on the stability of the traffic state. We also develop a policy iteration algorithm to learn the parameters of the proposed policy. We then implement the algorithm and compare it with three benchmarks, including a standard neural network-based algorithm. The experimental results show that our policy and PI algorithm perform efficiently and reliably while significantly reducing the computational cost. Future work may include (i) extension of the theoretical results and the PI algorithm to general single-origin-single-destination acyclic networks and (ii) refinement of the PI algorithm toward more efficient temporal-difference algorithms.
References
- [1] Debasis Mitra and Judith B Seery. Comparative evaluations of randomized and dynamic routing strategies for circuit-switched networks. IEEE Transactions on Communications, 39(1):102–116, 1991.
- [2] D Down and Sean P Meyn. Piecewise linear test functions for stability and instability of queueing networks. Queueing Systems, 27(3-4):205–226, 1997.
- [3] Murat Alanyali and Bruce Hajek. Analysis of simple algorithms for dynamic load balancing. Mathematics of Operations Research, 22(4):840–871, 1997.
- [4] Li Jin and Saurabh Amin. Stability of fluid queueing systems with parallel servers and stochastic capacities. IEEE Transactions on Automatic Control, 63(11):3948–3955, 2018.
- [5] PR Kumar and Sean P Meyn. Stability of queueing networks and scheduling policies. IEEE Transactions on Automatic Control, 40(2):251–260, 1995.
- [6] G. Foschini and Jack Salz. A basic dynamic routing problem and diffusion. IEEE Transactions on Communications, 26(3):320–327, 1978.
- [7] Anthony Ephremides, P Varaiya, and Jean Walrand. A simple dynamic routing problem. IEEE Transactions on Automatic Control, 25(4):690–693, 1980.
- [8] L Tassiulas and A Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12):1936–1948, 1992.
- [9] Zubair Md Fadlullah, Fengxiao Tang, Bomin Mao, Nei Kato, Osamu Akashi, Takeru Inoue, and Kimihiro Mizutani. State-of-the-art deep learning: Evolving machine intelligence toward tomorrow’s intelligent network traffic control systems. IEEE Communications Surveys & Tutorials, 19(4):2432–2455, 2017.
- [10] Sean P. Meyn and R. L. Tweedie. Stability of Markovian processes III: Foster-Lyapunov criteria for continuous-time processes. Advances in Applied Probability, 25(3):518–548, 1993.
- [11] Dimitri Bertsekas. Dynamic Programming and Optimal Control: Volume I, volume 1. Athena Scientific, 2012.
- [12] Jim G Dai and Sean P Meyn. Stability and convergence of moments for multiclass queueing networks via fluid limit models. IEEE Transactions on Automatic Control, 40(11):1889–1904, 1995.
- [13] Christopher V Hollot, Vishal Misra, Donald Towsley, and Weibo Gong. Analysis and design of controllers for AQM routers supporting TCP flows. IEEE Transactions on Automatic Control, 47(6):945–959, 2002.
- [14] Leonidas Georgiadis, Michael J Neely, Leandros Tassiulas, et al. Resource allocation and cross-layer control in wireless networks. Foundations and Trends® in Networking, 1(1):1–144, 2006.
- [15] Abhishek Sinha and Eytan Modiano. Optimal control for generalized network-flow problems. IEEE/ACM Transactions on Networking, 26(1):506–519, 2017.
- [16] Jianan Zhang, Abhishek Sinha, Jaime Llorca, Antonia M Tulino, and Eytan Modiano. Optimal control of distributed computing networks with mixed-cast traffic flows. IEEE/ACM Transactions on Networking, 29(4):1760–1773, 2021.
- [17] Keith W Ross. Optimal dynamic routing in Markov queueing networks. Automatica, 22(3):367–370, 1986.
- [18] Quan-Lin Li, Jing-Yu Ma, Rui-Na Fan, and Li Xia. An overview for markov decision processes in queues and networks. Stochastic Models in Reliability, Network Security and System Safety: Essays Dedicated to Professor Jinhua Cao on the Occasion of His 80th Birthday, pages 44–71, 2019.
- [19] Zhiyuan Xu, Jian Tang, Jingsong Meng, Weiyi Zhang, Yanzhi Wang, Chi Harold Liu, and Dejun Yang. Experience-driven networking: A deep reinforcement learning based approach. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications, pages 1871–1879. IEEE, 2018.
- [20] Guillermo Bernárdez, José Suárez-Varela, Albert López, Bo Wu, Shihan Xiao, Xiangle Cheng, Pere Barlet-Ros, and Albert Cabellos-Aparicio. Is machine learning ready for traffic engineering optimization? In 2021 IEEE 29th International Conference on Network Protocols (ICNP), pages 1–11. IEEE, 2021.
- [21] Bai Liu, Qiaomin Xie, and Eytan Modiano. RL-QN: A reinforcement learning framework for optimal control of queueing systems. ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 7(1):1–35, 2022.
- [22] Qian Xie and Li Jin. Stabilizing queuing networks with model data-independent control. IEEE Transactions on Control of Network Systems, 9(3):1317–1326, 2022.
- [23] Robert G Gallager. Stochastic Processes: Theory for Applications. Cambridge University Press, 2013.
- [24] Alessandro Magnani and Stephen P. Boyd. Convex piecewise-linear fitting. Optimization and Engineering, 10:1–17, 2009.
- [25] Ciyou Zhu, Richard H Byrd, Peihuang Lu, and Jorge Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS), 23(4):550–560, 1997.
- [26] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- [27] Google Colaboratory. https://colab.research.google.com/. Accessed: 2023-03-28.