
Optimal Control Operator Perspective and a Neural Adaptive Spectral Method

Mingquan Feng1, Zhijie Chen2, Yixin Huang1, Yizhou Liu1, Junchi Yan1,3
Corresponding author: Junchi Yan. This work was in part supported by the NSFC (62222607, 623B1009).
Abstract

Optimal control problems (OCPs) involve finding a control function for a dynamical system such that a cost functional is optimized; they are central to physical systems in both academia and industry. In this paper, we propose a novel instance-solution control operator perspective, which solves OCPs in a one-shot manner without direct dependence on the explicit expression of the dynamics or on iterative optimization processes. The control operator is implemented by a new neural operator architecture named Neural Adaptive Spectral Method (NASM), a generalization of classical spectral methods. We theoretically validate the perspective and architecture by presenting approximation error bounds of NASM for the control operator. Experiments on synthetic environments and a real-world dataset verify the effectiveness and efficiency of our approach, including a substantial speedup in running time and high-quality in- and out-of-distribution generalization.

Code: https://github.com/FengMingquan-sjtu/NASM.git

1 Introduction

Although control theory has been rooted in a model-based design-and-solve paradigm, the demands of model reusability and the opacity of complex dynamical systems call for a rapprochement of modern control theory, machine learning, and optimization. Recent years have witnessed an emerging trend of control theory with successful applications to engineering and scientific research, such as robotics (Krimsky and Collins 2020), aerospace technology (He et al. 2019), and economics and management (Lapin, Zhang, and Lapin 2019).

We consider the well-established formulation of optimal control (Kirk 2004) over a finite time horizon $T=[t_{0},t_{f}]$. Denote by $X$ and $U$ two vector-valued function sets, representing state functions and control functions respectively. Functions in $X$ (resp. $U$) are defined over $T$ and take values in $\mathbb{R}^{d_{x}}$ (resp. $\mathbb{R}^{d_{u}}$). State functions $\boldsymbol{x}\in X$ and control functions $\boldsymbol{u}\in U$ are governed by a differential equation. The optimal control problem (OCP) seeks a control function that minimizes the cost functional $f$ (Kirk 2004; Lewis, Vrabie, and Syrmos 2012):

$$\begin{aligned}
\min_{\boldsymbol{u}\in U}\quad & f(\boldsymbol{x},\boldsymbol{u})=\int_{t_{0}}^{t_{f}}p(\boldsymbol{x}(t),\boldsymbol{u}(t))\,\mathrm{d}t+h(\boldsymbol{x}(t_{f})) && \text{(1a)}\\
\text{s.t.}\quad & \dot{\boldsymbol{x}}(t)=\boldsymbol{d}(\boldsymbol{x}(t),\boldsymbol{u}(t)), && \text{(1b)}\\
& \boldsymbol{x}(t_{0})=\boldsymbol{x}_{0}, && \text{(1c)}
\end{aligned}$$

where $\boldsymbol{d}$ is the dynamics of the differential equation; $p$ evaluates the cost alongside the dynamics, and $h$ evaluates the cost at the terminal state $\boldsymbol{x}(t_{f})$; and $\boldsymbol{x}_{0}$ is the initial state. We restrict our discussion to differential-equation-governed optimal control problems, leaving control problems in stochastic networks (Dai and Gluzman 2022), inventory management (Abdolazimi et al. 2021), etc., outside the scope of this paper. The analytic solution of Eq. 1 is usually unavailable, especially for complex dynamical systems. Thus, there has been a wealth of research toward accurate, efficient, and scalable numerical OCP solvers (Rao 2009) and neural network-based solvers (Kiumarsi et al. 2017). However, both classic and modern OCP solvers face challenges, especially in the big-data era, as briefly discussed below.
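Before turning to these challenges, the following minimal sketch makes the ingredients of Eq. 1 concrete for a toy double-integrator system; the dynamics, costs, and names here are illustrative assumptions, not one of our benchmark systems.

```python
# A minimal sketch of the OCP ingredients in Eq. 1 for a toy
# double-integrator: state x = [position, velocity], scalar control.
import numpy as np

def dynamics(x, u):
    # d(x(t), u(t)) in Eq. 1b: acceleration equals the control input
    return np.array([x[1], u[0]])

def running_cost(x, u):
    # p(x(t), u(t)) in Eq. 1a: quadratic state and control penalties
    return float(x @ x + 0.1 * u @ u)

def terminal_cost(x_f):
    # h(x(t_f)) in Eq. 1a: penalize distance of the final state from origin
    return float(x_f @ x_f)

def rollout_cost(u_fn, x0, t0=0.0, tf=1.0, n=100):
    # Forward-Euler rollout of Eq. 1b-1c plus a Riemann sum for Eq. 1a.
    ts = np.linspace(t0, tf, n + 1)
    dt = ts[1] - ts[0]
    x, cost = np.asarray(x0, dtype=float), 0.0
    for t in ts[:-1]:
        u = np.atleast_1d(u_fn(t))
        cost += running_cost(x, u) * dt
        x = x + dynamics(x, u) * dt
    return cost + terminal_cost(x)

print(rollout_cost(lambda t: -1.0, x0=[1.0, 0.0]))  # cost of a constant control
```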

1) Opacity of Dynamical Systems. Existing works (Böhme and Frank 2017a; Effati and Pakdaman 2013; Jin et al. 2020) assume the dynamical system is known a priori and exploit its explicit form to ease the optimization. However, real-world dynamical systems can be unknown and hard to model, which raises serious challenges in data collection and system identification (Ghosh, Birrell, and De Angelis 2021), where special care is required for error reduction.

2) Model Reusability. Model reusability is conceptually measured by the capability of utilizing historical data when facing an unprecedented problem instance. Since solving an individual instance of Eq. 1 from scratch is expensive, a reusable model that adapts well to new problems is desirable for practical usage.

3) Running Paradigm. Numerical optimal control solvers traditionally iterate before settling on a control solution, introducing a multiplicative iteration term in the running-time complexity. This explains the high cost of solving even a single OC problem.

4) Control Solution Continuity. Control functions are defined on a continuous domain (typically time), which is intractable for numerical solvers. Resorting to discretization of the input domain hence gives rise to a trade-off between the precision of the control solution and the computational cost, as well as to truncation errors. While a discrete solution only answers point-wise queries, learning a control function that can be queried at arbitrary times is much more valuable.

Figure 1: Phase-2 cost curves of two failed instances of two-phase control (Hwang et al. 2022) on the Pendulum system. The control function gradually moves outside the training distribution of phase 1. As a result, the control function converges w.r.t. the cost predicted by the surrogate model (blue) but diverges w.r.t. the true cost (red).

5) Running Phase. Two-phase models (Chen, Shi, and Zhang 2018; Wang, Bhouri, and Perdikaris 2021; Hwang et al. 2022) can (to some extent) overcome the above issues at the cost of introducing an auxiliary dynamics-inference phase. This thread of work first approximates the state dynamics by a differentiable surrogate model and then, in its second phase, solves an optimization problem for the control variables. However, the two-phase paradigm increases the computational cost and manifests inconsistency between the two phases. A motivating example in Fig. 1 shows how the two-phase paradigm fails: when the domain of phase-2 optimization goes outside the training distribution of phase 1, it might collapse.

Table 1 compares the methods regarding the above aspects. We propose an instance-solution operator perspective for learning to solve OCPs, thereby tackling the issues above. The highlights are:

Table 1: Optimal control approaches. Our NCO combines all the merits: a single-phase, direct-mapping paradigm that relies on no known system dynamics and supports arbitrary input-domain queries.

Methods | Phase | Continuity | Dynamics | Reusability | Paradigm
Direct (Böhme and Frank 2017a) | Single | Discrete | Required | No | Iterative
Two-Phase (Hwang et al. 2022) | Two | Discrete | Dispensable¹ | Partial² | Iterative
PDP (Jin et al. 2020) | Single | Discrete | Required | No | Iterative
DP (Tassa, Mansard, and Todorov 2014) | Single | Discrete | Required | No | Iterative
NCO (ours) | Single | Continuous | Dispensable | Yes | One-pass

¹ If phase 1 uses the PINN loss (Wang, Bhouri, and Perdikaris 2021): required.
² Only phase 1 is reusable.

1) We propose the Neural Control Operator (NCO) method for optimal control, solving OCPs by learning a neural operator that directly maps problem instances to solutions. The system dynamics is implicitly learned during training, relying neither on an explicit form of the system nor on an optimization process at test time. The operator can be reused for similar OCPs from the same distribution without retraining, unlike learning-free solvers. Furthermore, the single-phase direct-mapping paradigm avoids iterative processes, yielding substantial speedups.

2) We instantiate a novel neural operator architecture: NASM (Neural Adaptive Spectral Method) to implement the NCO and derive bounds on its approximation error.

3) Experiments on both synthetic and real systems show that NASM is versatile for various forms of OCPs. It shows over 6Kx speedup (on synthetic environments) over classical direct methods. It also generalizes well on both in- and out-of-distribution OCP instances, outperforming other neural operator architectures in most of the experiments.

Background and Related Works. Most OCPs cannot be solved analytically; thus, numerical methods (Körkel et al. 2004) have been developed. An OCP numerical method has three components: problem formulation, discretization scheme, and solver. The problem formulations fall into three categories: direct methods, indirect methods, and dynamic programming. Each converts OCPs into infinite-dimensional optimizations or differential equations, which are then discretized into finite-dimensional problems using schemes like finite differences, finite elements, or spectral methods. These finite problems are solved by algorithms such as interior-point methods for nonlinear programming and the shooting method for boundary-value problems. These solvers rely on costly iterative algorithms and have limited ability to reuse historical data.

In addition, our work is related to neural operators, which were originally designed for solving parametric PDEs. These neural operators learn a mapping from the parameters of a PDE to its solution. For example, DeepONet (DON) (Lu et al. 2021) consists of two sub-networks: a branch net for input functions (i.e., the PDE parameters) and a trunk net for the querying locations or time indices. The output is obtained by computing the inner product of the outputs of the branch and trunk networks. Another type of neural operator is Fourier-transform based, such as the Fourier Neural Operator (FNO) (Li et al. 2020). FNO learns parametric linear functions in the frequency domain, along with nonlinear functions in the time domain; the conversion between the two domains is realized by the discrete Fourier transform. There are many more architectures, such as the Graph Element Network (GEN) (Alet et al. 2019) and the Spectral Neural Operator (SNO) (Fanaskov and Oseledets 2022), which will be compared in our experiments.

2 Methodology

In this section, we present the instance-solution control operator perspective for solving OCPs. The input of the operator $\mathcal{G}$ is an OCP instance $i$, and the output is the optimal control function $\boldsymbol{u}^{\ast}$, i.e., $\mathcal{G}: I \rightarrow U$. We then propose the Neural Control Operator (NCO), a class of end-to-end OCP neural solvers that learn the underlying infinite-dimensional operator $\mathcal{G}$ via a neural operator. A novel neural operator architecture, the Neural Adaptive Spectral Method (NASM), is implemented for the NCO method.

Instance-Solution Control Operator Perspective of OCP Solvers

This section presents a high-level instance-solution control operator perspective of OCPs. In this perspective, an OCP solver is an operator that maps problem instances (defined by cost functionals, dynamics, and initial conditions) to their solutions, i.e., the optimal controls. Denote the problem instance by $i=(f,\boldsymbol{d},\boldsymbol{x}_{init})\in I$, with its optimal control solution $\boldsymbol{u}^{*}\in U$. The operator is defined as $\mathcal{G}: I \rightarrow U$, a mapping from problem instance to optimal control.

In practice, the non-linear operator $\mathcal{G}$ can hardly have a closed-form expression. Therefore, various numerical solvers have been developed, such as the direct and indirect methods enumerated in the related works. All these numerical solvers can be regarded as approximate operators $\mathcal{N}: I \rightarrow U$ of the exact operator $\mathcal{G}$. To measure the quality of an approximate operator, we define the approximation error as the distance between the cost of its solution and the optimal cost. Let $\mu$ be the probability measure over problem instances, and assume w.l.o.g. that the control dimension $d_{u}=1$ and the cost $f_{i}>0$ (the subscript $i$ indicates that the cost depends on $i$). Then the approximation error is defined as:

$$\widehat{\mathscr{E}}:=\int_{I}\left|\frac{f_{i}\circ\mathcal{N}(i)-f_{i}\circ\mathcal{G}(i)}{f_{i}\circ\mathcal{G}(i)}\right|\,\mathrm{d}\mu(i). \tag{2}$$
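For intuition, $\widehat{\mathscr{E}}$ can be estimated by Monte Carlo sampling, as in the sketch below; `sample_instance`, `solver`, and `oracle` are hypothetical stand-ins for $\mu$, $\mathcal{N}$, and $\mathcal{G}$.

```python
# A Monte Carlo sketch of the approximation error in Eq. 2.
import numpy as np

def approx_error(sample_instance, solver, oracle, cost, n=1000):
    errs = []
    for _ in range(n):
        i = sample_instance()        # draw an instance i ~ mu
        f_sol = cost(i, solver(i))   # f_i of the approximate control N(i)
        f_opt = cost(i, oracle(i))   # f_i of the optimal control G(i), > 0
        errs.append(abs((f_sol - f_opt) / f_opt))
    return float(np.mean(errs))      # empirical estimate of Eq. 2
```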

Classical numerical solvers are generally guaranteed to achieve low approximation error (at intensive computing cost). For efficiency, we propose to approximate the operator $\mathcal{G}$ by neural networks, i.e., the Neural Control Operator (NCO) method. The networks are trained on demonstration data pairs $(i,\boldsymbol{u}^{*})$ produced by numerical solvers, without the need for explicit knowledge of the dynamics. Once trained, they infer the optimal control for any unseen instance with a single forward pass. As empirically shown in our experiments, the efficiency (at inference) of neural operators is consistently better than that of classical solvers. We also provide an approximation error bound for NASM, our specific implementation of NCO.

Before introducing the network architecture, it is necessary to clarify a common component, the Encoder, used in both classical and neural control operators. The encoder converts the infinite-dimensional input $i$ into a finite-dimensional representation $\boldsymbol{e}$. Take the cost functional as an example: if the cost functional $f(x,u)=\int(x-x_{\text{target}})^{2}\,\mathrm{d}t$ is explicitly known, then $f$ may be encoded as its symbolic expression or by the parameter $x_{\text{target}}$ alone. Otherwise, the cost functional can be represented by sampling on grids. After encoding, the vector $\boldsymbol{e}$ is fed into the numerical algorithms or neural networks of succeeding modules. The encoder's design is typically driven by the problem setting rather than by the specific solver or network architecture.
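The two encoding choices just described can be sketched as follows; both functions are illustrative, assuming a known target parameter in the first case and a black-box target trajectory in the second.

```python
# A sketch of two encoder variants for the cost functional.
import numpy as np

def encode_parametric(x_target):
    # If f(x, u) = int (x - x_target)^2 dt is known symbolically, the
    # finite-dimensional encoding e can be the parameter x_target itself.
    return np.atleast_1d(x_target)

def encode_on_grid(x_target_fn, t0=0.0, tf=1.0, n_grid=32):
    # Otherwise, sample the function that parameterizes f (here a target
    # trajectory) on a uniform grid over the horizon.
    ts = np.linspace(t0, tf, n_grid)
    return np.array([x_target_fn(t) for t in ts])
```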

Neural Adaptive Spectral Method

From the operator's perspective, constructing an OCP solver is equivalent to an operator-learning problem. We propose to learn the operator with a neural network in a data-driven manner. We now elaborate on a novel neural operator architecture named the Neural Adaptive Spectral Method (NASM), inspired by the spectral method.

The spectral method assumes that the solution is a linear combination of $p$ basis functions:

$$\mathcal{N}_{\text{spec}}(i)(t)=\sum_{j=1}^{p}c_{j}(i)\,b_{j}(t), \tag{3}$$

where the basis functions $\{b_{j}\}_{j=1}^{p}$ are chosen as orthogonal polynomials, e.g., the Fourier series (trigonometric polynomials) or Chebyshev polynomials. The coefficients $\{c_{j}\}_{j=1}^{p}$ are derived from the instance $i$ by a numerical algorithm. The Spectral Neural Operator (SNO) (Fanaskov and Oseledets 2022) instead obtains the coefficients from a network.
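As a reference point, a minimal sketch of the classical ansatz (Eq. 3) with a truncated Fourier basis on $t\in[0,1]$ is given below; in a classical solver the coefficients come from a numerical algorithm, in SNO from a network.

```python
# A sketch of the spectral ansatz in Eq. 3 with a truncated Fourier basis.
import numpy as np

def fourier_basis(t, p):
    # {b_j}: [1, sin(pi t), cos(pi t), sin(2 pi t), ...], p terms total
    vals, k = [np.ones_like(t)], 1
    while len(vals) < p:
        vals.append(np.sin(k * np.pi * t))
        if len(vals) < p:
            vals.append(np.cos(k * np.pi * t))
        k += 1
    return np.stack(vals)                 # shape (p, len(t))

def n_spec(c, t):
    # Eq. 3: u_hat(t) = sum_j c_j * b_j(t)
    return c @ fourier_basis(t, len(c))

t = np.linspace(0.0, 1.0, 100)
u_hat = n_spec(np.array([0.5, 1.0, -0.3]), t)  # a 3-term approximation
```

NASM generalizes this ansatz by letting both the coefficients and the basis depend on $t$ and $i$, as described next.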

Our NASM model is based on a similar idea but with more flexible and adaptive components than SNO. The first difference is that the summation $\sum$ is extended to an aggregation $\bigoplus$, defined as any mapping $\mathbb{R}^{p}\rightarrow\mathbb{R}$ and implemented by either summation or another network. Secondly, the coefficients $\{c_{j}\}_{j=1}^{p}$ are produced by a Coefficient Net that depends not only on the instance $i$ but also on the time index $t$, inspired by time-frequency analysis (Cohen 1995). This generalizes and refines $\mathcal{N}_{\text{spec}}$ (Eq. 3) for the case where the characteristics of $\mathcal{G}$ are non-static: $\mathcal{N}_{\text{spec}}$ with a Fourier basis can only capture globally periodic functions, while $\mathcal{N}_{\text{NASM}}$ with the same basis can model both periodic and non-periodic functions. Lastly, the basis functions $\{b_{j}\}_{j=1}^{p}$ are adaptive instead of fixed: they are parameterized by $\boldsymbol{\theta}$, which is also inferred from $t,i$ by the neural network.

$$\mathcal{N}_{\text{NASM}}(i)(t)=\bigoplus_{j=1}^{p}c_{j}(t,i)\,b_{j}(t;\boldsymbol{\theta}(t,i)). \tag{4}$$

As a concrete example of adaptive basis functions, the adaptive Fourier series is obtained from the original basis by scaling and shifting according to the parameter $\boldsymbol{\theta}$. We limit the absolute value of the elements of $\boldsymbol{\theta}$ to keep the adaptive ranges of the basis functions from overlapping. Following this principle, one can design adaptive versions of other basis function sets.

$$\{b_{j}(t;\boldsymbol{\theta})\}=\{1,\;\sin(\pi[(1+\theta_{1})t+\theta_{2}]),\;\cos(\pi[(1+\theta_{3})t+\theta_{4}]),\;\cdots\},\qquad|\theta_{k}|\leq 0.5\;\;\forall k. \tag{5}$$
Figure 2: The architecture of NASM. The network takes two inputs: an OCP instance $i$ and a time index $t$. The input $i$ is pre-processed by the Encoder. Then both $t$ and the encoding $\boldsymbol{e}$ are fed into the Coefficient Network to obtain the coefficients $\boldsymbol{c}$ and the adaptive parameters $\boldsymbol{\theta}$. The adaptive basis (e.g., the Fourier series) outputs the function values $\boldsymbol{b}$, which are multiplied with $\boldsymbol{c}$ and aggregated into the final output $\hat{\boldsymbol{u}}(t)$, the estimate of the optimal control for instance $i$ at time $t$. A detailed explanation is given in Section 2.
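A minimal PyTorch sketch of this pipeline, combining Eq. 4 with the adaptive Fourier basis of Eq. 5 ($p=3$ terms, summation aggregation), is given below; the module layout and names are our simplification, not the exact API of the released code.

```python
# A sketch of the NASM forward pass: Coefficient Net -> adaptive basis
# (Eq. 5) -> aggregation (Eq. 4 with summation).
import torch
import torch.nn as nn

class NASMSketch(nn.Module):
    def __init__(self, enc_dim, hidden=64):
        super().__init__()
        # Coefficient Net: (e, t) -> 3 coefficients c and 4 params theta
        self.net = nn.Sequential(
            nn.Linear(enc_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4))

    def forward(self, e, t):
        # e: (B, enc_dim) instance encoding; t: (B, 1) query times
        out = self.net(torch.cat([e, t], dim=-1))
        c = out[:, :3]
        theta = 0.5 * torch.tanh(out[:, 3:])        # enforce |theta_k| <= 0.5
        tt = t[:, 0]
        b = torch.stack([                           # adaptive basis of Eq. 5
            torch.ones_like(tt),
            torch.sin(torch.pi * ((1 + theta[:, 0]) * tt + theta[:, 1])),
            torch.cos(torch.pi * ((1 + theta[:, 2]) * tt + theta[:, 3])),
        ], dim=-1)                                  # (B, 3)
        return (c * b).sum(-1, keepdim=True)        # Eq. 4, sum aggregation
```

In the full model, the Coefficient Net may use a convolutional backbone and the summation may be replaced by a learned aggregation, as studied in the ablation experiments.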

The overall architecture of NASM is given in Fig. 2. Our theoretical result guarantees that there exists an instance of NASM approximating the operator $\mathcal{G}$ to arbitrary error tolerance; furthermore, the size and depth of such a network are upper-bounded. The technical line of our analysis is partly inspired by (Lanthaler, Mishra, and Karniadakis 2022), which provides error estimates for DeepONets.

Theorem 1 (Approximation Error, Informal).

Under some regularity assumptions, given an operator $\mathcal{G}$ and any error budget $\epsilon$, there exists a NASM satisfying the budget with bounded network size and depth.

Comparing the NASM error bound with that of DeepONet, we find that NASM can achieve the same error bound with fewer learnable parameters. More details of the theorems and proofs are presented in Appx. C and Appx. D.

3 Experiments

We conduct experiments in synthetic and real-world environments to evaluate our instance-solution operator framework.

Synthetic Control Systems

Control Systems and Data Generation

We evaluate NASM on five representative optimal control systems, following the protocol of (Jin et al. 2020), as summarized in Table 8. We defer the remaining systems to Appendix E and describe only the Quadrotor control here:

$$\begin{aligned}
\dot{\boldsymbol{p}} &= \boldsymbol{v}, \qquad & m\dot{\boldsymbol{v}} &= \begin{bmatrix}0\\0\\-mg\end{bmatrix}+\boldsymbol{R}^{\top}(\boldsymbol{q})\begin{bmatrix}0\\0\\\sum_{i}u_{i}\end{bmatrix},\\
\dot{\boldsymbol{q}} &= \tfrac{1}{2}\boldsymbol{\Omega}(\boldsymbol{\omega})\,\boldsymbol{q}, \qquad & \boldsymbol{J}\dot{\boldsymbol{\omega}} &= \boldsymbol{T}\boldsymbol{u}-\boldsymbol{\omega}\times\boldsymbol{J}\boldsymbol{\omega}.
\end{aligned}$$

This system describes the dynamics of a helicopter with four rotors. The state $\boldsymbol{x}=[\boldsymbol{p}^{\top},\boldsymbol{v}^{\top},\boldsymbol{\omega}^{\top}]^{\top}\in\mathbb{R}^{9}$ consists of the position $\boldsymbol{p}$, velocity $\boldsymbol{v}$, and angular velocity $\boldsymbol{\omega}$. The control $\boldsymbol{u}\in\mathbb{R}^{4}$ collects the thrusts of the four rotating propellers of the quadrotor. $\boldsymbol{q}\in\mathbb{R}^{4}$ is the unit quaternion (Jia 2019) representing the attitude (spatial rotation) of the quadrotor w.r.t. the inertial frame. $\boldsymbol{J}$ is the moment of inertia in the quadrotor's frame, and $\boldsymbol{T}\boldsymbol{u}$ is the torque applied to the quadrotor. We set the initial state $\boldsymbol{x}_{init}=[[-8,-6,9]^{\top},\boldsymbol{0},\boldsymbol{0}]^{\top}$ and the initial quaternion $\boldsymbol{q}_{init}=\boldsymbol{0}$. The matrices $\boldsymbol{\Omega}(\boldsymbol{\omega}),\boldsymbol{R}(\boldsymbol{q}),\boldsymbol{T}$ are coefficient matrices; see their definitions in Appx. E. The cost functional is defined as $\int_{0}^{t_{f}}\boldsymbol{c}_{x}^{\top}(\boldsymbol{x}(t)-\boldsymbol{x}_{goal})^{2}+c_{u}\|\boldsymbol{u}(t)\|^{2}\,\mathrm{d}t$, with coefficients $\boldsymbol{c}_{x}=\boldsymbol{1}$ and $c_{u}=0.1$.
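For illustration, this cost functional can be evaluated numerically as sketched below, assuming the state and control trajectories are given on a uniform time grid; the trapezoidal quadrature is our choice for the sketch.

```python
# A sketch of the quadrotor cost functional with c_x = 1, c_u = 0.1.
import numpy as np

def quadrotor_cost(xs, us, ts, x_goal, c_u=0.1):
    # xs: (n, 9) states, us: (n, 4) controls, ts: (n,) time grid
    integrand = ((xs - x_goal) ** 2).sum(-1) + c_u * (us ** 2).sum(-1)
    return np.trapz(integrand, ts)  # trapezoidal approximation of the integral
```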

In the experiments presented in this section, unless otherwise specified, we temporarily fix the cost functional's symbolic expression, the dynamics parameters, and the initial condition in both training and testing. In this setting, the solution of the OCP depends only on the parameter of the cost, i.e., the target state $\boldsymbol{x}_{goal}$. The input of the NCO is the cost functional only, and the information of the dynamics and initial conditions is implicitly learned. Therefore, we generate datasets (for model training/validation) and benchmarks (for model testing) by sampling target states from a pre-defined distribution. To fully evaluate generalization ability, we define both in-distribution (ID) and out-of-distribution (OOD) settings (Shen et al. 2021). Specifically, we design two random variables, $\mathbf{x}^{in}_{goal}:=\boldsymbol{x}_{goal}^{base}+\boldsymbol{\epsilon}_{in}$ and $\mathbf{x}^{out}_{goal}:=\boldsymbol{x}_{goal}^{base}+\boldsymbol{\epsilon}_{out}$, where $\boldsymbol{x}_{goal}^{base}$ is a baseline goal state and $\boldsymbol{\epsilon}_{in},\boldsymbol{\epsilon}_{out}$ are different noises applied for ID and OOD. In the Quadrotor problem, for example, we set $\boldsymbol{x}_{goal}^{base}=\boldsymbol{0.6}$, with uniform noise $\boldsymbol{\epsilon}_{in}\sim\mathcal{U}(-0.5,0.5)$ and $\boldsymbol{\epsilon}_{out}\sim\mathcal{U}(-0.7,-0.5)$.

The training data are sampled from the ID distribution, while the validation data and benchmark sets are sampled from ID and OOD separately. The data generation process is shown in Alg. 1 in the Appendix. For a given distribution, we sample a target state $\boldsymbol{x}_{goal}$ and construct the corresponding cost functional $f$ and OCP instance $i$. We then define 100 time indices uniformly spaced in the time horizon $T=[0,t_{f}]$, $t_{f}\sim\mathcal{U}(1,1.01)$; the length of $T$ is slightly perturbed to add noise to the dataset. We solve the resulting OCP with the Direct Method (DM) solver and obtain the optimal control $u^{*}$ at those time indices. We then sample 10 time indices $\{t_{j}\}_{1\leq j\leq 10}$, creating 10 triplets $\{(f,t_{j},u^{*}(t_{j}))\}_{1\leq j\leq 10}$ and adding them to the dataset. This process repeats until the dataset reaches the required size. The benchmark set is generated in the same way, but we store $(f,J_{opt})$ pairs ($J_{opt}$ is the optimal cost) instead of the optimal control. Datasets with all three factors (cost functional, dynamics parameters, and initial condition) changeable can be generated similarly.
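The following sketch mirrors this procedure (cf. Alg. 1 in the Appendix); `solve_ocp` is a placeholder for any OCP solver, e.g., the DM solver we use.

```python
# A sketch of dataset construction: sample goals, solve, store triplets.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_instances, solve_ocp, x_goal_base, noise=(-0.5, 0.5)):
    data = []
    for _ in range(n_instances):
        x_goal = x_goal_base + rng.uniform(*noise, size=x_goal_base.shape)
        tf = rng.uniform(1.0, 1.01)        # slightly perturbed horizon
        ts = np.linspace(0.0, tf, 100)     # 100 uniform time indices
        u_star = solve_ocp(x_goal, ts)     # optimal control at ts, (100, d_u)
        for j in rng.choice(len(ts), size=10, replace=False):
            data.append((x_goal, ts[j], u_star[j]))   # (f, t_j, u*(t_j))
    return data
```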

Implementation and Baselines

For all systems and all neural models, the loss is the mean squared error over a dataset $D$ of $N$ samples: $\mathcal{L}=\frac{1}{N}\sum_{(i,j)\in D}\|\mathcal{N}(i)(t_{j})-\boldsymbol{u}_{i}^{*}(t_{j})\|^{2}$. The learning rate starts from 0.01 and decays every 1,000 epochs at a rate of 0.9. The batch size is $\min(10{,}000, N)$, and the optimizer is Adam (Kingma and Ba 2014). NASM uses 11 Fourier basis functions; a minimal sketch of this training loop is given below. Other details of the implementation and baselines are recorded in Appendix F.
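A minimal sketch of the training setup, assuming a `model` that maps instance encodings and time indices to control estimates (e.g., the NASM sketch above):

```python
# A minimal sketch of the training loop described above: MSE loss and
# Adam with learning rate 0.01, decayed by 0.9 every 1,000 epochs.
# `model`, `e`, `t`, and `u_star` are assumed tensors/modules.
import torch

def train(model, e, t, u_star, epochs=10_000):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1_000, gamma=0.9)
    for _ in range(epochs):
        loss = ((model(e, t) - u_star) ** 2).mean()  # MSE over the batch
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
    return model
```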

For comparison, we choose the following baselines.

1) Direct Method (DM): a classical direct OCP solver, with finite-difference discretization and the Interior Point OPTimizer (IPOPT) (Biegler and Zavala 2009) as the backend NLP solver.

2) Pontryagin Differentiable Programming (PDP) (Jin et al. 2020): an adjoint-based indirect method that differentiates through the PMP and optimizes by gradient descent.

3) DeepONet (DON) (Lu et al. 2021): the seminal neural operator. The DON output is a linear combination of basis function values, where both the coefficients and the basis are pure neural networks.

4) Multi-Layer Perceptron (MLP): a fully connected network implementation of the neural operator.

5) Fourier Neural Operator (FNO) (Li et al. 2020): a neural operator with consecutive Fourier transform layers, and fully connected layers at the beginning and end.

6) Graph Element Network (GEN) (Alet et al. 2019): a neural operator with a graph convolution backbone.

7) Spectral Neural Operator (SNO) (Fanaskov and Oseledets 2022): a linear combination of a fixed basis (Eq. 3) with neuralized coefficients. It is a degenerate version of NASM and is compared in the ablation study.

(a) Time (sec./instance)   (b) ID MAPE   (c) OOD MAPE
Figure 3: Inference time and mean absolute percentage error (MAPE) on in-distribution (ID) and OOD benchmarks. NASM (red bars) achieves higher or comparable accuracy, with the fastest or second fastest speed.

Results and Discussion

We present numerical results on the five systems to evaluate the efficiency and accuracy of NASM and other architectures under the instance-solution operator framework. The metrics of interest are 1) the running time of solving problems; and 2) the quality of the solution, measured by the mean absolute percentage error (MAPE) between the true optimal cost and the predicted cost, defined as the mean of $|(J_{opt}-J_{sol})/J_{opt}|$, where $J_{opt}$ is the optimal cost generated by DM (regarded as the ground truth) and $J_{sol}$ is the cost of the solution produced by the model. The MAPE is calculated on the ID/OOD benchmarks respectively, and the running time is averaged over 2,000 random problems. The results of ODE-constrained OCP are visualized in Fig. 3.
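For clarity, the metric can be sketched as:

```python
# A sketch of the MAPE metric: mean of |(J_opt - J_sol) / J_opt|.
import numpy as np

def mape(j_opt, j_sol):
    j_opt, j_sol = np.asarray(j_opt, float), np.asarray(j_sol, float)
    return float(np.mean(np.abs((j_opt - j_sol) / j_opt)))
```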

First, the comparison of wall-clock running time is shown in Fig. 3(a) as well as in the third column of Tab. 2: the neural operator solvers are much faster than the classic solvers, even though both are tested on CPU for fairness. For example, NASM achieves an over 6,000x speedup against the DM solver. The acceleration can be attributed to two aspects: 1) the neural operator solvers produce the output by a single forward propagation, while the classic methods must iterate between forward and backward passes multiple times; 2) the neural solver computation is highly parallelized. Among the neural operator solvers, the running times of NASM, DON, and MLP are similar. FNO follows a similar paradigm to MLP but is 10+ times slower, since it involves computationally intensive Fourier transformations. GEN is 100+ times slower than MLP, probably because of the intrinsic complexity of graph construction and graph convolution.

Table 2: Results of the Quadrotor environment.

Formulation | Model | Time (sec./instance) | ID MAPE | OOD MAPE
Direct Method | DM | $9.23\times10^{-2}$ | N/A | N/A
Indirect Method | PDP | $7.25\times10^{1}$ | $1.24\times10^{-4}$ | $1.16\times10^{-4}$
Instance-Solution Control Operator | NASM | $6.50\times10^{-5}$ | $\mathbf{6.17\times10^{-6}}$ | $\mathbf{1.21\times10^{-4}}$
| DON | $6.54\times10^{-5}$ | $4.09\times10^{-5}$ | $2.40\times10^{-4}$
| MLP | $\mathbf{3.04\times10^{-5}}$ | $1.10\times10^{-4}$ | $1.33\times10^{-2}$
| GEN | $6.15\times10^{-3}$ | $6.40\times10^{-4}$ | $2.77\times10^{-3}$
| FNO | $6.38\times10^{-4}$ | $8.73\times10^{-5}$ | $1.92\times10^{-3}$

The accuracy on the in- and out-of-distribution benchmark sets is compared in Figs. 3(b)-3(c). Compared with other neural models, NASM achieves better or comparable accuracy in general. In addition, NASM outperforms the classical PDP on more than half of the benchmarks. As a concrete example, we investigate the Quadrotor environment. From Tab. 2, one can observe that the MAPE of NASM on both ID and OOD is the best among all neural models. The OOD performance is close to that of the classical method PDP, which is insensitive to distribution shift. We conjecture that the ID/OOD generalization ability of NASM results from its architecture, where coefficients and basis functions are explicitly disentangled. Such a structure may inherit the inductive bias of numerical basis expansion methods (e.g., (Kafash, Delavarkhalafi, and Karbassi 2014)) and thus be more robust to distribution shifts.

For NASM in all synthetic-environment experiments, we choose an MLP backbone for the Coefficient Net, the adaptive Fourier basis, non-static coefficients, and summation aggregation. To justify these architectural choices, we perform an ablation study, changing or removing each module in turn, as displayed in Tab. 3 (we abbreviate 'without' as w.o. and 'with' as w.). The first row is the unchanged model; the second and third rows fix the basis functions and use a static (time-independent) Coefficient Net, respectively. These two modifications slightly lower the performance on both ID and OOD. The 4th row is a Fourier SNO, i.e., applying the two modifications together, which dramatically increases the error: SNO uses a fixed Fourier basis and thus requires the optimal control to be globally periodic for accurate interpolation, while the Quadrotor control is non-periodic. If the basis of SNO is changed to a non-periodic one, such as the Chebyshev basis in the 5th row, the performance is recovered. This shows that the accuracy of SNO depends heavily on the choice of the pre-defined basis. In comparison, NASM is less sensitive to the choice of basis due to its adaptive nature: the 6th row shows that changing the basis from Fourier to Chebyshev hardly changes NASM's performance. The 7th and 8th rows demonstrate that replacing summation with neural network aggregation and switching to a convolutional backbone do not improve performance on the Quadrotor.

Table 3: Architecture justification of NASM on Quadrotor.

# | Modification | ID MAPE | OOD MAPE
1 | NASM | $\mathbf{6.17\times10^{-6}}$ | $\mathbf{1.21\times10^{-4}}$
2 | w.o. Adapt Params | $7.86\times10^{-6}$ | $1.97\times10^{-4}$
3 | w.o. Non-static Coef | $9.01\times10^{-6}$ | $5.83\times10^{-4}$
4 | SNO (Fourier) | $9.59\times10^{-2}$ | $8.39\times10^{-2}$
5 | SNO (Chebyshev) | $6.64\times10^{-6}$ | $3.82\times10^{-4}$
6 | w. Chebyshev | $6.36\times10^{-6}$ | $2.86\times10^{-4}$
7 | w. Neural Aggr | $7.13\times10^{-6}$ | $1.29\times10^{-4}$
8 | w. Conv | $1.16\times10^{-5}$ | $4.95\times10^{-3}$

Extended Experiment Result on Quadrotor

1) More Variables. In the previous experiments, only the cost function (target state) was changeable. Here, we add more variables to the network input, such that the cost function, dynamics (physical parameters, e.g., wing length), and initial condition are all changeable. The in-/out-distributions for the dynamics and initial conditions are uniform distributions, designed similarly to those of the target state. The results are listed in Tab. 4.

Table 4: Quadrotor with changeable cost, dynamics, and initial conditions.

Model | Time (sec./instance) | ID MAPE | OOD MAPE
DM | $1.59\times10^{-1}$ | N/A | N/A
PDP | $7.25\times10^{1}$ | $1.13\times10^{-4}$ | $1.15\times10^{-4}$
NASM | $4.40\times10^{-5}$ | $\mathbf{5.92\times10^{-5}}$ | $\mathbf{1.57\times10^{-4}}$
DON | $\mathbf{3.39\times10^{-5}}$ | $1.02\times10^{-4}$ | $3.19\times10^{-4}$
MLP | $8.45\times10^{-5}$ | $5.94\times10^{-4}$ | $2.36\times10^{-3}$
GEN | $1.10\times10^{-2}$ | $6.06\times10^{-4}$ | $2.32\times10^{-3}$
FNO | $1.95\times10^{-3}$ | $6.22\times10^{-4}$ | $2.32\times10^{-3}$

2) Extreme OOD Shifts and Transfer Learning. In addition to the OOD shifts in the previous experiments, we design more extremely shifted distributions: $\boldsymbol{\epsilon}_{out}^{1}\sim\mathcal{U}(-1.0,-0.8)$, $\boldsymbol{\epsilon}_{out}^{2}\sim\mathcal{U}(-1.3,-1.1)$, and $\boldsymbol{\epsilon}_{out}^{3}\sim\mathcal{U}(-1.6,-1.4)$.

Table 5: MAPE of Quadrotor over OOD shifts, without fine-tuning.

Model | $\boldsymbol{\epsilon}_{out}^{1}$ | $\boldsymbol{\epsilon}_{out}^{2}$ | $\boldsymbol{\epsilon}_{out}^{3}$
NASM | $\mathbf{1.29\times10^{-3}}$ | $6.90\times10^{-3}$ | $1.50\times10^{-2}$
DON | $2.32\times10^{-3}$ | $7.58\times10^{-3}$ | $1.61\times10^{-2}$
MLP | $3.15\times10^{-3}$ | $\mathbf{6.10\times10^{-3}}$ | $\mathbf{9.53\times10^{-3}}$
GEN | $1.14\times10^{-2}$ | $3.94\times10^{-2}$ | $2.18\times10^{-1}$
FNO | $1.35\times10^{-2}$ | $5.04\times10^{-2}$ | $1.03\times10^{-1}$

Tab. 5 shows the OOD MAPE on these distributions. One can observe that the performance of the neural models decreases as the distribution shift grows. When the shift increases to $\boldsymbol{\epsilon}_{out}^{3}$, the results of the neural models are almost unacceptable.

In practice, if reusability under large OOD shifts is required for some applications, we may assume a few training samples from the OOD distribution are available (i.e., few-shot). NCOs can then be fine-tuned on these samples to achieve better performance. This idea was proposed for DeepONet transfer learning in (Goswami et al. 2022).

We follow the fine-tuning scheme proposed in (Goswami et al. 2022). We set the size of the OOD fine-tuning dataset and the number of epochs to 20% of those of the ID training dataset. The learning rate is fixed at 0.001. The embedding network, the basis function net, and the first two convolution blocks are frozen during fine-tuning, since we assume they carry distribution-invariant information.
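A sketch of this fine-tuning loop is given below; the attribute names (`encoder`, `basis_net`, `conv_blocks`) are illustrative placeholders for the frozen components, not the exact names in our implementation.

```python
# A sketch of few-shot OOD fine-tuning: freeze the assumed
# distribution-invariant modules, train the rest at lr = 0.001.
import torch

def finetune(model, ood_e, ood_t, ood_u, epochs):
    frozen = [model.encoder, model.basis_net, model.conv_blocks[:2]]
    for module in frozen:
        for param in module.parameters():
            param.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=0.001)
    for _ in range(epochs):
        loss = ((model(ood_e, ood_t) - ood_u) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```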

Table 6: MAPE of Quadrotor on various OOD shifts, with fine-tuning.

Model | $\boldsymbol{\epsilon}_{out}^{1}$ | $\boldsymbol{\epsilon}_{out}^{2}$ | $\boldsymbol{\epsilon}_{out}^{3}$
NASM | $\mathbf{1.02\times10^{-5}}$ | $\mathbf{2.29\times10^{-5}}$ | $3.07\times10^{-5}$
DON | $1.95\times10^{-5}$ | $2.57\times10^{-5}$ | $\mathbf{3.02\times10^{-5}}$
MLP | $3.10\times10^{-5}$ | $1.01\times10^{-4}$ | $1.88\times10^{-4}$
GEN | $2.36\times10^{-4}$ | $4.07\times10^{-4}$ | $8.06\times10^{-4}$
FNO | $4.34\times10^{-5}$ | $1.16\times10^{-4}$ | $2.11\times10^{-4}$

Tab. 6 reveals that fine-tuning greatly improves the performance of the neural models under extreme distribution shifts. Therefore, the availability of extra data for fine-tuning benefits the reusability of neural operators on OCPs.

Real-world Dataset of Planar Pushing

We present how to learn the OC of a robot arm pushing objects of varying shapes on various planar surfaces. We use the Pushing dataset (Yu et al. 2016), a challenging dataset of noisy, real-world data produced by an ABB IRB 120 industrial robotic arm (Fig. 4, right).

The robot arm is controlled to push objects starting from various contact positions and 9 angles (initial state), along different trajectories (target state functions), with 11 object shapes and 4 surface materials (dynamics). The control function is represented by the force exerted on the object. The left part of Fig. 4 gives an overview of the input variables.

We apply NASM to learn a mapping from a pushing OCP instance (represented by the variables above) to the optimal control function. The input is now no longer the parameters of the cost functional $f$ alone, but parameters and representations of $(f,\boldsymbol{d},\boldsymbol{x}_{init})$. The encoder is realized by different techniques for different inputs: Savitzky-Golay smoothing (Savitzky and Golay 1964) and down-sampling for target trajectories, mean and standard deviation extraction for the friction map, and a CNN for shape images.
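A sketch of such a heterogeneous encoder is given below, assuming a 1-D target trajectory, a 2-D friction map, and a 32x32 grayscale shape image; the branch sizes and names are illustrative.

```python
# A sketch of the pushing-instance encoder: smooth and down-sample the
# target trajectory, summarize the friction map, embed the shape image.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import savgol_filter

shape_cnn = nn.Sequential(                 # small CNN branch for the shape
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 16))

def encode_instance(traj, friction_map, shape_img):
    # Target trajectory: Savitzky-Golay smoothing, then down-sampling.
    traj_feat = savgol_filter(traj, window_length=9, polyorder=3)[::10]
    # Friction map: summarized by its mean and standard deviation.
    fric_feat = np.array([friction_map.mean(), friction_map.std()])
    # Shape image: embedded by the CNN branch.
    img = torch.as_tensor(shape_img, dtype=torch.float32)[None, None]
    shape_feat = shape_cnn(img).detach().numpy().ravel()
    return np.concatenate([traj_feat, fric_feat, shape_feat])
```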

We extract training data from ID, and validation and test data from both ID and OOD; ID/OOD are distinguished by different initial contact positions of the arm and object. The accuracy metric MAPE is now defined as $\|\hat{u}-u^{*}\|/\|u^{*}\|$. We compare NASM only with the neural baselines, since the explicit expression of the pushing OCP is unavailable and the classical methods DM and PDP are thus not applicable. All neural models share the same encoder structure, while the encoder parameters are trained end-to-end individually for each model. The results are displayed in Tab. 7: NASM outperforms all baselines in ID MAPE and achieves the second-lowest OOD MAPE, with a slight disadvantage against DON.

The design of NASM in the pushing environment uses an adaptive Chebyshev basis, a non-static Coefficient Net with a convolution backbone, and neuralized aggregation. The justification of this design is shown in Tab. 7. The performance is insensitive to the choice of basis, as the second row reveals. Both the convolution backbone and the neuralized aggregation contribute to the high accuracy. A possible reason is that the control operator $\mathcal{G}$ in the pushing dataset is highly non-smooth and noisy, so a more complicated architecture with higher capacity and flexibility is preferred.

Figure 4: Pushing environment (Yu et al. 2016).
Table 7: Results of the Pushing environment.

Model | ID MAPE | OOD MAPE
NASM | 0.108 (±0.006) | 0.137 (±0.007)
NASM (Fourier) | 0.113 (±0.006) | 0.139 (±0.009)
NASM (w.o. Conv) | 0.132 (±0.008) | 0.151 (±0.009)
NASM (w.o. NAggr) | 0.125 (±0.007) | 0.172 (±0.009)
DON | 0.117 (±0.006) | 0.134 (±0.008)
MLP | 0.128 (±0.006) | 0.149 (±0.008)
GEN | 0.209 (±0.014) | 0.199 (±0.015)
FNO | 0.131 (±0.008) | 0.172 (±0.009)

4 Conclusion and Outlook

We have proposed an instance-solution operator perspective on OCPs, where the operator directly maps problem instances to OC functions. We then presented a neural operator, NASM, with a theoretical guarantee on its approximation capability. Experiments show outstanding generalization ability and efficiency in both ID and OOD settings. We envision the proposed model being beneficial for numerous high-dimensional problems in the learning and control fields.

We currently neither specify the forms of problem instances (e.g., ODE- vs. PDE-constrained OCPs) nor investigate sophisticated models for specific problems, which calls for careful design and exploitation of problem structures.

References

  • Abdolazimi et al. (2021) Abdolazimi, O.; Shishebori, D.; Goodarzian, F.; Ghasemi, P.; and Appolloni, A. 2021. Designing a new mathematical model based on ABC analysis for inventory control problem: A real case study. RAIRO-operations research, 55(4): 2309–2335.
  • Al-Tamimi, Lewis, and Abu-Khalaf (2008) Al-Tamimi, A.; Lewis, F. L.; and Abu-Khalaf, M. 2008. Discrete-time nonlinear HJB solution using approximate dynamic programming: Convergence proof. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(4): 943–949.
  • Alet et al. (2019) Alet, F.; Jeewajee, A. K.; Villalonga, M. B.; Rodriguez, A.; Lozano-Perez, T.; and Kaelbling, L. 2019. Graph element networks: adaptive, structured computation and memory. In International Conference on Machine Learning, 212–222. PMLR.
  • Anandkumar et al. (2020) Anandkumar, A.; Azizzadenesheli, K.; Bhattacharya, K.; Kovachki, N.; Li, Z.; Liu, B.; and Stuart, A. 2020. Neural operator: Graph kernel network for partial differential equations. In ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations.
  • Andersson et al. (2019) Andersson, J. A. E.; Gillis, J.; Horn, G.; Rawlings, J. B.; and Diehl, M. 2019. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1): 1–36.
  • Bazaraa, Sherali, and Shetty (2013) Bazaraa, M. S.; Sherali, H. D.; and Shetty, C. M. 2013. Nonlinear programming: theory and algorithms. John Wiley & Sons.
  • Bellman and Kalaba (1960) Bellman, R.; and Kalaba, R. 1960. Dynamic programming and adaptive processes: mathematical foundation. IRE transactions on automatic control, (1): 5–10.
  • Bettiol and Bourdin (2021) Bettiol, P.; and Bourdin, L. 2021. Pontryagin maximum principle for state constrained optimal sampled-data control problems on time scales. ESAIM: Control, Optimisation and Calculus of Variations, 27: 51.
  • Biegler and Zavala (2009) Biegler, L. T.; and Zavala, V. M. 2009. Large-scale nonlinear programming using IPOPT: An integrating framework for enterprise-wide dynamic optimization. Computers & Chemical Engineering, 33(3): 575–582.
  • Bock and Plitt (1984) Bock, H. G.; and Plitt, K.-J. 1984. A multiple shooting algorithm for direct solution of optimal control problems. IFAC Proceedings Volumes, 17(2): 1603–1608.
  • Boggs and Tolle (1995) Boggs, P. T.; and Tolle, J. W. 1995. Sequential quadratic programming. Acta numerica, 4: 1–51.
  • Böhme and Frank (2017a) Böhme, T. J.; and Frank, B. 2017a. Direct Methods for Optimal Control, 233–273. Cham: Springer International Publishing. ISBN 978-3-319-51317-1.
  • Böhme and Frank (2017b) Böhme, T. J.; and Frank, B. 2017b. Indirect Methods for Optimal Control, 215–231. Cham: Springer International Publishing. ISBN 978-3-319-51317-1.
  • Chang, Roohi, and Gao (2019) Chang, Y.-C.; Roohi, N.; and Gao, S. 2019. Neural lyapunov control. Advances in neural information processing systems, 32.
  • Chen and Chen (1995) Chen, T.; and Chen, H. 1995. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4): 911–917.
  • Chen, Shi, and Zhang (2018) Chen, Y.; Shi, Y.; and Zhang, B. 2018. Optimal control via neural networks: A convex approach. arXiv preprint arXiv:1805.11835.
  • Cheng et al. (2020) Cheng, L.; Wang, Z.; Song, Y.; and Jiang, F. 2020. Real-time optimal control for irregular asteroid landings using deep neural networks. Acta Astronautica, 170: 66–79.
  • Cohen (1995) Cohen, L. 1995. Time-frequency analysis, volume 778. Prentice hall New Jersey.
  • Dai and Gluzman (2022) Dai, J. G.; and Gluzman, M. 2022. Queueing network controls via deep reinforcement learning. Stochastic Systems, 12(1): 30–67.
  • Davis (1975) Davis, P. J. 1975. Interpolation and approximation. Courier Corporation.
  • Dontchev and Zolezzi (2006) Dontchev, A. L.; and Zolezzi, T. 2006. Well-posed optimization problems. Springer.
  • D’ambrosio et al. (2021) D’ambrosio, A.; Schiassi, E.; Curti, F.; and Furfaro, R. 2021. Pontryagin neural networks with functional interpolation for optimal intercept problems. Mathematics, 9(9): 996.
  • Effati and Pakdaman (2013) Effati, S.; and Pakdaman, M. 2013. Optimal control problem via neural networks. Neural Computing and Applications, 23(7): 2093–2100.
  • Fanaskov and Oseledets (2022) Fanaskov, V.; and Oseledets, I. 2022. Spectral Neural Operators. arXiv preprint arXiv:2205.10573.
  • Gao, Kong, and Clarke (2013) Gao, S.; Kong, S.; and Clarke, E. M. 2013. dReal: An SMT solver for nonlinear theories over the reals. In Automated Deduction–CADE-24: 24th International Conference on Automated Deduction, Lake Placid, NY, USA, June 9-14, 2013. Proceedings 24, 208–214. Springer.
  • Ghosh, Birrell, and De Angelis (2021) Ghosh, S.; Birrell, P.; and De Angelis, D. 2021. Variational inference for nonlinear ordinary differential equations. In International Conference on Artificial Intelligence and Statistics, 2719–2727. PMLR.
  • Goswami et al. (2022) Goswami, S.; Kontolati, K.; Shields, M. D.; and Karniadakis, G. E. 2022. Deep transfer operator learning for partial differential equations under conditional shift. Nature Machine Intelligence, 1–10.
  • Gühring, Kutyniok, and Petersen (2020) Gühring, I.; Kutyniok, G.; and Petersen, P. 2020. Error bounds for approximations with deep ReLU neural networks in W s, p norms. Analysis and Applications, 18(05): 803–859.
  • He et al. (2019) He, X.; Li, J.; Mader, C. A.; Yildirim, A.; and Martins, J. R. 2019. Robust aerodynamic shape optimization—from a circle to an airfoil. Aerospace Science and Technology, 87: 48–61.
  • Hewing et al. (2020) Hewing, L.; Wabersich, K. P.; Menner, M.; and Zeilinger, M. N. 2020. Learning-based model predictive control: Toward safe learning in control. Annual Review of Control, Robotics, and Autonomous Systems, 3: 269–296.
  • Hwang et al. (2022) Hwang, R.; Lee, J. Y.; Shin, J. Y.; and Hwang, H. J. 2022. Solving pde-constrained control problems using operator learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4504–4512.
  • Jia (2019) Jia, Y.-B. 2019. Quaternions. Com S, 477: 577.
  • Jin et al. (2020) Jin, W.; Wang, Z.; Yang, Z.; and Mou, S. 2020. Pontryagin differentiable programming: An end-to-end learning and control framework. Advances in Neural Information Processing Systems, 33: 7979–7992.
  • Kafash, Delavarkhalafi, and Karbassi (2014) Kafash, B.; Delavarkhalafi, A.; and Karbassi, S. 2014. A numerical approach for solving optimal control problems using the Boubaker polynomials expansion scheme. J. Interpolat. Approx. Sci. Comput, 3: 1–18.
  • Khoo, Lu, and Ying (2021) Khoo, Y.; Lu, J.; and Ying, L. 2021. Solving parametric PDE problems with artificial neural networks. European Journal of Applied Mathematics, 32(3): 421–435.
  • Kingma and Ba (2014) Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Kipf and Welling (2016) Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  • Kirk (2004) Kirk, D. E. 2004. Optimal control theory: an introduction. Courier Corporation.
  • Kiumarsi et al. (2017) Kiumarsi, B.; Vamvoudakis, K. G.; Modares, H.; and Lewis, F. L. 2017. Optimal and autonomous control using reinforcement learning: A survey. IEEE transactions on neural networks and learning systems, 29(6): 2042–2062.
  • Körkel et al. (2004) Körkel, S.; Kostina, E.; Bock, H. G.; and Schlöder, J. P. 2004. Numerical methods for optimal control problems in design of robust optimal experiments for nonlinear dynamic processes. Optimization Methods and Software, 19(3-4): 327–338.
  • Krimsky and Collins (2020) Krimsky, E.; and Collins, S. H. 2020. Optimal control of an energy-recycling actuator for mobile robotics applications. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 3559–3565. IEEE.
  • Lanthaler, Mishra, and Karniadakis (2022) Lanthaler, S.; Mishra, S.; and Karniadakis, G. E. 2022. Error estimates for deeponets: A deep learning framework in infinite dimensions. Transactions of Mathematics and Its Applications, 6(1): tnac001.
  • Lapin, Zhang, and Lapin (2019) Lapin, A.; Zhang, S.; and Lapin, S. 2019. Numerical solution of a parabolic optimal control problem arising in economics and management. Applied Mathematics and Computation, 361: 715–729.
  • Lasota (1968) Lasota, A. 1968. A discrete boundary value problem. In Annales Polonici Mathematici, volume 20, 183–190. Institute of Mathematics Polish Academy of Sciences.
  • Lewis, Vrabie, and Syrmos (2012) Lewis, F. L.; Vrabie, D.; and Syrmos, V. L. 2012. Optimal control. John Wiley & Sons.
  • Li and Todorov (2004) Li, W.; and Todorov, E. 2004. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), 222–229. Citeseer.
  • Li et al. (2020) Li, Z.; Kovachki, N. B.; Azizzadenesheli, K.; Bhattacharya, K.; Stuart, A.; Anandkumar, A.; et al. 2020. Fourier Neural Operator for Parametric Partial Differential Equations. In International Conference on Learning Representations.
  • Lu et al. (2021) Lu, L.; Jin, P.; Pang, G.; Zhang, Z.; and Karniadakis, G. E. 2021. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3): 218–229.
  • Mathiesen, Yang, and Hu (2022) Mathiesen, F. B.; Yang, B.; and Hu, J. 2022. Hyperverlet: A Symplectic Hypersolver for Hamiltonian Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4575–4582.
  • Mehrotra (1992) Mehrotra, S. 1992. On the implementation of a primal-dual interior point method. SIAM Journal on optimization, 2(4): 575–601.
  • Nubert et al. (2020) Nubert, J.; Köhler, J.; Berenz, V.; Allgöwer, F.; and Trimpe, S. 2020. Safe and fast tracking on a robot manipulator: Robust mpc and neural network control. IEEE Robotics and Automation Letters, 5(2): 3050–3057.
  • Paszke et al. (2019) Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems.
  • Petersen and Voigtlaender (2020) Petersen, P.; and Voigtlaender, F. 2020. Equivalence of approximation by convolutional neural networks and fully-connected networks. Proceedings of the American Mathematical Society, 148(4): 1567–1581.
  • Pontryagin (1987) Pontryagin, L. S. 1987. Mathematical theory of optimal processes. CRC press.
  • Powell et al. (1981) Powell, M. J. D.; et al. 1981. Approximation theory and methods. Cambridge university press.
  • Raissi, Perdikaris, and Karniadakis (2019) Raissi, M.; Perdikaris, P.; and Karniadakis, G. E. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378: 686–707.
  • Raissi, Yazdani, and Karniadakis (2020) Raissi, M.; Yazdani, A.; and Karniadakis, G. E. 2020. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481): 1026–1030.
  • Rao (2009) Rao, A. V. 2009. A survey of numerical methods for optimal control. Advances in the Astronautical Sciences, 135(1): 497–528.
  • Savitzky and Golay (1964) Savitzky, A.; and Golay, M. J. 1964. Smoothing and differentiation of data by simplified least squares procedures. Analytical chemistry, 36(8): 1627–1639.
  • Shen et al. (2021) Shen, Z.; Liu, J.; He, Y.; Zhang, X.; Xu, R.; Yu, H.; and Cui, P. 2021. Towards out-of-distribution generalization: A survey. arXiv preprint arXiv:2108.13624.
  • Spong, Hutchinson, and Vidyasagar (2020) Spong, M. W.; Hutchinson, S.; and Vidyasagar, M. 2020. Robot modeling and control. John Wiley & Sons.
  • Tassa, Mansard, and Todorov (2014) Tassa, Y.; Mansard, N.; and Todorov, E. 2014. Control-limited differential dynamic programming. In 2014 IEEE International Conference on Robotics and Automation (ICRA), 1168–1175. IEEE.
  • Tedrake (2022) Tedrake, R. 2022. Underactuated Robotics.
  • Truesdell (1992) Truesdell, C. A. 1992. A First Course in Rational Continuum Mechanics V1. Academic Press.
  • Wang, Bhouri, and Perdikaris (2021) Wang, S.; Bhouri, M. A.; and Perdikaris, P. 2021. Fast PDE-constrained optimization via self-supervised operator learning. arXiv preprint arXiv:2110.13297.
  • Wang, Teng, and Perdikaris (2021) Wang, S.; Teng, Y.; and Perdikaris, P. 2021. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5): A3055–A3081.
  • Wang, Wang, and Perdikaris (2021) Wang, S.; Wang, H.; and Perdikaris, P. 2021. Learning the solution operator of parametric partial differential equations with physics-informed deeponets. Science advances, 7(40): eabi8605.
  • Xie et al. (2018) Xie, S.; Hu, X.; Qi, S.; and Lang, K. 2018. An artificial neural network-enhanced energy management strategy for plug-in hybrid electric vehicles. Energy, 163: 837–848.
  • Xiu and Hesthaven (2005) Xiu, D.; and Hesthaven, J. S. 2005. High-order collocation methods for differential equations with random inputs. SIAM Journal on Scientific Computing, 27(3): 1118–1139.
  • Yu et al. (2018) Yu, B.; et al. 2018. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1): 1–12.
  • Yu et al. (2016) Yu, K.; Bauza, M.; Fazeli, N.; and Rodriguez, A. 2016. More than a Million Ways to be Pushed: A High-Fidelity Experimental Data Set of Planar Pushing. In IEEE/RSJ IROS.
  • Zhou (2020) Zhou, D.-X. 2020. Universality of deep convolutional neural networks. Applied and computational harmonic analysis, 48(2): 787–794.
  • Zhou et al. (2022) Zhou, R.; Quartz, T.; De Sterck, H.; and Liu, J. 2022. Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees. arXiv preprint arXiv:2206.01913.
  • Zhu and Zabaras (2018) Zhu, Y.; and Zabaras, N. 2018. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 366: 415–447.
  • Zhu et al. (2019) Zhu, Y.; Zabaras, N.; Koutsourelakis, P.-S.; and Perdikaris, P. 2019. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394: 56–81.

Appendix A Algorithms

The data generation algorithm is listed in Alg. 1.

Algorithm 1 Data generation

Input: Distribution of target state $\mathbf{x}_{goal}$
Output: Dataset

1:  $N \leftarrow$ number of samples per trajectory;
2:  Dataset $\leftarrow$ empty set;
3:  while Dataset is not full do
4:     sample $\boldsymbol{x}_{goal}$ from $\mathbf{x}_{goal}$
5:     construct cost functional $f$ with $\boldsymbol{x}_{goal}$
6:     construct OCP (Eq. 1) with cost functional $f$
7:     $\boldsymbol{u}^{*} \leftarrow$ OC_Solver(OCP); {any solver is applicable; we choose DM}
8:     sample $\{t_{j}\}_{1\leq j\leq N}$ from the time horizon $T$ of the OCP
9:     add triplets $\{(f, t_{j}, \boldsymbol{u}^{*}(t_{j}))\}_{1\leq j\leq N}$ to Dataset
10:  end while
11:  return Dataset

Appendix B Related Works

OCP Solvers

Traditional numerical solvers have been well developed over the decades; they are learning-free and often involve tedious optimization iterations to find an optimal solution.

Direct methods (Böhme and Frank 2017a) reformulate the OCP as a finite-dimensional nonlinear program (NLP) (Bazaraa, Sherali, and Shetty 2013) and solve it with NLP algorithms, e.g., sequential quadratic programming (Boggs and Tolle 1995) and the interior-point method (Mehrotra 1992). The reformulation essentially constructs surrogate models, where the (infinite-dimensional) state and control functions are replaced by polynomial or piece-wise constant functions, and the dynamics constraint is discretized into equality constraints. Direct methods optimize the surrogate models, so the solution is not guaranteed to be optimal for the original problem. Likewise, typical direct neural solvers (Chen, Shi, and Zhang 2018; Wang, Bhouri, and Perdikaris 2021; Hwang et al. 2022), termed Two-Phase models, consist of two phases: 1) approximating the dynamics by a neural network (surrogate model); 2) solving the NLP via gradient descent by differentiating through the network. The advantage of Two-Phase models over traditional direct methods is computational efficiency, especially in high-dimensional cases. However, the two-phase method is sensitive to distribution shift (see Fig. 1).

Indirect methods (Böhme and Frank 2017b) are based on Pontryagin's Maximum Principle (PMP) (Pontryagin 1987). By PMP, indirect methods convert the OCP (Eq. 1) into a boundary-value problem (BVP) (Lasota 1968), which is then solved by numerical methods such as the shooting method (Bock and Plitt 1984), the collocation method (Xiu and Hesthaven 2005), and adjoint-based gradient descent (Effati and Pakdaman 2013; Jin et al. 2020). These numerical methods are sensitive to the initial guess of the solution. Some indirect-method-based neural solvers approximate the finite-dimensional mapping from the state $\boldsymbol{x}^{\ast}(t)\in\mathbb{R}^{d_{x}}$ to the control $\boldsymbol{u}^{\ast}(t)\in\mathbb{R}^{d_{u}}$ (Cheng et al. 2020), or to the co-state $\boldsymbol{\lambda}^{\ast}(t)\in\mathbb{R}^{d_{x}}$ (Xie et al. 2018). The full trajectory of the control function is obtained by repeatedly applying the mapping and getting feedback from the system, and this sequential nature is the efficiency bottleneck. Another work (D'ambrosio et al. 2021) proposes to solve the BVP via a PINN, so the trained network works only for one specific OCP instance.

Dynamic programming (DP) is an alternative, based on Bellman's principle of optimality (Bellman and Kalaba 1960). It offers a rule to divide a high-dimensional optimization problem with a long time horizon into smaller, easier-to-solve auxiliary optimization problems. Typical methods are the Hamilton-Jacobi-Bellman (HJB) equation (Al-Tamimi, Lewis, and Abu-Khalaf 2008); differential dynamic programming (DDP) (Tassa, Mansard, and Todorov 2014), which assumes quadratic dynamics and value function; and the iterative linear quadratic regulator (iLQR) (Li and Todorov 2004), which assumes linear dynamics and a quadratic value function. Similar to dynamic programming, model predictive control (MPC) synthesizes an approximate control function via the repeated solution of finite overlapping horizons (Hewing et al. 2020). The main drawback of DP is the curse of dimensionality in the number and complexity of the auxiliary problems. MPC alleviates this problem at the expense of optimality, yet fast implementation of MPC is still under exploration and remains open (Nubert et al. 2020).

Differential Equation Neural Solvers

A variety of networks have been developed to solve differential equations (DEs), including physics-informed neural networks (PINNs) (Raissi, Perdikaris, and Karniadakis 2019), neural operators (Lu et al. 2021), hybrid models (Mathiesen, Yang, and Hu 2022), and frequency-domain models (Li et al. 2020). We briefly introduce the first two for their close relevance to our work.

PINNs parameterize the DE solution as a neural network and learn the parameters by minimizing the residual loss and the boundary condition loss (Yu et al. 2018; Raissi, Perdikaris, and Karniadakis 2019). PINNs resemble classical numerical methods such as the finite element method, in that they replace the linear span of a finite set of local basis functions with a neural network. PINNs usually have simple architectures (e.g. MLP), yet they have produced remarkable results across a wide range of areas in computational science and engineering (Raissi, Yazdani, and Karniadakis 2020; Zhu et al. 2019). However, these models are limited to learning the solution of one specific DE instance, not the operator. In other words, if the coefficients of the DE instance change even slightly, a new neural network needs to be trained, which is time-consuming. Another major drawback of PINNs, as pointed out by (Wang, Teng, and Perdikaris 2021), is that the magnitudes of the two loss terms (i.e., the residual loss and the boundary condition loss) are inherently imbalanced, leading to heavily biased predictions even for simple linear equations.
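As an illustration, a minimal PINN loss for the toy ODE $\dot{x}=-x$, $x(0)=1$ is sketched below; the architecture and coefficients are arbitrary, and the unweighted sum of the two loss terms is exactly where the imbalance discussed above can arise.

```python
import torch

# Toy ODE: x'(t) = -x(t) on [0,1] with x(0) = 1 (illustrative).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(2000):
    t = torch.rand(128, 1, requires_grad=True)        # collocation points
    x = net(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    residual = ((dx + x) ** 2).mean()                 # ODE residual loss
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # x(0) = 1
    loss = residual + boundary    # unweighted sum; the two scales may differ
    opt.zero_grad(); loss.backward(); opt.step()
```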

Neural operators regard a DE as an operator that maps the input to the solution. Learning operators with neural networks was introduced in the seminal work (Chen and Chen 1995), which proposes the universal approximation theorem for operator learning, i.e. a network with a single hidden layer can approximate any nonlinear continuous operator. (Lu et al. 2021) builds on this theorem with a deep architecture named DeepONet, which consists of two networks: a branch net for input functions and a trunk net for the query locations in the output space. The DeepONet output is the inner product of the branch and trunk outputs. DeepONet can be regarded as a special case of our NASM, since it is equivalent to a NASM without the convolution blocks and the projecting network. Our analysis of the optimal control error bound is partly inspired by (Lanthaler, Mishra, and Karniadakis 2022), which provides error estimates for DeepONet.
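A minimal sketch of the DeepONet forward pass is given below, with illustrative layer sizes; it shows only the branch/trunk inner product described above, not the full training pipeline of (Lu et al. 2021).

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Branch/trunk inner product; m sensor values, p basis coefficients."""
    def __init__(self, m=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.ReLU(), nn.Linear(128, p))

    def forward(self, u_sensors, t):
        b = self.branch(u_sensors)      # (batch, p) coefficients
        tau = self.trunk(t)             # (batch, p) learned basis values at t
        return (b * tau).sum(-1, keepdim=True)

model = DeepONet()
out = model(torch.randn(8, 100), torch.rand(8, 1))   # (8, 1) predictions
```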

In addition, there exist many neural operators with other architectures. For example, one type parameterizes the operator as a convolutional neural network (CNN) between finite-dimensional data meshes (Zhu and Zabaras 2018; Khoo, Lu, and Ying 2021). The major weakness of these models is that solutions cannot be queried at off-grid points. Moreover, graph neural networks (GNNs) (Kipf and Welling 2016) have also been applied to operator learning (Alet et al. 2019; Anandkumar et al. 2020). The key idea is to construct the spatial mesh as a graph, where nodes are discretized spatial locations and edges link neighboring locations. Compared with CNN-based models, graph operator models are less sensitive to resolution and are capable of inference at off-grid points by adding new nodes to the graph. However, their computational cost is still high, growing quadratically with the number of nodes. Another category of neural operators is based on the Fourier transform (Anandkumar et al. 2020; Li et al. 2020). These models learn parametric linear functions in the frequency domain, along with nonlinear functions in the time domain; the conversion between the two domains is realized by the discrete Fourier transform. The approximator architecture of NASM is inspired by Fourier neural operators, and we replace the matrix product in the frequency domain with convolution in the original domain.

Appendix C Approximation Error Theorems

We give an estimate of the approximation error of the proposed model. The theoretical result guarantees that there exists a neural network instance of the NASM architecture approximating the operator $\mathcal{G}$ within an arbitrary error tolerance. Furthermore, the size and depth of such a network are upper-bounded. The error bound will be compared with the DON error bound derived in (Lanthaler, Mishra, and Karniadakis 2022). The technical line of our analysis is inspired by (Lanthaler, Mishra, and Karniadakis 2022), which provides error estimates for DON's three components: encoder, approximator, and reconstructor.

Definition and Decomposition of Errors

For a fair and systematic comparison, we also decompose the architecture of NASM into three components. The encoder converts the infinite-dimensional input into a finite vector, as introduced in the main text. The approximator of NASM is the coefficient network that produces the (approximated) coefficients. The reconstructor consists of the basis functions and the aggregation, which reconstruct the optimal control from the coefficients. Accordingly, the approximation error of NASM decomposes into three parts: 1) encoder error, 2) approximator error, and 3) reconstructor error.

For ease of analysis, the error of the encoder is temporarily set aside and assumed to be zero. The reason is that the design of the encoder is more related to the problem setting than to the neural operator itself. In our experiments, all neural operators use the same encoder, so the error incurred by the encoder should be almost the same across models. Therefore, assuming zero encoder error is a safe simplification. Under this assumption, for a given encoder $\mathcal{E}$, there exists an inverse mapping $\mathcal{E}^{-1}$ such that $\mathcal{E}^{-1}\circ\mathcal{E}=\operatorname{Id}$. The non-zero encoder error is analyzed in Appx. D.

For a reconstructor $\mathcal{R}$, its error $\widehat{\mathscr{E}}_{\mathcal{R}}$ is estimated by the mismatch between $\mathcal{R}$ and its optimal approximate inverse mapping, the projector $\mathcal{P}$, weighted by the push-forward measure $\mathcal{G}_{\#}\mu(u):=\mu(\mathcal{G}^{-1}(u))$.

\widehat{\mathscr{E}}_{\mathcal{R}}:=\left(\int_{U}\norm{\mathcal{R}\circ\mathcal{P}(u)-u}_{U}^{2}\differential\left(\mathcal{G}_{\#}\mu\right)(u)\right)^{\frac{1}{2}} (6)
\mathcal{P}:=\operatornamewithlimits{argmin}_{\mathcal{P}}\widehat{\mathscr{E}}_{\mathcal{R}},\quad\text{s.t.}\quad\mathcal{P}\circ\mathcal{R}=\operatorname{Id}.

Intuitively, this reconstructor error quantifies the information loss induced by $\mathcal{R}$. An ideal $\mathcal{R}$ without any information loss is invertible, i.e., its optimal approximate inverse $\mathcal{P}$ is exactly $\mathcal{R}^{-1}$, giving $\widehat{\mathscr{E}}_{\mathcal{R}}=0$.

Given the encoder and reconstructor, and denoting the encoder output by ${\bf\it e}\in\mathbb{R}^{m}$, the error $\widehat{\mathscr{E}}_{\mathcal{A}}$ induced by the approximator $\mathcal{A}$ is defined as the distance between the approximator output and the optimal coefficient vector, weighted by the push-forward measure $\mathcal{E}_{\#}\mu({\bf\it e}):=\mu(\mathcal{E}^{-1}({\bf\it e}))$:

^𝒜:=(m𝒜(𝒆)𝒫𝒢1(𝒆)2(p)2d(#μ)(𝒆))12.\widehat{\mathscr{E}}_{\mathcal{A}}:=\left(\int_{\mathbb{R}^{m}}\|\mathcal{A}({\bf\it e})-\mathcal{P}\circ\mathcal{G}\circ\mathcal{E}^{-1}({\bf\it e})\|_{\ell^{2}\left(\mathbb{R}^{p}\right)}^{2}\differential\left(\mathcal{E}_{\#}\mu\right)({\bf\it e})\right)^{\frac{1}{2}}. (7)

With the definitions above, we can estimate the approximation error of our NASM by the error of each of its components, as stated in the following theorem (see detailed proof in Appendix D):

Theorem 2 (Decomposition of NASM Approximation Error).

Suppose the cost functional fif_{i} is Lipschitz continuous, with Lipschitz constant Lip(fi)\operatorname{Lip}(f_{i}). Define constant C=supiILip(fi)fi𝒢(i).C=\sup_{i\in I}\frac{\operatorname{Lip}(f_{i})}{f_{i}\circ\mathcal{G}(i)}. The approximation error ^\widehat{\mathscr{E}} (Eq. 2) of a NASM 𝒩=𝒜\mathcal{N}=\mathcal{R}\circ\mathcal{A}\circ\mathcal{E} is upper-bounded by

^C(Lip()^𝒜+^).\widehat{\mathscr{E}}\leq C\quantity(\operatorname{Lip}(\mathcal{R})\widehat{\mathscr{E}}_{\mathcal{A}}+\widehat{\mathscr{E}}_{\mathcal{R}}). (8)

Estimation of Decomposed Errors

The reconstructor error $\widehat{\mathscr{E}}_{\mathcal{R}}$ of NASM reduces to the error of interpolation by basis functions, which is thoroughly studied in approximation theory (Davis 1975; Powell et al. 1981). For the Fourier basis, we restate an existing result as the following theorem:

Theorem 3 (Fourier Reconstructor Error).

If $\mathcal{G}$ defines a Lipschitz mapping $\mathcal{G}:I\rightarrow H^{s}(T)$, for some $s>0$, $M>0$, with $\int_{I}\|\mathcal{G}(i)\|_{H^{s}}^{2}\differential\mu(i)\leq M<\infty$, then there exists a constant $C=C(s,M)>0$ such that for any $p\in\mathbb{N}$, the Fourier basis ${\bf\it b}:T\rightarrow\mathbb{R}$ with the summation aggregation and the associated reconstruction $\mathcal{R}:\mathbb{R}^{p}\rightarrow U$ satisfies:

^Cps.\displaystyle\widehat{\mathscr{E}}_{\mathcal{R}}\leq Cp^{-s}. (9)

Furthermore, the reconstruction $\mathcal{R}$ satisfies $\operatorname{Lip}(\mathcal{R})\leq 1$.

$H^{s}$ is a Sobolev space with $s$ degrees of regularity and the $L^{2}$ norm. The proof (omitted here) rests on the observation that a smooth function (i.e., one with large $s$) changes slowly with time, so its high-frequency Fourier coefficients are small (exponentially decaying).
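This decay can be checked numerically. The snippet below (an illustration, not part of the proof) truncates the Fourier series of a smooth periodic function to its $p$ lowest modes and reports the $L^2$ reconstruction error, which for an analytic target decays faster than any polynomial rate $p^{-s}$:

```python
import numpy as np

t = np.linspace(0, 1, 2048, endpoint=False)
f = np.exp(np.sin(2 * np.pi * t))         # smooth (analytic) periodic target
F = np.fft.rfft(f)                        # Fourier coefficients

for p in [2, 4, 8, 16]:
    Fp = np.zeros_like(F)
    Fp[:p] = F[:p]                        # keep only the p lowest modes
    fp = np.fft.irfft(Fp, n=len(t))       # truncated reconstruction
    err = np.sqrt(np.mean((f - fp) ** 2))
    print(f"p = {p:2d}   L2 error = {err:.2e}")
```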

Next, we present the error bound of the approximator. The approximator maps between two finite-dimensional vector spaces using an MLP, whose error estimation is well-studied in deep learning theory. One existing work (Gühring, Kutyniok, and Petersen 2020) derives such an estimate based on the Sobolev regularity of the mapping. We extend this result to both MLP and convolution backbones (Appendix D), from which the approximator error follows directly. We state the MLP-backbone approximator error below; the convolution-backbone error can be derived similarly.

Theorem 4 (Approximator Error (MLP backbone)).

Given the operator $\mathcal{G}:I\rightarrow U$, the encoder $\mathcal{E}:I\rightarrow\mathbb{R}^{m}$, and the reconstructor $\mathcal{R}:\mathbb{R}^{p}\rightarrow U$, let $\mathcal{P}$ denote the corresponding projector. If, for some $s\in\mathbb{N}_{\geq 2}$, $M>0$, the bound $\norm{\mathcal{P}\circ\mathcal{G}\circ\mathcal{E}^{-1}}_{H^{s}(\mathcal{E}_{\#}\mu)}\leq M<\infty$ is satisfied, then there exists a constant $C=C(m,s,M)>0$ such that for any $\varepsilon\in(0,\frac{1}{2})$, there exists an approximator $\mathcal{A}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{p}$ implemented by an MLP such that:

size(𝒜)Cp2εm/slog(ε1),depth(𝒜)Clog(ε1),^𝒜pε.\displaystyle\operatorname{size}(\mathcal{A})\leq Cp^{2}\varepsilon^{-m/s}\log\left(\varepsilon^{-1}\right),\qquad\operatorname{depth}(\mathcal{A})\leq C\log\left(\varepsilon^{-1}\right),\qquad\widehat{\mathscr{E}}_{\mathcal{A}}\leq\sqrt{p}\varepsilon. (10)

Note that size()\operatorname{size}(\cdot) is defined as the number of trainable parameters of a neural network, and depth()\operatorname{depth}(\cdot) denotes the number of hidden layers.

In summary, we have proved that the approximation error is bounded by the sum of the reconstructor and approximator errors. These errors can be made arbitrarily small with networks of bounded size and depth. The theorems hold as long as certain integrability and continuity conditions are satisfied (Dontchev and Zolezzi 2006), which hold trivially in many real-world continuous OCPs.

Comparison with DeepONet Error

For fairness, we assume the DON uses the same encoder and network backbone as the NASM. Then the approximator error and the encoder error of both operators should be the same. We only need to compare the reconstructor error.

The error of the DeepONet reconstructor $\mathcal{R}_{don}$ is analyzed in a previous work (Lanthaler, Mishra, and Karniadakis 2022) under the assumption that the DeepONet trunk net learns an approximate Fourier basis. We restate the result as the following theorem:

Theorem 5 (DeepONet Reconstructor Error (Lanthaler, Mishra, and Karniadakis 2022)).

If $\mathcal{G}$ defines a Lipschitz mapping $\mathcal{G}:I\rightarrow H^{s}(T)$, for some $s>0$, $M>0$, with $\norm{\mathcal{G}}_{H^{s}}^{2}\leq M<\infty$, then there exists a constant $C=C(s,M)>0$ such that for any $p\in\mathbb{N}$, there exist a trunk net (i.e., the basis-function net) $\boldsymbol{\tau}:T\rightarrow\mathbb{R}^{p}$ and an associated reconstruction $\mathcal{R}_{don}:\mathbb{R}^{p}\rightarrow U$ satisfying:

size(don)\displaystyle\operatorname{size}(\mathcal{R}_{don}) \displaystyle\leq Cp(1+log(p)2),\displaystyle Cp\left(1+\log(p)^{2}\right), (11)
depth(don)\displaystyle\operatorname{depth}(\mathcal{R}_{don}) \displaystyle\leq C(1+log(p)2),\displaystyle C\left(1+\log(p)^{2}\right), (12)
^don\displaystyle\widehat{\mathscr{E}}_{\mathcal{R}_{don}} \displaystyle\leq Cps.\displaystyle Cp^{-s}. (13)

Furthermore, the reconstruction don\mathcal{R}_{don} and the optimal projection 𝒫don\mathcal{P}_{don} satisfy Lip(don),Lip(𝒫don)2\operatorname{Lip}(\mathcal{R}_{don}),\operatorname{Lip}(\mathcal{P}_{don})\leq 2.

One can observe that the DON reconstructor error has the same form as that of NASM. However, the DON reconstructor requires a neural network (the trunk net) to implement the basis functions, while NASM uses a pre-defined Fourier basis. In the degenerate form of NASM, i.e. without adaptive parameters and neural aggregation, the NASM reconstructor has no learnable parameters at all. In this sense, NASM achieves the same error bound with fewer parameters than DON.

Appendix D Mathematical Preliminaries and Proofs

Preliminaries on Sobolev Space

For $d\geq 1$, $\Omega$ an open subset of $\mathbb{R}^{d}$, $p\in[1,+\infty]$, and $s\in\mathbb{N}$, the Sobolev space $W^{s,p}(\Omega)$ is defined by

Ws,p(Ω)={fLp(Ω):|α|s,xαfLp(Ω)}W^{s,p}(\Omega)=\left\{f\in L^{p}(\Omega):\forall|\alpha|\leq s,\partial_{x}^{\alpha}f\in L^{p}(\Omega)\right\}

where $\alpha=\left(\alpha_{1},\ldots,\alpha_{d}\right)$, $|\alpha|=\alpha_{1}+\ldots+\alpha_{d}$, and the derivatives $\partial_{x}^{\alpha}f=\partial_{x_{1}}^{\alpha_{1}}\cdots\partial_{x_{d}}^{\alpha_{d}}f$ are taken in the weak sense. $W^{s,p}(\Omega)$ is a Banach space when equipped with the Sobolev norm:

fWs,p=|α|sxαfLp\|f\|_{W^{s,p}}=\sum_{|\alpha|\leq s}\left\|\partial_{x}^{\alpha}f\right\|_{L^{p}}

In the special case $p=2$, $W^{s,2}(\Omega)$ is denoted by $H^{s}(\Omega)$; it is a Hilbert space with the inner product

f,gs,Ω=|α|sxαf,xαgL2(Ω)=|α|sΩxαfxαg¯dμ\langle f,g\rangle_{s,\Omega}=\sum_{|\alpha|\leq s}\left\langle\partial_{x}^{\alpha}f,\partial_{x}^{\alpha}g\right\rangle_{L^{2}(\Omega)}=\sum_{|\alpha|\leq s}\int_{\Omega}\partial_{x}^{\alpha}f\overline{\partial_{x}^{\alpha}g}d\mu

Error Estimation of MLP and CNN

The fundamental neural modules of NASM are the MLP and the CNN, so the error estimation of NASM entails error estimates for these two modules. In this section, we cite and extend error analyses from previous works.

MLP

Firstly, the error bound of an MLP approximating a mapping between vector spaces is well-studied in deep learning theory. One existing work (Gühring, Kutyniok, and Petersen 2020) derives an estimate based on the Sobolev regularity of the mapping. We cite the main result as the following lemma (notation modified for consistency):

Lemma 6 (Approximation Error of one-dimensional MLP (Gühring, Kutyniok, and Petersen 2020)).

Let $m\in\mathbb{N}$, $s\in\mathbb{N}_{\geq 2}$, $1\leq q\leq\infty$, $M>0$, and $0\leq n\leq 1$. Then there exists a constant $C=C(m,s,q,M,n)$ with the following property: for any function $f$ with $m$-dimensional input and one-dimensional output in the Sobolev space $W^{s,q}$ satisfying

\norm{f}_{W^{s,q}}\leq M,

and for any $\epsilon\in(0,1/2)$, there exists a ReLU MLP $\mathcal{N}$ such that:

\norm{\mathcal{N}-f}_{W^{n,q}}\leq\epsilon,

and:

size(𝒩)\displaystyle\operatorname{size}(\mathcal{N}) \displaystyle\leq Cϵm/(sn)log(ϵs/(sn)),\displaystyle C\epsilon^{-m/(s-n)}\log(\epsilon^{-s/(s-n)}),
depth(𝒩)\displaystyle\operatorname{depth}(\mathcal{N}) \displaystyle\leq Clog(ϵs/(sn)).\displaystyle C\log(\epsilon^{-s/(s-n)}).

However, such error bounds for one-dimensional-output MLPs cannot be directly applied to an MLP with $p$-dimensional output: a generic $p$-dimensional MLP cannot be represented by simply stacking $p$ independent one-dimensional-output networks $\{\mathcal{N}_{j}\}_{1\leq j\leq p}$ and concatenating the outputs, because the hidden layers share parameters. To fill this gap and estimate the error of ${\bf\it\beta}$, we design a special MLP structure without parameter sharing, as explained below.

Given $p$ independent one-dimensional-output networks $\{\mathcal{N}_{j}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{1}\}_{1\leq j\leq p}$, denote the weight matrix of the $i$-th layer of $\mathcal{N}_{j}$ by ${\bf\it W}_{i,j}$. The weight matrix of the $i$-th layer of the $p$-dimensional MLP ${\bf\it\beta}$, denoted ${\bf\it W}^{{\bf\it\beta}}_{i}$, can be constructed as:

{\bf\it W}^{{\bf\it\beta}}_{1}=\begin{bmatrix}{\bf\it W}_{1,1}\\ {\bf\it W}_{1,2}\\ \vdots\\ {\bf\it W}_{1,p}\end{bmatrix},\qquad {\bf\it W}^{{\bf\it\beta}}_{i\geq 2}=\begin{bmatrix}{\bf\it W}_{i,1}&&\\ &\ddots&\\ &&{\bf\it W}_{i,p}\end{bmatrix}.

The weight of the first layer, ${\bf\it W}^{{\bf\it\beta}}_{1}$, is the vertical concatenation of $\{{\bf\it W}_{1,j}\}_{1\leq j\leq p}$, and the weight ${\bf\it W}^{{\bf\it\beta}}_{i}$ of any remaining layer $i\geq 2$ is a block-diagonal matrix whose main-diagonal blocks are $\{{\bf\it W}_{i,j}\}_{1\leq j\leq p}$. It is easy to verify that such an approximator is computationally equivalent to stacking the $p$ independent one-dimensional-output networks, $\operatorname{stack}(\{\mathcal{N}_{j}\}_{1\leq j\leq p})$.
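The construction can be verified numerically. The sketch below assembles a one-hidden-layer example (biases omitted for brevity; they concatenate analogously) and checks that the single block-structured MLP matches the stacked networks:

```python
import numpy as np
from scipy.linalg import block_diag

m, h, p = 4, 8, 3                  # input dim, hidden width, output count
rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0)

# p independent one-hidden-layer ReLU networks with scalar outputs
W1 = [rng.standard_normal((h, m)) for _ in range(p)]
W2 = [rng.standard_normal((1, h)) for _ in range(p)]

x = rng.standard_normal(m)
stacked = np.concatenate([W2[j] @ relu(W1[j] @ x) for j in range(p)])

# Single MLP: first layer is the vertical concatenation, deeper layers
# are block-diagonal with blocks W_{i,j}
W1_beta = np.vstack(W1)
W2_beta = block_diag(*W2)
single = W2_beta @ relu(W1_beta @ x)

assert np.allclose(stacked, single)   # computationally equivalent
```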

Let $q=2$, $n=0$ (i.e., $\norm{\cdot}_{W^{n,q}}=\norm{\cdot}_{L^{2}}$) and $f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{p}$; then, applying Lemma 6 to each component $f_{j}$ with tolerance $\varepsilon$, the approximation error is bounded by:

^𝜷\displaystyle\widehat{\mathscr{E}}_{{\bf\it\beta}} =𝜷fL2=(j𝒩jfj2)1/2\displaystyle=\norm{{\bf\it\beta}-f}_{L^{2}}=\left(\sum_{j}\norm{\mathcal{N}_{j}-f_{j}}^{2}\right)^{1/2}
(pε2)1/2=pε.\displaystyle\leq(p\varepsilon^{2})^{1/2}=\sqrt{p}\varepsilon.

The depth and size of ${\bf\it\beta}$ follow by comparison with any $\mathcal{N}_{j}$:

\operatorname{depth}({\bf\it\beta}) =\operatorname{depth}(\mathcal{N}_{j})\leq C\log\left(\varepsilon^{-1}\right),
\operatorname{size}({\bf\it\beta}) =\operatorname{size}({\bf\it W}^{{\bf\it\beta}}_{1})+\sum_{i\geq 2}\operatorname{size}({\bf\it W}^{{\bf\it\beta}}_{i})
=p\operatorname{size}({\bf\it W}_{1,j})+\sum_{i\geq 2}p^{2}\operatorname{size}({\bf\it W}_{i,j})
\leq\sum_{i\geq 1}p^{2}\operatorname{size}({\bf\it W}_{i,j})
=p^{2}\operatorname{size}(\mathcal{N}_{j})
\leq Cp^{2}\varepsilon^{-m/s}\log\left(\varepsilon^{-1}\right),

where the sums run over the layers of the network.

The above result can be summarized as another lemma:

Lemma 7 (Approximation Error of 𝐩\mathbf{p}-dimensional MLP).

Let $m\in\mathbb{N}$, $s\in\mathbb{N}_{\geq 2}$, $M>0$. Then there exists a constant $C=C(m,s,M)$ with the following property: for any function $f$ with $m$-dimensional input and $p$-dimensional output in the Hilbert space $H^{s}$ satisfying $\norm{f}_{H^{s}}\leq M$, and for any $\varepsilon\in(0,1/2)$, there exists a ReLU MLP ${\bf\it\beta}$ such that:

\norm{{\bf\it\beta}-f}_{L^{2}} \leq \sqrt{p}\,\varepsilon,
\operatorname{size}({\bf\it\beta}) \leq Cp^{2}\varepsilon^{-m/s}\log\left(\varepsilon^{-1}\right),
\operatorname{depth}({\bf\it\beta}) \leq C\log\left(\varepsilon^{-1}\right).

CNN

Next, we derive a similar error bound for CNNs, which has not been widely studied before. A previous work (Zhou 2020) derives an error bound for one-dimensional CNNs with a single channel, which does not apply in our case. Another related work (Petersen and Voigtlaender 2020) establishes the equivalence of approximation properties between MLPs and CNNs, stated as the following lemma:

Lemma 8 (Equivalence of approximation between MLPs and CNNs (Petersen and Voigtlaender 2020)).

For any input/output channel numbers $N_{in},N_{out}\in\mathbb{N}_{+}$, input size $d\in\mathbb{N}_{+}$, function $f:\mathbb{R}^{N_{in}\times d}\rightarrow\mathbb{R}^{N_{out}\times d}$, $p>0$, and error $\varepsilon\geq 0$:

(1) If there is an MLP $\Phi$ with $\operatorname{size}(\Phi)=S$ and $\operatorname{depth}(\Phi)=L$ satisfying $\norm{\Phi-f}_{L^{p}}\leq\varepsilon$, then there exists a CNN $\Psi$ with $\operatorname{size}(\Psi)\leq 2S$ and $\operatorname{depth}(\Psi)=L$ such that $\norm{\Psi-f}_{L^{p}}\leq d^{1/p}\varepsilon$.

(2) If there exists a CNN $\Psi$ with $\operatorname{size}(\Psi)=S$ and $\operatorname{depth}(\Psi)=L$ satisfying $\norm{\Psi-f}_{L^{p}}\leq\varepsilon$, then there exists an MLP $\Phi$ with $\operatorname{size}(\Phi)\leq d^{2}S$ and $\operatorname{depth}(\Phi)=L$ such that $\norm{\Phi-f}_{L^{p}}\leq\varepsilon$.

Proof idea: part (1) constructs a lifted CNN from the MLP, replacing each neuron of the MLP with a full channel of the CNN, so that the lifted CNN computes an equivariant version of the MLP. Part (2) follows from the fact that convolutions are special linear maps.
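The fact underlying part (2), that a convolution with kernel size 1 computes exactly a position-wise linear map, can be checked directly; the snippet below is a sanity check with arbitrary sizes, not the lifting construction of the proof:

```python
import torch
import torch.nn as nn

N_in, N_out, d = 3, 5, 16
lin = nn.Linear(N_in, N_out)
conv = nn.Conv1d(N_in, N_out, kernel_size=1)
with torch.no_grad():                 # copy the linear map into the kernel
    conv.weight.copy_(lin.weight.unsqueeze(-1))   # (N_out, N_in, 1)
    conv.bias.copy_(lin.bias)

x = torch.randn(2, N_in, d)           # (batch, channels, positions)
y_conv = conv(x)
y_lin = lin(x.transpose(1, 2)).transpose(1, 2)    # same map, per position
assert torch.allclose(y_conv, y_lin, atol=1e-6)
```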

The equivalence lemma (Lemma 8, part (1)), combined with the approximation error of MLPs (Lemma 7), naturally yields the approximation error of CNNs:

Lemma 9 (Approximation Error of CNN).

Let the input/output channel numbers $N_{in},N_{out}\in\mathbb{N}_{+}$, input size $d\in\mathbb{N}_{+}$, $s\in\mathbb{N}_{\geq 2}$, and $M>0$. Then there exists a constant $C=C(N_{in},N_{out},d,s,M)$ with the following property: for any function $f:\mathbb{R}^{N_{in}\times d}\rightarrow\mathbb{R}^{N_{out}\times d}$ with $\norm{f}_{H^{s}}\leq M$ and any error $\varepsilon\in(0,1/2)$, there exists a CNN $\Psi$ such that:

\norm{\Psi-f}_{L^{2}} \leq \sqrt{N_{out}}\,d\,\varepsilon,
\operatorname{size}(\Psi) \leq 2C(N_{out}d)^{2}\varepsilon^{-N_{in}d/s}\log\left(\varepsilon^{-1}\right),
\operatorname{depth}(\Psi) \leq C\log\left(\varepsilon^{-1}\right).

Proof of Theorem 2: Decomposition of NASM Approximation Error

Proof.

First, extract the constant and decompose the error by the triangle inequality (norm subscripts omitted):

^\displaystyle\widehat{\mathscr{E}} =\displaystyle= |fi𝒢fi𝒩fi𝒢|\displaystyle\absolutevalue{\frac{f_{i}\circ\mathcal{G}-f_{i}\circ\mathcal{N}}{f_{i}\circ\mathcal{G}}}
\displaystyle\leq supiI(Lip(fi)fi𝒢)𝒢𝒩=C𝒢𝒩\displaystyle\sup_{i\in I}\quantity(\frac{\operatorname{Lip}(f_{i})}{f_{i}\circ\mathcal{G}})\norm{\mathcal{G}-\mathcal{N}}=C\norm{\mathcal{G}-\mathcal{N}}
𝒢𝒩\displaystyle\norm{\mathcal{G}-\mathcal{N}} =\displaystyle= 𝒢𝒜\displaystyle\norm{\mathcal{G}-\mathcal{R}\circ\mathcal{A}\circ\mathcal{E}}
=\displaystyle= 𝒢𝒫𝒢+𝒫𝒢𝒜\displaystyle\norm{\mathcal{G}-\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}+\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}-\mathcal{R}\circ\mathcal{A}\circ\mathcal{E}}
\displaystyle\leq 𝒢𝒫𝒢+𝒫𝒢𝒜\displaystyle\norm{\mathcal{G}-\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}}+\norm{\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}-\mathcal{R}\circ\mathcal{A}\circ\mathcal{E}}

The first term is exactly the reconstructor error $\widehat{\mathscr{E}}_{\mathcal{R}}$, by the definition of the push-forward measure:

𝒢𝒫𝒢L2(μ)\displaystyle\norm{\mathcal{G}-\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}}_{L^{2}(\mu)} =\displaystyle= Id𝒫L2(𝒢#μ)=^\displaystyle\norm{\operatorname{Id}-\mathcal{R}\circ\mathcal{P}}_{L^{2}\left(\mathcal{G}_{\#}\mu\right)}=\widehat{\mathscr{E}}_{\mathcal{R}}

The second term is bounded in terms of the approximator error $\widehat{\mathscr{E}}_{\mathcal{A}}$:

𝒫𝒢𝒜L2(μ)\displaystyle\norm{\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}-\mathcal{R}\circ\mathcal{A}\circ\mathcal{E}}_{L^{2}(\mu)}
=𝒫𝒢1𝒜L2(μ)\displaystyle=\norm{\mathcal{R}\circ\mathcal{P}\circ\mathcal{G}\circ\mathcal{E}^{-1}\circ\mathcal{E}-\mathcal{R}\circ\mathcal{A}\circ\mathcal{E}}_{L^{2}(\mu)}
Lip()𝒫𝒢1𝒜L2(μ)\displaystyle\leq\operatorname{Lip}(\mathcal{R})\norm{\mathcal{P}\circ\mathcal{G}\circ\mathcal{E}^{-1}\circ\mathcal{E}-\mathcal{A}\circ\mathcal{E}}_{L^{2}(\mu)}
=Lip()𝒫𝒢1𝒜L2(#μ)\displaystyle=\operatorname{Lip}(\mathcal{R})\norm{\mathcal{P}\circ\mathcal{G}\circ\mathcal{E}^{-1}\circ\mathcal{E}-\mathcal{A}\circ\mathcal{E}}_{L^{2}\left(\mathcal{E}_{\#}\mu\right)}
=Lip()^𝒜\displaystyle=\operatorname{Lip}(\mathcal{R})\widehat{\mathscr{E}}_{\mathcal{A}}

Extension to Non-zero Encoder Error

The approximation error estimate extends naturally to non-zero encoder error, following (Lanthaler, Mishra, and Karniadakis 2022). For an encoder $\mathcal{E}$, its error $\widehat{\mathscr{E}}_{\mathcal{E}}$ is the distance to its optimal approximate inverse mapping, the decoder $\mathcal{D}$, weighted by the measure $\mu$:

\widehat{\mathscr{E}}_{\mathcal{E}}:=\left(\int_{I}\norm{\mathcal{D}\circ\mathcal{E}(i)-i}_{I}^{2}\differential\mu(i)\right)^{\frac{1}{2}} (14)
\mathcal{D}:=\operatornamewithlimits{argmin}_{\mathcal{D}}\widehat{\mathscr{E}}_{\mathcal{E}},\quad\text{s.t.}\quad\mathcal{E}\circ\mathcal{D}=\operatorname{Id}.

Similar to the reconstructor error, this error quantifies the information loss during encoding. An ideal encoder is invertible, i.e., the decoder $\mathcal{D}=\mathcal{E}^{-1}$, giving $\widehat{\mathscr{E}}_{\mathcal{E}}=0$.

The estimation of $\widehat{\mathscr{E}}_{\mathcal{E}}$ depends on the specific encoder architecture. If the encoder can be represented as a composition of MLPs and CNNs, an estimate follows from the lemmas in Appx. D. The error of point-wise evaluation is discussed in previous work on DeepONet (Lanthaler, Mishra, and Karniadakis 2022, Sec. 3.5) and (Lu et al. 2021, Sec. 3). The estimation for more complex encoders (e.g. the Savitzky–Golay smoothing used in the Pushing dataset) is beyond the scope of this paper.

When the encoder error is non-zero, the definition of the approximator error $\widehat{\mathscr{E}}_{\mathcal{A}}$ (Eq. 7) is modified accordingly, replacing $\mathcal{E}^{-1}$ with $\mathcal{D}$:

^𝒜:=(m𝒜(𝒆)𝒫𝒢𝒟(𝒆)2(p)2d(#μ)(𝒆))12.\widehat{\mathscr{E}}_{\mathcal{A}}:=\left(\int_{\mathbb{R}^{m}}\|\mathcal{A}({\bf\it e})-\mathcal{P}\circ\mathcal{G}\circ\mathcal{D}({\bf\it e})\|_{\ell^{2}\left(\mathbb{R}^{p}\right)}^{2}\differential\left(\mathcal{E}_{\#}\mu\right)({\bf\it e})\right)^{\frac{1}{2}}. (15)

Also, the approximation error bound (Eq. 8) becomes (proof similar to that of Theorem 2; omitted here):

^C(Lip(𝒢)Lip(𝒫)^+Lip()^𝒜+^).\widehat{\mathscr{E}}\leq C\quantity(\operatorname{Lip}(\mathcal{G})\operatorname{Lip}(\mathcal{R}\circ\mathcal{P})\widehat{\mathscr{E}}_{\mathcal{E}}+\operatorname{Lip}(\mathcal{R})\widehat{\mathscr{E}}_{\mathcal{A}}+\widehat{\mathscr{E}}_{\mathcal{R}}). (16)

Appendix E Environment Settings

Table 8: Typical control systems used in the literature and in our experiments, and their dimensions.
System Description dim(𝒖) dim(𝒙)
Pendulum single pendulum 1 2
RobotArm two-link robotic arm 1 4
CartPole one pendulum with cart 1 4
Quadrotor helicopter with 4 rotors 4 9
Rocket 6-DoF rocket 3 9
Pushing pushing objects on various surfaces 1 3

Pendulum

\min_{u}\int_{0}^{t_{f}}{\bf\it c_{x}}^{\top}({\bf\it x}(t)-{\bf\it x}_{goal})^{2}+c_{u}u^{2}(t)\differential{t}
\text{s.t.}\quad\dot{{\bf\it x}}(t)=\begin{bmatrix}x_{2}(t)\\ \big(u(t)-\frac{1}{2}mgl\sin x_{1}(t)\big)/I\end{bmatrix},
{\bf\it x}(0)={\bf\it x}_{init},

where the state ${\bf\it x}=[x_{1},x_{2}]^{\top}$ denotes the angle and angular velocity of the pendulum, and the control $u$ is the external torque. The initial condition is ${\bf\it x}_{init}=[0,0]^{\top}$, i.e. the pendulum starts from the lowest position with zero velocity. The cost functional consists of two parts, a state-mismatch penalty and a control regularization, with balancing coefficients ${\bf\it c_{x}}=[10,1]^{\top}$, $c_{u}=0.1$. The other constants are $m=1$, $g=10$, $l=1$, $I=1/3$.
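For illustration, a short rollout evaluating the cost of a candidate control under this model is sketched below; the Euler step size, horizon, and the upright target ${\bf\it x}_{goal}$ are hypothetical choices made for the example only:

```python
import numpy as np

m, g, l, I = 1.0, 10.0, 1.0, 1.0 / 3.0
c_x, c_u = np.array([10.0, 1.0]), 0.1
x_goal = np.array([np.pi, 0.0])     # hypothetical upright target
dt, N = 0.01, 200                   # illustrative discretization

def cost_of(u):                     # u: piecewise-constant torques, shape (N,)
    x, J = np.zeros(2), 0.0         # start at the lowest position, at rest
    for k in range(N):
        J += (c_x @ (x - x_goal) ** 2 + c_u * u[k] ** 2) * dt
        dx = np.array([x[1], (u[k] - 0.5 * m * g * l * np.sin(x[0])) / I])
        x = x + dt * dx             # Euler step of the pendulum dynamics
    return J

print(cost_of(np.zeros(N)))         # cost incurred by the zero control
```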

Robot Arm

\begin{bmatrix}\dot{x}_{1}\\ \dot{x}_{2}\end{bmatrix}=\begin{bmatrix}x_{3}\\ x_{4}\end{bmatrix},\qquad\begin{bmatrix}0\\ u\end{bmatrix}={\bf\it M}({\bf\it x})\begin{bmatrix}\dot{x}_{3}\\ \dot{x}_{4}\end{bmatrix}+{\bf\it C}({\bf\it x})\begin{bmatrix}x_{3}\\ x_{4}\end{bmatrix}+{\bf\it g}({\bf\it x}).

The RobotArm (also named the Acrobot) is a planar two-link robotic arm in the vertical plane, with an actuator at the elbow. The state is ${\bf\it x}=[x_{1},x_{2},x_{3},x_{4}]^{\top}$, where $x_{1}$ is the shoulder joint angle, $x_{2}$ is the (relative) elbow joint angle, and $x_{3},x_{4}$ denote their angular velocities, respectively. The control $u$ is the torque at the elbow. Note that the last equation is the manipulator equation, where ${\bf\it M}$ is the inertia matrix, ${\bf\it C}$ captures the Coriolis forces, and ${\bf\it g}$ is the gravity vector; the explicit entries of these terms are derived in (Spong, Hutchinson, and Vidyasagar 2020, Sec. 6.4).

The initial condition is ${\bf\it x}_{init}=[\pi/4,\pi/2,0,0]^{\top}$, and the target state baseline is ${\bf\it x}_{goal}=[\pi/2,0,0,0]^{\top}$. The cost functional coefficients are ${\bf\it c_{x}}=[0.1,0.1,0.1,0.1]^{\top}$, $c_{u}=0.1$. The other constants are: link masses $m_{1,2}=1$, gravitational acceleration $g=0$, link lengths $l_{1,2}=1$, distance from each joint to the center of mass $r_{1,2}=0.5$, and moments of inertia $I_{1,2}=1/3$.

Cart-Pole

x1˙\displaystyle\dot{x_{1}} =x3,\displaystyle=x_{3},
x2˙\displaystyle\dot{x_{2}} =x4,\displaystyle=x_{4},
x3˙\displaystyle\dot{x_{3}} =1mc+mpsin2(x2)[u+mpsin(x2)(lx42+gcos(x2))],\displaystyle=\frac{1}{m_{c}+m_{p}\sin^{2}(x_{2})}\left[u+m_{p}\sin(x_{2})\left(lx_{4}^{2}+g\cos(x_{2})\right)\right],
x4˙\displaystyle\dot{x_{4}} =1l(mc+mpsin2(x2))[ucos(x2)mplx42cos(x2)sin(x2)(mc+mp)gsin(x2)].\displaystyle=\frac{1}{l\left(m_{c}+m_{p}\sin^{2}(x_{2})\right)}[-u\cos(x_{2})-m_{p}lx_{4}^{2}\cos(x_{2})\sin(x_{2})-\left(m_{c}+m_{p}\right)g\sin(x_{2})].

In the CartPole system, an un-actuated joint connects a pole (pendulum) to a cart that moves along a frictionless track, and the goal is to swing the pole to the target configuration and balance it by applying horizontal forces to the cart. The state is ${\bf\it x}=[x_{1},x_{2},x_{3},x_{4}]^{\top}$, where $x_{1}$ is the horizontal position of the cart, $x_{2}$ is the counter-clockwise angle of the pendulum, and $x_{3},x_{4}$ are the velocity of the cart and the angular velocity of the pendulum, respectively. We refer the reader to (Tedrake 2022, Sec. 3.2) for the derivation of the above equations.

The initial condition 𝒙init=[0,0,0,0]{\bf\it x}_{init}=[0,0,0,0]^{\top}, and the target state baseline 𝒙goal=[0,π,0,0]{\bf\it x}_{goal}=[0,\pi,0,0]^{\top}. The cost functional coefficients are 𝒄𝒙=[0.1,0.6,0.1,0.1],cu=0.3{\bf\it c_{x}}=[0.1,0.6,0.1,0.1]^{\top},c_{u}=0.3. Other constants are: mass of cart and pole mc,p=0.1m_{c,p}=0.1, gravitational acceleration g=10g=10, pole length l=1l=1.

Quadrotor

𝒑˙\displaystyle\dot{{\bf\it p}} =𝒗,\displaystyle={\bf\it v},
m\dot{{\bf\it v}}=\begin{bmatrix}0\\ 0\\ -mg\end{bmatrix}+{\bf\it R}^{\top}({\bf\it q})\begin{bmatrix}0\\ 0\\ \sum_{i=1}^{4}u_{i}\end{bmatrix},
𝒒˙\displaystyle\dot{{\bf\it q}} =12𝜴(𝝎)𝒒,\displaystyle=\frac{1}{2}{\bf\it\Omega}({\bf\it\omega}){\bf\it q},
𝑱𝝎˙\displaystyle{\bf\it J}\dot{{\bf\it\omega}} =𝑻𝒖𝝎×𝑱𝝎.\displaystyle={\bf\it T}{\bf\it u}-{\bf\it\omega}\times{\bf\it J}{\bf\it\omega}.

This system describes the dynamics of a helicopter with four rotors. The state ${\bf\it x}=[{\bf\it p}^{\top},{\bf\it v}^{\top},{\bf\it\omega}^{\top}]^{\top}\in\mathbb{R}^{9}$ consists of three parts: position ${\bf\it p}$, velocity ${\bf\it v}$, and angular velocity ${\bf\it\omega}$. The control ${\bf\it u}\in\mathbb{R}^{4}$ collects the thrusts of the four rotating propellers of the quadrotor. ${\bf\it q}\in\mathbb{R}^{4}$ is the unit quaternion (Jia 2019) representing the attitude (spatial rotation) of the quadrotor w.r.t. the inertial frame. ${\bf\it J}$ is the moment of inertia in the quadrotor's frame, and ${\bf\it T}{\bf\it u}$ is the torque applied to the quadrotor. Our setting is similar to (Jin et al. 2020, Appx. E.1), except that we exclude the quaternion from the state.

The derivation is straightforward: the first two equations are Newton's laws of motion, the third is the time derivative of the quaternion (Jia 2019, Appx. B), and the last is Euler's rotation equation (Truesdell 1992, Sec. I.10). The coefficient matrices and operators used in the equations are defined as follows:

𝜴(𝝎)\displaystyle{\bf\it\Omega}\left({\bf\it\omega}\right) =[0ω1ω2ω3ω10ω3ω2ω2ω30ω1ω3ω2ω10],\displaystyle=\left[\begin{array}[]{cccc}0&-\omega_{1}&-\omega_{2}&-\omega_{3}\\ \omega_{1}&0&\omega_{3}&-\omega_{2}\\ \omega_{2}&-\omega_{3}&0&\omega_{1}\\ \omega_{3}&\omega_{2}&-\omega_{1}&0\end{array}\right],
{\bf\it R}({\bf\it q})=\begin{bmatrix}1-2(q_{2}^{2}+q_{3}^{2})&2(q_{1}q_{2}+q_{0}q_{3})&2(q_{1}q_{3}-q_{0}q_{2})\\ 2(q_{1}q_{2}-q_{0}q_{3})&1-2(q_{1}^{2}+q_{3}^{2})&2(q_{2}q_{3}+q_{0}q_{1})\\ 2(q_{1}q_{3}+q_{0}q_{2})&2(q_{2}q_{3}-q_{0}q_{1})&1-2(q_{1}^{2}+q_{2}^{2})\end{bmatrix},

and ${\bf\it T}\in\mathbb{R}^{3\times 4}$ maps the four rotor thrusts to body torques, with entries determined by the wing length $l$ and the z-axis torque constant $c$.

We set the initial state 𝒙init=[[8,6,9],0,0]{\bf\it x}_{init}=[[-8,-6,9]^{\top},{\bf\it 0},{\bf\it 0}]^{\top}, the initial quaternion 𝒒init=0{\bf\it q}_{init}={\bf\it 0}, and the target state baseline 𝒙goal=0{\bf\it x}_{goal}={\bf\it 0}. Cost functional coefficients 𝒄𝒙=1{\bf\it c_{x}}={\bf\it 1}, cu=0.1c_{u}=0.1. Other constants are configured as: mass m=1m=1, wing length l=0.4l=0.4, moment of inertia 𝑱=1{\bf\it J}={\bf\it 1}, z-axis torque constant c=0.01c=0.01.

Rocket

𝒑˙\displaystyle\dot{{\bf\it p}} =𝒗,\displaystyle={\bf\it v},
m\dot{{\bf\it v}}=\begin{bmatrix}0\\ 0\\ -mg\end{bmatrix}+{\bf\it R}^{\top}({\bf\it q}){\bf\it u},
𝒒˙\displaystyle\dot{{\bf\it q}} =12𝜴(𝝎)𝒒,\displaystyle=\frac{1}{2}{\bf\it\Omega}({\bf\it\omega}){\bf\it q},
𝑱𝝎˙\displaystyle{\bf\it J}\dot{{\bf\it\omega}} =𝑻𝒖𝝎×𝑱𝝎.\displaystyle={\bf\it T}{\bf\it u}-{\bf\it\omega}\times{\bf\it J}{\bf\it\omega}.

The Rocket system models a 6-DoF rocket in 3D space. The formulation closely follows that of the Quadrotor above: the state ${\bf\it x}=[{\bf\it p}^{\top},{\bf\it v}^{\top},{\bf\it\omega}^{\top}]^{\top}\in\mathbb{R}^{9}$ is the same, but the control ${\bf\it u}\in\mathbb{R}^{3}$ differs slightly; here ${\bf\it u}$ denotes the total thrust in three dimensions. Accordingly, the torque ${\bf\it T}{\bf\it u}$ is changed to:

{\bf\it T}{\bf\it u}=\begin{bmatrix}-l/2\\ 0\\ 0\end{bmatrix}\times{\bf\it u},

where $[-l/2,0,0]^{\top}$ is the lever arm from the center of mass to the thrust application point.

We set the initial state ${\bf\it x}_{init}=[10,-8,5,-1,{\bf\it 0}]^{\top}$, the initial quaternion ${\bf\it q}_{init}=[\cos(0.75),0,0,\sin(0.75)]^{\top}$, and the target state baseline ${\bf\it x}_{goal}={\bf\it 0}$. The cost functional coefficients are ${\bf\it c_{x}}={\bf\it 1}$, $c_{u}=0.4$. The other constants are configured as: mass $m=1$, rocket length $l=1$, and moment of inertia ${\bf\it J}=\operatorname{diag}([0.5,1,1])$.

Pushing

In this dataset, the robot executes an open-loop push along a straight 5 cm line, with different object shapes, surface materials, velocities, accelerations, and contact positions and angles; see Fig. 5 for details.

Figure 5: List of variables explored in Pushing dataset, credited to (Yu et al. 2016).

The control and state trajectories are recorded at 250 Hz. The length of the recorded time horizon varies across samples due to differences in velocity and acceleration. We select acceleration $a=0.5\,\mathrm{m\,s^{-2}}$ with initial velocity $v=0$, define the time horizon $T=0.44\,\mathrm{s}$, and extract 110 time indices per instance.

The input to the encoder is 4167-dimensional, comprising a 768-dim gray-scale image of the object shape, a 3280-dim friction-map matrix, a 110-dim trajectory, and 9 further parameters (e.g. mass and moment of inertia). The input to the neural network (including the CNN) is 801-dim, and the encoded vector ${\bf\it e}$ is 44-dim.

Appendix F Details on Implementations

For all systems, we use 2,000 samples in each ID/OOD validation set and 100 problems in each ID/OOD benchmark set. The size of the training set varies across systems. We use 4 convolution blocks, each with 4 kernels of length 16. Other detailed settings are displayed in Table 9. Note that the hyper-parameter settings may not be optimal, as they were not carefully tuned.

Table 9: Hyper-parameter settings of the proposed NASM for different systems.
System #Params #Train Data #Epochs
Pendulum 3153 5×1035\times 10^{3} 1×1041\times 10^{4}
RobotArm 1593 5×1035\times 10^{3} 1×1041\times 10^{4}
CartPole 3233 5×1035\times 10^{3} 2×1032\times 10^{3}
Quadrotor 13732 1×1041\times 10^{4} 5×1025\times 10^{2}
Rocket 10299 1×1041\times 10^{4} 5×1025\times 10^{2}
Pushing 13174 1×1041\times 10^{4} 2×1032\times 10^{3}
Brachistochrone 3153 1×1041\times 10^{4} 5×1035\times 10^{3}
Zermelo 4993 1×1041\times 10^{4} 5×1035\times 10^{3}

PDP, DM, and the synthetic control systems are implemented in CasADi (Andersson et al. 2019), adapted from the code repository https://github.com/wanxinjin/Pontryagin-Differentiable-Programming. For classical methods, the dynamics are discretized by the Euler method. To limit running time, we cap the number of PDP iterations at 2,500.

NASM and the other neural models are implemented in PyTorch (Paszke et al. 2019). The number of DON layers is tuned in the range $[3,5]$ per environment to achieve the best performance. For the MLP, hyper-parameters are the same as or as close as possible to those of DON; we adjust the width of the MLP layers to reach almost the same number of parameters. For FNO (https://github.com/zongyi-li/fourier_neural_operator), we set the number of Fourier layers to 4 as suggested in the open-source code, and tune the network width such that the number of parameters is of the same order as that of NASM. Note that the original FNO outputs function values at fixed time indices, which is inconsistent with our experiment setting; we therefore modify it slightly by adding the time index to its input. For GEN (https://github.com/FerranAlet/graph_element_networks), we set 9 graph nodes uniformly spaced over the time (or space) horizon and perform 3 graph-convolution steps on them. The input function initializes the node features at time index $t=0$ (multiplied by weights). For any time index $t$, the GEN output is defined as the weighted average of all node features, where both input and output weights are the softmax of negative distances between $t$ and the node positions (see the sketch below).
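A sketch of this readout, with illustrative node features, is:

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 9)      # 9 graph nodes on the time horizon
feats = np.random.randn(9, 4)         # node features after message passing

def gen_readout(t):
    """Weights are the softmax of negative distances between t and nodes."""
    w = np.exp(-np.abs(t - nodes))
    w /= w.sum()
    return w @ feats                  # weighted average of node features

print(gen_readout(0.37))              # query at an arbitrary time index
```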

For fairness of running time comparison, all training/testing cases are executed on an Intel i9-10920X CPU, without GPU.

Appendix G More Experiment Results on Synthetic Environment

Table 10: Results of Pendulum environment.
Model Time(sec./instance) ID MAPE OOD MAPE
DM 3.80×1023.80\times 10^{-2} N/A N/A
PDP 5.29×1015.29\times 10^{1} 2.79×1012.79\times 10^{-1} 1.04×1011.04\times 10^{-1}
NASM 1.80×1051.80\times 10^{-5} 8.20×1058.20\times 10^{-5} 2.90×1032.90\times 10^{-3}
DON 3.36×1053.36\times 10^{-5} 4.06×1044.06\times 10^{-4} 1.17×1021.17\times 10^{-2}
MLP 3.40×1053.40\times 10^{-5} 2.32×1042.32\times 10^{-4} 5.56×1035.56\times 10^{-3}
GEN 6.19×1036.19\times 10^{-3} 2.10×1032.10\times 10^{-3} 3.43×1023.43\times 10^{-2}
FNO 3.47×1043.47\times 10^{-4} 5.94×1045.94\times 10^{-4} 1.42×1031.42\times 10^{-3}
Table 11: Results of RobotArm environment.
Model Time(sec./instance) ID MAPE OOD MAPE
DM 4.58×1024.58\times 10^{-2} N/A N/A
PDP 5.62×1015.62\times 10^{1} 4.73×1034.73\times 10^{-3} 1.43×1021.43\times 10^{-2}
NASM 1.07×1051.07\times 10^{-5} 6.11×1066.11\times 10^{-6} 5.73×1045.73\times 10^{-4}
DON 1.63×1051.63\times 10^{-5} 1.28×1051.28\times 10^{-5} 5.79×1045.79\times 10^{-4}
MLP 1.56×1051.56\times 10^{-5} 1.05×1051.05\times 10^{-5} 1.41×1021.41\times 10^{-2}
GEN 6.24×1036.24\times 10^{-3} 2.21×1052.21\times 10^{-5} 7.91×1047.91\times 10^{-4}
FNO 3.48×1043.48\times 10^{-4} 5.92×1065.92\times 10^{-6} 1.42×1041.42\times 10^{-4}
Table 12: Results of CartPole environment.
Model Time(sec./instance) ID MAPE OOD MAPE
DM 4.63×1024.63\times 10^{-2} N/A N/A
PDP 5.61×1015.61\times 10^{1} 5.96×1045.96\times 10^{-4} 4.52×1054.52\times 10^{-5}
NASM 1.69×1051.69\times 10^{-5} 2.33×1052.33\times 10^{-5} 6.86×1056.86\times 10^{-5}
DON 2.53×1052.53\times 10^{-5} 4.43×1054.43\times 10^{-5} 4.39×1044.39\times 10^{-4}
MLP 2.02×1052.02\times 10^{-5} 4.81×1054.81\times 10^{-5} 4.70×1044.70\times 10^{-4}
GEN 6.41×1036.41\times 10^{-3} 6.89×1056.89\times 10^{-5} 3.39×1043.39\times 10^{-4}
FNO 3.49×1043.49\times 10^{-4} 2.48×1052.48\times 10^{-5} 7.60×1057.60\times 10^{-5}
Table 13: Results of Rocket environment.
Model Time(sec./instance) ID MAPE OOD MAPE
DM 9.71×1029.71\times 10^{-2} N/A N/A
PDP 7.01×1017.01\times 10^{1} 1.80×1071.80\times 10^{-7} 1.52×1071.52\times 10^{-7}
NASM 4.90×1054.90\times 10^{-5} 8.88×1068.88\times 10^{-6} 1.82×1041.82\times 10^{-4}
DON 5.20×1055.20\times 10^{-5} 1.28×1051.28\times 10^{-5} 2.34×1042.34\times 10^{-4}
MLP 2.87×1052.87\times 10^{-5} 2.76×1052.76\times 10^{-5} 1.60×1031.60\times 10^{-3}
GEN 6.16×1036.16\times 10^{-3} 3.57×1043.57\times 10^{-4} 4.88×1034.88\times 10^{-3}
FNO 6.33×1046.33\times 10^{-4} 2.11×1052.11\times 10^{-5} 2.18×1032.18\times 10^{-3}

Influence of Number of Training Samples

We evaluate NASM and the other neural operators as a function of the number of training examples on the Quadrotor environment; the results are displayed in Table 14. As expected, performance generally improves as the number of training samples increases. However, even with very limited data (e.g. only 2,000 samples), the results remain acceptable. There is thus a trade-off between the cost of generating optimal trajectories for training and generalization performance.

Table 14: Effect of train dataset size NN on Quadrotor ID MAPE
Model 2000 4000 6000 8000 10000
NASM 7.92×1067.92\times 10^{-6} 7.27×1067.27\times 10^{-6} 7.76×1067.76\times 10^{-6} 8.10×1068.10\times 10^{-6} 6.17×1066.17\times 10^{-6}
DON 4.43×1054.43\times 10^{-5} 4.14×1054.14\times 10^{-5} 4.74×1054.74\times 10^{-5} 1.89×1051.89\times 10^{-5} 4.08×1054.08\times 10^{-5}
MLP 2.40×1042.40\times 10^{-4} 7.20×1057.20\times 10^{-5} 9.19×1059.19\times 10^{-5} 1.13×1041.13\times 10^{-4} 1.10×1041.10\times 10^{-4}
GEN 5.36×1045.36\times 10^{-4} 4.25×1044.25\times 10^{-4} 4.54×1044.54\times 10^{-4} 5.65×1045.65\times 10^{-4} 6.42×1046.42\times 10^{-4}
FNO 8.95×1058.95\times 10^{-5} 7.87×1057.87\times 10^{-5} 7.02×1057.02\times 10^{-5} 7.91×1057.91\times 10^{-5} 8.73×1058.73\times 10^{-5}

Appendix H Extended Experiments on OCP with Analytical Solutions

Brachistochrone

The Brachistochrone is the curve along which a point mass with zero initial speed must slide without friction in a uniform gravitational field $g$ so that its travel time is the shortest among all curves $y=u(x)$ connecting two fixed points $(x_{1},y_{1}),(x_{2},y_{2})$. Formally, it can be formulated as the following OCP:

minuC1([x1,x2])\displaystyle\min_{u\in C^{1}([x_{1},x_{2}])} T\displaystyle T =12gx1x21+|u(x)|2y1u(x)dx\displaystyle=\frac{1}{\sqrt{2g}}\int_{x_{1}}^{x_{2}}\sqrt{\frac{1+\left|u^{\prime}(x)\right|^{2}}{y_{1}-u(x)}}\differential{x} (17a)
s.t. u(x1)\displaystyle u(x_{1}) =y1,\displaystyle=y_{1}, (17b)
u(x2)\displaystyle u(x_{2}) =y2.\displaystyle=y_{2}. (17c)

The optimal solution is given parametrically by:

x(θ)=x1+k(θsinθ),\displaystyle x(\theta)=x_{1}+k(\theta-\sin\theta),
u(θ)=y1k(1cosθ),\displaystyle u(\theta)=y_{1}-k(1-\cos\theta),
θ[0,Θ],\displaystyle\theta\in[0,\Theta],

where $k,\Theta$ are determined by the boundary conditions $x(\Theta)=x_{2}$, $u(\Theta)=y_{2}$.
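For a given instance, $k$ and $\Theta$ can be obtained by one-dimensional root finding; a sketch (with an illustrative instance from the ID range) is:

```python
import numpy as np
from scipy.optimize import brentq

x1, y1, x2, y2 = 0.0, 2.5, 2.0, 1.5   # an illustrative instance

# Eliminating k gives (x2-x1)/(y1-y2) = (th - sin th)/(1 - cos th),
# whose right-hand side is a monotone function of th on (0, 2*pi).
ratio = (x2 - x1) / (y1 - y2)
g = lambda th: (th - np.sin(th)) / (1.0 - np.cos(th)) - ratio
Theta = brentq(g, 1e-6, 2 * np.pi - 1e-6)
k = (y1 - y2) / (1.0 - np.cos(Theta))

theta = np.linspace(0.0, Theta, 100)  # sample the analytic optimal curve
x = x1 + k * (theta - np.sin(theta))
u = y1 - k * (1.0 - np.cos(theta))
```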

We examine the performance of the Instance-Solution operator against the analytical optimal solution. The x-coordinates of the endpoints are fixed as $x_{1}=0,x_{2}=2$, discretized uniformly into 100 intervals. An OCP instance is then uniquely determined by $(y_{1},y_{2})$, which serves as the input of the neural operators. The in-distribution (ID) inputs follow $y_{1}^{\text{in}}\sim\mathcal{U}(2,3),y_{2}^{\text{in}}\sim\mathcal{U}(1,2)$, and the out-of-distribution (OOD) inputs follow $y_{1}^{\text{out}}\sim\mathcal{U}(2.9,3.8),y_{2}^{\text{out}}\sim\mathcal{U}(1.9,2.8)$. The training data are still generated by the direct method (DM), not the analytical solution. The performance metric MAPE measures the distance to the analytical solution.

The results are displayed in Table 15. The NASM backbone achieves the best ID MAPE and the second-best OOD MAPE, with the shortest running time. All neural operators perform close to the baseline solver DM, demonstrating the effectiveness of our Instance-Solution operator framework. Interestingly, the FNO backbone attains a better OOD MAPE than DM even though it is trained on DM data; this may indicate that the Instance-Solution operator can learn a more accurate solution from less accurate supervision.

Table 15: Results of Brachistochrone environment.
Model Time(sec./instance) ID MAPE OOD MAPE
DM 9.91×1029.91\times 10^{-2} 7.33×1037.33\times 10^{-3} 4.85×1034.85\times 10^{-3}
NASM 1.40×1051.40\times 10^{-5} 8.05×1038.05\times 10^{-3} 5.52×1035.52\times 10^{-3}
DON 1.56×1051.56\times 10^{-5} 8.31×1038.31\times 10^{-3} 5.54×1035.54\times 10^{-3}
MLP 1.49×1051.49\times 10^{-5} 8.65×1038.65\times 10^{-3} 6.44×1036.44\times 10^{-3}
GEN 6.21×1036.21\times 10^{-3} 9.44×1039.44\times 10^{-3} 8.02×1028.02\times 10^{-2}
FNO 3.59×1043.59\times 10^{-4} 8.23×1038.23\times 10^{-3} 3.82×1033.82\times 10^{-3}

Zermelo’s Navigation Problem

Zermelo’s navigation problem seeks the trajectory and associated guidance of a boat traveling between two given points in minimal time. Suppose the speed of the boat relative to the water is a constant $V$, and let $\beta(t)$, the heading angle of the boat’s axis relative to the horizontal axis, be the control function. Let $u,v$ be the current velocities along the $x$ and $y$ directions. Then the OCP is formalized as:

minβC1([0,T])\displaystyle\min_{\beta\in C^{1}([0,T])} T\displaystyle T (18a)
s.t. x(t)\displaystyle x^{\prime}(t) =Vcosβ(t)+u(t,x(t),y(t)),\displaystyle=V\cos\beta(t)+u(t,x(t),y(t)), (18b)
y(t)\displaystyle y^{\prime}(t) =Vsinβ(t)+v(t,x(t),y(t)),\displaystyle=V\sin\beta(t)+v(t,x(t),y(t)), (18c)
x(0)\displaystyle x(0) =x1,y(0)=y1,\displaystyle=x_{1},\quad y(0)=y_{1}, (18d)
x(T)\displaystyle x(T) =x2,y(T)=y2.\displaystyle=x_{2},\quad y(T)=y_{2}. (18e)

The analytical solution derived from the PMP is an ODE, known as Zermelo's navigation formula:

dβdt=sin2βvx+sinβcosβ(uxvy)cos2βuy.\frac{\mathrm{d}\beta}{\mathrm{d}t}=\sin^{2}\beta\frac{\partial v}{\partial x}+\sin\beta\cos\beta\left(\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\right)-\cos^{2}\beta\frac{\partial u}{\partial y}. (19)

In our experiment, we take $u,v$ to be linear functions $u(x,y)=Ax+By$, $v(x,y)=Cx+Dy$, where $A,B,C,D$ are parameters, and we fix the initial point $x_{1}=0,y_{1}=0$. An OCP instance is then uniquely defined by the tuple $(x_{2},y_{2},V,A,B,C,D)$, which serves as the input of our Instance-Solution operator. For ID and OOD, we first define a base tuple $(1,1,2,0,0,0,0)$, then add noises ${\bf\it\varepsilon}^{in}\sim\mathcal{U}(0,1)$ for ID and ${\bf\it\varepsilon}^{out}\sim\mathcal{U}(1,2)$ for OOD, respectively. Again, all operators are trained with direct method (DM) solution data, and MAPE measures the distance to the analytical solution. For linear currents, the partial derivatives in Eq. 19 are the constants $A,B,C,D$, so the heading ODE can be integrated directly, as sketched below.
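A sketch of integrating Eq. 19 for linear currents is given below; the parameters and the initial heading $\beta(0)$ are illustrative, and in practice $\beta(0)$ and $T$ must still be chosen (e.g. by shooting) so that the endpoint constraint is met:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, D, V = 0.3, 0.1, 0.2, 0.4, 2.0   # illustrative parameters

def rhs(t, s):
    x, y, beta = s
    dx = V * np.cos(beta) + A * x + B * y
    dy = V * np.sin(beta) + C * x + D * y
    # Eq. 19 with du/dx = A, du/dy = B, dv/dx = C, dv/dy = D
    dbeta = (np.sin(beta) ** 2 * C
             + np.sin(beta) * np.cos(beta) * (A - D)
             - np.cos(beta) ** 2 * B)
    return [dx, dy, dbeta]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.5])  # beta(0) = 0.5 (guess)
print(sol.y[:2, -1])    # endpoint reached under this heading profile
```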

The results are displayed in Table 16. The NASM backbone performs best in running time, ID MAPE, and OOD MAPE. Most neural operators achieve ID performance close to DM, supporting the feasibility of the Instance-Solution operator framework.

Table 16: Results of Zermelo environment.
Model Time(sec./instance) ID MAPE OOD MAPE
DM 4.62×1024.62\times 10^{-2} 1.46×1031.46\times 10^{-3} 2.63×1032.63\times 10^{-3}
NASM 1.58×1051.58\times 10^{-5} 2.48×1032.48\times 10^{-3} 1.31×1021.31\times 10^{-2}
DON 2.84×1052.84\times 10^{-5} 8.67×1038.67\times 10^{-3} 3.96×1023.96\times 10^{-2}
MLP 2.56×1052.56\times 10^{-5} 6.32×1036.32\times 10^{-3} 2.86×1022.86\times 10^{-2}
GEN 6.81×1036.81\times 10^{-3} 8.64×1038.64\times 10^{-3} 3.85×1023.85\times 10^{-2}
FNO 4.37×1044.37\times 10^{-4} 2.66×1032.66\times 10^{-3} 3.40×1023.40\times 10^{-2}

Appendix I Future Direction: Enforcing the Constraints

In the current setting of Instance-Solution operators, boundary conditions cannot be exactly enforced, since the operators are purely data-driven and physics-agnostic; instead, they satisfy the boundary conditions approximately by learning from the reference solutions. Such a physics-agnostic setting is designed (and reasonable) for scenarios where explicit expressions of the dynamics and boundary conditions are unavailable (e.g. the Pushing experiments in Section 3.2). However, our Instance-Solution operator framework may also be applied in a physics-informed setting by changing the network backbone to a physics-informed neural operator (Wang, Wang, and Perdikaris 2021).

There are further constraints of practical importance, such as input constraints (e.g. maximum thrusts of a quadrotor) and state constraints (e.g. a quadrotor with obstacles). In our paper, they are formulated in function-space form. Taking the state $x$ for example, we define $x\in X\subseteq L(T;\mathbb{R}^{d_{x}})$, where $X$ is its function space and $L(T;\mathbb{R}^{d_{x}})$ is the full Lebesgue space; then $X=L(T;\mathbb{R}^{d_{x}})$ represents unconstrained states, and $X\subsetneq L(T;\mathbb{R}^{d_{x}})$ denotes constrained states. Constrained states and controls can be naturally incorporated into the PMP (Bettiol and Bourdin 2021), and thus can be modeled in our Instance-Solution framework. Again, these constraints are satisfied only approximately in our model.

Inspired by recent works that use SMT solvers (Gao, Kong, and Clarke 2013) to guarantee Lyapunov stability for unknown systems (Chang, Roohi, and Gao 2019; Zhou et al. 2022), we propose a possible approach to provably satisfy the OCP constraints. Given a trained neural operator, the domain of OCP instances, and the constraints, we can establish a first-order logic formula over the reals and check it with an SMT solver. The solver will either prove that the neural operator satisfies all constraints for all instances within the domain, or produce counterexamples that further enhance the training dataset. In the latter case, the operator is re-trained on the enhanced dataset until the SMT solver proves that all constraints are satisfied.