Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory
Abstract
A neural network model of a differential equation, namely neural ODE, has enabled the learning of continuous-time dynamical systems and probabilistic distributions with high accuracy. The neural ODE uses the same network repeatedly during a numerical integration. The memory consumption of the backpropagation algorithm is proportional to the number of uses times the network size. This is true even if a checkpointing scheme divides the computation graph into sub-graphs. Alternatively, the adjoint method obtains a gradient by a numerical integration backward in time. Although this method consumes memory only for a single network use, it requires high computational cost to suppress numerical errors. This study proposes the symplectic adjoint method, which is an adjoint method solved by a symplectic integrator. The symplectic adjoint method obtains the exact gradient (up to rounding error) with memory proportional to the number of uses plus the network size. The experimental results demonstrate that the symplectic adjoint method consumes much less memory than the naive backpropagation algorithm and checkpointing schemes, performs faster than the adjoint method, and is more robust to rounding errors.
1 Introduction
Deep neural networks offer remarkable methods for various tasks, such as image recognition [18] and natural language processing [4]. These methods employ a residual architecture [21, 34], in which the output of the $l$-th operation is defined as the sum of a subroutine $f_l$ and the input $x_l$ as $x_{l+1} = x_l + f_l(x_l)$. The residual architecture can be regarded as a numerical integration applied to an ordinary differential equation (ODE) [30]. Accordingly, a neural network model of the differential equation $\frac{dx}{dt} = f(x)$, namely the neural ODE, was proposed in [2]. Given an initial condition $x(0)$ as an input, the neural ODE solves an initial value problem by numerical integration, obtaining the final value $x(T)$ as an output. The neural ODE can model continuous-time dynamics such as irregularly sampled time series [24], stable dynamical systems [37, 42], and physical phenomena associated with geometric structures [3, 13, 31]. Further, because the neural ODE approximates a diffeomorphism [43], it can model probabilistic distributions of real-world data by a change of variables [12, 23, 25, 45].
For an accurate integration, the neural ODE must employ a small step size and a high-order numerical integrator composed of many internal stages. A neural network is used at each stage of each time step. Thus, the backpropagation algorithm consumes exorbitant memory to retain the whole computation graph [39, 2, 10, 46]. The neural ODE employs the adjoint method to reduce memory consumption—this method obtains a gradient by a backward integration along with the state , without consuming memory for retaining the computation graph over time [8, 17, 41, 44]. However, this method incurs high computational costs to suppress numerical errors. Several previous works employed a checkpointing scheme [10, 46, 47]. This scheme only sparsely retains the state as checkpoints and recalculates a computation graph from each checkpoint to obtain the gradient. However, this scheme still consumes a significant amount of memory to retain the computation graph between checkpoints.
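For concreteness, the trade-off above can be sketched with the torchdiffeq package of [2] (the package extended later in Section 5). The two calls below differ only in how gradients are obtained: backpropagation through the solver retains the whole computation graph, whereas the adjoint variant integrates backward in time. The network architecture and sizes are illustrative choices, not those of the paper's experiments.

```python
import torch
from torchdiffeq import odeint, odeint_adjoint  # package of [2]

class ODEFunc(torch.nn.Module):
    """The neural network f that defines dx/dt = f(x)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim))

    def forward(self, t, x):
        return self.net(x)

func, x0 = ODEFunc(), torch.randn(128, 2)
t = torch.tensor([0.0, 1.0])

# Backpropagation through the solver: every network use stays in the graph.
x_T = odeint(func, x0, t, method='dopri5')[-1]

# Adjoint method: gradients come from a backward integration; almost no graph is kept.
x_T_adj = odeint_adjoint(func, x0, t, method='dopri5')[-1]
```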
To address the above limitations, this study proposes the symplectic adjoint method. The main advantages of the proposed method are presented as follows.
Exact Gradient and Fast Computation: In discrete time, the adjoint method suffers from numerical errors or needs a smaller step size. The proposed method uses a specially designed integrator that obtains the exact gradient in discrete time. It works with the same step size as the forward integration and is thus faster than the adjoint method in practice.
Minimal Memory Consumption: Except for the adjoint method, existing methods apply the backpropagation algorithm to the computation graph of the whole or a subset of the numerical integration [10, 46, 47]. The memory consumption is proportional to the number of steps/stages in the graph times the neural network size. Conversely, the proposed method applies the algorithm only to each use of the neural network, and thus the memory consumption is only proportional to the number of steps/stages plus the network size.
Robust to Rounding Error: The backpropagation algorithm accumulates the gradient from each use of the neural network and tends to suffer from rounding errors. Conversely, the proposed method obtains the gradient from each step as a numerical integration and is thus more robust to rounding errors.
| Methods | Gradient calculation | Exact | Checkpoints | Memory (checkpoints) | Memory (backprop.) | Computational cost |
|---|---|---|---|---|---|---|
| NODE [2] | adjoint method | no | $x_N$ of each component | $O(L)$ | $O(M)$ | forward pass plus a backward re-integration over $\tilde{N}$ steps |
| NODE [2] | backpropagation | yes | — | — | $O(NsLM)$ | $O(2NsLM)$ |
| baseline scheme | backpropagation | yes | $x_0$ of each component | $O(L)$ | $O(NsM)$ | $O(3NsLM)$ |
| ACA [46] | backpropagation | yes | $x_n$ at every step | $O(NL)$ | $O(sM)$ | $O(3NsLM)$ |
| MALI [47]∗ | backpropagation | yes | $x_N$ (states rebuilt by reversibility) | $O(L)$ | $O(sM)$, $s = 2$ | $O(3NsLM)$, $s = 2$ |
| proposed∗∗ | symplectic adjoint method | yes | $x_n$ at every step and $X_{n,i}$ at every stage | $O(NL + s)$ | $O(M)$ | $O(4NsLM)$ |

∗Available only for the asynchronous leapfrog integrator. ∗∗Available for any Runge–Kutta method.
2 Background and Related Work
2.1 Neural Ordinary Differential Equation and Adjoint Method
We use the following notation.
-
:
the number of stacked neural ODE components,
-
:
the number of layers in a neural network,
-
, :
the number of time steps in the forward and backward integrations, respectively, and
-
:
the number of uses of a neural network per step.
$s$ is typically equal to the number of internal stages of a numerical integrator [17]. A numerical integration forward in time requires a computational cost of $O(NsLM)$. It also provides a computation graph over $N$ time steps, which is retained with a memory of $O(NsLM)$; the backpropagation algorithm is then applied to obtain the gradient. The total computational cost is $O(2NsLM)$, where we suppose that the computational cost of the backpropagation algorithm is equal to that of the forward propagation. The memory consumption and computational cost are summarized in Table 1.
To reduce the memory consumption, the original study on the neural ODE introduced the adjoint method [2, 8, 17, 41, 44]. This method integrates the pair of the system state $x$ and the adjoint variable $\lambda$ backward in time. The adjoint variable represents the gradient of a scalar function $\mathcal{L}$ of the output, and the backward integration of the adjoint variable works as the backpropagation (or, more generally, reverse-mode automatic differentiation) in continuous time. The memory consumption is $O(L)$ to retain the final values of the $L$ neural ODE components and $O(M)$ to obtain the gradient of a single use of the neural network for integrating the adjoint variable $\lambda$. The computational cost is at least doubled because of the re-integration of the system state backward in time. The adjoint method suffers from numerical errors [41, 10]. To suppress the numerical errors, the backward integration often requires a smaller step size than the forward integration (i.e., $\tilde{N} > N$), leading to an increase in computation time. Conversely, the proposed symplectic adjoint method uses a specially designed integrator, which provides the exact gradient with the same step size as the forward integration.
2.2 Checkpointing Scheme
The checkpointing scheme has been investigated to reduce the memory consumption of neural networks [14, 15], where intermediate states are retained sparsely as checkpoints, and a computation graph is recomputed from each checkpoint. For example, Gruslys et al. applied this scheme to recurrent neural networks [15]. When the initial value of each neural ODE component is retained as a checkpoint, the initial value problem is solved again before applying the backpropagation algorithm to obtain the gradient of the component. Then, the memory consumption is $O(L)$ for checkpoints and $O(NsM)$ for the backpropagation; the memory consumption is $O(L + NsM)$ in total (see the baseline scheme in Table 1). The ANODE scheme retains each step $x_n$ as a checkpoint with a memory of $O(NL)$ [10]. From each checkpoint $x_n$, this scheme recalculates the next step $x_{n+1}$ and obtains the gradient using the backpropagation algorithm with a memory of $O(sM)$; the memory consumption is $O(NL + sM)$ in total. The ACA scheme improves upon the ANODE scheme for methods with adaptive time-stepping by discarding the computation graph used to find an optimal step size [46]. Even with checkpoints, the memory consumption is still proportional to the number $s$ of uses of a neural network per step, which is not negligible for a high-order integrator such as the Dormand–Prince method [7]. In this context, the proposed method is regarded as a checkpointing scheme inside a numerical integrator. Note that previous studies did not use the notation $s$.
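The basic idea can be sketched with PyTorch's gradient-checkpointing utility: only the state at the start of each solver step is stored, and the step's computation graph is rebuilt during the backward pass. This is a simplified illustration with a forward-Euler step, not the ANODE/ACA implementation; the network and step count are arbitrary.

```python
import torch
from torch.utils.checkpoint import checkpoint

f = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))

def euler_step(x, h=0.01):
    # one solver step; its graph is recomputed during the backward pass
    return x + h * f(x)

x = torch.randn(32, 2, requires_grad=True)
for _ in range(100):
    # only x_n is kept as a checkpoint between steps
    x = checkpoint(euler_step, x)
x.sum().backward()  # parameters of f receive exact gradients with one step's graph at a time
```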
Instead of a checkpointing scheme, MALI employs an asynchronous leapfrog (ALF) integrator after the state is paired up with the velocity state [47]. The ALF integrator is time-reversible, i.e., the backward integration obtains the state equal to that in the forward integration without checkpoints [17]. However, the ALF integrator is a second-order integrator, implying that it requires a small step size and a high computational cost to suppress numerical errors. Higher-order Runge–Kutta methods cannot be used in place of the ALF integrator because they are implicit or non-time-reversible. The ALF integrator is inapplicable to physical systems without velocity such as partial differential equation (PDE) systems. Nonetheless, a similar approach named RevNet was proposed before in [11]. When regarding ResNet as a forward Euler method [2, 18], RevNet has an architecture regarded as the leapfrog integrator, and it recalculates the intermediate activations in the reverse direction.
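MALI's asynchronous leapfrog integrator differs in its details; the following sketch uses the standard velocity leapfrog only to illustrate the time-reversibility that lets the backward pass reconstruct states without checkpoints (the harmonic force is a toy choice).

```python
import numpy as np

def leapfrog_step(x, v, h, force):
    """One velocity-leapfrog step; reversible by negating the step size."""
    v_half = v + 0.5 * h * force(x)
    x_new = x + h * v_half
    v_new = v_half + 0.5 * h * force(x_new)
    return x_new, v_new

force = lambda x: -x                            # harmonic oscillator as a toy system
x, v = np.array([1.0]), np.array([0.0])
x1, v1 = leapfrog_step(x, v, 0.1, force)        # forward step
x0, v0 = leapfrog_step(x1, v1, -0.1, force)     # integrate backward
print(np.allclose([x0, v0], [x, v]))            # True: states recovered without storing them
```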
3 Adjoint Method
Consider a system
$$\frac{dx}{dt} = f(x, t, \theta), \qquad (1)$$

where $x$, $t$, and $\theta$, respectively, denote the system state, an independent variable (e.g., time), and the parameters of the function $f$. Given an initial condition $x(t_0) = x_0$, the solution $x(t)$ is given by

$$x(t) = x_0 + \int_{t_0}^{t} f(x(\tau), \tau, \theta)\,\mathrm{d}\tau. \qquad (2)$$

The solution is evaluated at the terminal time $t = T$ by a scalar function $\mathcal{L}$ as $\mathcal{L}(x(T))$. Our main interest is in obtaining the gradients of $\mathcal{L}(x(T))$ with respect to the initial condition $x_0$ and the parameters $\theta$.
Now, we introduce the adjoint method [2, 8, 17, 41, 44]. We first focus on the initial condition $x_0$ and omit the parameters $\theta$. The adjoint method is based on the variational variable $\delta$ and the adjoint variable $\lambda$, which respectively follow the variational system and the adjoint system:

$$\frac{d\delta}{dt} = \frac{\partial f}{\partial x}(x, t, \theta)\,\delta, \qquad \frac{d\lambda}{dt} = -\lambda\,\frac{\partial f}{\partial x}(x, t, \theta). \qquad (3)$$

The variational variable $\delta(t)$ represents the Jacobian $\partial x(t)/\partial x_0$ of the state $x$ with respect to the initial condition $x_0$; the detailed derivation is summarized in Appendix A.
Remark 1.
The quantity $\lambda\delta$ is time-invariant, i.e., $\frac{d}{dt}(\lambda\delta) = 0$.
The proofs of most Remarks and Theorems in this paper are summarized in Appendix B.
Remark 2.
The adjoint variable $\lambda$ represents the gradient, in particular $\lambda(t_0) = \frac{\partial \mathcal{L}(x(T))}{\partial x_0}$, if the final condition of the adjoint variable is set to $\lambda(T) = \frac{\partial \mathcal{L}}{\partial x}(x(T))$.
This is because of the chain rule. Thus, the backward integration of the adjoint variable works as reverse-mode automatic differentiation. The adjoint method has been used for data assimilation, where the initial condition is optimized by a gradient-based method. For system identification (i.e., parameter adjustment), one can consider the parameters $\theta$ as a part of the augmented state of the system

$$\frac{d}{dt}\begin{bmatrix} x \\ \theta \end{bmatrix} = \begin{bmatrix} f(x, t, \theta) \\ 0 \end{bmatrix}. \qquad (4)$$

The variational and adjoint variables are augmented in the same way. Hereafter, we let $x$ denote the state or the augmented state without loss of generality. See Appendix C for details.
According to the original implementation of the neural ODE [2], the final value $x(T)$ of the system state is retained after the forward integration, and the pair of the system state $x$ and the adjoint variable $\lambda$ is integrated backward in time to obtain the gradients. The right-hand sides of the main system in Eq. (1) and the adjoint system in Eq. (3) are obtained by the forward and backward propagations of the neural network $f$, respectively. Therefore, the computational cost of the adjoint method is twice that of the ordinary backpropagation algorithm.
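As an illustration (not the implementation of [2]), the backward integration evaluates a right-hand side of roughly the following form, assuming a network `f(t, x)` with a parameter list `params`: one forward pass of the network gives $f$, and one backward pass gives the vector–Jacobian products.

```python
import torch

def adjoint_dynamics(t, state, f, params):
    """Time-derivative of (x, lambda, lambda_theta) integrated backward by the
    adjoint method: (f, -lambda df/dx, -lambda df/dtheta). A sketch only."""
    x, lam, _ = state
    with torch.enable_grad():
        x = x.detach().requires_grad_(True)
        fx = f(t, x)                                    # forward propagation of the network
        vjps = torch.autograd.grad(fx, (x, *params),    # backward propagation gives the VJPs
                                   grad_outputs=lam, allow_unused=True)
    vjp_x, vjp_params = vjps[0], vjps[1:]
    vjp_theta = torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                           for g, p in zip(vjp_params, params)])
    return fx, -vjp_x, -vjp_theta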
After a numerical integrator discretizes time, Remark 1 no longer holds, and thus the adjoint variable $\lambda$ is not equal to the exact gradient [10, 41]. Moreover, in general, the numerical integration backward in time is not consistent with that forward in time. Although a small step size (i.e., a small tolerance) suppresses the numerical errors, it also leads to a longer computation time. These facts motivated the present study to obtain the exact gradient with a small memory footprint.
4 Symplectic Adjoint Method
4.1 Runge–Kutta Method
We first discretize the main system in Eq. (1). Let $t_n$, $h_n$, and $x_n$ denote the $n$-th time step, step size, and state, respectively, where $t_{n+1} = t_n + h_n$. Previous studies employed one of the Runge–Kutta methods, generally expressed as

$$x_{n+1} = x_n + h_n \sum_{i=1}^{s} b_i\, k_{n,i}, \qquad k_{n,i} = f(X_{n,i},\, t_n + c_i h_n,\, \theta), \qquad X_{n,i} = x_n + h_n \sum_{j=1}^{s} a_{i,j}\, k_{n,j}. \qquad (5)$$

The coefficients $a_{i,j}$, $b_i$, and $c_i$ are summarized in the Butcher tableau [16, 17, 41]. If $a_{i,j} = 0$ for $j \ge i$, the intermediate states $X_{n,i}$ are calculable from $i = 1$ to $i = s$ sequentially; then, the Runge–Kutta method is explicit. Runge–Kutta methods are not time-reversible in general, i.e., the numerical integration backward in time is not consistent with that forward in time.
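A minimal NumPy sketch of one explicit step of Eq. (5); the tableau shown is the classical fourth-order Runge–Kutta method and the test problem is arbitrary.

```python
import numpy as np

def explicit_rk_step(f, t, x, h, A, b, c):
    """One step of an explicit Runge-Kutta method given its Butcher tableau."""
    s = len(b)
    k = []
    for i in range(s):
        Xi = x + h * sum(A[i][j] * k[j] for j in range(i))   # intermediate state X_i
        k.append(f(t + c[i] * h, Xi))
    return x + h * sum(b[i] * k[i] for i in range(s))

# classical RK4 tableau as an example
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1.0, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0.0, 0.5, 0.5, 1.0]
x1 = explicit_rk_step(lambda t, x: -x, 0.0, np.array([1.0]), 0.1, A, b, c)
print(x1, np.exp(-0.1))   # close: the step approximates the exact decay
```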
Remark 3.
When the main system in Eq. (1) is solved by the Runge–Kutta method in Eq. (5), the Jacobian $\delta_n = \partial x_n / \partial x_0$ of the numerical solution follows the same Runge–Kutta method applied to the variational system in Eq. (3). Therefore, it is not necessary to solve the variational variable separately.
4.2 Symplectic Runge–Kutta Method for Adjoint System
We assume $b_i \neq 0$ for $i = 1, \dots, s$. We suppose the adjoint system to be solved by another Runge–Kutta method with the same step size as that used for the system state $x$, expressed as

$$\lambda_{n+1} = \lambda_n + h_n \sum_{i=1}^{s} B_i\, l_{n,i}, \qquad l_{n,i} = -\Lambda_{n,i}\,\frac{\partial f}{\partial x}(X_{n,i},\, t_n + C_i h_n,\, \theta), \qquad \Lambda_{n,i} = \lambda_n + h_n \sum_{j=1}^{s} A_{i,j}\, l_{n,j}. \qquad (6)$$

The final condition is set to $\lambda_N = \frac{\partial \mathcal{L}}{\partial x}(x_N)$. Because the time evolutions of the variational variable $\delta$ and the adjoint variable $\lambda$ are expressible by two sets of equations, the combined system is considered as a partitioned system. A combination of two Runge–Kutta methods for solving a partitioned system is called a partitioned Runge–Kutta method; here, the coefficients $a_{i,j}$, $b_i$, $c_i$ are used for $\delta$ and $A_{i,j}$, $B_i$, $C_i$ for $\lambda$. We introduce the following condition for a partitioned Runge–Kutta method.
Condition 1.
$b_i A_{i,j} + B_j a_{j,i} - b_i B_j = 0$ for $i, j = 1, \dots, s$, and $B_i = b_i$ and $C_i = c_i$ for $i = 1, \dots, s$.
Theorem 1 (Sanz-Serna, [41]).
The partitioned Runge–Kutta method composed of Eqs. (5) and (6) under Condition 1 conserves the bilinear quantity, i.e., $\lambda_{n+1}\delta_{n+1} = \lambda_n\delta_n$ for every step.
Because the bilinear quantity (including $\lambda\delta$) is conserved, the adjoint system solved by the Runge–Kutta method in Eq. (6) under Condition 1 provides the exact gradient $\frac{\partial \mathcal{L}(x_N)}{\partial x_0}$ as the adjoint variable $\lambda_0$. The Dormand–Prince method, one of the most popular Runge–Kutta methods, has $b_i = 0$ for some stages $i$ [7]. For such methods, the Runge–Kutta method under Condition 1 in Eq. (6) is generalized as
$$\lambda_n = \lambda_{n+1} + h_n \sum_{i=1}^{s} \bar{\Lambda}_{n,i}\,\frac{\partial f}{\partial x}(X_{n,i},\, t_n + c_i h_n,\, \theta), \qquad (7)$$

where

$$\bar{\Lambda}_{n,i} = b_i\,\lambda_{n+1} + h_n \sum_{j=1}^{s} a_{j,i}\,\bar{\Lambda}_{n,j}\,\frac{\partial f}{\partial x}(X_{n,j},\, t_n + c_j h_n,\, \theta). \qquad (8)$$
Note that this numerical integrator is no longer a Runge–Kutta method and is an alternative expression for the “fancy” integrator proposed in [41].
Theorem 2.
The adjoint system solved by the integrator in Eqs. (7) and (8) provides the exact gradient $\frac{\partial \mathcal{L}(x_N)}{\partial x_0}$ as the adjoint variable $\lambda_0$, even when $b_i = 0$ for some stages.
Remark 4.
We emphasize that Theorems 1 and 2 hold for any ODE systems even if the systems have discontinuity [19], stochasticity [29], or physics constraints [13]. This is because the Theorems are not the properties of a system but of Runge–Kutta methods.
A partitioned Runge–Kutta method that satisfies Condition 1 is symplectic [17, 16]. It is known that, when a symplectic integrator is applied to a Hamiltonian system using a fixed step size, it conserves a modified Hamiltonian, which is an approximation to the system energy of the Hamiltonian system. The bilinear quantity $\lambda\delta$ is associated with the symplectic structure but not with a Hamiltonian. Regardless of the step size, a symplectic integrator conserves the symplectic structure and thereby conserves the bilinear quantity $\lambda\delta$. Hence, we named this method the symplectic adjoint method. For integrators other than Runge–Kutta methods, one can design the integrator for the adjoint system so that the pair of integrators is symplectic (see [32] for an example).
4.3 Proposed Implementation
The theories given in the last section were mainly introduced for the numerical analysis in [41]. Because the original expression includes recalculations of intermediate variables, we propose the alternative expression in Eq. (7) to reduce the computational cost. The discretized adjoint system in Eq. (7) depends on the vector–Jacobian product (VJP) $\lambda\,\frac{\partial f}{\partial x}$. To obtain it, the computation graph from the input $X_{n,i}$ to the output $f(X_{n,i}, t_n + c_i h_n, \theta)$ is required. When the computation graph in the forward integration is entirely retained, the memory consumption and computational cost are of the same orders as those for the naive backpropagation algorithm. To reduce the memory consumption, we propose the following strategy, as summarized in Algorithms 1 and 2.
At the forward integration of a neural ODE component, the pairs $(x_n, t_n)$ of system states and time points at the $N$ time steps are retained with a memory of $O(N)$ as checkpoints, and all computation graphs are discarded, as shown in Algorithm 1. For $L$ neural ODE components, the memory for checkpoints is $O(NL)$. The backward integration is summarized in Algorithm 2. The following steps are repeated from the last time step down to the first. From the checkpoint $x_{n-1}$, the intermediate states $X_{n,i}$ for the $s$ stages are obtained following the Runge–Kutta method in Eq. (5) and retained as checkpoints with a memory of $O(s)$, while all computation graphs are discarded. Then, the adjoint system is integrated from $\lambda_n$ to $\lambda_{n-1}$ using Eq. (7). Because the computation graph of the neural network in line 5 is discarded, it is recalculated and the VJP is obtained using the backpropagation algorithm one-by-one in line 11, where only a single use of the neural network is recalculated at a time. This is why the memory consumption is proportional to the number of checkpoints, $O(NL + s)$, plus the neural network size, $O(M)$. By contrast, existing methods apply the backpropagation algorithm to the computation graph of a single step composed of $s$ stages or of multiple steps. Their memory consumption is proportional to the number of uses of the neural network between two checkpoints ($s$ at least) times the neural network size $O(M)$, in addition to the memory for checkpoints (see Table 1). Due to the recalculation, the computational cost of the proposed strategy is $O(4NsLM)$, whereas that of ACA [46] is $O(3NsLM)$ and that of the adjoint method [2] grows with the number of backward steps $\tilde{N}$. However, the increase in the computation time is much less than that expected theoretically because of other bottlenecks (as demonstrated later).
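The following self-contained sketch illustrates Algorithms 1 and 2 for the classical fourth-order Runge–Kutta method with a fixed step size; a toy vector field stands in for the neural network. Because all weights $b_i$ are nonzero here, the backward sweep corresponds to Eq. (6) under Condition 1 written in a backward-explicit form rather than the generalization of Eq. (7). The computed gradient matches naive backpropagation up to rounding error.

```python
import torch
torch.set_default_dtype(torch.float64)

# Classical RK4 tableau (all b_i nonzero); f stands in for the neural network.
A = [[0.0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1.0, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
W = torch.randn(3, 3)
f = lambda x: torch.tanh(W @ x)

def forward_stages(x, h):
    """Compute the stage values X_{n,i} and the next state from a checkpoint x_{n-1}."""
    X, k = [], []
    for i in range(4):
        Xi = x + h * sum(A[i][j] * k[j] for j in range(i))
        X.append(Xi)
        k.append(f(Xi))
    return X, x + h * sum(b[i] * k[i] for i in range(4))

def vjp(x, v):
    """v (df/dx)(x), rebuilding the graph for a single use of f at a time."""
    x = x.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(f(x), x, grad_outputs=v)
    return g

def adjoint_step(lam, X, h):
    """One step of the symplectic adjoint sweep (for clarity, VJPs are recomputed)."""
    Lam = [None] * 4
    for i in reversed(range(4)):
        Lam[i] = lam + h * sum(b[j] * A[j][i] / b[i] * vjp(X[j], Lam[j])
                               for j in range(i + 1, 4))
    return lam + h * sum(b[i] * vjp(X[i], Lam[i]) for i in range(4))

# --- Algorithm 1: forward integration keeping only the checkpoints x_n ---
x0, h, N = torch.randn(3), 0.1, 10
checkpoints = [x0]
with torch.no_grad():                        # no computation graph is retained
    for _ in range(N):
        _, x_next = forward_stages(checkpoints[-1], h)
        checkpoints.append(x_next)

# --- Algorithm 2: backward sweep, recomputing stages from each checkpoint ---
lam = torch.ones(3)                          # dL/dx_N for L = sum(x_N)
for n in reversed(range(N)):
    with torch.no_grad():
        X, _ = forward_stages(checkpoints[n], h)
    lam = adjoint_step(lam, X, h)

# --- reference: naive backpropagation through the whole integration ---
x_ref = x0.clone().requires_grad_(True)
x = x_ref
for _ in range(N):
    _, x = forward_stages(x, h)
(ref,) = torch.autograd.grad(x.sum(), x_ref)
print(torch.allclose(lam, ref))              # True: the gradient is exact up to rounding
```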
5 Experiments
We evaluated the performance of the proposed symplectic adjoint method and existing methods using PyTorch 1.7.1 [35]. We implemented the proposed symplectic adjoint method by extending the adjoint method implemented in the torchdiffeq package (version 0.1.1) [2]. We re-implemented ACA [46] because the interface of the official implementation is incompatible with torchdiffeq. In practice, the number of checkpoints for an integration can be varied; we implemented a baseline scheme that retains only a single checkpoint per neural ODE component. The source code is available at https://github.com/tksmatsubara/symplectic-adjoint-method.
5.1 Continuous Normalizing Flow
Experimental Settings:
We evaluated the proposed symplectic adjoint method on training continuous normalizing flows [12]. A normalizing flow is a neural network that approximates a bijective map and obtains the exact likelihood of a sample $x$ by the change of variables $\log p(x) = \log q(z) - \log\left|\det\frac{\partial x}{\partial z}\right|$, where $z$ and $q$ denote the corresponding latent variable and its prior, respectively [5, 6, 38]. A continuous normalizing flow is a normalizing flow whose map is modeled by stacked neural ODE components; in particular, the sample and the corresponding latent variable are treated as the initial and final values of the state. The log-determinant of the Jacobian is obtained by a numerical integration together with the system state as $\frac{d}{dt}\log\left|\det\frac{\partial x(t)}{\partial x(0)}\right| = \mathrm{tr}\,\frac{\partial f}{\partial x}$. The trace operation is approximated by the Hutchinson estimator [22]. We adopted the experimental settings of the continuous normalizing flow FFJORD [12] (https://github.com/rtqichen/ffjord, MIT License), unless stated otherwise.
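A minimal sketch of the Hutchinson estimator used for the trace term; the probing distribution, network, and number of samples are illustrative choices.

```python
import torch

def hutchinson_trace(f, x, n_samples=1):
    """Unbiased estimate of tr(df/dx) at x using Gaussian probe vectors."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    est = x.new_zeros(())
    for _ in range(n_samples):
        eps = torch.randn_like(x)
        # one vector-Jacobian product per probe: eps^T (df/dx)
        (v,) = torch.autograd.grad(y, x, grad_outputs=eps, retain_graph=True)
        est = est + (v * eps).sum()
    return est / n_samples

f = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 3))
x = torch.randn(3)
exact = torch.autograd.functional.jacobian(f, x).diagonal().sum()
print(exact.item(), hutchinson_trace(f, x, n_samples=2000).item())  # close on average
```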
We examined five real tabular datasets, namely, the MiniBooNE, GAS, POWER, HEPMASS, and BSDS300 datasets [33]. The network architectures were the same as those that achieved the best results in the original experiments; the number of neural ODE components varied across datasets. We employed the Dormand–Prince integrator, which is a fifth-order Runge–Kutta method with adaptive time-stepping, composed of seven stages [7]. Note that the number of function evaluations per step is six because the last stage is reused as the first stage of the next step. We set the absolute and relative tolerances to and , respectively. The neural networks were trained using the Adam optimizer [27] with a learning rate of . We used a batch size of 1000 for all datasets to fit a mini-batch into a single NVIDIA GeForce RTX 2080Ti GPU with 11 GB of memory, while the original experiments employed a batch size of 10 000 for the latter three datasets on multiple GPUs. When using multiple GPUs, bottlenecks such as data transfer across GPUs may affect performance, and a fair comparison becomes difficult. Nonetheless, the naive backpropagation algorithm and baseline scheme consumed the entire memory for the BSDS300 dataset.
We also examined the MNIST dataset [28] using a single NVIDIA RTX A6000 GPU with 48 GB of memory. Following the original study, we employed the multi-scale architecture and set the tolerance to . We set the learning rate to and then reduced it to at the 250th epoch. While the original experiments used a batch-size of 900, we set the batch-size to 200 following the official code1. The naive backpropagation algorithm and baseline scheme consumed the entire memory.
| | MiniBooNE | | | GAS | | | POWER | | |
|---|---|---|---|---|---|---|---|---|---|
| | NLL | mem. | time | NLL | mem. | time | NLL | mem. | time |
| adjoint method [2] | 10.59±0.17 | 170 | 0.74 | -10.53±0.25 | 24 | 4.82 | -0.31±0.01 | 8.1 | 6.33 |
| backpropagation [2] | 10.54±0.18 | 4436 | 0.91 | -9.53±0.42 | 4479 | 12.00 | -0.24±0.05 | 1710.9 | 10.64 |
| baseline scheme | 10.54±0.18 | 4457 | 1.10 | -9.53±0.42 | 1858 | 5.48 | -0.24±0.05 | 515.2 | 4.37 |
| ACA [46] | 10.57±0.30 | 306 | 0.77 | -10.65±0.45 | 73 | 3.98 | -0.31±0.02 | 29.5 | 5.08 |
| proposed | 10.49±0.11 | 95 | 0.84 | -10.89±0.11 | 20 | 4.39 | -0.31±0.02 | 9.2 | 5.73 |

| | HEPMASS | | | BSDS300 | | | MNIST | | |
|---|---|---|---|---|---|---|---|---|---|
| | NLL | mem. | time | NLL | mem. | time | NLL | mem. | time |
| adjoint method [2] | 16.49±0.25 | 40 | 4.19 | -152.04±0.09 | 577 | 11.70 | 0.918±0.011 | 1086 | 10.12 |
| backpropagation [2] | 17.03±0.22 | 5254 | 11.82 | — | — | — | — | — | — |
| baseline scheme | 17.03±0.22 | 1102 | 4.40 | — | — | — | — | — | — |
| ACA [46] | 16.41±0.39 | 88 | 3.67 | -151.27±0.47 | 757 | 6.97 | 0.919±0.003 | 4332 | 7.94 |
| proposed | 16.48±0.20 | 35 | 4.15 | -151.17±0.15 | 283 | 8.07 | 0.917±0.002 | 1079 | 9.42 |

Negative log-likelihoods (NLL), peak memory consumption [MB], and computation time per iteration [s]. See Table A2 in the Appendix for standard deviations.
Performance:
The medians ± standard deviations of three runs are summarized in Table 2. In many cases, all methods achieved negative log-likelihoods (NLLs) with no significant difference because all but the adjoint method provide the exact gradients up to rounding error, and the adjoint method with a small tolerance provides a sufficiently accurate gradient. The naive backpropagation algorithm and baseline scheme obtained slightly worse results on the GAS, POWER, and HEPMASS datasets. Due to adaptive time-stepping, the numerical integrator sometimes makes the step size much smaller, and the backpropagation algorithm over time steps suffered from rounding errors. Conversely, ACA and the proposed symplectic adjoint method applied the backpropagation algorithm separately to a subset of the integration, thereby becoming more robust to rounding errors (see Appendix D.1 for details).
After the training procedure, we obtained the peak memory consumption during additional training iterations (mem. [MB]), from which we subtracted the memory consumption before training (i.e., the memory occupied by the model parameters, loaded data, etc.). The reported memory consumption still includes the optimizer's states and the intermediate results of the multiply–accumulate operations. The results roughly agree with the theoretical orders shown in Table 1 (see also Table A2 for standard deviations). The symplectic adjoint method consumed much less memory than the naive backpropagation algorithm and the checkpointing schemes. Owing to the optimized implementation, the symplectic adjoint method consumed less memory than the adjoint method in some cases (see Appendix D.2).
On the other hand, the computation time per iteration (time [s]) during the additional training iterations does not agree with the theoretical orders. First, the adjoint method was slower in many cases, especially for the BSDS300 and MNIST datasets. For obtaining the gradients, the adjoint method integrates the adjoint variable $\lambda$, whose size is equal to the sum of the sizes of the parameters $\theta$ and the system state $x$. With more parameters, the probability that at least one parameter does not satisfy the tolerance increases. An accurate backward integration then requires a much smaller step size than the forward integration (i.e., $\tilde{N}$ much greater than $N$), leading to a longer computation time. Second, the naive backpropagation algorithm and baseline scheme were slower than expected theoretically in many cases. A method with high memory consumption may have to wait for a retained computation graph to be loaded or for memory to be freed, leading to an additional bottleneck. The symplectic adjoint method is free from the above bottlenecks and performs faster in practice; it was faster than the adjoint method for all but the MiniBooNE dataset.
The symplectic adjoint method is superior (or at least competitive) to the adjoint method, naive backpropagation, and baseline scheme in terms of both memory consumption and computation time. Between the proposed symplectic adjoint method and ACA, a trade-off exists between memory consumption and computation time.
Figure 1: Computation time per iteration (top) and negative log-likelihood (bottom) on the MiniBooNE dataset for varying absolute tolerances.
Robustness to Tolerance:
The adjoint method provides gradients with numerical errors. To evaluate the robustness against tolerance, we employed MiniBooNE dataset and varied the absolute tolerance while maintaining the relative tolerance as . During the training, we obtained the computation time per iteration, as summarized in the upper panel of Fig. 1. The computation time reduced as the tolerance increased. After training, we obtained the NLLs with , as summarized in the bottom panel of Fig. 1. The adjoint method performed well only with . With , the numerical error in the backward integration was non-negligible, and the performance degraded. With , the adjoint method destabilized. The symplectic adjoint method performed well even with . Even with , it performed to a certain level, while the numerical error in the forward integration was non-negligible. Because of the exact gradient, the symplectic adjoint method is robust to a large tolerance compared with the adjoint method, and thus potentially works much faster with an appropriate tolerance.
| | Heun–Euler | | Bogacki–Shampine | | Dormand–Prince (5th order) | | Dormand–Prince (8th order) | |
|---|---|---|---|---|---|---|---|---|
| | mem. | time | mem. | time | mem. | time | mem. | time |
| adjoint method [2] | 21±0 | 247.47±7.52 | 22±0 | 18.32±0.88 | 24±0 | 5.34±0.31 | 28±0 | 9.77±0.81 |
| backpropagation [2] | — | — | — | — | 4433±255 | 11.85±1.10 | — | — |
| baseline scheme | — | — | — | — | 1858±228 | 5.82±0.28 | 4108±576 | 22.76±3.70 |
| ACA [46] | 607±30 | 232.90±13.81 | 69±2 | 17.72±1.38 | 73±0 | 4.15±0.21 | 138±0 | 9.36±0.55 |
| proposed | 589±14 | 262.99±5.19 | 43±2 | 18.59±0.75 | 20±0 | 4.78±0.32 | 21±0 | 11.41±0.23 |

Peak memory consumption [MB] and computation time per iteration [s].
Different Runge–Kutta Methods:
The Runge–Kutta family includes various integrators characterized by the Butcher tableau [16, 17, 41], such as the Heun–Euler method (a.k.a. adaptive Heun), the Bogacki–Shampine method (a.k.a. bosh3), the fifth-order Dormand–Prince method (a.k.a. dopri5), and the eighth-order Dormand–Prince method (a.k.a. dopri8). These methods have orders of 2, 3, 5, and 8, respectively, using correspondingly more function evaluations per step. We examined these methods using the GAS dataset, and the results are summarized in Table 3. The naive backpropagation algorithm and baseline scheme consumed the entire memory in some cases, as denoted by dashes. We omit the NLLs because all methods used the same tolerance and achieved the same NLLs.
Compared to ACA, the symplectic adjoint method suppresses the memory consumption more significantly with a higher-order method (i.e., more function evaluations $s$), as the theory suggests in Table 1. With the Heun–Euler method, all methods were extremely slow, and all but the adjoint method consumed more memory. A lower-order method has to use an extremely small step size to satisfy the tolerance, thereby increasing the number of steps $N$, the computation time, and the memory for checkpoints. This result indicates the limitations of methods that depend on lower-order integrators, such as MALI [47]. With the eighth-order Dormand–Prince method, the adjoint method performs relatively faster. This is because the backward integration easily satisfies the tolerance with a higher-order method (i.e., $\tilde{N} \approx N$). Nonetheless, in terms of computation time, the fifth-order Dormand–Prince method is the best choice, for which the symplectic adjoint method greatly reduces the memory consumption and performs faster than all but ACA.
Figure 2: Peak memory consumption versus the number of time steps on the MNIST dataset (log–log scale).
Memory for Checkpoints:
To evaluate the memory consumption with varying numbers of checkpoints, we used the fifth-order Dormand–Prince method and varied the number of steps $N$ for MNIST by manually varying the step size. We summarize the results in Fig. 2 on a log–log scale. Note that, with adaptive stepping, FFJORD needs only a limited number of steps for MNIST and even fewer for the other datasets. Because we set $\tilde{N} = N$ here, whereas $\tilde{N} > N$ in practice, the adjoint method is expected to require a longer computation time in practice.
The memory consumption roughly follows the theoretical orders summarized in Table 1. The adjoint method needs a memory of $O(M)$ for the backpropagation, and the symplectic adjoint method needs an additional memory of $O(NL + s)$ for checkpoints. Until the number of steps exceeds a thousand, the memory for checkpoints is negligible compared to that for the backpropagation. Compared to the symplectic adjoint method, ACA needs a memory of $O(sM)$ for the backpropagation over $s$ stages. The increase in memory is significant until the number of steps reaches ten thousand. For a stiff (non-smooth) ODE for which a numerical integrator needs thousands of steps, one can employ a higher-order integrator such as the eighth-order Dormand–Prince method and suppress the number of steps. For a stiffer ODE, implicit integrators are commonly used, which are out of the scope of this study and the related works in Table 1. Therefore, we conclude that the symplectic adjoint method needs memory at the same level as the adjoint method and much less than the others in practical ranges.
A possible alternative to the proposed implementation in Algorithms 1 and 2 retains all intermediate states during the forward integration. Its computational cost and memory consumption are $O(3NsLM)$ and $O(NsL + M)$, respectively. The memory for checkpoints can then be non-negligible with a practical number of steps.
| | KdV equation | | | Cahn–Hilliard system | | |
|---|---|---|---|---|---|---|
| | MSE | mem. | time | MSE | mem. | time |
| adjoint method [2] | 1.61±3.23 | 93.7±0.0 | 276±16 | 5.58±1.67 | 93.7±0.0 | 942±24 |
| backpropagation [2] | 1.61±3.40 | 693.9±0.0 | 105±4 | 4.68±1.89 | 3047.1±0.0 | 425±13 |
| ACA [46] | 1.61±3.40 | 647.8±0.0 | 137±5 | 5.82±2.33 | 648.0±0.0 | 484±13 |
| proposed | 1.61±4.00 | 79.8±0.0 | 162±6 | 5.47±1.46 | 80.3±0.0 | 568±22 |

Mean squared errors (MSEs) in long-term predictions, peak memory consumption [MB], and computation time per iteration [ms].
5.2 Continuous-Time Dynamical System
Experimental Settings:
We evaluated the symplectic adjoint method on learning continuous-time dynamical systems [13, 31, 40]. Many physical phenomena can be modeled using the gradient of the system energy $H$ as $\frac{dx}{dt} = G\,\frac{\partial H}{\partial x}(x)$, where $G$ is a coefficient matrix that determines the behavior of the energy [9]. We followed the experimental settings of HNN++, provided in [31] (https://github.com/tksmatsubara/discrete-autograd, MIT License). A neural network composed of one convolution layer and two fully connected layers approximated the energy function $H$ and learned the time series by interpolating two successive samples. The deterministic convolution algorithm was enabled (see Appendix D.3 for a discussion). We employed two physical systems described by PDEs, namely the Korteweg–De Vries (KdV) equation and the Cahn–Hilliard system. We used a batch size of 100 to fit a mini-batch into a single NVIDIA TITAN V GPU instead of the original batch size of 200. Moreover, we used the eighth-order Dormand–Prince method [17], composed of 13 stages, to emphasize the efficiency of the proposed method. We omitted the baseline scheme because of $L = 1$. We evaluated the performance using mean squared errors (MSEs) in the system energy for long-term predictions.
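A minimal sketch of such an energy-based vector field; the layer sizes and the randomly drawn skew-symmetric $G$ are illustrative stand-ins (HNN++ uses a convolutional energy function, and $G$ is fixed by the target equation).

```python
import torch

class EnergyBasedDynamics(torch.nn.Module):
    """Vector field dx/dt = G dH/dx with a learned scalar energy H (a minimal sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.energy = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
        skew = torch.randn(dim, dim)
        self.register_buffer("G", skew - skew.t())  # skew-symmetric G conserves the energy

    def forward(self, t, x):
        # simplification: use a fresh leaf for the inner derivative dH/dx;
        # create_graph=True keeps the parameters of the energy network trainable
        x = x.detach().requires_grad_(True)
        H = self.energy(x).sum()
        (dH,) = torch.autograd.grad(H, x, create_graph=True)
        return dH @ self.G.t()

field = EnergyBasedDynamics(dim=4)
print(field(0.0, torch.randn(8, 4)).shape)  # torch.Size([8, 4])
```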
Performance:
The medians ± standard deviations of 15 runs are summarized in Table 4. Due to the accumulated error in the numerical integration, the MSEs had large variances, but all methods obtained similar MSEs. ACA consumed much more memory than the symplectic adjoint method because of the large number of stages; the symplectic adjoint method is more beneficial for physical simulations, which often require extremely high-order methods. Due to the severe nonlinearity, the adjoint method had to employ a small step size and thus performed slower than the others (i.e., $\tilde{N} > N$).
6 Conclusion
We proposed the symplectic adjoint method, which solves the adjoint system by a symplectic integrator with appropriate checkpoints and thereby provides the exact gradient. It applies the backpropagation algorithm only to each use of the neural network, and thus consumes much less memory than the naive backpropagation algorithm and the checkpointing schemes. Its memory consumption is competitive with that of the adjoint method because the memory consumed by checkpoints is negligible in most cases. The symplectic adjoint method provides the exact gradient with the same step size as that used for the forward integration. Therefore, in practice, it performs faster than the adjoint method, which requires a small step size to suppress numerical errors.
As shown in the experiments, the best integrator and checkpointing scheme may depend on the target system and computational resources. For example, Kim et al. [26] demonstrated that quadrature methods can reduce the computational cost of the adjoint system for a stiff equation in exchange for additional memory consumption. Practical packages provide many integrators and can choose the best ones [20, 36]. In the future, we will provide the proposed symplectic adjoint method as a part of such packages for appropriate systems.
Acknowledgments and Disclosure of Funding
This study was partially supported by JST CREST (JPMJCR1914), JST PRESTO (JPMJPR21C7), and JSPS KAKENHI (19K20344).
References
- Bochev and Scovel, [1994] Bochev, P. B. and Scovel, C. (1994). On quadratic invariants and symplectic structure. BIT Numerical Mathematics, 34(3):337–345.
- Chen et al., [2018] Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. (2018). Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems (NeurIPS).
- Chen et al., [2020] Chen, Z., Zhang, J., Arjovsky, M., and Bottou, L. (2020). Symplectic Recurrent Neural Networks. In International Conference on Learning Representations (ICLR).
- Devlin et al., [2018] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv.
- Dinh et al., [2014] Dinh, L., Krueger, D., and Bengio, Y. (2014). NICE: Non-linear Independent Components Estimation. In Workshop on International Conference on Learning Representations.
- Dinh et al., [2017] Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2017). Density estimation using Real NVP. In International Conference on Learning Representations (ICLR).
- Dormand and Prince, [1986] Dormand, J. R. and Prince, P. J. (1986). A reconsideration of some embedded Runge-Kutta formulae. Journal of Computational and Applied Mathematics, 15(2):203–211.
- Errico, [1997] Errico, R. M. (1997). What Is an Adjoint Model? Bulletin of the American Meteorological Society, 78(11):2577–2591.
- Furihata and Matsuo, [2010] Furihata, D. and Matsuo, T. (2010). Discrete Variational Derivative Method: A Structure-Preserving Numerical Method for Partial Differential Equations. Chapman and Hall/CRC.
- Gholami et al., [2019] Gholami, A., Keutzer, K., and Biros, G. (2019). ANODE: Unconditionally accurate memory-efficient gradients for neural ODEs. In International Joint Conference on Artificial Intelligence (IJCAI).
- Gomez et al., [2017] Gomez, A. N., Ren, M., Urtasun, R., and Grosse, R. B. (2017). The Reversible Residual Network: Backpropagation Without Storing Activations. In Advances in Neural Information Processing Systems (NIPS).
- Grathwohl et al., [2018] Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. (2018). FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models. In International Conference on Learning Representations (ICLR).
- Greydanus et al., [2019] Greydanus, S., Dzamba, M., and Yosinski, J. (2019). Hamiltonian Neural Networks. In Advances in Neural Information Processing Systems (NeurIPS).
- Griewank and Walther, [2000] Griewank, A. and Walther, A. (2000). Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Transactions on Mathematical Software, 26(1):19–45.
- Gruslys et al., [2016] Gruslys, A., Munos, R., Danihelka, I., Lanctot, M., and Graves, A. (2016). Memory-efficient backpropagation through time. Advances in Neural Information Processing Systems (NIPS).
- Hairer et al., [2006] Hairer, E., Lubich, C., and Wanner, G. (2006). Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, volume 31 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin/Heidelberg.
- Hairer et al., [1993] Hairer, E., Nørsett, S. P., and Wanner, G. (1993). Solving Ordinary Differential Equations I: Nonstiff Problems, volume 8 of Springer Series in Computational Mathematics. Springer Berlin Heidelberg, Berlin, Heidelberg.
- He et al., [2016] He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Herrera et al., [2020] Herrera, C., Krach, F., and Teichmann, J. (2020). Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering. In International Conference on Learning Representations (ICLR).
- Hindmarsh et al., [2005] Hindmarsh, A. C., Brown, P. N., Grant, K. E., Lee, S. L., Serban, R., Shumaker, D. E., and Woodward, C. S. (2005). SUNDIALS: Suite of nonlinear and differential/algebraic equation solvers. ACM Transactions on Mathematical Software, 31(3):363–396.
- Hochreiter and Schmidhuber, [1997] Hochreiter, S. and Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8):1735–1780.
- Hutchinson, [1990] Hutchinson, M. (1990). A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 19(2):433–450.
- Jiang et al., [2020] Jiang, C. M., Huang, J., Tagliasacchi, A., and Guibas, L. (2020). ShapeFlow: Learnable Deformations Among 3D Shapes. In Advances in Neural Information Processing Systems (NeurIPS).
- Kidger et al., [2020] Kidger, P., Morrill, J., Foster, J., and Lyons, T. (2020). Neural Controlled Differential Equations for Irregular Time Series. In Advances in Neural Information Processing Systems (NeurIPS).
- Kim et al., [2020] Kim, H., Lee, H., Kang, W. H., Cheon, S. J., Choi, B. J., and Kim, N. S. (2020). WaveNODE: A Continuous Normalizing Flow for Speech Synthesis. In ICML2020 Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models.
- Kim et al., [2021] Kim, S., Ji, W., Deng, S., Ma, Y., and Rackauckas, C. (2021). Stiff Neural Ordinary Differential Equations. arXiv.
- Kingma and Ba, [2015] Kingma, D. P. and Ba, J. (2015). Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR).
- LeCun et al., [1998] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pages 2278–2323.
- Li et al., [2020] Li, X., Wong, T.-K. L., Chen, R. T. Q., and Duvenaud, D. (2020). Scalable Gradients for Stochastic Differential Equations. In Artificial Intelligence and Statistics (AISTATS).
- Lu et al., [2018] Lu, Y., Zhong, A., Li, Q., and Dong, B. (2018). Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. In International Conference on Machine Learning (ICML).
- Matsubara et al., [2020] Matsubara, T., Ishikawa, A., and Yaguchi, T. (2020). Deep Energy-Based Modeling of Discrete-Time Physics. In Advances in Neural Information Processing Systems (NeurIPS).
- Matsuda and Miyatake, [2021] Matsuda, T. and Miyatake, Y. (2021). Generalization of partitioned Runge–Kutta methods for adjoint systems. Journal of Computational and Applied Mathematics, 388:113308.
- Papamakarios et al., [2017] Papamakarios, G., Pavlakou, T., and Murray, I. (2017). Masked Autoregressive Flow for Density Estimation. In Advances in Neural Information Processing Systems (NIPS).
- Pascanu et al., [2013] Pascanu, R., Mikolov, T., and Bengio, Y. (2013). On the difficulty of training Recurrent Neural Networks. In International Conference on Machine Learning (ICML).
- Paszke et al., [2017] Paszke, A., Chanan, G., Lin, Z., Gross, S., Yang, E., Antiga, L., and Devito, Z. (2017). Automatic differentiation in PyTorch. In Autodiff Workshop on Advances in Neural Information Processing Systems.
- Rackauckas et al., [2020] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A., and Edelman, A. (2020). Universal Differential Equations for Scientific Machine Learning. arXiv.
- Rana et al., [2020] Rana, M. A., Li, A., Fox, D., Boots, B., Ramos, F., and Ratliff, N. (2020). Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems. In Conference on Learning for Dynamics and Control (L4DC).
- Rezende and Mohamed, [2015] Rezende, D. J. and Mohamed, S. (2015). Variational Inference with Normalizing Flows. In International Conference on Machine Learning (ICML).
- Rumelhart et al., [1986] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088):533–536.
- Saemundsson et al., [2020] Sæmundsson, S., Terenin, A., Hofmann, K., and Deisenroth, M. P. (2020). Variational Integrator Networks for Physically Meaningful Embeddings. In Artificial Intelligence and Statistics (AISTATS).
- Sanz-Serna, [2016] Sanz-Serna, J. M. (2016). Symplectic Runge-Kutta schemes for adjoint equations, automatic differentiation, optimal control, and more. SIAM Review, 58(1):3–33.
- Takeishi and Kawahara, [2020] Takeishi, N. and Kawahara, Y. (2020). Learning dynamics models with stable invariant sets. AAAI Conference on Artificial Intelligence (AAAI).
- Teshima et al., [2020] Teshima, T., Tojo, K., Ikeda, M., Ishikawa, I., and Oono, K. (2020). Universal Approximation Property of Neural Ordinary Differential Equations. In NeurIPS Workshop on Differential Geometry meets Deep Learning (DiffGeo4DL).
- Wang, [2013] Wang, Q. (2013). Forward and adjoint sensitivity computation of chaotic dynamical systems. Journal of Computational Physics, 235:1–13.
- Yang et al., [2019] Yang, G., Huang, X., Hao, Z., Liu, M. Y., Belongie, S., and Hariharan, B. (2019). Pointflow: 3D point cloud generation with continuous normalizing flows. International Conference on Computer Vision (ICCV).
- Zhuang et al., [2020] Zhuang, J., Dvornek, N., Li, X., Tatikonda, S., Papademetris, X., and Duncan, J. (2020). Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE. In International Conference on Machine Learning (ICML).
- Zhuang et al., [2021] Zhuang, J., Dvornek, N. C., Tatikonda, S., and Duncan, J. S. (2021). MALI: A memory efficient and reverse accurate integrator for Neural ODEs. In International Conference on Learning Representations (ICLR).
Supplementary Material: Appendices
Appendix A Derivation of Variational System
Let us consider a perturbed initial condition $x_0 + \varepsilon\delta_0$ for a small scalar $\varepsilon$ and a direction $\delta_0$, from which the solution $x_\varepsilon(t)$ arises. Suppose that the solution satisfies $x_\varepsilon(t) - x(t) = O(\varepsilon)$. Then,

$$\frac{d}{dt}\bigl(x_\varepsilon - x\bigr) = f(x_\varepsilon, t, \theta) - f(x, t, \theta) = \frac{\partial f}{\partial x}(x, t, \theta)\,(x_\varepsilon - x) + O(\varepsilon^2). \qquad (9)$$

Dividing by $\varepsilon$ and taking the limit as $\varepsilon \to 0$, we define the variational variable as $\delta(t) = \lim_{\varepsilon \to 0} \frac{x_\varepsilon(t) - x(t)}{\varepsilon}$ and the variational system as

$$\frac{d\delta}{dt} = \frac{\partial f}{\partial x}(x, t, \theta)\,\delta. \qquad (10)$$
Appendix B Complete Proofs
Proof of Remark 1:
$$\frac{d}{dt}(\lambda\delta) = \frac{d\lambda}{dt}\,\delta + \lambda\,\frac{d\delta}{dt} = -\lambda\,\frac{\partial f}{\partial x}\,\delta + \lambda\,\frac{\partial f}{\partial x}\,\delta = 0. \qquad (11)$$
Proof of Remark 2:
Because $\lambda(T) = \frac{\partial \mathcal{L}}{\partial x}(x(T))$, $\delta(t_0) = I$, and $\lambda\delta$ is time-invariant,

$$\lambda(t_0) = \lambda(t_0)\,\delta(t_0) = \lambda(T)\,\delta(T) = \frac{\partial \mathcal{L}}{\partial x}(x(T))\,\frac{\partial x(T)}{\partial x_0} = \frac{\partial \mathcal{L}(x(T))}{\partial x_0}. \qquad (12)$$
Proof of Remark 3:
Differentiating each term in the Runge–Kutta method in Eq. (5) by the initial condition $x_0$ gives the Runge–Kutta method applied to the variational variable $\delta$, as follows.

$$\delta_{n+1} = \delta_n + h_n \sum_{i=1}^{s} b_i\,\frac{\partial k_{n,i}}{\partial x_0}, \qquad \frac{\partial k_{n,i}}{\partial x_0} = \frac{\partial f}{\partial x}(X_{n,i},\, t_n + c_i h_n,\, \theta)\,\frac{\partial X_{n,i}}{\partial x_0}, \qquad \frac{\partial X_{n,i}}{\partial x_0} = \delta_n + h_n \sum_{j=1}^{s} a_{i,j}\,\frac{\partial k_{n,j}}{\partial x_0}, \qquad (13)$$

where $\delta_n = \partial x_n / \partial x_0$.
Proof of Theorem 1:
Because the quantity $\lambda\delta$ is conserved in continuous time,
(14) |
Because the quantity $\lambda\delta$ is bilinear,
(15) |
which implies
(16) |
The change in the bilinear quantity is
(17) |
If $B_i = b_i$ and $b_i A_{i,j} + B_j a_{j,i} - b_i B_j = 0$, the change vanishes, i.e., the partitioned Runge–Kutta method conserves the bilinear quantity $\lambda_n\delta_n$. Note that $b_i$ must not vanish because $A_{i,j} = B_j - B_j a_{j,i} / b_i$. Therefore, the bilinear quantity is conserved as

$$\lambda_{n+1}\,\delta_{n+1} = \lambda_n\,\delta_n. \qquad (18)$$
Proof of Theorem 2:
Proof of Remark 4:
Appendix C Gradients in General Cases
C.1 Gradient w.r.t. Parameters
For the parameter adjustment, one can consider the parameters $\theta$ as a part of the augmented state of the system

$$\frac{d}{dt}\begin{bmatrix} x \\ \theta \end{bmatrix} = \begin{bmatrix} f(x, t, \theta) \\ 0 \end{bmatrix}. \qquad (23)$$

The variational and adjoint variables are augmented in the same way. For the augmented adjoint variable $[\lambda \;\; \lambda_\theta]$, the augmented adjoint system is

$$\frac{d}{dt}\begin{bmatrix} \lambda & \lambda_\theta \end{bmatrix} = -\begin{bmatrix} \lambda & \lambda_\theta \end{bmatrix}\begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial \theta} \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} -\lambda\,\frac{\partial f}{\partial x} & -\lambda\,\frac{\partial f}{\partial \theta} \end{bmatrix}. \qquad (24)$$

Hence, the adjoint variable $\lambda$ for the system state $x$ is unchanged from Eq. (3), and the one $\lambda_\theta$ for the parameters $\theta$ depends on the former as

$$\frac{d\lambda_\theta}{dt} = -\lambda\,\frac{\partial f}{\partial \theta}(x, t, \theta), \qquad (25)$$

and $\lambda_\theta(T) = 0$.
C.2 Gradient of Functional
When the solution is evaluated by a functional as

$$\mathcal{L} = \int_{t_0}^{T} \ell(x(t), t)\,\mathrm{d}t, \qquad (26)$$

the adjoint variable that denotes the gradient of the functional is given by

$$\frac{d\lambda}{dt} = -\lambda\,\frac{\partial f}{\partial x}(x, t, \theta) - \frac{\partial \ell}{\partial x}(x(t), t), \qquad \lambda(T) = 0. \qquad (27)$$
Appendix D Implementation Details
D.1 Robustness to Rounding Error
By definition, the naive backpropagation algorithm, baseline scheme, ACA, and the proposed symplectic adjoint method provide the exact gradient up to rounding error. However, the naive backpropagation algorithm and baseline scheme obtained slightly worse results on the GAS, POWER, and HEPMASS datasets. Due to the repeated use of the neural network, each method accumulates the gradient of the parameters for each use. Let $\theta_{n,i}$ denote the parameters used in the $i$-th stage of the $n$-th step even though their values are unchanged. The backpropagation algorithm obtains the gradient with respect to the parameters $\theta$ by accumulating the gradient over all stages and steps one-by-one as

$$\frac{\partial \mathcal{L}}{\partial \theta} = \left(\cdots\left(\left(\frac{\partial \mathcal{L}}{\partial \theta_{0,1}} + \frac{\partial \mathcal{L}}{\partial \theta_{0,2}}\right) + \frac{\partial \mathcal{L}}{\partial \theta_{0,3}}\right) + \cdots\right) + \frac{\partial \mathcal{L}}{\partial \theta_{N-1,s}}. \qquad (28)$$

When the step size $h_n$ at the $n$-th step is sufficiently small, the gradient at the $i$-th stage may be insignificant compared with the accumulated gradient and be rounded off during the accumulation.
Conversely, ACA accumulates the gradient within a step and then over time steps; this process can be expressed informally as
$$\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{n=0}^{N-1}\left(\sum_{i=1}^{s}\frac{\partial \mathcal{L}}{\partial \theta_{n,i}}\right). \qquad (29)$$
Further, according to Eqs. (6) and (25), the (symplectic) adjoint method accumulates the adjoint variable (i.e., the transpose of the gradient) within a step and then over time steps as
(30) |
In these cases, even when the step size at the -th step is small, the gradient summed within a step (over stages) may still be significant and robust to rounding errors. This is the reason the adjoint method, ACA, and the symplectic adjoint method performed better than the naive backpropagation algorithm and baseline scheme for some datasets. Note that this approach requires additional memory consumption to store the gradient summed within a step, and it is applicable to the backpropagation algorithm with a slight modification.
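The effect is easy to reproduce in isolation: in single precision, adding many small per-stage gradients one-by-one to an already large accumulated value can round them all away, whereas summing within a step first preserves them. The magnitudes below are synthetic and chosen only for illustration.

```python
import numpy as np

big = np.float32(1e4)                                  # an already-accumulated (large) gradient entry
small = np.full((1000, 6), 1e-4, dtype=np.float32)     # per-stage gradients: 1000 steps x 6 stages

one_by_one = big
for g in small.ravel():
    one_by_one += g                                    # accumulate over all stages and steps one-by-one

per_step = small.sum(axis=1)                           # sum within each step first ...
grouped = big + np.float32(per_step.sum())             # ... then over the steps

print(one_by_one, grouped)  # 10000.0 vs ~10000.6: the one-by-one sum lost every small term
```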
D.2 Memory Consumption Optimization
Following Eqs. (21) and (22), a naive implementation of the adjoint method retains the adjoint variables at all stages to obtain their time-derivatives, and then adds them up to obtain the adjoint variable at the $n$-th time step. However, as Eq. (25) shows, the adjoint variable $\lambda_\theta$ for the parameters is not used for obtaining its own time-derivative. One can therefore add up the adjoint variable for the parameters at each stage one-by-one without retaining it, thereby reducing the memory consumption proportionally to the number of parameters times the number of stages. A similar optimization is applicable to the adjoint method.
| | KdV equation | | | Cahn–Hilliard system | | |
|---|---|---|---|---|---|---|
| | MSE | mem. | time | MSE | mem. | time |
| adjoint method [2] | 1.61±3.23 | 181.4±0.0 | 240±16 | 5.58±2.12 | 181.4±0.0 | 805±25 |
| backpropagation [2] | 1.61±3.24 | 733.9±15.6 | 94±4 | 5.45±1.55 | 3053.5±22.9 | 382±11 |
| ACA [46] | 1.61±3.24 | 734.5±20.3 | 120±4 | 6.00±3.27 | 780.4±22.9 | 422±16 |
| proposed | 1.61±3.58 | 182.1±0.0 | 141±7 | 5.48±1.90 | 182.1±0.0 | 480±19 |

Mean squared errors (MSEs) in long-term predictions, peak memory consumption [MB], and computation time per iteration [ms].
D.3 Parallelization
The memory consumption and computation time depend highly on the implementation and devices. Being implemented on a GPU, the convolution operation can be easily parallelized in space and exhibits non-deterministic behavior. To avoid the non-deterministic behavior, PyTorch provides the option torch.backends.cudnn.deterministic, which was used to obtain the results in Section 5.2, following the original implementation [31]. Without this option, the memory consumption increased by a certain amount, and the computation times decreased due to the more aggressive parallelization, as shown by the results in Table A1. Even then, the proposed symplectic adjoint method occupied the smallest memory among the methods that provide the exact gradient. The increase in the memory consumption is proportional to the width of a neural network; therefore, it is negligible when the neural network is sufficiently deep.
Note that the results in Section 5.1 were obtained without the deterministic option.
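For reference, the deterministic setting can be enabled as follows; disabling cuDNN benchmarking as well is our assumption here, not necessarily part of the original configuration.

```python
import torch

torch.backends.cudnn.deterministic = True  # deterministic convolution kernels
torch.backends.cudnn.benchmark = False     # assumption: also disable autotuning
```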
| | MiniBooNE | | | GAS | | | POWER | | |
|---|---|---|---|---|---|---|---|---|---|
| | NLL | mem. | time | NLL | mem. | time | NLL | mem. | time |
| adjoint method [2] | 10.59±0.17 | 170±0 | 0.74±0.04 | -10.53±0.25 | 24±0 | 4.82±0.29 | -0.31±0.01 | 8.1±0.0 | 6.33±0.18 |
| backpropagation [2] | 10.54±0.18 | 4,436±115 | 0.91±0.05 | -9.53±0.42 | 4,479±250 | 12.00±0.93 | -0.24±0.05 | 1710.9±193.1 | 10.64±2.73 |
| baseline scheme | 10.54±0.18 | 4,457±115 | 1.10±0.04 | -9.53±0.42 | 1,858±228 | 5.48±0.25 | -0.24±0.05 | 515.2±122.0 | 4.37±0.70 |
| ACA [46] | 10.57±0.30 | 306±0 | 0.77±0.02 | -10.65±0.45 | 73±0 | 3.98±0.14 | -0.31±0.02 | 29.5±0.5 | 5.08±0.88 |
| proposed | 10.49±0.11 | 95±0 | 0.84±0.03 | -10.89±0.11 | 20±0 | 4.39±0.23 | -0.31±0.02 | 9.2±0.0 | 5.73±0.43 |

| | HEPMASS | | | BSDS300 | | | MNIST | | |
|---|---|---|---|---|---|---|---|---|---|
| | NLL | mem. | time | NLL | mem. | time | NLL | mem. | time |
| adjoint method [2] | 16.49±0.25 | 40±0 | 4.19±0.15 | -152.04±0.09 | 577±0 | 11.70±0.44 | 0.918±0.011 | 1,086±4 | 10.12±0.88 |
| backpropagation [2] | 17.03±0.22 | 5,254±137 | 11.82±1.33 | — | — | — | — | — | — |
| baseline scheme | 17.03±0.22 | 1,102±174 | 4.40±0.40 | — | — | — | — | — | — |
| ACA [46] | 16.41±0.39 | 88±0 | 3.67±0.12 | -151.27±0.47 | 757±1 | 6.97±0.25 | 0.919±0.003 | 4,332±1 | 7.94±0.63 |
| proposed | 16.48±0.20 | 35±0 | 4.15±0.13 | -151.17±0.15 | 283±2 | 8.07±0.72 | 0.917±0.002 | 1,079±1 | 9.42±0.32 |

Negative log-likelihoods (NLL), peak memory consumption [MB], and computation time per iteration [s]. The medians ± standard deviations of three runs.