Incorporating Symmetry into Deep Dynamics Models for Improved Generalization
Abstract
Recent work has shown deep learning can accelerate the prediction of physical dynamics relative to numerical solvers. However, limited physical accuracy and an inability to generalize under distributional shift limit its applicability to the real world. We propose to improve accuracy and generalization by incorporating symmetries into convolutional neural networks. Specifically, we employ a variety of methods, each tailored to enforce a different symmetry. Our models are both theoretically and experimentally robust to distributional shift by symmetry group transformations and enjoy favorable sample complexity. We demonstrate the advantage of our approach on a variety of physical dynamics including Rayleigh–Bénard convection and real-world ocean currents and temperatures. Compared with image or text applications, our work is a significant step towards applying equivariant neural networks to high-dimensional systems with complex dynamics. We open-source our simulation, data, and code at https://github.com/Rose-STL-Lab/Equivariant-Net.
1 Introduction
Modeling dynamical systems in order to forecast the future is of critical importance in a wide range of fields including, e.g., fluid dynamics, epidemiology, economics, and neuroscience [2, 21, 45, 22, 14]. Many dynamical systems are described by systems of non-linear differential equations that are difficult to simulate numerically. Accurate numerical computation thus requires long run times and manual engineering in each application.
Recently, there has been much work applying deep learning to accelerate solving differential equations [46, 6]. However, current approaches struggle with generalization. The underlying problem is that physical data has no canonical frame of reference to use for data normalization. For example, it is not clear how to rotate samples of fluid flow such that they share a common orientation. Thus real-world out-of-distribution test data is difficult to align with training data. Another limitation of current approaches is low physical accuracy. Even when mean error is low, errors are often spatially correlated, producing a different energy distribution from the ground truth.
We propose to improve the generalization and physical accuracy of deep learning models for physical dynamics by incorporating symmetries into the forecasting model. In physics, Noether's theorem gives a correspondence between conserved quantities and groups of symmetries. By building a neural network which inherently respects a given symmetry, we make conservation of the associated quantity more likely and consequently the model's prediction more physically accurate.
A function $f$ is equivariant if, when its input is transformed by a symmetry group $G$, the output is transformed by the same symmetry,
$$f(g \cdot x) = g \cdot f(x) \quad \text{for all } g \in G.$$
See Figure 1 for an illustration. In the setting of forecasting, $f$ approximates the underlying dynamical system. The set of valid transformations $G$ is called the symmetry group of the system.

By designing a model that is inherently equivariant to transformations of its input, we can guarantee that our model generalizes automatically across these transformations, making it robust to distributional shift. The symmetries we consider, translation, rotation, uniform motion, and scale, have different properties, and thus we tailor our methods for incorporating each symmetry.
Specifically, for scale equivariance, we replace the convolution operation with group correlation over the group generated by translations and rescalings. Our method builds on that of Worrall and Welling [51], with significant novel adaptations to the physics domain: scaling affecting time, space, and magnitude; both up and down scaling; and scaling by any real number. For rotational symmetries, we leverage the key insight of Cohen and Welling [9] that the input, output, and hidden layers of the network are all acted upon by the symmetry group and thus should be treated as representations of the symmetry group. Our rotation-equivariant model is built using the flexible E(2)-CNN framework developed by Weiler and Cesa [49]. In the case of a uniform motion, or Galilean transformation, we show the above methods are too constrained. We use the simple but effective technique of convolutions conjugated by averaging operations.
Research into equivariant neural networks has mostly been applied to tasks such as image classification and segmentation [27, 50, 49]. In contrast, we design equivariant networks in a completely different context, that of a time series representing a physical process. Forecasting high-dimensional turbulence is a significant step for equivariant neural networks compared to the low-dimensional physics examples and computer vision problems treated in other works.
We test on a simulated turbulent convection dataset and on real-world ocean current and temperature data. Ocean currents are difficult to predict using numerical methods due to unknown external forces and complex dynamics not fully captured by simplified mathematical models. These domains are chosen as examples, but since the symmetries we focus on are pervasive in almost all physics problems, we expect our techniques will be widely applicable. Our contributions include:
- We study the problem of improving the generalization capability and physical accuracy of deep learning models for learning complex physical dynamics such as turbulence and ocean currents.
- We design tailored methods with theoretical guarantees to incorporate various symmetries, including uniform motion, rotation, and scaling, into convolutional neural networks.
- When evaluated on turbulent convection and ocean current prediction, our models achieve significant improvements in both prediction generalization and physical consistency.
- For the different symmetries, our methods achieve reductions in energy error, both on average and at maximum, when evaluated on turbulent convection with no distributional shift.
2 Mathematical Preliminaries
2.1 Symmetry Groups and Equivariant Functions
Formal discussion of symmetry relies on the concept of an abstract symmetry group. We give a brief overview; for a more formal treatment, see Appendix A or Lang [28].
A group of symmetries, or simply group, consists of a set $G$ together with a composition map $\circ\colon G \times G \to G$. The composition map is required to be associative and to have an identity element $e$. Most importantly, composition with any element of $G$ is required to be invertible.
Groups are abstract objects, but they become concrete when we let them act. A group $G$ has an action on a set $X$ if there is an action map $G \times X \to X$ which is compatible with the composition law. We say further that $X$ is a $G$-representation if the set $X$ is a vector space and the group $G$ acts on $X$ by linear transformations.
Definition 1 (invariant, equivariant).
Let $f\colon X \to Y$ be a function and $G$ be a group. Assume $G$ acts on $X$ and $Y$. The function $f$ is $G$-equivariant if $f(g \cdot x) = g \cdot f(x)$ for all $x \in X$ and $g \in G$. The function $f$ is $G$-invariant if $f(g \cdot x) = f(x)$ for all $x \in X$ and $g \in G$.
2.2 Physical Dynamical Systems
We investigate two dynamical systems: Rayleigh–Bénard convection and real-world ocean currents and temperatures. Both are governed by the Navier–Stokes equations.
2D Navier–Stokes (NS) Equations. Let $\mathbf{w}$ be the velocity vector field of a flow, with two components $(u, v)$, the velocities along the $x$ and $y$ directions. The governing equations for this physical system are the momentum equation, continuity equation, and temperature equation,
$$\frac{\partial \mathbf{w}}{\partial t} + (\mathbf{w} \cdot \nabla)\mathbf{w} = -\frac{1}{\rho_0}\nabla p + \nu \nabla^2 \mathbf{w} + \mathbf{f}, \qquad \nabla \cdot \mathbf{w} = 0, \qquad \frac{\partial T}{\partial t} + (\mathbf{w} \cdot \nabla) T = \kappa \nabla^2 T,$$
where $T$ is temperature, $p$ is pressure, $\kappa$ is the heat conductivity, $\rho_0$ is the initial density, $\alpha$ is the coefficient of thermal expansion, $\nu$ is the kinematic viscosity, and $\mathbf{f}$ is the buoyant force.
2.3 Symmetries of Differential Equations
By classifying the symmetries of a system of differential equations, the task of finding solutions is made far simpler, since the space of solutions will exhibit those same symmetries. Let $G$ be a group equipped with an action on two-dimensional space $X = \mathbb{R}^2$ and on three-dimensional spacetime $X \times \mathbb{R}$. Let $V$ be a $G$-representation. Denote the set of all $V$-fields on $X$ by $\mathcal{F} = \{\varphi\colon X \to V\}$, and define $\mathcal{F}_{\mathrm{st}}$ similarly to be the $V$-fields on $X \times \mathbb{R}$. Then $G$ has an induced action on $\mathcal{F}$ by $(g \cdot \varphi)(\mathbf{x}) = g \cdot \varphi(g^{-1} \cdot \mathbf{x})$ and on $\mathcal{F}_{\mathrm{st}}$ analogously.
Consider a system of differential operators $D$ acting on $\mathcal{F}_{\mathrm{st}}$. Denote the set of solutions $\mathrm{Sol}(D) = \{\varphi : D\varphi = 0\}$. We say $G$ is a symmetry group of $D$ if $G$ preserves $\mathrm{Sol}(D)$; that is, if $\varphi$ is a solution of $D$, then for all $g \in G$, $g \cdot \varphi$ is also. In order to forecast the evolution of a system, we model the forward prediction function $f$. Let $\varphi \in \mathrm{Sol}(D)$. The input to $f$ is a collection of $k$ snapshots at times $t-k+1, \ldots, t$, denoted $\varphi_{\le t}$. The prediction function is defined by $f(\varphi_{\le t}) = \varphi(\cdot, t+1)$; it predicts the solution at a future time based on the solution in the past. Let $G$ be a symmetry group of $D$. Then for $g \in G$, $g \cdot \varphi$ is also a solution of $D$. Thus $f(g \cdot \varphi_{\le t}) = (g \cdot \varphi)(\cdot, t+1) = g \cdot f(\varphi_{\le t})$. Consequently, $f$ is $G$-equivariant.
2.4 Symmetries of Navier-Stokes equations
The Navier–Stokes equations are invariant under the following five types of transformations. Individually, each type of transformation generates a group of symmetries of the system. The full list of symmetry groups of the NS equations and the heat equation is given in Appendix B.6. (A code sketch of applying these transformations to discretized fields follows the list.)
- Space translation: $T^{\mathrm{sp}}_{\mathbf{c}}\,\mathbf{w}(\mathbf{x}, t) = \mathbf{w}(\mathbf{x} - \mathbf{c}, t)$, $\mathbf{c} \in \mathbb{R}^2$,
- Time translation: $T^{\mathrm{time}}_{\tau}\,\mathbf{w}(\mathbf{x}, t) = \mathbf{w}(\mathbf{x}, t - \tau)$, $\tau \in \mathbb{R}$,
- Uniform motion (Galilean boost): $T^{\mathrm{um}}_{\mathbf{c}}\,\mathbf{w}(\mathbf{x}, t) = \mathbf{w}(\mathbf{x} - \mathbf{c}t, t) + \mathbf{c}$, $\mathbf{c} \in \mathbb{R}^2$,
- Rotation/Reflection: $T^{\mathrm{rot}}_{R}\,\mathbf{w}(\mathbf{x}, t) = R\,\mathbf{w}(R^{-1}\mathbf{x}, t)$, $R \in O(2)$,
- Scaling: $T^{\mathrm{sc}}_{\lambda}\,\mathbf{w}(\mathbf{x}, t) = \lambda\,\mathbf{w}(\lambda\mathbf{x}, \lambda^2 t)$, $\lambda \in \mathbb{R}_{>0}$.
3 Methodology
We prescribe equivariance by training within function classes containing only equivariant functions. Our models can thus be theoretically guaranteed to be equivariant up to discretization error. We incorporate equivariance into two state-of-the-art architectures for dynamics prediction, ResNet and U-net [48]. Below, we describe how we modify the convolution operation in these models for different symmetries to form four Equ-ResNet and four Equ-Unet models, one for each symmetry.
3.1 Equivariant Networks
The key to building equivariant networks is that the composition of equivariant functions is equivariant. Hence, if the maps between layers of a neural network are equivariant, then the whole network will be equivariant. Note that both the linear maps and activation functions must be equivariant. An important consequence of this principle is that the hidden layers must also carry a -action. Thus, the hidden layers are not collections of scalar channels, but vector-valued -representations.
Equivariant Convolutions. Consider a convolutional layer with kernel $K$ mapping a $\rho_{\mathrm{in}}$-field to a $\rho_{\mathrm{out}}$-field, where $\rho_{\mathrm{in}}$ and $\rho_{\mathrm{out}}$ are $G$-representations with action maps on the input and output features respectively. Cohen et al. [11, Theorem 3.3] prove the network is $G$-equivariant if and only if
$$K(g \cdot \mathbf{x}) = \rho_{\mathrm{out}}(g)\, K(\mathbf{x})\, \rho_{\mathrm{in}}(g)^{-1} \quad \text{for all } g \in G. \tag{1}$$
A network composed of such equivariant convolutions is called a steerable CNN.
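The following toy example illustrates the kernel constraint in equation (1) for the rotation group $C_4$ acting on scalar fields, where $\rho_{\mathrm{in}}$ and $\rho_{\mathrm{out}}$ are both trivial so the constraint reduces to a rotation-invariant kernel. The projection-by-averaging construction and the numerical check are our own illustration, not the steerable-CNN implementation used in the paper.

```python
# Toy check of constraint (1) for C4 with trivial input/output representations:
# the kernel must be invariant under 90-degree rotations, which we enforce by
# averaging a random kernel over the group, then verify equivariance numerically.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
K = torch.randn(1, 1, 5, 5)
K = sum(torch.rot90(K, k, dims=(2, 3)) for k in range(4)) / 4  # project onto C4-invariant kernels

def conv(x):
    return F.conv2d(x, K, padding=2)

x = torch.randn(1, 1, 64, 64)
lhs = conv(torch.rot90(x, 1, dims=(2, 3)))   # transform input, then convolve
rhs = torch.rot90(conv(x), 1, dims=(2, 3))   # convolve, then transform output
print((lhs - rhs).abs().max())               # ~1e-6: the layer is C4-equivariant
```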
Equivariant ResNet and U-net. Equivariant ResNet architectures appear in [9, 10], and equivariant transposed convolution, a feature of U-net, is implemented in [49]. We prove in general that adding skip connections to a network does not affect its equivariance with respect to linear actions and also give a condition for ResNet or Unet to be equivariant in Appendix B.2.
Relation to Data Augmentation. To improve generalization, equivariant networks offer a better-performing alternative to the popular technique of data augmentation [13]. Large symmetry groups normally require augmentation with many transformed examples. In contrast, for equivariant models, we have the following proposition. (See Appendix B.1 for the proof.)
Proposition 1.
$G$-equivariant models with an equivariant loss learn equally (up to sample weight) from a sample $x$ and any transformation $g \cdot x$ of it. Thus data augmentation does not help during training.
3.2 Time and Space Translation Equivariance
CNNs are time translation-equivariant as long as we predict in an autoregressive manner. Convolutional layers are also naturally space translation-equivariant (if cropping is ignored). Any activation function which acts identically pixel-by-pixel is equivariant.
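A minimal sketch of the autoregressive rollout that yields time-translation equivariance is shown below; `TinyCNN` is a stand-in for any of the architectures in this section, and the tensor shapes are illustrative.

```python
# Autoregressive rollout: the same network maps the last k frames to the next
# frame, so shifting the series in time shifts the predictions identically.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Placeholder forecaster: maps (B, k, C, H, W) past frames to the next frame."""
    def __init__(self, k=16, channels=2):
        super().__init__()
        self.net = nn.Conv2d(k * channels, channels, kernel_size=3, padding=1)
    def forward(self, x):
        b, k, c, h, w = x.shape
        return self.net(x.reshape(b, k * c, h, w))

def rollout(model, history, n_steps):
    """history: (B, k, C, H, W); returns (B, n_steps, C, H, W) predictions."""
    frames, preds = history, []
    for _ in range(n_steps):
        y = model(frames)                                            # next frame (B, C, H, W)
        preds.append(y)
        frames = torch.cat([frames[:, 1:], y.unsqueeze(1)], dim=1)   # drop oldest, append newest
    return torch.stack(preds, dim=1)

model = TinyCNN(k=16)
history = torch.randn(4, 16, 2, 64, 64)
future = rollout(model, history, n_steps=5)                          # (4, 5, 2, 64, 64)
```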
3.3 Rotational Equivariance
To incorporate rotational symmetry, we model $f$ using $SO(2)$-equivariant convolutions and activations within the E(2)-CNN framework of Weiler and Cesa [49]. In practice, we use the cyclic group $C_n$ instead of $SO(2)$, as for large enough $n$ the difference is practically indistinguishable due to space discretization. We use powers of the regular representation for hidden layers. The regular representation has a basis given by the elements of $C_n$, on which $C_n$ acts by permutation matrices. It has good descriptivity since it contains all irreducible representations of $C_n$, and it is compatible with any activation function applied channel-wise.
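A minimal sketch of one such rotation-equivariant block, written against the e2cnn package of Weiler and Cesa [49], is shown below. The field sizes are illustrative, and the exact API may differ between library versions.

```python
# Sketch of a C8-equivariant block with e2cnn: vector-field input/output and
# regular-representation hidden fields, as described above.
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

r2_act = gspaces.Rot2dOnR2(N=8)                               # discretize SO(2) by C8
in_type = enn.FieldType(r2_act, [r2_act.irrep(1)])            # one 2D vector field (u, v)
hid_type = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])  # hidden regular representations
out_type = enn.FieldType(r2_act, [r2_act.irrep(1)])

block = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=5, padding=2),
    enn.ReLU(hid_type),                                       # channel-wise activation on regular fields
    enn.R2Conv(hid_type, out_type, kernel_size=5, padding=2),
)

x = enn.GeometricTensor(torch.randn(1, 2, 64, 64), in_type)
y = block(x).tensor                                           # (1, 2, 64, 64) equivariant output
```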
3.4 Uniform Motion Equivariance
Uniform motion is part of Galilean invariance and is relevant to all non-relativistic physics modeling. For a vector field $\mathbf{w}$ and a constant vector $\mathbf{c} \in \mathbb{R}^2$, the uniform motion transformation adds the constant field to the vector field: $\mathbf{w} \mapsto \mathbf{w} + \mathbf{c}$. By the following corollary, proved in Appendix B.3, enforcing uniform motion equivariance as above, by requiring all layers of the CNN to be equivariant, severely limits the model.
Corollary 2.
If $f$ is a CNN alternating between convolutions and channel-wise activations and the combined layers are uniform motion equivariant, then $f$ is affine.
To overcome this limitation, we relax the requirement by conjugating the model with a shifted input distribution. For each sliding local block in each convolutional layer, we shift the mean of the input tensor to zero and shift the output back after the convolution and activation function, per sample. In other words, if $x$ and $y$ are the input and output of one sliding local block with kernel $K$ and activation $\sigma$, then
$$\mu = \operatorname{mean}(x), \qquad y = \sigma\big(K \ast (x - \mu)\big) + \mu. \tag{2}$$
This makes the convolutional layer equivariant with respect to uniform motion. If the input is a vector field, we apply this operation to each component.
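The sketch below implements a simplified version of equation (2): it subtracts a single per-sample, per-channel spatial mean rather than the per-sliding-block mean described above, but the same argument applies, so the layer is exactly uniform motion equivariant. The class name and shapes are illustrative.

```python
# Mean-conjugated convolution: adding a constant c to the input shifts the mean
# by c, leaves the centered input unchanged, and c is added back to the output.
# This simplified version assumes out_channels == in_channels so the mean can be
# added back channel-wise.
import torch
import torch.nn as nn

class UMConv2d(nn.Module):
    def __init__(self, channels, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()

    def forward(self, x):                          # x: (B, C, H, W)
        mu = x.mean(dim=(2, 3), keepdim=True)      # per-sample, per-channel spatial mean
        return self.act(self.conv(x - mu)) + mu    # shift back after conv + activation

layer = UMConv2d(channels=2, kernel_size=3)
x = torch.randn(1, 2, 64, 64)
c = torch.tensor([0.7, -0.3]).view(1, 2, 1, 1)      # a uniform motion added to every pixel
err = (layer(x + c) - (layer(x) + c)).abs().max()
print(err)                                          # ~0: uniform motion equivariant
```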
Proposition 3.
A residual block is uniform motion equivariant if the residual connection is uniform motion invariant.
By Proposition 3 above, proved in Appendix B.3, within ResNet the residual mappings should be invariant, not equivariant, to uniform motion: the skip connection is equivariant, while the residual function should be invariant. Hence, for the first layer in each residual block, we omit adding the mean back to the output. In the case of U-net, when upscaling, we pad with the mean to preserve the overall mean.
3.5 Scale Equivariance
Scale equivariance in dynamics is unique in that the physical law dictates the scaling of magnitude, space, and time simultaneously. This is very different from scaling in images, which concerns resolution only [51]. For example, the Navier–Stokes equations are preserved under a specific scaling of time, space, and velocity given by the transformation
$$\mathbf{w}(\mathbf{x}, t) \;\mapsto\; \lambda\,\mathbf{w}(\lambda\mathbf{x}, \lambda^2 t), \qquad p(\mathbf{x}, t) \;\mapsto\; \lambda^2 p(\lambda\mathbf{x}, \lambda^2 t), \tag{3}$$
where $\lambda \in \mathbb{R}_{>0}$. We implement two different approaches for scale equivariance, depending on whether we tie the physical scale with the resolution of the data.
Resolution Independent Scaling. We fix the resolution and scale the magnitude of the input by varying the discretization step size. An input with discretization steps $\Delta x$ and $\Delta t$ can be scaled by scaling the magnitude of the vector field alone, provided the discretization steps are now taken to be $\lambda \Delta x$ and $\lambda^2 \Delta t$. We refer to this as magnitude equivariance hereafter.
To obtain magnitude equivariance, we divide the input tensor by its MinMax scale (the maximum of the tensor minus the minimum) and scale the output back after the convolution and activation, per sliding block. We found that the standard deviation and the mean L2 norm may also work but are not as stable as the MinMax scaler. Specifically, using the same notation as in Section 3.4,
$$s = \max(x) - \min(x), \qquad y = s\,\sigma\big(K \ast (x / s)\big). \tag{4}$$
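A simplified per-sample version of equation (4) is sketched below (the paper applies the scaling per sliding block); the class name and activation are illustrative.

```python
# MinMax-conjugated convolution: scaling the input by lambda scales the MinMax
# range by lambda, so the normalized input and the inner computation are
# unchanged, and the output is scaled back by lambda.
import torch
import torch.nn as nn

class MagConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.act = nn.Tanh()                               # any fixed activation

    def forward(self, x):                                  # x: (B, C, H, W)
        flat = x.flatten(1)
        scale = (flat.max(dim=1).values - flat.min(dim=1).values).view(-1, 1, 1, 1)
        return scale * self.act(self.conv(x / scale))

layer = MagConv2d(2, 2, 3)
x = torch.randn(1, 2, 64, 64)
lam = 3.5
err = (layer(lam * x) - lam * layer(x)).abs().max()
print(err)                                                 # ~1e-6: magnitude equivariant
```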
Resolution Dependent Scaling. If the physical scale of the data is fixed, then scaling corresponds to a change in resolution and time step size. To achieve this, we replace the convolution layers with group correlation layers over the group $G$ generated by scalings and translations. In convolution, a kernel is translated across the input, $[K \ast f]_{c'}(\mathbf{x}) = \sum_{c}\sum_{\mathbf{x}'} K_{c'c}(\mathbf{x}')\, f_{c}(\mathbf{x} + \mathbf{x}')$. The $G$-correlation upgrades this operation by both translating and scaling the kernel relative to the input,
$$[K \star f]_{c'}(s, \mathbf{x}) = \sum_{c}\sum_{\mathbf{x}'} K_{c'c}(\mathbf{x}')\, f_{c}(s, \mathbf{x} + s\,\mathbf{x}'), \tag{5}$$
where $c'$ and $c$ denote the indices of output and input channels respectively and $s$ ranges over a discrete set of scale factors. We add an axis to the tensors corresponding to the scale factor $s$. Note that we treat the channel dimension as a time dimension with respect to both our input and the scaling action. As a consequence, as the number of channels increases in the lower layers of U-net and ResNet, the temporal resolution increases, which is analogous to temporal refinement in numerical methods [24, 31]. For the input to the first layer, which originally has no scale levels, we set $f_c(s, \cdot) = f_c(\cdot)$ for every $s$.
Our model builds on the methods of Worrall and Welling [51], but with important adaptations for the physical domain. Our implementation of the group correlation in equation (5) directly incorporates the physical scaling law in equation (3) of the system. This affects time, space, and magnitude. (For heat, we drop the magnitude scaling.) The physical scaling law dictates that our model should be equivariant to both up- and down-scaling and by any positive real factor. Practically, the sum over scales is truncated to 7 discrete values, and discrete data are continuously indexed using interpolation. Note that equation (3) demands we scale anisotropically, i.e., differently across time and space.
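The sketch below shows a stripped-down version of the group correlation in equation (5): the kernel is resampled at a few scale factors and correlated with the input, producing an output with an extra scale axis. For brevity it scales space only, omitting the time-axis and magnitude scaling dictated by equation (3); the scale factors and interpolation choices are illustrative.

```python
# Simplified scale-translation group correlation: resample the kernel at a few
# scales and stack the responses along a new scale axis.
import torch
import torch.nn.functional as F

def scale_group_correlation(x, kernel, scales=(0.5, 1.0, 2.0)):
    """x: (B, C_in, H, W); kernel: (C_out, C_in, k, k) -> (B, S, C_out, H, W)."""
    outputs = []
    for s in scales:
        k = kernel.shape[-1]
        k_s = max(3, int(round(k * s)) | 1)               # odd resampled kernel size
        kernel_s = F.interpolate(kernel, size=(k_s, k_s),
                                 mode="bilinear", align_corners=False)
        outputs.append(F.conv2d(x, kernel_s, padding=k_s // 2))
    return torch.stack(outputs, dim=1)

x = torch.randn(2, 2, 64, 64)
kernel = torch.randn(8, 2, 5, 5)
y = scale_group_correlation(x, kernel)                    # (2, 3, 8, 64, 64)
```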
4 Related work
Equivariance and Invariance.
Developing neural nets that preserve symmetries has been a fundamental task in image recognition [12, 49, 9, 7, 29, 27, 3, 52, 10, 19, 50, 16, 42]. But these models have never been applied to forecasting physical dynamics. Jaiswal et al. [23], Moyer et al. [37] proposed approaches to find representations of data that are invariant to changes in specified factors, which is different from our physical symmetries. Ling et al. [30] and Fang et al. [17] studied tensor invariant neural networks to learn the Reynolds stress tensor while preserving Galilean invariance, and Mattheakis et al. [34] embedded even/odd symmetry of a function and energy conservation into neural networks to solve differential equations. But these two papers are limited to fully connected neural networks. Sosnovik et al. [44] extend Worrall and Welling [51] to group correlation convolution. But these two papers are limited to 2D images and are not magnitude equivariant, which is still inadequate for fluid dynamics. Bekkers [4] describes principles for endowing a neural architecture with invariance with respect to a Lie group.
Physics-informed Deep Learning.
Deep learning models have often been used to model physical dynamics. For example, Wang et al. [48] unified CFD techniques and U-net to generate predictions with higher accuracy and better physical consistency. Kim and Lee [25] studied unsupervised generative modeling of turbulent flows, but their model cannot make real-time future predictions given historic data. Anderson et al. [1] designed rotationally covariant neural networks for learning molecular systems. Raissi et al. [40, 41] applied deep neural networks to solve PDEs automatically, but these approaches require explicit input of boundary conditions during inference, which are generally not available in real time. Mohan et al. [35] proposed a purely data-driven DL model for turbulence, but the model lacks physical constraints and interpretability. Wu et al. [53] and Beucler et al. [5] introduced statistical and physical constraints in the loss function to regularize the predictions of the model. However, their studies only focused on spatial modeling without temporal dynamics. Morton et al. [36] incorporated Koopman theory into an encoder-decoder architecture but did not study the symmetry of fluid dynamics.
Video Prediction.
Our work is related to future video prediction. Conditioning on the observed frames, video prediction models are trained to predict future frames, e.g., [33, 18, 54, 47, 39]. Many of these models are trained on natural videos with complex noisy data from unknown physical processes. Therefore, it is difficult to explicitly incorporate physical principles into these models. Our work is substantially different because we do not attempt to predict object or camera motions.
5 Experiments
We test our models on Rayleigh–Bénard convection and real-world ocean currents. We also evaluate on heat diffusion systems; see Appendix C for more results. The implementation details and a detailed description of the energy spectrum error can be found in Appendices D and B.7.
Evaluation Metrics.
Our goal is to show that adding symmetry improves both the accuracy and the physical consistency of predictions. For accuracy, we use Root Mean Square Error (RMSE) between the forward predictions and the ground truth over all pixels. For physical consistency, we calculate the Energy Spectrum Error (ESE) which is the RMSE of the log of energy spectrum. ESE can indicate whether the predictions preserve the correct statistical distributions of the fluids and obey the energy conservation law, which is a critical metric for physical consistency.
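For concreteness, a simplified computation of the energy spectrum and ESE for a single 2D velocity snapshot is sketched below; normalization conventions and the averaging over time steps described in Appendix B.7 are omitted.

```python
# Simplified ESE: bin |FFT(u)|^2 + |FFT(v)|^2 into isotropic wavenumber shells
# to get E(k), then take the RMSE of the log spectra of prediction and target.
import numpy as np

def energy_spectrum(field):
    """field: (2, H, W) velocity snapshot -> E(k) over integer wavenumber shells."""
    H, W = field.shape[1:]
    u_hat = np.fft.fft2(field[0]) / (H * W)
    v_hat = np.fft.fft2(field[1]) / (H * W)
    energy = 0.5 * (np.abs(u_hat) ** 2 + np.abs(v_hat) ** 2)
    kx = np.fft.fftfreq(H) * H
    ky = np.fft.fftfreq(W) * W
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2).round().astype(int)
    spectrum = np.bincount(k.ravel(), weights=energy.ravel())
    return spectrum[1:]                                    # drop the k = 0 mode

def ese(pred, target, eps=1e-12):
    ep, et = energy_spectrum(pred), energy_spectrum(target)
    return np.sqrt(np.mean((np.log(ep + eps) - np.log(et + eps)) ** 2))
```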
Experimental Setup.
ResNet [20] and U-net [43] are the best-performing models for our tasks [48], so we implement these two convolutional architectures equipped with each of the four symmetries, which we name Equ-ResNet (Equ-Unet). We use a rolling window approach to generate sequences, with step size 1 for the RBC data and step size 3 for the ocean data. All models predict raw velocity and temperature fields multiple steps ahead autoregressively. We use an MSE loss function that accumulates the forecasting errors over the prediction horizon. We split the data 60%-20%-20% for training-validation-test across time and report mean errors over five random runs.
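The sketch below illustrates the rolling-window sequence generation and the accumulated MSE loss; the window length, horizon, and stride are placeholders for the tuned hyperparameters in Appendix D, and `model` stands for any of the architectures above.

```python
# Rolling-window sample generation and accumulated multi-step MSE loss.
import torch
import torch.nn.functional as F

def rolling_windows(series, k, horizon, stride):
    """series: (T, C, H, W) -> list of (input (k, C, H, W), target (horizon, C, H, W))."""
    samples = []
    for start in range(0, series.shape[0] - k - horizon + 1, stride):
        samples.append((series[start:start + k],
                        series[start + k:start + k + horizon]))
    return samples

def accumulated_mse(model, inputs, targets):
    """model maps (B, k, C, H, W) -> (B, C, H, W); sum per-step MSE over the horizon."""
    frames, loss = inputs, 0.0
    for t in range(targets.shape[1]):
        pred = model(frames)
        loss = loss + F.mse_loss(pred, targets[:, t])
        frames = torch.cat([frames[:, 1:], pred.unsqueeze(1)], dim=1)
    return loss
```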
5.1 Equivariance Errors
The equivariance error is defined as $EE(x) = \lVert f(g \cdot x) - g \cdot f(x) \rVert$, where $x$ is an input, $f$ is a neural network, and $g$ is a transformation from a symmetry group. We empirically measure the equivariance errors of all the equivariant models we designed. Table 1 shows the equivariance errors of ResNet and Equ-ResNet. The transformations $g$ are sampled in the same way as we generate the transformed Rayleigh–Bénard convection test sets. See more details in Appendix B.5.
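A small sketch of how this quantity can be measured empirically is shown below; `model` and `transform` (one of the maps from Section 2.4 together with its action on outputs) are placeholders.

```python
# Empirical equivariance error: transform the input, run the model, and compare
# against the transformed output of the untransformed input.
import torch

def equivariance_error(model, transform, x):
    with torch.no_grad():
        return (model(transform(x)) - transform(model(x))).norm().item()
```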
5.2 Experiments on Simulated Rayleigh-Bénard Convection Dynamics
Data Description. Rayleigh–Bénard convection occurs in a horizontal layer of fluid heated from below and is a major feature of El Niño dynamics. The dataset comes from a two-dimensional turbulent flow simulated using the Lattice Boltzmann Method [8] at a fixed Rayleigh number. We divide each 1792×256 image into 7 square subregions of size 256×256, then downsample them to 64×64 pixels. To test the models' generalization ability, we generate four additional test sets: 1) UM: random constant vectors added to each sample; 2) Mag: each sample multiplied by a random value; 3) Rot: each sample randomly rotated by a multiple of a fixed angle; 4) Scale: each sample rescaled by a random factor. Due to the lack of a fixed reference frame, real-world data would be transformed relative to the training data; we use transformed data to mimic this scenario.
Table 1: Equivariance errors of ResNets (U-nets) and Equ-ResNets (U-nets) under the four transformations.
| UM | Mag | Rot | Scale
---|---|---|---|---
ResNets | 2.010 | 1.885 | 5.895 | 1.658
Equ-ResNets | 0.0 | 0.0 | 1.190 | 0.579
Unets | 1.070 | 0.200 | 1.548 | 1.809
Equ-Unets | 0.0 | 0.0 | 0.794 | 0.481
Table 2: Prediction RMSE (left five columns) and Energy Spectrum Error (right five columns) on the original and four transformed RBC test sets (mean ± std over five runs).
| Orig | UM | Mag | Rot | Scale | Orig | UM | Mag | Rot | Scale
---|---|---|---|---|---|---|---|---|---|---
ResNet | 0.67±0.24 | 2.94±0.84 | 4.30±1.27 | 3.46±0.39 | 1.96±0.16 | 0.46±0.19 | 0.56±0.29 | 0.26±0.14 | 1.59±0.42 | 4.32±2.33
Augm | 1.10±0.20 | 1.54±0.12 | 0.92±0.09 | 1.01±0.11 | 1.37±0.02 | 1.14±0.32 | 1.92±0.21 | 1.55±0.14 | |
Equ | 0.71±0.26 | 0.71±0.26 | 0.33±0.11 | 0.33±0.11 | | | | | |
Equ | 0.69±0.24 | 0.67±0.14 | 0.34±0.09 | 0.19±0.02 | | | | | |
Equ | 0.65±0.26 | 0.76±0.02 | 0.31±0.06 | 1.23±0.04 | | | | | |
Equ | 0.70±0.02 | 0.85±0.09 | 0.44±0.22 | 0.68±0.26 | | | | | |
U-net | 0.64±0.24 | 2.27±0.82 | 3.59±1.04 | 2.78±0.83 | 1.65±0.17 | 0.50±0.04 | 0.34±0.10 | 0.55±0.05 | 0.91±0.27 | 4.25±0.57
Augm | 0.75±0.28 | 1.33±0.33 | 0.86±0.04 | 1.11±0.07 | 0.96±0.23 | 0.44±0.21 | 1.24±0.04 | 1.47±0.11 | |
Equ | 0.68±0.26 | 0.71±0.24 | 0.23±0.06 | 0.14±0.05 | | | | | |
Equ | 0.67±0.11 | 0.68±0.14 | 0.42±0.04 | 0.34±0.06 | | | | | |
Equ | 0.68±0.25 | 0.74±0.01 | 0.11±0.02 | 1.16±0.05 | | | | | |
Equ | 0.69±0.13 | 0.90±0.25 | 0.45±0.32 | 0.89±0.29 | | | | | |
Prediction Performance. Table 2 shows the prediction RMSE and ESE on the original and four transformed test sets for the non-equivariant ResNet (U-net) and the four Equ-ResNets (U-nets). Augm denotes ResNet (U-net) trained on an augmented training set containing additional samples with random transformations from the relevant symmetry group; the augmented training set is three times the size of the original. Each column contains the prediction errors of the non-equivariant and equivariant models on one test set. On the original test set, all models have similar RMSE, yet the equivariant models have lower ESE. This demonstrates that incorporating symmetries preserves the representation power of CNNs and even improves the models' physical consistency.




On the transformed test sets, ResNet (U-net) fails, while Equ-ResNets (U-nets) perform much better, even compared with Augm-ResNets (U-nets). This demonstrates the value of equivariant models over data augmentation for improving generalization. Figure 2 shows the ground truth and the velocity fields predicted by ResNet and the four Equ-ResNets on the four transformed test samples at selected time steps.
Table 3: RMSE and ESE when both training and test sets are randomly transformed by the relevant symmetry group (each ResNet/Equ row pair corresponds to one transformation type; mean ± std over five runs).
| RMSE | ESE
---|---|---
ResNet | 1.03±0.05 | 0.96±0.10
Equ | 0.69±0.01 | 0.35±0.13
ResNet | 1.50±0.02 | 0.55±0.11
Equ | 0.75±0.04 | 0.39±0.02
ResNet | 1.18±0.05 | 1.21±0.04
Equ | 0.77±0.01 | 0.68±0.01
ResNet | 0.92±0.01 | 1.34±0.07
Equ | 0.74±0.03 | 1.02±0.02
Generalization.
In order to evaluate the models' generalization ability with respect to the extent of distributional shift, we create additional test sets upscaled by a range of scale factors. Figure 3 shows ResNet and Equ-ResNet prediction RMSEs (left) and ESEs (right) on the test sets upscaled by different factors. We observe that Equ-ResNet is robust across the various scaling factors, while ResNet does not generalize.
We also compare ResNet and Equ-ResNet when both training and test sets have random transformations from the relevant symmetry group applied to each sample. This mimics real-world data in which each sample has an unknown reference frame. As shown in Table 3, Equ-ResNet outperforms ResNet on average by 34% in RMSE and 40% in ESE.



5.3 Experiments on Real World Ocean Dynamics
Data Description.
We use the reanalysis ocean current velocity data generated by the NEMO ocean engine [32] (data available at https://resources.marine.copernicus.eu/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024). We selected an area from each of the Atlantic, Indian, and North Pacific Oceans from 01/01/2016 to 08/18/2017 and extracted 64×64 sub-regions for our experiments. The corresponding latitude and longitude ranges for the selected regions are (-44 to -23, 25 to 46), (55 to 76, -39 to -18), and (-174 to -153, 5 to 26) respectively. We test all models not only on future data but also on a different domain (-180 to -159, -40 to -59) in the South Pacific Ocean from 01/01/2016 to 12/15/2016.
Prediction Performance.
Table 4 shows the RMSE and ESE of ResNets (U-nets) and the equivariant Equ-ResNets (U-nets) on test sets whose time range and spatial domain differ from the training set. All equivariant models outperform the non-equivariant baseline on RMSE, and Equ-ResNet achieves the lowest RMSE. For ESE, only one Equ-ResNet (U-net) variant is worse than the baseline. It is also remarkable that some Equ models have significantly lower ESE than the others, suggesting that they correctly learn the statistical distribution of ocean currents.
Comparison with Data Augmentation.
Comparison with Data Augmentation. We also compare Equ-ResNets (U-nets) with ResNets (U-nets) trained with data augmentation (Augm) in Table 4. In all cases, the equivariant models outperform the baselines trained with data augmentation. We find that data augmentation sometimes improves slightly on RMSE, but not as much as the equivariant models, and ESE is uniformly worse for models trained with data augmentation than even the baselines. In contrast, the equivariant models have much better ESE than the baselines with or without augmentation. We believe data augmentation presents a trade-off in learning: though the model may become less sensitive to the transformations we consider, it must be trained longer on many more samples, and it may not have enough capacity to learn the symmetry from the augmented data and the dynamics of the fluids at the same time. By comparison, equivariant architectures do not have this issue.
Table 4: RMSE and ESE on the ocean current data, evaluated on a test set from a future time range and on a test set from a different spatial domain (mean ± std over five runs).
| RMSE (future time) | RMSE (new domain) | ESE (future time) | ESE (new domain)
---|---|---|---|---
ResNet | 0.71±0.07 | 0.72±0.04 | 0.83±0.06 | 0.75±0.11
Augm | 0.70±0.01 | 0.70±0.07 | 1.06±0.06 | 1.06±0.04
Augm | 0.76±0.02 | 0.71±0.01 | 1.08±0.08 | 1.05±0.8
Augm | 0.73±0.01 | 0.69±0.01 | 0.94±0.01 | 0.86±0.01
Augm | 0.97±0.06 | 0.92±0.04 | 0.85±0.03 | 0.95±0.11
Equ | 0.68±0.06 | 0.68±0.16 | 0.75±0.06 | 0.73±0.08
Equ | 0.66±0.14 | 0.68±0.11 | 0.84±0.04 | 0.85±0.14
Equ | 0.69±0.01 | 0.70±0.08 | 0.43±0.15 | 0.28±0.20
Equ | 0.63±0.02 | 0.68±0.21 | 0.44±0.05 | 0.42±0.12
U-net | 0.70±0.13 | 0.73±0.10 | 0.77±0.12 | 0.73±0.07
Augm | 0.68±0.02 | 0.68±0.01 | 0.85±0.04 | 0.83±0.04
Augm | 0.69±0.02 | 0.67±0.10 | 0.78±0.03 | 0.86±0.02
Augm | 0.79±0.01 | 0.70±0.01 | 0.79±0.01 | 0.78±0.02
Augm | 0.71±0.01 | 0.77±0.02 | 0.84±0.01 | 0.77±0.02
Equ | 0.66±0.10 | 0.67±0.03 | 0.73±0.03 | 0.82±0.13
Equ | 0.63±0.08 | 0.66±0.09 | 0.74±0.05 | 0.79±0.04
Equ | 0.68±0.05 | 0.69±0.02 | 0.42±0.02 | 0.47±0.07
Equ | 0.65±0.09 | 0.69±0.05 | 0.45±0.13 | 0.43±0.05
Figure 3 shows the ground truth and the ocean currents predicted by different models at selected time steps. We can see that the equivariant models' predictions are more accurate and contain more details than the baselines'. Thus, incorporating symmetry into deep learning models can improve the prediction accuracy of ocean currents. The most recent work on this dataset is de Bezenac et al. [15], which combines a warping scheme and a U-net to predict temperature. Since our models also apply to advection-diffusion systems, we additionally investigated the task of ocean temperature field prediction and observe that Equ-Unet performs slightly better than de Bezenac et al. [15]. For additional results, see Appendix E.
6 Conclusion and Future work
We develop methods to improve the generalization of deep sequence models for learning physical dynamics. We incorporate various symmetries by designing equivariant neural networks and demonstrate their superior performance on 2D time series prediction both theoretically and experimentally. Our designs obtain improved physical consistency for predictions. In the case of transformed test data, our models generalize significantly better than their non-equivariant counterparts. Importantly, all of our equivariant models can be combined and can be extended to 3D. The group also acts on the boundary conditions and external forces of a system; if these are invariant under the group, then the system is strictly invariant as in Section 2.3, and if not, one must consider a family of solutions to retain equivariance. To the best of our knowledge, there does not exist a single model with equivariance to the full symmetry group of the Navier–Stokes equations. Building one is possible but non-trivial, and we continue to work on combining different equivariances. Future work also includes speeding up the scale-equivariant models and incorporating other symmetries into DL models.
Acknowledgments
This work was supported in part by Google Faculty Research Award, NSF Grant #2037745, and the U. S. Army Research Office under Grant W911NF-20-1-0334. The Titan Xp used for this research was donated by the NVIDIA Corporation. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We also thank Dragos Bogdan Chirila for providing the turbulent flow data.
References
- Anderson et al. [2019] Brandon Anderson, Truong-Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. In Advances in neural information processing systems (NeurIPS), 2019.
- Anderson and Wendt [1995] John David Anderson and J Wendt. Computational fluid dynamics, volume 206. Springer, 1995.
- Bao and Song [2019] Erkao Bao and Linqi Song. Equivariant neural networks and equivarification. arXiv preprint arXiv:1906.07172, 2019.
- Bekkers [2020] Erik J Bekkers. B-spline cnns on lie groups. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1gBhkBFDH.
- Beucler et al. [2019] Tom Beucler, Michael Pritchard, Stephan Rasp, Pierre Gentine, Jordan Ott, and Pierre Baldi. Enforcing analytic constraints in neural-networks emulating physical systems. arXiv preprint arXiv:1909.00912, 2019.
- Chen et al. [2018] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in neural information processing systems, pages 6571–6583, 2018.
- Chidester et al. [2018] Benjamin Chidester, Minh N. Do, and Jian Ma. Rotation equivariance and invariance in convolutional neural networks. arXiv preprint arXiv:1805.12301, 2018.
- Chirila [2018] Dragos Bogdan Chirila. Towards lattice Boltzmann models for climate sciences: The GeLB programming language with applications. PhD thesis, University of Bremen, 2018.
- Cohen and Welling [2016a] Taco S. Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning (ICML), pages 2990–2999, 2016a.
- Cohen and Welling [2016b] Taco S. Cohen and Max Welling. Steerable CNNs. arXiv preprint arXiv:1612.08498, 2016b.
- Cohen et al. [2019a] Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. In Advances in Neural Information Processing Systems, pages 9142–9153, 2019a.
- Cohen et al. [2019b] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97, pages 1321–1330, 2019b.
- Dao et al. [2019] Tri Dao, Albert Gu, Alexander J Ratner, Virginia Smith, Christopher De Sa, and Christopher Ré. A kernel theory of modern data augmentation. Proceedings of machine learning research, 97:1528, 2019.
- Day [1994] Richard H. Day. Complex economic dynamics-vol. 1: An introduction to dynamical systems and market mechanisms. MIT Press Books, 1, 1994.
- de Bezenac et al. [2018] Emmanuel de Bezenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=By4HsfWAZ.
- Dieleman et al. [2016] Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symmetry in convolutional neural networks. In International Conference on Machine Learning (ICML), 2016.
- Fang et al. [2018] Rui Fang, David Sondak, Pavlos Protopapas, and Sauro Succi. Deep learning for turbulent channel flow. arXiv preprint arXiv:1812.02241, 2018.
- Finn et al. [2016] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pages 64–72, 2016.
- Finzi et al. [2020] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. arXiv preprint arXiv:2002.12880, 2020.
- He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
- Hethcote [2000] Herbert W Hethcote. The mathematics of infectious diseases. SIAM review, 42(4):599–653, 2000.
- Izhikevich [2007] Eugene M. Izhikevich. Dynamical systems in neuroscience. MIT press, 2007.
- Jaiswal et al. [2019] Ayush Jaiswal, Daniel Moyer, Greg Ver Steeg, Wael AbdAlmageed, and Premkumar Natarajan. Invariant representations through adversarial forgetting. arXiv preprint arXiv:1911.04060, 2019.
- Kim and Hoefer [1990] Ihn S Kim and Wolfgang JR Hoefer. A local mesh refinement algorithm for the time domain-finite difference method using maxwell’s curl equations. IEEE Transactions on Microwave Theory and Techniques, 38(6):812–815, 1990.
- Kim and Lee [2020] Junhyuk Kim and Changhoon Lee. Deep unsupervised learning of turbulence for inflow generation at various Reynolds numbers. Journal of Computational Physics, page 109216, 2020.
- Knapp [2002] Anthony W. Knapp. Lie Groups Beyond an Introduction, volume 140 of Progress in Mathematics. Birkhäuser, Boston, 2nd edition, 2002.
- Kondor and Trivedi [2018] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80, pages 2747–2755, 2018.
- Lang [2002] Serge Lang. Algebra. Springer, Berlin, 3rd edition, 2002.
- Lenc and Vedaldi [2015] Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 991–999, 2015.
- Ling et al. [2017] Julia Ling, Andrew Kurzawski, and Jeremy Templeton. Reynolds averaged turbulence modeling using deep neural networks with embedded invariance. Journal of Fluid Mechanics, 2017.
- Lisitsa et al. [2012] Vadim Lisitsa, Galina Reshetova, and Vladimir Tcheverda. Finite-difference algorithm with local time-space grid refinement for simulation of waves. Computational geosciences, 16(1):39–54, 2012.
- Madec et al. [2015] Gurvan Madec et al. NEMO ocean engine, 2015. Technical Note. Institut Pierre-Simon Laplace (IPSL), France. https://epic.awi.de/id/eprint/39698/1/NEMO_book_v6039.pdf.
- Mathieu et al. [2015] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
- Mattheakis et al. [2019] Marios Mattheakis, Pavlos Protopapas, D. Sondak, Marco Di Giovanni, and Efthimios Kaxiras. Physical symmetries embedded in neural networks. arXiv preprint arXiv:1904.08991, 2019.
- Mohan et al. [2019] Arvind Mohan, Don Daniel, Michael Chertkov, and Daniel Livescu. Compressed convolutional LSTM: An efficient deep learning framework to model high fidelity 3D turbulence. arXiv preprint arXiv:1903.00033, 2019.
- Morton et al. [2018] Jeremy Morton, Antony Jameson, Mykel J. Kochenderfer, and Freddie Witherden. Deep dynamical modeling and control of unsteady fluid flows. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
- Moyer et al. [2018] Daniel Moyer, Shuyang Gao, Rob Brekelmans, Aram Galstyan, and Greg Ver Steeg. Invariant representations without adversarial training. In Advances in Neural Information Processing Systems (NeurIPS), pages 9084–9093, 2018.
- Olver [2000] Peter J. Olver. Applications of Lie groups to differential equations, volume 107. Springer Science & Business Media, 2000.
- Oprea et al. [2020] Sergiu Oprea, P. Martinez-Gonzalez, A. Garcia-Garcia, John Alejandro Castro-Vargas, S. Orts-Escolano, J. Garcia-Rodriguez, and Antonis A. Argyros. A review on deep learning techniques for video prediction. ArXiv, abs/2004.05214, 2020.
- Raissi et al. [2017] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part I): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017.
- Raissi et al. [2019] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
- Ghosh and Gupta [2019] Rohan Ghosh and Anupam K. Gupta. Scale steerable filters for locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1906.03861, 2019.
- Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
- Sosnovik et al. [2020] Ivan Sosnovik, Michał Szmaja, and Arnold Smeulders. Scale-equivariant steerable networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJgpugrKPS.
- Strogatz [2018] Steven H. Strogatz. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. CRC press, 2018.
- Tompson et al. [2017] Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, and Ken Perlin. Accelerating Eulerian fluid simulation with convolutional networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70, pages 3424–3433, 2017.
- Villegas et al. [2017] Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, and Honglak Lee. Decomposing motion and content for natural video sequence prediction. In International Conference on Learning Representations (ICLR), 2017.
- Wang et al. [2019] Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. arXiv preprint arXiv:1911.08655, 2019.
- Weiler and Cesa [2019] Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. In Advances in Neural Information Processing Systems (NeurIPS), pages 14334–14345, 2019.
- Weiler et al. [2018] Maurice Weiler, Fred A. Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant CNNs. Computer Vision and Pattern Recognition (CVPR), 2018.
- Worrall and Welling [2019] Daniel Worrall and Max Welling. Deep scale-spaces: Equivariance over scale. In Advances in Neural Information Processing Systems (NeurIPS), pages 7364–7376, 2019.
- Worrall et al. [2017] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028–5037, 2017.
- Wu et al. [2019] Jin-Long Wu, Karthik Kashinath, Adrian Albert, Dragos Chirila, Prabhat, and Heng Xiao. Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems. Journal of Computational Physics, page 109209, 2019.
- Xue et al. [2016] Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in neural information processing systems (NeurIPS), pages 91–99, 2016.
Appendix A Additional Background on Group Theory
We give a brief overview of group theory and representation theory. For a more complete introduction to the topic see Lang [28]. We start with the definition of an abstract symmetry group.
Definition 2 (group).
A group of symmetries, or simply group, is a set $G$ together with a binary operation $\circ\colon G \times G \to G$ called composition satisfying three properties:
1. (identity) There is an element $e \in G$ such that $e \circ g = g \circ e = g$ for all $g \in G$,
2. (associativity) $(g_1 \circ g_2) \circ g_3 = g_1 \circ (g_2 \circ g_3)$ for all $g_1, g_2, g_3 \in G$,
3. (inverses) if $g \in G$, then there is an element $g^{-1} \in G$ such that $g \circ g^{-1} = g^{-1} \circ g = e$.
Definition 3 (Lie group).
A group $G$ is a Lie group if it is also a smooth manifold over $\mathbb{R}$ and the composition and inversion maps are smooth, i.e. infinitely differentiable.
Example 1.
Let $GL_2(\mathbb{R})$ be the set of invertible $2 \times 2$ real matrices. The set is closed under inversion, and matrix multiplication gives a well-defined composition. This is a 4-dimensional real Lie group.
Example 2.
Let where is rotation by and is reflection over the -axis. This is the group of symmetries of an equilateral triangle pointing along the -axis, see Figure 4.

Groups are abstract objects, but they become concrete when we let them act.
Definition 4 (action).
A group $G$ acts on a set $X$ if there is an action map $\cdot\colon G \times X \to X$ satisfying
1. $e \cdot x = x$ for all $x \in X$,
2. $g_1 \cdot (g_2 \cdot x) = (g_1 \circ g_2) \cdot x$ for all $x \in X$ and $g_1, g_2 \in G$.
Definition 5 (representation).
We say $V$ is a $G$-representation if $V$ is an $\mathbb{R}$-vector space and $G$ acts on $V$ by linear transformations, that is,
1. $g \cdot (v + w) = g \cdot v + g \cdot w$ for all $g \in G$ and $v, w \in V$,
2. $g \cdot (\lambda v) = \lambda (g \cdot v)$ for all $g \in G$, $\lambda \in \mathbb{R}$, and $v \in V$.
Example 3.
The group acts on , the set of points in an equilateral triangle, as in Figure 4. The vector space is both a -representation and a -representation.
The language of group theory allows us to formally define equivariance and invariance.
Definition 6 (invariant, equivariant).
Let $f\colon X \to Y$ be a function and $G$ be a group.
1. Assume $G$ acts on $X$. The function $f$ is $G$-invariant if $f(g \cdot x) = f(x)$ for all $x \in X$ and $g \in G$.
2. Assume $G$ acts on $X$ and $Y$. The function $f$ is $G$-equivariant if $f(g \cdot x) = g \cdot f(x)$ for all $x \in X$ and $g \in G$.
See Figure 1 for an illustration. Note that we often omit the different action maps of $G$ on $X$ and on $Y$ in our notation when they are clear from context.
We can combine and decompose representations in different ways.
Definition 7 (direct sum, tensor product).
Let $V$ and $W$ be $G$-representations.
1. The direct sum $V \oplus W$ has underlying set $V \times W$. As a vector space it has scalar multiplication $\lambda(v, w) = (\lambda v, \lambda w)$ and addition $(v_1, w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2)$. It is a $G$-representation with action $g \cdot (v, w) = (g \cdot v, g \cdot w)$.
2. The tensor product $V \otimes W$ is a $G$-representation with action $g \cdot (v \otimes w) = (g \cdot v) \otimes (g \cdot w)$.
Definition 8 (irreducible).
Let $V$ be a $G$-representation.
1. If $W$ is a subspace of $V$ that is closed under the action of $G$, i.e. $g \cdot w \in W$ for all $g \in G$ and $w \in W$, then we say $W$ is a subrepresentation.
2. If $\{0\}$ and $V$ itself are the only subrepresentations of $V$, then $V$ is irreducible.
Irreducible representations are the "prime" building blocks of representations. A compact Lie group is one which is closed and bounded. The rotation group $SO(2)$ is compact, but the group $GL_2(\mathbb{R})$ of Example 1 is not. All finite groups are also compact Lie groups. The following theorem vastly simplifies our understanding of the possible representations of compact Lie groups (see e.g. Knapp [26]).
Theorem 4 (Weyl’s Complete Reducibility Theorem).
Let $G$ be a compact real Lie group. Every finite-dimensional representation $V$ of $G$ is a direct sum of irreducible representations, $V \cong \bigoplus_i V_i$.
Thus to classify the possible finite-dimensional representations of $G$, one need only find all possible irreducible representations of $G$.
Appendix B Additional Theory
B.1 Equivariant Networks and Data Augmentation
A classic strategy for dealing with distributional shift by transformations in a group $G$ is to augment the training set by adding samples transformed under $G$, that is, using the new training set $\{(g \cdot x_i, g \cdot y_i) : g \in G,\ 1 \le i \le N\}$. We show that data augmentation has no advantage for a perfectly equivariant parameterized function $f_\theta$, since the training samples $(x, y)$ and $(g \cdot x, g \cdot y)$ are equivalent: $f_\theta$ learns the same from $(g \cdot x, g \cdot y)$ as from $(x, y)$, up to a possibly different sample weight. The following is a more formal statement of Proposition 1.
Proposition 5.
Let act on and . Let be a parameterized class of -equivariant functions differentiable with respect to . Let be a -equivariant loss function where acts on by , we have,
Proof.
Equality of the gradients follows equality of the functions ∎
In the case of RMSE and rotation or uniform motion, the loss function is invariant. That is, equivariant with . Thus the gradient for sample and is equal. In the case of scale, the loss function is equivariant with and . In that case, the sample is the same as the sample but with sample weight .
B.2 Adding Skip Connections Preserves Equivariance
We prove in general that adding skip connections to a network does not affect its equivariance with respect to linear actions in the following Proposition 6. Define $f_i$ as the functional mapping between layer $i-1$ and layer $i$.
Proposition 6.
Let the layer be a -representations for . Let be -equivariant for . Define recursively . Then is -equivariant.
Proof.
Assume is an equivariant function of for . Then by equivariance of and by linearity of the -action,
for . By induction, is equivariant with respect to . ∎
Both ResNet and U-net may be modeled as in Proposition 6, with some convolutional and activation components and some skip connections given by the identity map. Since the identity map is equivariant for any group action, we thus have:
Corollary 7.
If the layers of ResNet or U-net are -representations and the convolutional mappings and activation functions are -equivariant, then the entire network is -equivariant. ∎
Corollary 7 allows us to build equivariant convolutional networks for rotational and scaling transformations, which are linear actions.
B.3 Results on Uniform Motion Equivariance
In this section, we prove that for the combined convolution-activation layers of a CNN to be uniform motion equivariant, the CNN must be an affine function. We assume that the activation function is applied pointwise. That is, the same activation function is applied to every one-dimensional channel independently.
Proposition 8.
Let be a tensor of shape and be convolutional kernel of shape . Let be a convolutional layer which is equivariant with respect to arbitrary uniform motion for a constant tensor of the same shape as . That is for all for some fixed . Then the sum of the weights of is 1.
Proof.
Since is equivariant, . By linearity, . Then because is a constant vector field, . As is arbitrary, . ∎
For an activation function to be uniform motion equivariant, it must be a translation.
Proposition 9.
Let be a function satisfying . Then is a translation.
Proof.
Let . Then . Choosing gives ∎
Proposition 10.
Let and be as in Prop 8. Let be a convolutional layer with kernel and an activation function. Assume is piecewise differentiable. Then if the composition is equivariant with respect to arbitrary uniform motions, it is an affine map of the form where is a real number and .
Proof.
If is non-zero, then we can choose a tensor , and constant tensor full of , and such that and are any two real numbers. Let . As before . Equivariance thus implies
Note , since if , then implies . However is arbitrary. Let . Then
This holds for arbitrary and , and thus we find is everywhere differentiable with slope . So for some . We can then rescale the convolution kernel to get . ∎
Corollary 11 (Corollary 2).
If is a CNN alternating between convolutions and pointwise activations and the combined layers are uniform motion equivariant, then is affine.
Proof.
This follows from Proposition 9 and the fact that composition of affine functions is affine. ∎
Since our treatment is only for pointwise activation functions, it remains a possibility that more descriptive networks can be constructed using activation functions which span multiple channels.
Proposition 12 (Proposition 3).
A residual block is uniform motion equivariant if the residual connection is uniform motion invariant.
Proof.
We denote the uniform motion transformation by by . Let be an invariant residual connection which is a composition of convolution layers and activation functions. Then we compute
as desired. ∎
B.4 Results on Scale Equivariance
We show that a scale-invariant CNN in the sense of equation 1 would be extremely limited. Let be the rescaling group. It is isomorphic to . For a real number, gives an action of on . There is also, e.g., a two-dimensional representation
Proposition 13.
Let be a -equivariant kernel for a convolutional layer. Assume acts on the input layer by and output layer by . Assume that the input layer is padded with 0s. Then is 1x1.
Proof.
If then there exists such that is outside the radius of the kernel. So . Thus by equivariance, for some ,
∎
B.5 Equivariance Error.
In practice it is difficult to implement a model which is perfectly equivariant. This results in an equivariance error $EE(x) = \lVert f(g \cdot x) - g \cdot f(x) \rVert$. Given an input $x$ with true output $y$ and transformed data $(g \cdot x, g \cdot y)$, the transformed test error can be bounded using the untransformed test error and $EE$.
Proposition 14.
The transformed test error is bounded by the untransformed test error plus the equivariance error,
$$\lVert f(g \cdot x) - g \cdot y \rVert \;\le\; \lVert g \rVert_{\mathrm{op}}\,\lVert f(x) - y \rVert + EE(x). \tag{6}$$
Proof.
By the triangle inequality, $\lVert f(g \cdot x) - g \cdot y \rVert \le \lVert f(g \cdot x) - g \cdot f(x) \rVert + \lVert g \cdot f(x) - g \cdot y \rVert \le EE(x) + \lVert g \rVert_{\mathrm{op}}\,\lVert f(x) - y \rVert.$
∎
For uniform motion since . Consider and as flattened into a vector. denotes the operator norm. For , acting by on vector fields, . For scaling , .
B.6 Full Lists of Symmetries of Heat and NS Equations.
Symmetries of NS Equations. The Navier-Stokes equations are invariant under five different transformations (see e.g. [38]),
- Space translation: $T^{\mathrm{sp}}_{\mathbf{c}}\,\mathbf{w}(\mathbf{x}, t) = \mathbf{w}(\mathbf{x} - \mathbf{c}, t)$, $\mathbf{c} \in \mathbb{R}^2$,
- Time translation: $T^{\mathrm{time}}_{\tau}\,\mathbf{w}(\mathbf{x}, t) = \mathbf{w}(\mathbf{x}, t - \tau)$, $\tau \in \mathbb{R}$,
- Uniform motion: $T^{\mathrm{um}}_{\mathbf{c}}\,\mathbf{w}(\mathbf{x}, t) = \mathbf{w}(\mathbf{x} - \mathbf{c}t, t) + \mathbf{c}$, $\mathbf{c} \in \mathbb{R}^2$,
- Reflect/rotation: $T^{\mathrm{rot}}_{R}\,\mathbf{w}(\mathbf{x}, t) = R\,\mathbf{w}(R^{-1}\mathbf{x}, t)$, $R \in O(2)$,
- Scaling: $T^{\mathrm{sc}}_{\lambda}\,\mathbf{w}(\mathbf{x}, t) = \lambda\,\mathbf{w}(\lambda\mathbf{x}, \lambda^2 t)$, $\lambda \in \mathbb{R}_{>0}$.
Individually each of these types of transformations generates a group of symmetries of the system. Collectively, they form a 7-dimensional symmetry group.
Symmetries of the Heat Equation. The heat equation has an even larger symmetry group than the NS equations [38]. Let $\phi$ be a solution to the heat equation. Then the following are also solutions:
-
•
Space translation: , ,
-
•
Time translation: , ,
-
•
Galilean: ,
-
•
Reflect/Rotation: ,
-
•
Scaling: ,
-
•
Linearity: , and ,
-
•
Inversion: where .

B.7 Turbulence kinetic energy spectrum
The turbulence kinetic energy spectrum $E(k, t)$ is related to the mean turbulence kinetic energy as
$$\int_0^\infty E(k, t)\,\mathrm{d}k = \frac{1}{2}\,\overline{u_i'(t)\,u_i'(t)},$$
where $k$ is the wavenumber and $t$ is the time step. Figure 5 shows a theoretical turbulence kinetic energy spectrum plot. The spectrum describes the transfer of energy from large scales of motion to the small scales and provides a representation of the dependence of energy on frequency. Thus, the Energy Spectrum Error can indicate whether the predictions preserve the correct statistical distribution and obey the energy conservation law. A trivial example that illustrates why we need ESE: if a model simply outputs moving averages of the input frames, the accumulated RMSE of the predictions might not be high, but the ESE would be large because all the small and even medium eddies are smoothed out.
Appendix C Heat diffusion
2D Heat Equation. Let $T$ be a scalar field representing temperature. Then $T$ satisfies
$$\frac{\partial T}{\partial t} = \alpha \nabla^2 T,$$
where $\nabla^2 = \partial^2/\partial x^2 + \partial^2/\partial y^2$ is the two-dimensional Laplacian and $\alpha$ is the diffusivity.
The heat equation plays a major role in studying heat transfer, Brownian motion, and particle diffusion. We simulate the heat equation at various initial conditions and thermal diffusivities using the finite difference method and generate 6 scalar temperature fields. Figure 6 shows a heat diffusion process where the temperature inside a circle is higher than outside and the thermal diffusivity is 4. Since the heat equation is much simpler than the NS equations, a shallow CNN suffices to forecast the heat diffusion process.
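As an illustration of this data-generation step, the sketch below runs an explicit forward-Euler finite-difference simulation of the 2D heat equation with periodic boundaries; the grid size, time step, and initial condition are illustrative rather than the exact settings used for the dataset.

```python
# Explicit finite-difference simulation of T_t = alpha * Laplacian(T) with
# periodic boundaries (np.roll); stable since dt * alpha / dx^2 <= 0.25.
import numpy as np

def simulate_heat(T0, alpha=4.0, dx=1.0, dt=0.05, n_steps=100):
    """T0: (H, W) initial temperature field; returns (n_steps + 1, H, W)."""
    T = T0.copy()
    frames = [T.copy()]
    for _ in range(n_steps):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
        T = T + dt * alpha * lap                 # forward Euler update
        frames.append(T.copy())
    return np.stack(frames)

# Hot disk in a cooler background, similar in spirit to Figure 6.
y, x = np.mgrid[0:64, 0:64]
T0 = np.where((x - 32) ** 2 + (y - 32) ** 2 < 100, 1.0, 0.0)
frames = simulate_heat(T0)
```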

For heat diffusion, due to energy conservation, the spatial sum of the temperature field should remain constant over the entire diffusion process. We evaluate the physical consistency of the predictions using the L1 loss of the total thermal energy. Table 5 shows the prediction RMSE and thermal energy loss of the CNNs and three Equ-CNNs on three transformed test sets. The Equ-CNNs consistently outperform the CNNs on all three test sets.
RMSE (Thermal Energy Loss) | |||
---|---|---|---|
Mag | Rot | Scale | |
CNNs | 0.103 (4696.3) | 0.308 (1125.6) | 0.357 (1447.6) |
Equ-CNNs | 0.028 (107.7) | 0.153 (127.3) | 0.045 (396.6) |
Appendix D Implementation details
D.1 Datasets Description
Rayleigh-Bénard convection
Rayleigh–Bénard convection results from a horizontal layer of fluid heated from below and is a major feature of El Niño dynamics. The dataset comes from a two-dimensional turbulent flow simulated using the Lattice Boltzmann Method [8] at a fixed Rayleigh number. We divided each 1792×256 image into 7 square sub-regions of size 256×256, then downsampled them to 64×64 pixels. Figure 7 shows a snapshot from our RBC flow dataset. We generate the following test sets to evaluate the models' generalization ability.
- Uniform motion (UM): test sets transformed by adding random constant vectors to each sample.
- Magnitude (Mag): test sets transformed by multiplying each sample by a random value.
- Rotation (Rot): test sets transformed by randomly rotating each sample by a multiple of a fixed angle.
- Scale: test sets transformed by rescaling each sample by a random factor.


Ocean Currents
We used the reanalysis ocean current velocity data generated by the NEMO (Nucleus for European Modeling of the Ocean) simulation engine (data available at https://resources.marine.copernicus.eu/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024). We selected an area from each of the Atlantic, Indian, and North Pacific Oceans from 01/01/2016 to 08/18/2017 and extracted 64×64 sub-regions for our experiments. The corresponding latitude and longitude ranges for the selected regions are (-44 to -23, 25 to 46), (55 to 76, -39 to -18), and (-174 to -153, 5 to 26) respectively. We test all models not only on future data but also on a different domain (-180 to -159, -40 to -59) in the South Pacific Ocean from 01/01/2016 to 12/15/2016. The most recent work on this dataset is [15], which unified a warping scheme and a U-net to predict temperature. To compare our equivariant models with the state of the art, we also investigate temperature field prediction. Since the data going back to 2006 that [15] used are no longer available, we collect more recent temperature data from a square region (-50 to -20, 20 to 50) in the Atlantic Ocean from 01/01/2016 to 12/31/2017.
D.2 Experiments Setup
We tested our equivariant convolutional layers in two architectures, an 18-layer ResNet and a 13-layer U-net. One of our goals is to show that adding equivariance improves the physical accuracy of state-of-the-art dynamics prediction. ResNet and U-net are the popular state-of-the-art methods at the moment, and our equivariance techniques are well suited to their architecture. The reason we did not use recurrent models, such as Convolutional LSTM, is that they are slow to train, especially in our case where the input length is large; this does not fit our long-term goal of accelerating computation.
The input to each model is a tensor representing the past $k$ time steps of the velocity field. The output is a single velocity field. The value of $k$ is a hyperparameter we tuned (see Table 7 for the search range). To predict more time steps, we apply the model autoregressively, dropping the oldest time step and concatenating the prediction to the input.
To make the comparison fair, we adjust the hidden dimensions of the different equivariant models so that the number of parameters is about the same for either architecture; see Table 6. Table 7 gives the hyperparameter tuning ranges for our models. Note that the hidden dimension and the number of layers of the shallow CNNs for the heat diffusion task are also tuned.
The loss function used is the MSE between the predicted frames and the ground truth for the next several steps, where the number of accumulated steps is a hyperparameter we tuned. We use a 60%-20%-20% training-validation-test split in time and use the validation set for hyperparameter tuning based on the average prediction error. The training set corresponds to the first 60% of the entire dataset in time and the validation/test sets contain the following 40%. For fluid flows, we standardize the data by the average of the velocity vectors and the standard deviation of the L2 norm of the velocity vectors. For sea surface temperature, we use exactly the data preprocessing described in de Bezenac et al. [15].
ResNet | Reg | UM | Mag | Rot | Scale | U-net | Reg | UM | Mag | Rot | Scale |
---|---|---|---|---|---|---|---|---|---|---|---|
Params (millions) | 11.0 | 11.0 | 11.0 | 10.2 | 10.7 | 6.2 | 6.2 | 6.2 | 7.1 | 5.9 |
3.04 | 5.21 | 5.50 | 14.31 | 160.32 | 2.15 | 4.32 | 4.81 | 11.32 | 135.72 |
Learning rate | #Accum Errors | #Input frames | Batch Size | Hidden dim (CNNs) | #Layers (CNNs)
1e-6 – 1e-1 | 1–10 | 1–30 | 4–64 | 8–128 | 1–10
Appendix E Additional results
Table 8 shows the RMSEs of temperature predictions. Figure 8 shows the ground truth and the predicted velocity norm fields at selected time steps by U-net and the four Equ-Unets on the four transformed test samples. Figure 9 shows the ground truth and the predicted ocean currents at selected time steps by the regular ResNet and the four Equ-ResNets on the future-time test set.
CLSTM | Bézenac | ResNet | U-net | Equ | Equ | Equ | Equ | |
RMSE | 0.46 | 0.38 | 0.41 | 0.391 | 0.38 | 0.37 | 0.39 | 0.37 | 0.38 | 0.40 | 0.42 | 0.41 |




