A nonlocal physics-informed deep learning framework using the peridynamic differential operator
Abstract
The Physics-Informed Neural Network (PINN) framework introduced recently incorporates physics into deep learning, and offers a promising avenue for the solution of partial differential equations (PDEs) as well as identification of the equation parameters. The performance of existing PINN approaches, however, may degrade in the presence of sharp gradients, as a result of the inability of the network to capture the solution behavior globally. We posit that this shortcoming may be remedied by introducing long-range (nonlocal) interactions into the network’s input, in addition to the short-range (local) space and time variables. Following this ansatz, here we develop a nonlocal PINN approach using the Peridynamic Differential Operator (PDDO)—a numerical method which incorporates long-range interactions and removes spatial derivatives in the governing equations. Because the PDDO functions can be readily incorporated in the neural network architecture, the nonlocality does not degrade the performance of modern deep-learning algorithms. We apply nonlocal PDDO-PINN to the solution and identification of material parameters in solid mechanics and, specifically, to elastoplastic deformation in a domain subjected to indentation by a rigid punch, for which the mixed displacement–traction boundary condition leads to localized deformation and sharp gradients in the solution. We document the superior behavior of nonlocal PINN with respect to local PINN in both solution accuracy and parameter inference, illustrating its potential for simulation and discovery of partial differential equations whose solution develops sharp gradients.
keywords:
Deep learning, Peridynamic Differential Operator, Physics-Informed Neural Networks, Surrogate Models
1 Introduction
Deep learning has emerged as a powerful approach to computing-enabled knowledge in many fields [1], such as image processing and classification [2, 3, 4], search and recommender systems [5, 6], speech recognition [7], autonomous driving [8], and healthcare [9]. The particular needs of each application, collectively, have led to many different neural-network architectures, including Deep Neural Networks (DNN), Convolutional NNs (CNN), and Recurrent NNs (RNN) and their variants, including Long Short-Term Memory RNNs (LSTM). Some of these frameworks have also been employed for data-driven modeling in computational mechanics [10], including fluid mechanics and turbulent flow modeling [11], solid mechanics and constitutive modeling [12, 13, 14], and earthquake prediction and detection [15, 16]. These efforts have resulted in the availability of open-source deep-learning platforms, including Theano [17], TensorFlow [18], and PyTorch [19]. These software packages are highly efficient and ready to use on different platforms, from mobile devices to massively parallel cloud-based clusters, features that can be inherited in the development of tools for physics-informed deep learning [20].
Of particular interest to us are recent applications of deep learning in computational science and engineering, concerning the solution and discovery (identification) of partial differential equations describing various physical systems [21, 22, 23, 24, 25, 26]. Among these applications, a specific framework called Physics-Informed Neural Networks (PINN) [24] enables the construction of the solution space using feed-forward neural networks with space and time variables as the network’s input. The governing equations are enforced in the loss function using automatic differentiation [27]. It is a framework that permits solving partial differential equations (PDEs) and conducting parameter identification (inversion) from data. Multiple variations of this framework exist, such as Variational PINNs [28] and Parareal PINNs [29], which have been used for physics-informed learning of the Burgers equation, the Navier–Stokes equations, and the Schrödinger equation.
Recently, PINN has been applied for inversion and discovery in solid mechanics [14]. While the method provides accurate and robust reconstructions and parameter estimates when the solution is smooth, the performance degrades in the presence of sharp gradients in the strain or stress fields. The emergence of near-discontinuities in the solution can occur for several reasons, including shear-band localization, crack propagation, and the presence of “mixed” displacement–traction boundary conditions. In the latter case, the point at which the boundary condition changes type often gives rise to stress concentration or even a stress singularity. In these cases, existing PINN approaches are much less accurate as a result of the inability of the network to capture the solution behavior globally. We posit that this shortcoming may be remedied by introducing long-range (nonlocal) interactions into the network’s input, in addition to the short-range (local) space and time variables.
Here, we propose to use the Peridynamic Differential Operator (PDDO) [30, 31] to construct nonlocal neural networks with long-range interactions. Peridynamics, a nonlocal theory, was first introduced as an alternative to the local classical continuum mechanics to incorporate long-range interactions and to remove spatial derivatives in the governing equations [32, 33, 34, 35]. It has been shown to be well suited to model crack initiation and propagation [36]. It has also been shown that the peridynamic governing equations can be derived by replacing the local spatial derivatives in the Navier displacement equilibrium equations with their nonlocal representation using PDDO [30, 37, 31]. PDDO has an analytical form in terms of spatial integrals for a point with a symmetric interaction domain or support region. The PD functions can be easily incorporated into the nonlocal physics-informed deep learning framework: they are generated in discrete form during the preprocessing phase, and therefore they do not interfere with the deep-learning architectures, keeping their performance intact.
The outline of the paper is as follows. In Section 2 we give a brief overview of the established (local) PINN framework, and its application to solid mechanics problems. In Section 3 we propose and describe an extension of the local (short-range) PINN to a nonlocal (long-range) PINN framework using PDDO. In Section 4 we present the application of both local and nonlocal PINN to a representative example of elastoplastic deformation, corresponding to the indentation of a body by a rigid punch—an example that illustrates the effects of sharp gradients as a result of mixed displacement–traction boundary conditions. Finally, in Section 5 we discuss the results and summarize the main conclusions.
2 Physics-Informed Deep Learning in Solid Mechanics
In this section we provide a brief overview of the established (local) PINN framework [24], and its application to forward modeling and parameter identification in solid mechanics, as described by elastoplasticity.
2.1 Basics of the PINN framework
In the PINN framework [24], the solution space is constructed by a deep neural network with the independent variables (e.g., coordinates $\mathbf{x}$) as the network inputs. In this feed-forward network, each layer outputs data as inputs for the next layer through nested transformations. Corresponding to the vector of input variables $\mathbf{x}$, the output values $\mathbf{y}$ can be mathematically expressed as

$\mathbf{z}^0 = \mathbf{x}, \qquad \mathbf{z}^l = \text{actf}\!\left(\mathbf{W}^l \mathbf{z}^{l-1} + \mathbf{b}^l\right), \ \ l = 1, \dots, L-1, \qquad \mathbf{y} = \mathbf{W}^L \mathbf{z}^{L-1} + \mathbf{b}^L \qquad (2.1)$

where $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{z}^l$ represent the inputs, final outputs and hidden-layer outputs of the network, and $\mathbf{W}^l$ and $\mathbf{b}^l$ represent the weights and biases of each layer, respectively. Note that lowercase and capital boldface letters are used to reflect vector and matrix quantities, while scalars are shown with italic fonts. The activation function is denoted by actf; it renders the network nonlinear with respect to the inputs.

The trained network $\mathbf{y}(\mathbf{x})$ can be considered as an approximate solution to the governing PDE. It defines a mapping from inputs to the field variable in the form of a multi-layer deep neural network, i.e., $\mathbf{y} = \mathcal{N}(\mathbf{x}; \boldsymbol{\Theta})$, with $\boldsymbol{\Theta} = \{\mathbf{W}^l, \mathbf{b}^l\}$ representing the set of all network parameters. The network inputs can be temporal and spatial variables in reference to a Cartesian coordinate system, i.e., $\mathbf{x} = (x, y)$ in 2D.
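To make this construction concrete, the following is a minimal TensorFlow sketch of such a network; the helper name make_network and the default sizes are our illustrative choices, not values prescribed by this section:

```python
import tensorflow as tf

def make_network(n_inputs=2, n_outputs=1, n_layers=4, n_units=100):
    """Feed-forward network of Eq. (2.1): nested dense layers
    z^l = actf(W^l z^{l-1} + b^l), with a linear output layer."""
    layers = [tf.keras.layers.Dense(n_units, activation="tanh",
                                    input_shape=(n_inputs,))]
    layers += [tf.keras.layers.Dense(n_units, activation="tanh")
               for _ in range(n_layers - 1)]
    layers += [tf.keras.layers.Dense(n_outputs)]  # linear output layer
    return tf.keras.Sequential(layers)

u_net = make_network()  # maps (x, y) -> u(x, y)
```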
In the PINN framework, the physics, described by a partial differential equation $\mathcal{P}u = 0$ with $\mathcal{P}$ as the partial differential operator, is incorporated in the loss or cost function along with the training data as

$\mathcal{L} = \left| u - u^{\ast} \right|_{\mathcal{I}} + \left| \mathcal{P}u - \mathcal{P}^{\ast} \right|_{\mathcal{I}} \qquad (2.2)$

where $\mathcal{I}$ is the training dataset (which can be inside the domain or on the boundary), $\mathcal{P}^{\ast}$ represents the expected (true) value for the differential operation at any given training or sampling point, and $|\square|_{\mathcal{I}}$ denotes a mean-squared-error measure evaluated over $\mathcal{I}$. In all modern implementations of the deep-learning framework, such as Theano [17], TensorFlow [18] and MXNet [38], the partial derivatives in $\mathcal{P}$ can be performed using automatic differentiation (AD) [27]—a fundamental aspect of the PINN architecture.
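To illustrate how AD enters the loss, the sketch below evaluates a second-order residual with tf.GradientTape; the Laplacian used here is a stand-in operator chosen only for brevity, not the operator of this paper:

```python
import tensorflow as tf

def pde_residual(model, x, y):
    """Residual of a stand-in operator P(u) = u_xx + u_yy, evaluated with AD."""
    with tf.GradientTape(persistent=True) as t2:
        t2.watch([x, y])
        with tf.GradientTape(persistent=True) as t1:
            t1.watch([x, y])
            u = model(tf.stack([x, y], axis=1))
        u_x = t1.gradient(u, x)
        u_y = t1.gradient(u, y)
    u_xx = t2.gradient(u_x, x)
    u_yy = t2.gradient(u_y, y)
    return u_xx + u_yy

def loss_fn(model, x, y, u_true):
    """Data misfit plus PDE residual, in the spirit of Eq. (2.2)."""
    u_pred = model(tf.stack([x, y], axis=1))[:, 0]
    return (tf.reduce_mean(tf.square(u_pred - u_true)) +
            tf.reduce_mean(tf.square(pde_residual(model, x, y))))
```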
Different optimization algorithms exist for training a neural network; these include Adagrad [39] and Adam [40]. Several algorithmic parameters affect the rate of convergence of network training. The algorithmic parameters include batch-size, epochs, shuffle and patience. Batch-size controls the number of samples from a dataset used to evaluate one gradient update. A batch-size of 1 corresponds to pure stochastic gradient descent. One epoch is one round of training on a dataset. If a dataset is shuffled, then a new round of training (epoch) results in an updated parameter set because the batch gradients are evaluated on different batches. It is common to reshuffle a dataset many times and perform the back-propagation updates.
The optimizer may, however, stop earlier if it finds that new rounds of epochs are not improving the loss function. This situation is described with the keyword patience. It primarily occurs because the loss function is nonconvex. Therefore, the training needs to be tested with different starting points and in different directions to build confidence in the parameters evaluated from minimization of the loss function on a given dataset. Patience is the parameter that controls when the optimizer should stop the training. There are three basic strategies to train the network: (1) generate a sufficiently large number of datasets and perform a one-epoch training on each dataset, (2) work on one dataset over many epochs by reshuffling the data, and (3) a combination of these. When dealing with synthetic data, all approaches are feasible. In the original work on PINN [24], the first strategy was used to train the model, with datasets being generated at random space locations at each epoch. This strategy, however, is often not applicable in real-world applications, especially when measurements are collected from sensors installed at fixed and limited spatial locations.
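For instance, strategy (2) maps directly onto a standard Keras training loop, with patience handled by an early-stopping callback; the toy dataset below is a placeholder:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in dataset: coordinates and a target field sampled at fixed sensors.
X_train = np.random.rand(10000, 2).astype("float32")
y_train = np.sin(X_train[:, :1]) * np.cos(X_train[:, 1:])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(100, activation="tanh"),
    tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# `patience`: stop if the loss has not improved for 100 consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=100)
model.fit(X_train, y_train,
          batch_size=64,      # samples per gradient update
          epochs=20000,       # rounds of training over the dataset
          shuffle=True,       # reshuffle between epochs
          callbacks=[early_stop], verbose=0)
```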
2.2 Solid mechanics with elastoplastic deformation
In the absence of body forces and neglecting inertia, the balance of linear momentum takes the form:

$\sigma_{ij,j} = 0 \qquad (2.3)$

for $\mathbf{x} \in \Omega$, where $\sigma_{ij}$ is the Cauchy stress tensor, the subscript after a comma denotes differentiation, and a repeated subscript implies summation.

The linear elastic response of the material can be described by the stress–strain relations as

$\sigma_{ij} = s_{ij} + \delta_{ij}\, p \qquad (2.4)$

where the pressure or volumetric stress is

$p = \tfrac{1}{3}\sigma_{kk} = \left(\lambda + \tfrac{2}{3}\mu\right) \varepsilon_{kk} \qquad (2.5)$

and the deviatoric stress tensor is

$s_{ij} = 2\mu\, e_{ij} \qquad (2.6)$

in which

$e_{ij} = \varepsilon_{ij} - \tfrac{1}{3}\delta_{ij}\,\varepsilon_{kk} \qquad (2.7)$

with the strain tensor defined as

$\varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right) \qquad (2.8)$

where $u_i$ are the components of the displacement field, and $\lambda$ and $\mu$ are the Lamé elastic constants.
The nonlinear material response follows the classical description of elastoplastic behavior [42]. In particular, we adopt the von Mises flow theory with a yield surface defined as

$\mathcal{F} = \bar{\sigma} - \sigma_Y - H\,\bar{\varepsilon}^p = 0 \qquad (2.9)$

in which $\sigma_Y$ is the initial yield stress, $H$ is the work hardening parameter, $\bar{\varepsilon}^p$ is the equivalent plastic strain, and $\bar{\sigma}$ is the effective stress. The plastic deformation occurs in the direction normal to the yield surface, i.e., $\dot{\varepsilon}^p_{ij} = \dot{\gamma}\, \partial\mathcal{F}/\partial\sigma_{ij}$. We decompose the deviatoric strain tensor into its elastic and plastic components,

$e_{ij} = e^e_{ij} + e^p_{ij} \qquad (2.10)$

To account for plastic deformation, the equations describing the linear elastic material response, Eq. (2.6), can be rewritten as

$s_{ij} = 2\mu\left(e_{ij} - e^p_{ij}\right) \qquad (2.11)$

and

$e^p_{ij} = \dfrac{3\,\bar{\varepsilon}^p}{2\,\bar{\sigma}}\, s_{ij} \qquad (2.12)$

The effective stress is defined as

$\bar{\sigma} = \sqrt{3 J_2} \qquad (2.13)$

with

$J_2 = \tfrac{1}{2}\, s_{ij} s_{ij} \qquad (2.14)$

The equivalent plastic strain $\bar{\varepsilon}^p$ can be obtained from Eq. (2.9), Eq. (2.11) and Eq. (2.12) as

$\bar{\varepsilon}^p = \dfrac{3\mu\,\bar{e} - \sigma_Y}{3\mu + H} \qquad (2.15)$

where

$\bar{e} = \sqrt{\tfrac{2}{3}\, e_{ij} e_{ij}} \qquad (2.16)$
For linear elasticity under plane-strain conditions, the transverse component of strain, $\varepsilon_{zz}$, is identically equal to zero, and the transverse normal component of stress is evaluated as $\sigma_{zz} = \nu\left(\sigma_{xx} + \sigma_{yy}\right)$. For elastoplasticity, however, $\sigma_{zz}$ is not predefined while $\varepsilon_{zz}$ remains identically equal to zero.
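For concreteness, the arithmetic implied by Eqs. (2.13)–(2.16) can be written in a few lines of NumPy; this is our sketch of the standard von Mises update with linear isotropic hardening, not code from the paper:

```python
import numpy as np

def equivalent_plastic_strain(e_dev, mu, sigma_Y, H):
    """Equivalent plastic strain from Eq. (2.15); e_dev is the 3x3 deviatoric
    strain tensor, mu the shear modulus, sigma_Y the yield stress, and H the
    hardening parameter."""
    e_bar = np.sqrt(2.0 / 3.0 * np.tensordot(e_dev, e_dev))   # Eq. (2.16)
    ep_bar = (3.0 * mu * e_bar - sigma_Y) / (3.0 * mu + H)    # Eq. (2.15)
    return max(ep_bar, 0.0)  # zero if the yield condition is not reached
```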
2.3 Local PINN for elastoplasticity
Here we apply the PINN framework to the solution and inference of two-dimensional quasi-static mechanics. The input variables to the feed-forward neural network are the coordinates $x$ and $y$, and the output variables are the components of the displacement, $u_x$, $u_y$, strain tensor, $\varepsilon_{xx}$, $\varepsilon_{yy}$, $\varepsilon_{xy}$, and stress tensor, $\sigma_{xx}$, $\sigma_{yy}$, $\sigma_{xy}$. We define the loss function for linear elasticity as:

$\mathcal{L} = \sum_{v} \left| v - v^{\ast} \right|_{\mathcal{I}_v} + \left| \sigma_{xx,x} + \sigma_{xy,y} \right|_{\mathcal{I}} + \left| \sigma_{xy,x} + \sigma_{yy,y} \right|_{\mathcal{I}} + \sum_{i,j} \left| \sigma_{ij} - \lambda\,\delta_{ij}\,\varepsilon_{kk} - 2\mu\,\varepsilon_{ij} \right|_{\mathcal{I}}, \quad v \in \{u_x, u_y, \varepsilon_{xx}, \varepsilon_{yy}, \varepsilon_{xy}, \sigma_{xx}, \sigma_{yy}, \sigma_{xy}\} \qquad (2.17)$

where the plain and starred ($\ast$) quantities refer to predicted and true values, respectively. The set $\mathcal{I}$ contains all sampling nodes. The set $\mathcal{I}_v$ contains all sampling nodes for variable $v$ where actual data exist. The terms in the loss function represent measures of the error in the displacement, strain and stress fields, the equilibrium equations, and the constitutive relations.
Similarly, the loss function for elastoplasticity is:

$\mathcal{L} = \sum_{v} \left| v - v^{\ast} \right|_{\mathcal{I}_v} + \left| \sigma_{xx,x} + \sigma_{xy,y} \right|_{\mathcal{I}} + \left| \sigma_{xy,x} + \sigma_{yy,y} \right|_{\mathcal{I}} + \sum_{i,j} \left| s_{ij} - 2\mu\left(e_{ij} - e^p_{ij}\right) \right|_{\mathcal{I}} + \left| \bar{\varepsilon}^p - \dfrac{3\mu\,\bar{e} - \sigma_Y}{3\mu + H} \right|_{\mathcal{I}} \qquad (2.18)$

where the first sum runs over all output variables with available data, and the last two terms enforce the elastoplastic constitutive relations of Eqs. (2.11)–(2.16).
These loss functions are used for deep-learning-based solution of the governing PDEs as well as for identification of the model parameters. The constitutive relations and governing equations are tested at all sampling (collocation) points, while data can be imposed selectively. For the solution of the governing PDEs, the material parameters are treated as constant values in the network. For model identification, however, they are treated as network parameters that change during the training phase (see Fig. 1). TensorFlow [18] permits such variables to be defined as Constant (PDE solution) or Variable (parameter identification) objects, respectively.
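A minimal TensorFlow sketch of this distinction (the initial values are placeholders):

```python
import tensorflow as tf

# PDE solution: material parameters enter the loss as fixed constants.
lmbda = tf.constant(1.0)
mu = tf.constant(0.5)

# Parameter identification: the same symbols become trainable variables,
# updated by the optimizer alongside the network weights and biases.
lmbda = tf.Variable(1.0, trainable=True)
mu = tf.Variable(0.5, trainable=True)
```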
3 Nonlocal PINN Architecture with the Peridynamic Differential Operator
Here we propose and describe an extension of the local (short-range) PINN, with a single input $\mathbf{x}$, to a nonlocal neural network that employs input variables in the form of family members of point $\mathbf{x}$, defined as $H_{\mathbf{x}} = \{\mathbf{x}' \in \Omega : |\mathbf{x}' - \mathbf{x}| \le \delta\}$. Each point has its own unique family in its domain of interaction (an area in two-dimensional analysis). Given the relative position $\boldsymbol{\xi} = \mathbf{x}' - \mathbf{x}$ with reference to point $\mathbf{x}$, the nondimensional weight function $w(|\boldsymbol{\xi}|)$ represents the degree of interaction between the material points in each family. We define it as:

$w(|\boldsymbol{\xi}|) = e^{-4|\boldsymbol{\xi}|^2/\delta^2} \qquad (3.1)$

where the parameter $\delta$, referred to as the horizon, defines the extent of the interaction domain (long-range interactions). In discrete form, the family members of point $\mathbf{x}_{(k)}$ are denoted as $\mathbf{x}_{(k)(j)}$, and their relative positions are defined as $\boldsymbol{\xi}_{(k)(j)} = \mathbf{x}_{(k)(j)} - \mathbf{x}_{(k)}$.
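The family construction is straightforward to implement; a minimal NumPy sketch, assuming the Gaussian weight of Eq. (3.1):

```python
import numpy as np

def build_family(points, k, delta):
    """Return the indices of the family members of point k (all points within
    the horizon delta) and their nondimensional weights w(|xi|)."""
    xi = points - points[k]                  # relative positions xi = x' - x
    dist2 = np.sum(xi**2, axis=1)
    members = np.where(dist2 <= delta**2)[0]
    weights = np.exp(-4.0 * dist2[members] / delta**2)
    return members, weights
```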
Silling [32] and Silling and Askari [33] introduced the Peridynamic (PD) theory for failure initiation and growth in materials under complex loading conditions. Recently, Madenci et al. [30, 31] introduced the Peridynamic Differential Operator (PDDO) to approximate the nonlocal representation of any function, such as a scalar field $f(\mathbf{x})$, and its derivatives at point $\mathbf{x}$, by accounting for the effect of its interactions with the other points $\mathbf{x}'$ in the domain of interaction (Fig. 2).

The derivation of PDDO employs Taylor Series Expansion (TSE), weighted integration and orthogonality (see Appendix A). The major difference between PDDO and other existing local and nonlocal numerical differentiation methods is that PDDO leads to analytical expressions for arbitrary-order derivatives in integral form for a point with symmetric location in a circle. These analytical expressions, when substituted into the Navier displacement equilibrium equations, allow one to recover the PD equation of motion derived by Silling [32], which was based on the balance laws of continuum mechanics. The spatial integration can be performed numerically with simple quadrature techniques. As shown by Madenci et al. [30, 31, 37], PDDO provides accurate evaluation of derivatives in the interior as well as near the boundaries of the domain.
The nonlocal PD representation of a function and its derivatives can be expressed in continuous and discrete forms as

$\dfrac{\partial^{p_1+p_2} f(\mathbf{x})}{\partial x^{p_1} \partial y^{p_2}} = \displaystyle\int_{H_{\mathbf{x}}} f(\mathbf{x} + \boldsymbol{\xi})\, g_2^{p_1 p_2}(\boldsymbol{\xi})\, dA \qquad (3.2)$

$f_{(k)} = \sum_{j} f_{(k)(j)}\, g_2^{00}\!\left(\boldsymbol{\xi}_{(k)(j)}\right) A_{(k)(j)} \qquad (3.3)$

$\dfrac{\partial^{p_1+p_2} f_{(k)}}{\partial x^{p_1} \partial y^{p_2}} = \sum_{j} f_{(k)(j)}\, g_2^{p_1 p_2}\!\left(\boldsymbol{\xi}_{(k)(j)}\right) A_{(k)(j)} \qquad (3.4)$

where $g_2^{p_1 p_2}(\boldsymbol{\xi})$ with $p_1, p_2 = 0, 1, 2$ ($p_1 + p_2 \le 2$) represent the PD functions obtained by enforcing the orthogonality condition of PDDO [30, 31], and the integration is performed over the interaction domain. The subscript $(k)(j)$ reflects the discrete value of $f$ at the $j$-th family member of point $\mathbf{x}_{(k)}$, and $A_{(k)(j)}$ is the area associated with that member.
A nonlocal neural network for a point $\mathbf{x}_{(k)}$ and its family members can then be expressed as

$\hat{\mathbf{f}}_{(k)} = \mathcal{N}\!\left(\mathbf{x}_{(k)(1)}, \mathbf{x}_{(k)(2)}, \dots, \mathbf{x}_{(k)(N)};\, \boldsymbol{\Theta}\right) \qquad (3.5)$

This network maps $\mathbf{x}_{(k)}$ and its family members to the corresponding values of $f$, i.e., $\hat{\mathbf{f}}_{(k)} = \left(f_{(k)(1)}, \dots, f_{(k)(N)}\right)$. With these output values, the nonlocal value of the field and its derivatives can be readily constructed as

$\dfrac{\partial^{p_1+p_2} f_{(k)}}{\partial x^{p_1} \partial y^{p_2}} = \mathbf{g}_{(k)}^{p_1 p_2} \cdot \hat{\mathbf{f}}_{(k)} \qquad (3.6)$

where $\mathbf{g}_{(k)}^{p_1 p_2}$ collects the area-weighted PD function values $g_2^{p_1 p_2}\!\left(\boldsymbol{\xi}_{(k)(j)}\right) A_{(k)(j)}$ over the family. Here, the summation over discrete family points in Eqs. (3.3)–(3.4) is expressed as a dot product. Note that if the horizon $\delta$ in the influence function (3.1) approaches zero, then the family of $\mathbf{x}_{(k)}$ reduces to the point itself and $f_{(k)} \to f(\mathbf{x}_{(k)})$, and we recover the local PINN architecture in Fig. 1.
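In an implementation, Eq. (3.6) reduces to one weighted sum per derivative order; a small sketch, where g_funcs holds precomputed, area-weighted PD function vectors for one family (the names are ours):

```python
import tensorflow as tf

def pddo_reconstruct(f_family, g_funcs):
    """Nonlocal value of f and its derivatives at a point: dot products of the
    network outputs at the family members (f_family, shape [N_fam]) with the
    precomputed PD function vectors g_funcs[(p1, p2)] of the same shape."""
    return {order: tf.reduce_sum(g * f_family) for order, g in g_funcs.items()}

# Example: values = pddo_reconstruct(f_fam, {(0, 0): g00, (1, 0): g10, (0, 1): g01})
```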
In our applications of PINN for solution and parameter inference, we make the distinction between two cases:
1. We can use PDDO to approximate only the function, and use automatic differentiation (AD) to evaluate the derivatives. We refer to this case as AD-PDDO-PINN.
2. We can instead use PDDO to approximate the function as well as its derivatives, instead of evaluating them from the network. We refer to this case as PDDO-PINN.
As we will see, the use of PDDO-PINN enables the use of activation functions (such as ReLU, whose second derivative vanishes almost everywhere) and network architectures that cannot be used with local PINN—a capability that may lead to increased computational efficiency (per epoch) since it does not rely on extended graph computations associated with AD.
4 Representative Example: Indentation of an Elastic or Elastoplastic Body
In this section, we apply the different PINN formalisms to the solution and inference of a solid mechanics problem described by plane-strain elastoplastic deformation. The problem is designed to reflect a common scenario in boundary value problems: the presence of mixed boundary conditions, in which part of the boundary is subject to Dirichlet (displacement) boundary conditions, while part of the boundary is subject to Neumann (traction) boundary conditions. The sharp transition in the type of boundary condition often leads to stress concentration (and sometimes a stress singularity). The problem we study here—indentation of an elastoplastic body—is subject to this stress concentration phenomenon, which poses a significant challenge to the application of existing deep learning techniques.
4.1 Problem description
We simulate the deformation of a square domain under plane-strain conditions, as a result of the indentation by a rigid punch (Fig. 4). The body is constrained by roller support conditions along the lateral boundaries, and subject to fixed zero displacement along the bottom boundary. The domain is a square of side $L$ with unit out-of-plane thickness. A rigid punch of width $a$, centered on the top boundary, indents the body with a prescribed vertical displacement $\Delta$. These boundary conditions can be expressed as

$u_x(0, y) = 0 \qquad (4.1a)$
$\sigma_{xy}(0, y) = 0 \qquad (4.1b)$
$u_x(L, y) = 0 \qquad (4.1c)$
$\sigma_{xy}(L, y) = 0 \qquad (4.1d)$
$u_x(x, 0) = 0 \qquad (4.1e)$
$u_y(x, 0) = 0 \qquad (4.1f)$
$u_y(x, L) = -\Delta \quad \text{for } |x - L/2| \le a/2 \qquad (4.1g)$
$\sigma_{yy}(x, L) = \sigma_{xy}(x, L) = 0 \quad \text{for } |x - L/2| > a/2 \qquad (4.1h)$
The material exhibits elastic or elastic-plastic deformation with strain hardening. The elastic modulus $E$, Poisson's ratio $\nu$, yield stress $\sigma_Y$ and hardening parameter $H$ of the material are specified, from which the Lamé elastic constants $\lambda$ and $\mu$ follow.
To generate synthetic (simulated) data to be used in the deep learning frameworks, we simulate the problem described above with the finite element method using COMSOL [43]. The domain is discretized with a uniform mesh of elements with quartic Lagrange polynomials.
The simulated displacement, strain and stress fields are computed for a purely linear elastic response (Fig. 5) and for elastic-plastic deformation (Fig. 6). It is apparent that the distributions of the strain and stress components for the elastoplastic case are significantly different from those of the elastic case, with more localized deformation underneath the rigid punch. As expected, the plastic-strain components are zero in most of the domain, except in the vicinity of the corners of the punch, where they exhibit sharp gradients—a feature that, as we will see, poses a challenge for the approximation of the solution with a neural network.
4.2 Local PINN results
We first apply the established (local) PINN framework for solution and parameter identification of the indentation problem described above [24, 14]. Training of the neural networks is performed with 10,000 training points (nodal values of the FEM solution). The convergence of the network training is sensitive to the choice of data normalization and network size. After a trial-and-error approach, we selected the network architectures and parameters that led to the lowest value of the loss function and the highest accuracy of the physical model parameters. The selected feed-forward neural network has 4 hidden layers, each with 100 neuron units, and employs the hyperbolic-tangent activation function between layers. We adopt batch training with a total number of 20,000 epochs and a batch-size of 64. We use the Adam optimizer with a learning rate initialized to 0.0005 and decreased gradually to 0.000001 for the last epoch.
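The learning-rate decay can be realized, for example, with an exponential schedule; the decay shape is our assumption, since only the initial and final rates are stated above:

```python
import tensorflow as tf

steps_per_epoch = 10000 // 64 + 1   # 10,000 training points, batch-size 64
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=5e-4,
    decay_steps=20000 * steps_per_epoch,   # total steps over 20,000 epochs
    decay_rate=1e-6 / 5e-4)                # reach 1e-6 at the final step
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```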
The local PINN predictions for elastic deformation do capture the high-gradient regions near the corners of the punch, but they are significantly diffused. The differences between the local PINN predictions and the true data are shown in Fig. 7. The Lamé coefficients $\lambda$ and $\mu$ identified by the network are within 3% of their true values (Fig. 8).
In the case of elastic-plastic deformation, the transverse stress $\sigma_{zz}$ depends on the plastic-strain components. Thus, the PINN architecture is defined with additional networks for $\sigma_{zz}$ and the plastic-strain variables. The error between the local PINN predictions and the true data is shown in Fig. 9. In contrast with the local PINN predictions for the elastic case, the predictions for this case show poor quantitative agreement with the exact solution. The identified material parameters are shown in Fig. 10: while the elastic Lamé coefficients $\lambda$ and $\mu$ and the yield stress $\sigma_Y$ are identified accurately, the method fails to identify the hardening parameter $H$. We speculate that this is due to the localized plastic deformation in narrow regions in the vicinity of the corners of the rigid punch (Fig. 6). Therefore, very few sampling points contribute to the corresponding terms of the loss function in the local PINN network.
4.3 Nonlocal PINN results
Given the relative success of the local PINN framework, but also the challenges faced in capturing the sharp gradients in the solution of the elastoplastic deformation problem, here we investigate the application of nonlocal PINN to this problem.
The selected feed-forward neural networks are identical to those used in local PINN: they have 4 hidden layers, each with 100 neuron units, and with hyperbolic-tangent activation functions. In the construction of the PD functions $g_2^{p_1 p_2}$, the TSE is truncated after the second-order derivatives (see Appendix A). The number of family members for each point depends on the order of approximation in the TSE; it is seven points in each dimension, resulting in $7 \times 7 = 49$ for a square horizon in 2D [31]. Therefore, we choose a maximum number of 49 members as the nonlocal input features. Depending on the location of $\mathbf{x}_{(k)}$, the influence (degree of interaction) of some of these points (family members) might be zero. However, they are all incorporated in the construction of the nonlocal neural network to simplify the implementation procedure.
In what follows we present the results of applying both the AD-PDDO-PINN and PDDO-PINN architectures to the indentation problem with elastic-plastic deformation.
4.3.1 AD-PDDO-PINN
The nonlocal deep neural network described by Eq. (3.5) is employed to construct approximations for the solution variables (the displacements, strains, stresses, and plastic-strain measures). They are evaluated as

$f_{(k)} = \mathbf{g}_{(k)}^{00} \cdot \hat{\mathbf{f}}_{(k)} \qquad (4.2)$

where $f$ represents any of the solution variables. The derivatives are evaluated using automatic differentiation (AD). Since $f_{(k)}$ is a nonlocal function of $\mathbf{x}_{(k)}$ and its family points $\mathbf{x}_{(k)(j)}$, the differentiation of $f_{(k)}$ is performed with respect to each family member using AD as

$\dfrac{\partial f_{(k)}}{\partial \mathbf{x}_{(k)(j)}} = \mathbf{g}_{(k)}^{00} \cdot \dfrac{\partial \hat{\mathbf{f}}_{(k)}}{\partial \mathbf{x}_{(k)(j)}} \qquad (4.3)$

In order to incorporate the effect of family members on the derivatives, the local AD differentiations are recast as

$\dfrac{\partial f_{(k)}}{\partial \mathbf{x}} = \sum_{j} \dfrac{\partial f_{(k)}}{\partial \mathbf{x}_{(k)(j)}} \qquad (4.4)$
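Our reading of Eqs. (4.3)–(4.4) in code form: the family coordinates are watched by the tape, and the partial derivatives are summed over the family (a sketch with illustrative names):

```python
import tensorflow as tf

def nonlocal_gradient(net, x_family, g00):
    """AD-PDDO derivative at point k: differentiate the nonlocal value
    f_k = g00 . f_family with respect to each family member's coordinates
    (Eq. 4.3) and sum the contributions over the family (Eq. 4.4)."""
    with tf.GradientTape() as tape:
        tape.watch(x_family)                   # shape [N_fam, 2]
        f_family = net(x_family)[:, 0]         # network outputs at the family
        f_k = tf.reduce_sum(g00 * f_family)    # Eq. (3.6), order (0, 0)
    df = tape.gradient(f_k, x_family)          # [N_fam, 2] partial derivatives
    return tf.reduce_sum(df, axis=0)           # Eq. (4.4): (df/dx, df/dy)
```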
The differences between the AD-PDDO-PINN predictions and the true solution for the elastoplastic deformation case are shown in Fig. 11, and the estimated elastoplastic model parameters $\lambda$, $\mu$, $\sigma_Y$ and $H$ are shown in Fig. 12. Both the solution and the model parameters are captured much more accurately than in the local PINN framework. In particular, the method reproduces the regions of high gradients in the solution, and is now able to accurately identify the hardening parameter $H$.
4.3.2 PDDO-PINN
We now employ the nonlocal deep neural network described by Eq. (3.5) to construct approximations for the solution variables and their derivatives. These derivatives are evaluated as

$\dfrac{\partial^{p_1+p_2} f_{(k)}}{\partial x^{p_1} \partial y^{p_2}} = \mathbf{g}_{(k)}^{p_1 p_2} \cdot \hat{\mathbf{f}}_{(k)} \qquad (4.5)$

where $f$ represents any of the solution variables.
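In discrete, batched form, Eq. (4.5) evaluates every derivative at every point with one elementwise product and row sum, with no AD; a sketch with matrix names of our choosing:

```python
import tensorflow as tf

def pddo_derivatives(F, G):
    """Derivatives at all points without AD. F holds the network outputs at
    each point's family ([N_pts, N_fam]); G maps a derivative order (p1, p2)
    to its precomputed PD-function matrix of the same shape."""
    return {order: tf.reduce_sum(Gp * F, axis=1) for order, Gp in G.items()}

# Example: d = pddo_derivatives(F, {(1, 0): G10, (0, 1): G01, (2, 0): G20})
```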
The errors in the PDDO-PINN solution for the elastoplastic deformation case are shown in Fig. 13, and the estimated elastoplastic model parameters $\lambda$, $\mu$, $\sigma_Y$ and $H$ are shown in Fig. 14. The overall performance is better than that of local PINN, but less accurate than that of AD-PDDO-PINN. An advantage of the PDDO-PINN framework, however, is that it does not rely on automatic differentiation; therefore, the evaluation of derivatives through Eq. (4.5) is faster for each epoch of training.
5 Discussion and Conclusions
The results of the previous section demonstrate the benefits of the nonlocal PINN framework in the reconstruction of the deformation and parameter identification for solid-mechanics problems with sharp gradients in the solution, compared with those obtained with the local PINN architecture. This improved performance is also apparent from examination of the evolution of the normalized loss function for the different architectures (Fig. 15), illustrating the faster convergence and lower final value of the loss function for the nonlocal PINN approaches.
In summary, we have introduced a nonlocal approach to Physics-Informed Neural Networks (PINN) using the Peridynamic Differential Operator (PDDO). In the limit when the interaction range approaches zero, the method reverts to the local PINN model. We have presented two versions of the proposed approach: one with automatic differentiation using the neural network (AD-PDDO-PINN), and the other with analytical evaluation of the derivatives relying on PDDO functions (PDDO-PINN). The PD functions can be readily and efficiently incorporated in the neural network architecture and, therefore, the nonlocality does not degrade the performance of modern deep-learning algorithms. We have applied both versions of nonlocal PINN to the solution and identification of material parameters in solid mechanics. Specifically, we focused on the solution and inference of linear-elastic and elastoplastic deformation in a domain subjected to indentation by a rigid punch. The resulting boundary value problem is challenging because of the mixed displacement–traction boundary conditions along the top boundary, which result in localized deformation and sharp gradients in the solution. We have shown that the PDDO framework is able to capture the stress and strain concentrations with global functions and, as a result, leads to the superior behavior of nonlocal PINN both in terms of the accuracy of the solution and the estimated model parameters. While many questions remain with regard to the selection of network size, order of the PDDO approximation and training optimization algorithms, these results suggest that nonlocal PINN may offer a powerful framework for simulation and discovery of partial differential equations whose solution develops sharp gradients.
Acknowledgments
RJ and EH conducted this work as a part of KFUPM-MIT collaborative agreement ‘Multiscale Reservoir Science’. EM and ACB performed this work as part of the ongoing research at the MURI Center for Material Failure Prediction through Peridynamics at the University of Arizona (AFOSR Grant No. FA9550-14-1-0073).
Appendix A PDDO Derivation
According to the second-order TSE in a two-dimensional space, the following expression holds

$f(\mathbf{x} + \boldsymbol{\xi}) = f(\mathbf{x}) + \xi_1 \dfrac{\partial f(\mathbf{x})}{\partial x_1} + \xi_2 \dfrac{\partial f(\mathbf{x})}{\partial x_2} + \dfrac{1}{2}\xi_1^2 \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1^2} + \dfrac{1}{2}\xi_2^2 \dfrac{\partial^2 f(\mathbf{x})}{\partial x_2^2} + \xi_1 \xi_2 \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1 \partial x_2} + R \qquad (A.1)$

where $R$ is the remainder. Multiplying each term with the PD functions $g_2^{p_1 p_2}(\boldsymbol{\xi})$ and integrating over the domain of interaction (family), $H_{\mathbf{x}}$, results in

$\displaystyle\int_{H_{\mathbf{x}}} f(\mathbf{x} + \boldsymbol{\xi})\, g_2^{p_1 p_2}(\boldsymbol{\xi})\, dA = \sum_{n_1 + n_2 \le 2} \dfrac{1}{n_1!\, n_2!} \dfrac{\partial^{n_1+n_2} f(\mathbf{x})}{\partial x_1^{n_1} \partial x_2^{n_2}} \displaystyle\int_{H_{\mathbf{x}}} \xi_1^{n_1} \xi_2^{n_2}\, g_2^{p_1 p_2}(\boldsymbol{\xi})\, dA \qquad (A.2)$

in which the point $\mathbf{x}$ is not necessarily symmetrically located in the domain of interaction. The initial relative position $\boldsymbol{\xi}$ between the material points $\mathbf{x}$ and $\mathbf{x}'$ can be expressed as $\boldsymbol{\xi} = \mathbf{x}' - \mathbf{x}$. This permits each point to have its own unique family with an arbitrary position. Therefore, the size and shape of each family can be different, and they significantly influence the degree of nonlocality. The degree of interaction between the material points in each family is specified by a nondimensional weight function $w(|\boldsymbol{\xi}|)$, which can vary from point to point. The interactions become more local with decreasing family size. Thus, the family size and shape are important parameters. In general, the family of a point can be nonsymmetric due to nonuniform spatial discretization. Each point has its own family members in the domain of interaction (family), and each member occupies an infinitesimally small entity such as a volume, an area, or a distance.
The PD functions are constructed such that they are orthogonal to each term in the TSE as

$\dfrac{1}{n_1!\, n_2!} \displaystyle\int_{H_{\mathbf{x}}} \xi_1^{n_1} \xi_2^{n_2}\, g_2^{p_1 p_2}(\boldsymbol{\xi})\, dA = \delta_{n_1 p_1}\, \delta_{n_2 p_2} \qquad (A.3)$

with $n_1, n_2, p_1, p_2 = 0, 1, 2$ ($n_1 + n_2 \le 2$, $p_1 + p_2 \le 2$), and $\delta_{np}$ is the Kronecker symbol. Enforcing the orthogonality conditions in the TSE leads to the nonlocal PD representation of the function itself and its derivatives as

$f(\mathbf{x}) = \displaystyle\int_{H_{\mathbf{x}}} f(\mathbf{x} + \boldsymbol{\xi})\, g_2^{00}(\boldsymbol{\xi})\, dA \qquad (A.4a)$

$\left\{ \dfrac{\partial f}{\partial x_1},\ \dfrac{\partial f}{\partial x_2} \right\} = \displaystyle\int_{H_{\mathbf{x}}} f(\mathbf{x} + \boldsymbol{\xi}) \left\{ g_2^{10}(\boldsymbol{\xi}),\ g_2^{01}(\boldsymbol{\xi}) \right\} dA \qquad (A.4b)$

$\left\{ \dfrac{\partial^2 f}{\partial x_1^2},\ \dfrac{\partial^2 f}{\partial x_2^2},\ \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \right\} = \displaystyle\int_{H_{\mathbf{x}}} f(\mathbf{x} + \boldsymbol{\xi}) \left\{ g_2^{20}(\boldsymbol{\xi}),\ g_2^{02}(\boldsymbol{\xi}),\ g_2^{11}(\boldsymbol{\xi}) \right\} dA \qquad (A.4c)$
The PD functions can be constructed as a linear combination of polynomial basis functions

$g_2^{p_1 p_2}(\boldsymbol{\xi}) = \sum_{q_1, q_2} a^{p_1 p_2}_{q_1 q_2}\, w_{q_1 q_2}(|\boldsymbol{\xi}|)\, \xi_1^{q_1} \xi_2^{q_2} \qquad (A.5)$

where $a^{p_1 p_2}_{q_1 q_2}$ are the unknown coefficients, $w_{q_1 q_2}(|\boldsymbol{\xi}|)$ are the influence functions, and $\xi_1$ and $\xi_2$ are the components of the vector $\boldsymbol{\xi}$. Assuming $w_{q_1 q_2}(|\boldsymbol{\xi}|) = w(|\boldsymbol{\xi}|)$ and incorporating the PD functions into the orthogonality equation lead to a system of algebraic equations for the determination of the coefficients as

$\mathbf{A}\, \mathbf{a} = \mathbf{b} \qquad (A.6)$

where

$A_{(n_1 n_2)(q_1 q_2)} = \displaystyle\int_{H_{\mathbf{x}}} w(|\boldsymbol{\xi}|)\, \xi_1^{n_1 + q_1}\, \xi_2^{n_2 + q_2}\, dA \qquad (A.7a)$

$a_{(q_1 q_2)(p_1 p_2)} = a^{p_1 p_2}_{q_1 q_2} \qquad (A.7b)$

$b_{(n_1 n_2)(p_1 p_2)} = n_1!\, n_2!\, \delta_{n_1 p_1}\, \delta_{n_2 p_2} \qquad (A.7c)$
After determining the coefficients via $\mathbf{a} = \mathbf{A}^{-1}\mathbf{b}$, the PD functions can be constructed. The detailed derivations and the associated computer programs can be found in [31]. The PDDO is nonlocal; however, in the limit as the horizon size approaches zero, it recovers the local differentiation, as proven by Silling and Lehoucq [35].
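As an illustration, the following NumPy sketch assembles and solves the system of Eqs. (A.6)–(A.7) for one family and evaluates the PD functions at its members, assuming a second-order TSE and the Gaussian weight of Eq. (3.1):

```python
import numpy as np
from math import factorial

def pd_functions(xi, areas, delta):
    """Solve A a = b for one family and return g2^{p1 p2} at the family members.
    xi: [N_fam, 2] relative positions; areas: [N_fam] member areas."""
    orders = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]
    w = np.exp(-4.0 * np.sum(xi**2, axis=1) / delta**2)       # weight function
    P = np.stack([xi[:, 0]**p * xi[:, 1]**q for p, q in orders], axis=1)
    A = P.T @ (w[:, None] * areas[:, None] * P)               # Eq. (A.7a)
    b = np.diag([factorial(p) * factorial(q) for p, q in orders])  # Eq. (A.7c)
    a = np.linalg.solve(A, b)                                 # coefficients
    g = (w[:, None] * P) @ a      # rows: members; columns: derivative orders
    return {orders[j]: g[:, j] for j in range(len(orders))}
```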
References
- [1] I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016. https://www.deeplearningbook.org
- [2] C. M. Bishop, Pattern Recognition and Machine Learning, Springer-Verlag, Berlin, Heidelberg, 2006.
- [3] A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, in: F. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (NIPS 2012), 2012, pp. 1097–1105.
- [4] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444. doi:10.1038/nature14539
- [5] D. Jannach, M. Zanker, A. Felfernig, G. Friedrich, Recommender Systems: An Introduction, Cambridge University Press, 2010.
- [6] S. Zhang, L. Yao, A. Sun, Y. Tay, Deep learning based recommender system: A survey and new perspectives, ACM Computing Surveys 52 (1) (2019) 1–38.
- [7] A. Graves, A. Mohamed, G. Hinton, Speech recognition with deep recurrent neural networks, in: IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 6645–6649.
- [8] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al., End to end learning for self-driving cars, arXiv:1604.07316 (2016).
- [9] R. Miotto, F. Wang, S. Wang, X. Jiang, J. T. Dudley, Deep learning for healthcare: review, opportunities and challenges, Briefings in Bioinformatics 19 (6) (2018) 1236–1246.
- [10] S. L. Brunton, J. N. Kutz, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, Cambridge University Press, 2019. doi:10.1017/9781108380690
- [11] M. P. Brenner, J. D. Eldredge, J. B. Freund, Perspective on machine learning for advancing fluid mechanics, Physical Review Fluids 4 (10) (2019) 100501. doi:10.1103/PhysRevFluids.4.100501
- [12] J. Ghaboussi, D. Sidarta, New nested adaptive neural networks (NANN) for constitutive modeling, Computers and Geotechnics 22 (1) (1998) 29–52.
- [13] T. Kirchdoerfer, M. Ortiz, Data-driven computational mechanics, Computer Methods in Applied Mechanics and Engineering 304 (2016) 81–101.
- [14] E. Haghighat, M. Raissi, A. Moure, H. Gomez, R. Juanes, A deep learning framework for solution and discovery in solid mechanics, arXiv:2003.02751 (2020).
- [15] P. M. DeVries, F. Viégas, M. Wattenberg, B. J. Meade, Deep learning of aftershock patterns following large earthquakes, Nature 560 (7720) (2018) 632–634. doi:10.1038/s41586-018-0438-y
- [16] Q. Kong, D. T. Trugman, Z. E. Ross, M. J. Bianco, B. J. Meade, P. Gerstoft, Machine learning in seismology: turning data into insights, Seismological Research Letters 90 (1) (2018) 3–14. doi:10.1785/0220180259
- [17] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, Y. Bengio, Theano: a CPU and GPU math expression compiler, in: Proceedings of the Python for Scientific Computing Conference (SciPy), Vol. 4, 2010, p. 3.
- [18] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, X. Zheng, TensorFlow: A system for large-scale machine learning, in: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), USENIX Association, Savannah, GA, 2016, pp. 265–283.
- [19] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., PyTorch: An imperative style, high-performance deep learning library, in: Advances in Neural Information Processing Systems (NeurIPS 2019), 2019, pp. 8024–8035.
- [20] E. Haghighat, R. Juanes, SciANN: A Keras wrapper for scientific computations and physics-informed deep learning using artificial neural networks, arXiv:2005.08803 (2020). https://sciann.com
- [21] J. Han, A. Jentzen, W. E, Solving high-dimensional partial differential equations using deep learning, Proceedings of the National Academy of Sciences 115 (34) (2018) 8505–8510. doi:10.1073/pnas.1718942115
- [22] Y. Bar-Sinai, S. Hoyer, J. Hickey, M. P. Brenner, Learning data-driven discretizations for partial differential equations, Proceedings of the National Academy of Sciences 116 (31) (2019) 15344–15349. doi:10.1073/pnas.1814058116
- [23] S. Rudy, A. Alla, S. L. Brunton, J. N. Kutz, Data-driven identification of parametric partial differential equations, SIAM Journal on Applied Dynamical Systems 18 (2) (2019) 643–660. doi:10.1137/18M1191944
- [24] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686–707. doi:10.1016/j.jcp.2018.10.045
- [25] K. Champion, B. Lusch, J. N. Kutz, S. L. Brunton, Data-driven discovery of coordinates and governing equations, Proceedings of the National Academy of Sciences 116 (45) (2019) 22445–22451. doi:10.1073/pnas.1906995116
- [26] M. Raissi, A. Yazdani, G. E. Karniadakis, Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations, Science 367 (6481) (2020) 1026–1030.
- [27] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, J. M. Siskind, Automatic differentiation in machine learning: a survey, Journal of Machine Learning Research 18 (1) (2017) 5595–5637.
- [28] E. Kharazmi, Z. Zhang, G. E. Karniadakis, Variational physics-informed neural networks for solving partial differential equations, arXiv:1912.00873 (2019).
- [29] X. Meng, Z. Li, D. Zhang, G. E. Karniadakis, PPINN: Parareal physics-informed neural network for time-dependent PDEs, arXiv:1909.10145 (2019).
- [30] E. Madenci, A. Barut, M. Futch, Peridynamic differential operator and its applications, Computer Methods in Applied Mechanics and Engineering 304 (2016) 408–451.
- [31] E. Madenci, A. Barut, M. Dorduncu, Peridynamic Differential Operator for Numerical Analysis, Springer, 2019.
- [32] S. A. Silling, Reformulation of elasticity theory for discontinuities and long-range forces, Journal of the Mechanics and Physics of Solids 48 (1) (2000) 175–209.
- [33] S. A. Silling, E. Askari, A meshfree method based on the peridynamic model of solid mechanics, Computers & Structures 83 (17–18) (2005) 1526–1535.
- [34] S. A. Silling, M. Epton, O. Weckner, J. Xu, E. Askari, Peridynamic states and constitutive modeling, Journal of Elasticity 88 (2) (2007) 151–184.
- [35] S. A. Silling, R. B. Lehoucq, Convergence of peridynamics to classical elasticity theory, Journal of Elasticity 93 (1) (2008) 13.
- [36] E. Madenci, E. Oterkus, Peridynamic Theory and Its Applications, Springer, 2014.
- [37] E. Madenci, M. Dorduncu, A. Barut, M. Futch, Numerical solution of linear and nonlinear partial differential equations using the peridynamic differential operator, Numerical Methods for Partial Differential Equations 33 (5) (2017) 1726–1753.
- [38] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, Z. Zhang, MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems, arXiv:1512.01274 (2015).
- [39] J. Duchi, E. Hazan, Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, Journal of Machine Learning Research 12 (2011) 2121–2159.
- [40] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv:1412.6980 (2014).
- [41] F. Chollet, Keras (2015). https://github.com/fchollet/keras
- [42] J. C. Simo, T. J. R. Hughes, Computational Inelasticity, Springer, 1998.
- [43] COMSOL, COMSOL Multiphysics User's Guide, COMSOL, Stockholm, Sweden, 2020.