
Parameter calibration with Consensus-based Optimization for interaction dynamics driven by neural networks

Simone Göttlich University of Mannheim [email protected]  and  Claudia Totzeck University of Wuppertal [email protected]
Abstract.

We calibrate parameters of neural networks that model forces in interaction dynamics with the help of the Consensus-based global optimization method (CBO). We state the general framework of interaction particle systems driven by neural networks and test the proposed method with a real dataset from the ESIMAS traffic experiment. The resulting forces are compared to well-known physical interaction forces. Moreover, we compare the performance of the proposed calibration process to the one in [4] which uses a stochastic gradient descent algorithm.

1. Introduction

Modelling interacting particle dynamics such as traffic, crowd dynamics, schools of fish and flocks of birds has attracted the attention of many research groups in recent decades. Most models use physically inspired interaction forces resulting from potentials to capture the observed behaviour. In fact, the gradient of the potential is used as the driving force for interacting particle systems formulated with the help of ordinary differential equations (ODEs). These models are able to represent the main features of the dynamics, but, as for all models, we cannot be sure that they deliver the whole truth. The idea in [4] was therefore to replace the physically inspired models by neural networks, train the networks with real data and compare the resulting forces.

In recent years it has become obvious that neural networks are able to represent many details contained in a dataset. They may even capture details that go unnoticed by humans and therefore do not appear in physical models, which are built to reproduce the observations of the modeller.

In the following we recall the general dynamics of interacting particle systems driven by neural networks as proposed in [4]. Then we briefly describe the global optimization method 'Consensus-based optimization' that we use for the real-data based calibration of the network. Finally, we present the numerical results obtained by the calibration process and compare them to the ones resulting from the calibration with the stochastic gradient descent method reported in [4].

2. Interacting particle systems driven by neural networks

We consider interacting particle dynamics described by systems of ODEs of the following form

(1) $\frac{d}{dt}y_{i}=\sum_{j=1}^{N}W^{i,j}_{\theta}(y_{j}-y_{i}),\quad y_{i}(0)=z_{0}^{i},\quad i=1,\dots,N,$

where $W^{i,j}_{\theta}$ represents the interaction force acting on $y_{i}$ in its interaction with $y_{j}$. The initial condition of the particles is given by the real dataset, $z_{0}=z(0)$. In order to compare the results to the ones in [4] we restrict the class of neural networks to feed-forward networks. However, note that the approach discussed here allows for general neural networks, while the discussion in [4] considers feed-forward networks and can only be generalized to neural networks that allow for backpropagation.
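As an illustration, a minimal sketch of the right-hand side of (1) could look as follows (Python/NumPy; the force routine W_theta is a hypothetical placeholder for the neural network evaluation and is assumed, for simplicity, to be the same for all pairs (i, j)):

```python
import numpy as np

def rhs(y, W_theta):
    """Right-hand side of (1): dy_i/dt = sum_j W_theta(y_j - y_i).

    y: array of shape (N,) with the particle states.
    W_theta: callable returning the (scalar) interaction force;
             assumed identical for all pairs (i, j) in this sketch.
    """
    N = len(y)
    dydt = np.zeros(N)
    for i in range(N):
        for j in range(N):
            dydt[i] += W_theta(y[j] - y[i])
    return dydt
```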

2.1. Feed-forward neural networks

In the following we consider feed-forward artificial neural networks as specified in the next definition.

Definition 1.

A feed-forward artificial neural network (NN) is characterized by

  • Input layer:

    $a_{1}^{(1)}=1,\quad a_{k}^{(1)}=x_{k-1}\quad\text{ for }k\in\{2,\dots,n^{(1)}+1\},$

    where $x\in\mathbb{R}^{n^{(1)}}$ is the input (feature) in (1) and $n^{(1)}$ is the number of neurons without the bias unit $a_{1}$.

  • Hidden layers: for $\ell\in\{2,\dots,L-1\}$ and $k\in\{2,\dots,n^{(\ell)}+1\}$,

    $a_{1}^{(\ell)}=1,\quad a_{k}^{(\ell)}=g^{(\ell)}\left(\sum_{j=1}^{n^{(\ell-1)}+1}\theta_{j,k}^{(\ell-1)}a_{j}^{(\ell-1)}\right).$

  • Output layer:

    $a_{k}^{(L)}=g^{(L)}\left(\sum_{j=1}^{n^{(L-1)}+1}\theta_{j,k}^{(L-1)}a_{j}^{(L-1)}\right)\quad\text{for }k\in\{1,\dots,n^{(L)}\}.$

Note that the output layer has no bias unit. The entry $\theta_{j,k}^{(\ell-1)}$ of the weight matrix $\theta^{(\ell-1)}\in\mathbb{R}^{(n^{(\ell-1)}+1)\times n^{(\ell)}}$ describes the weight from neuron $a_{j}^{(\ell-1)}$ to the neuron $a_{k}^{(\ell)}$. For notational convenience, we assemble all entries $\theta_{j,k}^{(\ell)}$ in a vector in $\mathbb{R}^{K}$ with

$K:=(n^{(1)}+1)\cdot n^{(2)}+(n^{(2)}+1)\cdot n^{(3)}+\dots+(n^{(L-1)}+1)\cdot n^{(L)},$

where the $+1$ terms account for the bias units feeding into each layer.

For the numerical experiments we use $g^{(\ell)}(x)=\log(1+e^{x})$ for $\ell=2,\dots,L-1$ and $g^{(L)}(x)=x$. For an illustration of the NN structure we refer the interested reader to [4]. In the numerical section we consider an NN with $L=3$, one input and 5 units in the hidden layer.
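To make the layer recursion concrete, the following sketch (illustrative names, Python/NumPy) evaluates the forward pass of such a network with bias units, softplus activation $g(x)=\log(1+e^{x})$ in the hidden layers and a linear output layer. The weight matrices are assumed to be stored including their bias row:

```python
import numpy as np

def softplus(x):
    # g(x) = log(1 + e^x), the activation used in the hidden layers
    return np.log(1.0 + np.exp(x))

def forward(x, thetas):
    """Forward pass of the feed-forward NN from Definition 1.

    x: input vector of length n^(1).
    thetas: list of weight matrices; thetas[l] has shape
            (n^(l)+1, n^(l+1)), including the bias row.
    """
    a = np.concatenate(([1.0], np.atleast_1d(x)))   # prepend bias unit
    for theta in thetas[:-1]:                       # hidden layers
        a = np.concatenate(([1.0], softplus(a @ theta)))
    return a @ thetas[-1]                           # linear output layer
```

For a network with one input and 4 nonbias hidden neurons (NN4) this gives weight matrices of shapes (2, 4) and (5, 1), i.e., 8 + 5 = 13 weights, consistent with the parameter count used in Section 3.1.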

3. Parameter Calibration

We formulate the task of parameter calibration as an optimization problem. Let $u\in\mathbb{R}^{d}$ denote the vector of parameters to be calibrated. These could be the weights $\theta$ of the neural network and some other parameters, for example the average length $L$ and the maximal speed $v_{\max}$ of the cars, which we will consider in the application. As we want the network to recover the forces hidden in the real data dynamics, we define the cost function for the parameter calibration as

(2) $J(y,u)=\frac{1}{2}\int_{0}^{T}\|y(t)-z(t)\|^{2}\,dt+\frac{\delta}{2}|u-u_{\text{ref}}|^{2},$

where $z$ denotes the trajectories of the cars obtained in the traffic experiment, and $u_{\text{ref}}$ contains reference values for the parameters. The parameter $\delta$ allows us to balance the two terms in the cost functional. In case no reference values of the parameters are available, as in the numerical section, we set $\delta=0$.
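On a uniform time grid the tracking term in (2) can be approximated by a simple Riemann sum; a minimal sketch (Python/NumPy, illustrative names) reads:

```python
import numpy as np

def cost(y, z, dt, u=None, u_ref=None, delta=0.0):
    """Discretization of the cost functional (2).

    y, z: arrays of shape (num_steps, N) with simulated and
          measured trajectories on the same uniform time grid.
    """
    J = 0.5 * dt * np.sum((y - z) ** 2)    # tracking term
    if delta > 0.0:                        # optional regularization term
        J += 0.5 * delta * np.sum((np.asarray(u) - np.asarray(u_ref)) ** 2)
    return J
```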

3.1. Consensus-based optimization (CBO)

We solve the parameter calibration problem with the help of a Consensus-based optimization method [3]. In more detail, we choose the variant introduced in [2], which is tailored to high-dimensional problems involving the calibration of neural networks. The CBO dynamics is itself a stochastic interacting particle system with $N_{\text{CBO}}$ agents given by stochastic differential equations (SDEs). The evolution of the agents is influenced by two terms: on the one hand, a deterministic term that drives the positions of the agents towards a weighted mean; on the other hand, a stochastic term that allows for an exploration of the state space. In detail, the dynamics reads

(3) $du_{t}^{i}=-\lambda(u_{t}^{i}-v_{f})\,dt+\sigma\,\text{diag}(u_{t}^{i}-v_{f})\,dB_{t}^{i},\quad i=1,\dots,N_{\text{CBO}},$

with drift and diffusion parameters $\lambda,\sigma>0$, independent $d$-dimensional Brownian motions $B_{t}^{i}$ and initial conditions $u_{0}^{i}$ drawn uniformly from the parameter set of interest. A central role is played by the weighted mean

$v_{f}=\frac{1}{\sum_{i=1}^{N_{\text{CBO}}}e^{-\alpha J(u_{i})}}\sum_{i=1}^{N_{\text{CBO}}}u_{i}\,e^{-\alpha J(u_{i})}.$

By construction, agents with lower cost have more weight in the mean than the ones with higher cost. The parameter $\alpha$ allows us to adjust this difference of the weights. For more information on the CBO method and its convergence proof on the mean-field level we refer the interested reader to [5] and the references therein. As indicated by the notation above, the agents used in the CBO method are different realizations of the parameter vectors that we consider for the calibration. For the numerical results with NN4 we consider a neural network with 13 weights, i.e., $\theta\in\mathbb{R}^{13}$. Moreover, we include the maximal speed $v_{\text{max}}$ as an additional parameter. Hence, for fixed $t$ the $i$-th CBO agent satisfies $u_{t}^{i}\in\mathbb{R}^{14}$.
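For orientation, one full-batch Euler-Maruyama step of (3) together with the weighted mean $v_{f}$ might be sketched as follows (Python/NumPy); the mini-batch variant of [2] used below applies the same update only to a randomly chosen subset of agents:

```python
import numpy as np

def cbo_step(U, J_vals, lam, sigma, alpha, dt, rng):
    """One Euler-Maruyama step of the CBO dynamics (3).

    U: (N_CBO, d) array of agents; J_vals: (N_CBO,) costs J(u^i).
    """
    w = np.exp(-alpha * (J_vals - J_vals.min()))  # shifted for stability;
                                                  # the shift cancels in v_f
    v_f = (w[:, None] * U).sum(axis=0) / w.sum()  # weighted mean
    dB = np.sqrt(dt) * rng.standard_normal(U.shape)
    # diag(u - v_f) dB acts componentwise (anisotropic diffusion, cf. [2])
    return U - lam * (U - v_f) * dt + sigma * (U - v_f) * dB
```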

4. Numerical results and conclusion

For the calibration of the parameters we consider real data from the project ESIMAS [1]. As we want to compare the results to the well-known follow-the-leader model for traffic flow (LWR), we recall its details:

(4a) $\frac{d}{dt}y_{i}(t)=f\left(\frac{y_{i+1}(t)-y_{i}(t)}{L}\right),\quad i=1,\dots,N-1,$
(4b) $\frac{d}{dt}y_{N}(t)=v_{\text{max}}.$

Here $f(\cdot)$ is either $v_{\text{max}}\log(\cdot)$ or $v_{\text{max}}(1-1/\cdot)$. To allow for a reasonable comparison, we consider the analogous neural network dynamics

(5a) $\frac{d}{dt}y_{i}(t)=W^{i,i+1}_{\theta}(y_{i+1}(t)-y_{i}(t)),\quad i=1,\dots,N-1,$
(5b) $\frac{d}{dt}y_{N}(t)=v_{\text{max}},$

supplemented with the initial data $y(0)=z_{0}$. This leads to $u=(v_{\text{max}},\theta)$. To evaluate the models and compute the corresponding cost we solve all ODEs with an explicit Euler scheme; for details we refer to [4]. The number in the notation NN2, NN4 and NN10 corresponds to the number of nonbias neurons in the hidden layer.
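A minimal sketch of this explicit Euler scheme for the follow-the-leader system (4), under the stated choices of $f$, could read as follows (Python/NumPy; cars are ordered so that the leader occupies the last index):

```python
import numpy as np

def simulate_ftl(z0, vmax, L, dt, num_steps, model="lin"):
    """Explicit Euler for (4); `model` selects f as 'lin' or 'log'."""
    if model == "lin":
        f = lambda s: vmax * (1.0 - 1.0 / s)
    else:
        f = lambda s: vmax * np.log(s)
    y = np.empty((num_steps + 1, len(z0)))
    y[0] = z0
    for n in range(num_steps):
        s = (y[n, 1:] - y[n, :-1]) / L         # scaled headways
        y[n + 1, :-1] = y[n, :-1] + dt * f(s)  # followers, (4a)
        y[n + 1, -1] = y[n, -1] + dt * vmax    # leader, (4b)
    return y
```

The NN dynamics (5) is simulated analogously, with $f$ replaced by the network evaluation $W_{\theta}$ applied to the unscaled headways.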

4.1. Data processing and numerical schemes

The data collection of the ESIMAS project contains vehicle data from 5 cameras that were placed in a 1 km tunnel section on the German motorway A3 near Frankfurt/Main [1]. The data is processed in exactly the same way as in [4]. Files with the processed data can be found online: https://github.com/ctotzeck/NN-interaction.

The SDE which represents the CBO scheme is solved with the scheme proposed in [2]. In particular, we set $dt=0.05$, $\sigma_{0}=1$, $\lambda=1$ and the maximal number of time steps to 100. The mini-batch size of the CBO scheme is 50 and we have 100 CBO agents in total. In each time step we update one randomly chosen mini-batch. The initial values are chosen as follows:

$v_{\text{max}}\sim U([20,40]),\quad L\sim U([0,10])\quad\text{and}\quad\theta\sim U([-0.5,0.5]^{K}).$
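Under these choices, the initial CBO agents could be sampled as in the following sketch (Python/NumPy; for the NN model the agent vector is $u=(v_{\text{max}},\theta)$, while $L$ only enters the LWR models):

```python
import numpy as np

rng = np.random.default_rng(0)
N_CBO, K = 100, 13                 # 100 agents; K = 13 NN weights for NN4
vmax0 = rng.uniform(20.0, 40.0, size=(N_CBO, 1))
L0 = rng.uniform(0.0, 10.0, size=(N_CBO, 1))     # only for the LWR models
theta0 = rng.uniform(-0.5, 0.5, size=(N_CBO, K))
U0 = np.hstack([vmax0, theta0])    # agents for the NN model, shape (100, 14)
```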

4.2. Resulting forces and comparison

Figure 1 (left) shows the velocities resulting from the parameter calibration process. We find that the estimated velocities for the NN approaches are higher than the velocities of the LWR-based models. The difference is most significant in data set 10. The plot on the right shows the average of the resulting forces for the different models. The forces of the NN approaches resemble linear approximations of the forces corresponding to the LWR models.

Figure 1. Average velocities and forces resulting from the parameter calibration and learning process.

The car length $L$ appears only in the LWR models. Its optimized values for the different data sets are given in Table 1. We see that the lengths for the linear model are smaller than the ones for the logarithmic model. This is in agreement with the results obtained with stochastic gradient descent reported in [4].

Data set 1 2 3 4 5 6 7 8 9 10 average
Lin 3.5969 3.76 4.17 2.19 3.02 2.81 5.92 5.86 2.14 3.65 3.71
Log 7.15 7.21 8.05 8.17 6.19 5.00 8.10 8.46 5.63 6.91 7.09
Table 1. Car lengths (in m) estimated with the algorithm for the 10 data sets with the LWR model with linear and logarithmic velocity.

Data set 1 2 3 4 5 6 7 8 9 10 average
NN2 47.95 46.49 98.07 44.97 23.69 29.72 40.69 55.75 11.50 68.91 46.77
NN4 47.82 46.09 97.01 51.84 23.33 26.71 41.60 55.29 11.16 67.60 46.84
NN10 47.90 45.78 99.20 42.50 22.16 24.40 41.18 56.68 10.01 66.01 45.58
Lin 44.41 41.29 93.73 30.86 19.00 37.98 38.00 56.40 8.18 46.24 41.61
Log 53.53 50.31 109.36 65.24 26.50 52.93 38.09 58.22 14.54 52.75 52.15
Table 2. Values of the cost functional estimated with the algorithm for the 10 data sets with the LWR model with linear and logarithmic velocity and the three different neural network approaches.

Finally, we summarize the cost values after the parameter calibration in Table 2. The smallest value in each column indicates the best model for the corresponding data set. On average, the LWR model with linear force outperforms the other models. The results of the NN approaches are better than the ones of the LWR model with logarithmic force.

4.2.1. Comparison to calibration with stochastic gradient descent

In comparison to the parameter calibration based on the stochastic gradient descent method reported in [4], we find that the CBO approach finds better parameters for both LWR models. In fact, the resulting cost values are significantly smaller after the calibration with CBO. For the NN approaches the results are in good agreement. A clear decision in favour of the LWR approach or the NN ansatz was not possible based on the results of [4]. After the training with CBO, the LWR model with linear force seems to outperform all other approaches. Note that we used NNs with a very simple structure here; it may be worthwhile to test more sophisticated network structures in future work.

References

  • [1] E. Kallo, A. Fazekas, S. Lamberty, M. Oeser. Microscopic traffic data obtained from videos recorded on a German motorway. Mendeley Data, doi:10.17632/tzckcsrpn6.1, 2019.
  • [2] J. A. Carrillo, S. Jin, L. Li, Y. Zhu. A consensus-based global optimization method for high dimensional machine learning problems. ESAIM: COCV, 27(S5), 2021.
  • [3] R. Pinnau, C. Totzeck, O. Tse, S. Martin. A consensus-based model for global optimization and its mean-field limit. Math. Mod. Meth. Appl. Sci., 27(1):183–204, 2017.
  • [4] S. Göttlich, C. Totzeck. Optimal control for interaction particle systems driven by neural networks. arXiv:2104.01383, 2021.
  • [5] C. Totzeck. Trends in consensus-based optimization. arXiv:2104.01383, 2021.