Path Structured Multimarginal Schrödinger Bridge for Probabilistic
Learning of Hardware Resource Usage by Control Software
Abstract
The solution of the path structured multimarginal Schrödinger bridge problem (MSBP) is the most likely measure-valued trajectory consistent with a sequence of observed probability measures or distributional snapshots. We leverage recent algorithmic advances in solving such structured MSBPs for learning the stochastic hardware resource usage by control software. The solution enables predicting the time-varying distribution of hardware resource availability at a desired time with guaranteed linear convergence. We demonstrate the efficacy of our probabilistic learning approach in a model predictive control software execution case study. The method exhibits rapid convergence to an accurate prediction of the hardware resource utilization of the controller. The method can be broadly applied to any software to predict cyber-physical context-dependent performance at an arbitrary time.
I Introduction
Control software in safety-critical cyber-physical systems (CPS) is often designed and verified based on platform models that do not fully capture the complexity of its deployment settings. For example, it is common to assume that the processor always operates at full speed, is dedicated to the control software, and that overheads are negligible. In practice, the hardware resources – such as last-level shared cache (LLC), memory bandwidth and processor cycles – often vary with time and depend on the current hardware state, which is a reason why we observe different execution times across different runs of the same control software [1]. This gap can lead to overly costly or unsafe design.
Measurement-based approaches and overhead-aware analysis can reduce the analysis pessimism or ensure safety [2]. The recent work [3] uses fine-grained profiles of the software execution on an actual platform to make dynamic scheduling and resource allocation decisions. Supervisory algorithms that dynamically switch among a bank of controllers – all provably safe but some computationally more benign (and less performant) than others – depending on the resource availability also exist [4]. However, the effectiveness of these techniques is contingent on the quality of the prediction of hardware resource availability at a future time instance or over a time horizon of interest.
In this work, we propose to predict the resource usage by control software based on just a small set of measurements. This approach is attractive as it can reduce measurement effort while better capturing run-to-run variability.
A first-principles predictive model for hardware resource availability based on the semiconductor physics of the specific platform is, however, unavailable. Furthermore, resources such as cache and bandwidth are not only time-varying and stochastic, but they are also statistically correlated. This makes it challenging to predict the joint stochastic variability of the hardware resource availability in general. The challenge is even more pronounced for control software because the computational burden then also depends on additional context, e.g., the reference trajectory that the controller is tracking.
We note that for safety-critical CPS, predicting the joint stochastic hardware resource state, as opposed to predicting a lumped variable such as worst-case execution time, can open the door for designing a new class of dynamic scheduling algorithms with better performance than what is feasible today while minimizing hardware cost.
This work proposes learning a joint stochastic process for hardware resource availability from control software execution profiles conditioned on CPS contexts (to be made precise in Sec. III-A, III-B). Our proposed method leverages recent advances in stochastic control – specifically, the multimarginal Schrödinger bridge problem (MSBP) – to allow prediction of the time-varying joint statistical distribution of hardware resource availability at any desired time.
Contributions
Our specific contributions are as follows.
- We show how recent algorithmic developments in solving the MSBP enable probabilistic learning of hardware resources. This advances the state-of-the-art at the intersection of control, learning and real-time systems.
- The proposed method is statistically nonparametric, and is suitable for high-dimensional joint prediction since it avoids gridding the hardware feature/state space.
- The proposed formulation provably predicts the most likely distribution given a sequence of distributional snapshots for the hardware resource state.
- We explain that the resulting algorithm is an instance of the multimarginal Sinkhorn iteration with path structured cost that is guaranteed to converge to a unique solution, and enjoys a linear rate of convergence. Its computational complexity scales linearly w.r.t. the state dimension, linearly w.r.t. the number of distributional snapshots, and quadratically w.r.t. the number of scattered samples.
Organization
The rest of this paper is organized as follows. Sec. II collects the notations and preliminaries. Sec. III details the problem formulation and the proposed MSBP-based solution. The overall algorithm is summarized in Sec. IV, followed by a numerical case study in Sec. V and concluding remarks in Sec. VI.
II Notations and Preliminaries
We use unboldfaced capital letters to denote matrices and bold capital letters to denote tensors (of order three or more). Unboldfaced (resp. boldfaced) small letters are used to denote scalars (resp. vectors). Capital calligraphic letters are reserved to denote sets.
Square braces are used to denote tensor components. For instance, $[\boldsymbol{M}]_{i_1 i_2 \ldots i_s}$ denotes the $(i_1, i_2, \ldots, i_s)$th component of the order-$s$ tensor $\boldsymbol{M}$, where $i_1, \ldots, i_s$ are the respective indices. We use the $s$-fold tensor product space notation $\left(\mathbb{R}^{n}\right)^{\otimes s} := \mathbb{R}^{n} \otimes \cdots \otimes \mathbb{R}^{n}$ ($s$ times).
For two tensors $\boldsymbol{U}, \boldsymbol{V}$ of order $s$, we define their Hilbert-Schmidt inner product as
$\langle \boldsymbol{U}, \boldsymbol{V} \rangle := \sum_{i_1, \ldots, i_s} [\boldsymbol{U}]_{i_1 \ldots i_s} [\boldsymbol{V}]_{i_1 \ldots i_s}.$   (1)
The operators $\exp(\cdot)$ and $\log(\cdot)$ are understood elementwise. We use $\odot$ and $\oslash$ to denote elementwise (Hadamard) multiplication and division, respectively.
For measures $\mu, \nu$ defined on two Polish spaces, their product measure is denoted by $\mu \otimes \nu$. The relative entropy, a.k.a. Kullback-Leibler divergence, between probability measures $\mu$ and $\nu$ is
$D_{\mathrm{KL}}\left(\mu \,\|\, \nu\right) := \begin{cases} \int \log\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\,\mathrm{d}\mu & \text{if } \mu \ll \nu, \\ +\infty & \text{otherwise}, \end{cases}$   (2)
where $\frac{\mathrm{d}\mu}{\mathrm{d}\nu}$ denotes the Radon-Nikodym derivative, and $\mu \ll \nu$ is a shorthand for "$\mu$ is absolutely continuous w.r.t. $\nu$".
The Hilbert (projective) metric (see e.g., [5]) between two elementwise-positive vectors $\boldsymbol{u}, \boldsymbol{v} \in \mathbb{R}^{n}_{>0}$ is
$d_{\mathrm{H}}\left(\boldsymbol{u}, \boldsymbol{v}\right) := \log \frac{\max_{i}\left(u_i / v_i\right)}{\min_{i}\left(u_i / v_i\right)}.$   (3)
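To make (3) concrete, the following minimal sketch (Python with NumPy; the function name and the example vectors are our own illustrative choices, not part of the paper's codebase) evaluates the Hilbert projective metric, which is the kind of quantity typically used to monitor contraction of the Sinkhorn-type iterations discussed later.

```python
import numpy as np

def hilbert_metric(u, v):
    """Hilbert projective metric d_H(u, v) for elementwise-positive vectors u, v."""
    ratio = np.asarray(u, dtype=float) / np.asarray(v, dtype=float)
    return float(np.log(ratio.max() / ratio.min()))

# Example: the metric is invariant to positive scaling of either argument.
u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 2.0, 2.0])
print(hilbert_metric(u, v))        # log((3/2)/(1/2)) = log 3
print(hilbert_metric(5.0 * u, v))  # same value: projective invariance
```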
We use the term "control cycle" to mean one pass of a feedback control loop. Due to hardware stochasticity, each control cycle completion takes a variable amount of time.
III Problem Formulation
III-A Context
We consider a context vector comprising separable cyber and physical context vectors,
$\boldsymbol{c} := \left(\boldsymbol{c}_{\mathrm{cyber}},\ \boldsymbol{c}_{\mathrm{phys}}\right).$   (4)
In this work, we consider an instance of (4) where
$\boldsymbol{c}_{\mathrm{cyber}} := \left(\text{allocated LLC partitions},\ \text{allocated memory bandwidth}\right),$   (5)
where both features are allocated in blocks of some size, and
$\boldsymbol{c}_{\mathrm{phys}} := \text{a desired path to track, drawn as a sample path of } \mathcal{GP},$   (6)
where $\mathcal{GP}$ denotes a Gaussian process over the domain of interest. We work with a finite collection of contexts, i.e., a sample of realizations of (4).
III-B Hardware Resource State
For concreteness, we define a hardware resource state or feature vector used in our numerical case study (Sec. V):
$\boldsymbol{\xi} := \left(\#\,\text{CPU instructions},\ \#\,\text{LLC requests},\ \#\,\text{LLC misses}\right)^{\top} \in \mathbb{R}^{3}_{\geq 0}.$   (7)
The three elements of $\boldsymbol{\xi}$ denote the number of CPU instructions, the number of LLC requests, and the number of LLC misses in the last time unit (10 ms in our profiling), respectively.
We emphasize that our proposed method is not limited by what specific components comprise $\boldsymbol{\xi}$. To highlight this flexibility, we describe the proposed approach for a generic hardware resource state vector, with suitable interpretations for the specific application.
For a time interval of interest $[t_1, t_s]$, we think of the time-varying hardware resource state $\boldsymbol{\xi}(t)$ as a continuous-time vector-valued stochastic process. Suppose that snapshots or observations of the stochastic state $\boldsymbol{\xi}(t_\sigma)$ are made at (possibly non-equispaced) instances
$t_1 < t_2 < \cdots < t_s.$
Consider the snapshot index set $[s] := \{1, 2, \ldots, s\}$. For a fixed context $\boldsymbol{c}$, the snapshot observations comprise a sequence of joint probability measures $\{\mu_\sigma\}_{\sigma \in [s]}$ satisfying $\int \mathrm{d}\mu_\sigma = 1$ for all $\sigma \in [s]$. In other words,
$\boldsymbol{\xi}(t_\sigma) \sim \mu_\sigma \quad \forall\, \sigma \in [s].$   (8)
In our application, the data comes from control software execution profiles, i.e., by executing the same control software for the same context $\boldsymbol{c}$ with all parameters and initial conditions fixed. So the stochasticity in $\boldsymbol{\xi}(t)$ stems from the dynamic variability in hardware resource availability.
In particular, for finitely many (say $n$) execution profiles, we consider empirical distributions
$\mu_\sigma = \frac{1}{n} \sum_{i=1}^{n} \delta\left(\boldsymbol{\xi} - \boldsymbol{\xi}_\sigma^{i}\right), \quad \sigma \in [s],$   (9)
where $\delta\left(\boldsymbol{\xi} - \boldsymbol{\xi}_\sigma^{i}\right)$ denotes the Dirac delta at the sample location $\boldsymbol{\xi}_\sigma^{i}$, with $i \in \{1, \ldots, n\}$, $\sigma \in [s]$. At any snapshot index $\sigma$, the set $\{\boldsymbol{\xi}_\sigma^{i}\}_{i=1}^{n}$ is scattered data, i.e., the samples do not lie on a grid.
Given the data (8)-(9), we would like to predict the most likely hardware resource state statistics
$\mu_\tau \ \text{for}\ \boldsymbol{\xi}(\tau)\ \text{at any query time}\ \tau \in [t_1, t_s].$   (10)
Without the qualifier "most likely", the problem is ill-posed since there are uncountably many measure-valued continuous curves over $[t_1, t_s]$ that are consistent with the observed data (8)-(9).
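As a concrete illustration of the data in (8)-(9), the sketch below (Python/NumPy; the array names and the synthetic numbers are our own hypothetical stand-ins for real profiling data) stores the scattered samples at each snapshot as an n-by-3 array with uniform weights 1/n, which is all that the downstream computations need.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500  # number of execution profiles (samples per snapshot)
s = 6    # number of distributional snapshots (illustrative)
d = 3    # state dimension: (# instructions, # LLC requests, # LLC misses)

# xi[sigma] is the scattered data at snapshot sigma: an (n, d) array of samples.
# Synthetic placeholder values stand in for measured profiles here.
xi = [rng.lognormal(mean=10.0, sigma=0.3, size=(n, d)) for _ in range(s)]

# Empirical marginals (9): uniform weight 1/n on each sample location.
mu = [np.full(n, 1.0 / n) for _ in range(s)]

print(xi[0].shape, mu[0].sum())  # (500, 3) 1.0
```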
III-C Multimarginal Schrödinger Bridge
Let $\mathcal{X} \subseteq \mathbb{R}^{d}$, and consider the Cartesian product $\mathcal{X}^{s} := \mathcal{X} \times \cdots \times \mathcal{X}$ ($s$ times). Let $\mathcal{P}(\mathcal{X})$ and $\mathcal{P}(\mathcal{X}^{s})$ denote the collection (i.e., manifold) of probability measures on $\mathcal{X}$ and $\mathcal{X}^{s}$, respectively. Define a ground cost $C : \mathcal{X}^{s} \to \mathbb{R}_{\geq 0}$.
Following [6, Sec. 3], for $\boldsymbol{M} \in \mathcal{P}(\mathcal{X}^{s})$ let
$\mathrm{proj}_\sigma(\boldsymbol{M}) := \text{the } \sigma\text{th marginal of } \boldsymbol{M}, \quad \sigma \in [s],$   (11a)
$\Pi\left(\mu_1, \ldots, \mu_s\right) := \left\{\boldsymbol{M} \in \mathcal{P}(\mathcal{X}^{s}) \mid \mathrm{proj}_\sigma(\boldsymbol{M}) = \mu_\sigma \ \forall\, \sigma \in [s]\right\}.$   (11b)
For a regularization strength $\varepsilon > 0$ (not necessarily small), the multimarginal Schrödinger bridge problem (MSBP) is the following infinite dimensional convex program:
$\min_{\boldsymbol{M} \in \mathcal{P}(\mathcal{X}^{s})} \ \int_{\mathcal{X}^{s}} \left(C + \varepsilon \log \boldsymbol{M}\right) \mathrm{d}\boldsymbol{M}$   (12a)
$\text{subject to} \quad \boldsymbol{M} \in \Pi\left(\mu_1, \ldots, \mu_s\right).$   (12b)
In particular, $\Pi\left(\mu_1, \ldots, \mu_s\right)$ is a convex set. The objective (12a) is strictly convex in $\boldsymbol{M}$, thanks to the $\varepsilon$-regularized negative entropy term $\varepsilon \int_{\mathcal{X}^{s}} \log \boldsymbol{M}\, \mathrm{d}\boldsymbol{M}$. The constraints (12b) are linear.
In this work, the measures $\mu_\sigma$ correspond to sequential observations, and we therefore fix the path structured (Fig. 1) ground cost
$C\left(x_1, \ldots, x_s\right) := \sum_{\sigma=1}^{s-1} c_\sigma\left(x_\sigma, x_{\sigma+1}\right).$   (13)
In particular, we choose the squared Euclidean distance as the sequential cost between two consecutive snapshot indices, i.e., $c_\sigma\left(x_\sigma, x_{\sigma+1}\right) := \left\|x_\sigma - x_{\sigma+1}\right\|_2^2$. MSBPs with more general tree structured ground costs have appeared in [7].
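Under the path structured cost (13) with squared Euclidean distance, only the s-1 pairwise cost matrices between consecutive snapshots are ever needed. A minimal sketch (Python/NumPy; the helper name is our own) is:

```python
import numpy as np

def pairwise_sq_euclidean(X, Y):
    """C[i, j] = ||X[i] - Y[j]||_2^2 for (n, d) arrays X and Y."""
    diff = X[:, None, :] - Y[None, :, :]
    return np.einsum('ijk,ijk->ij', diff, diff)

# xi[sigma] holds the scattered samples at snapshot sigma, shape (n, d).
rng = np.random.default_rng(1)
xi = [rng.normal(size=(4, 3)) for _ in range(3)]

# Sequential cost matrices C_sigma between consecutive snapshots, sigma = 1, ..., s-1.
C = [pairwise_sq_euclidean(xi[k], xi[k + 1]) for k in range(len(xi) - 1)]
print([c.shape for c in C])  # [(4, 4), (4, 4)]
```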
When the cardinality of the index set equals two ($s = 2$), (12) reduces to the (bimarginal) Schrödinger bridge problem (SBP) [8, 9]. In this case, the solution of (12) gives the most likely evolution between the two marginal snapshots $\mu_1, \mu_2$. This can be established via the large deviations [10] interpretation [11, Sec. II] of the SBP using Sanov's theorem [12]; see also [13, Sec. 2.1].
Specifically, let $\Omega := C\left([t_1, t_2]; \mathcal{X}\right)$ denote the collection of continuous functions on the time interval $[t_1, t_2]$ taking values in $\mathcal{X}$. Let $\Pi\left(\mu_1, \mu_2\right)$ be the collection of all path measures on $\Omega$ with time $t_1$ marginal $\mu_1$, and time $t_2$ marginal $\mu_2$. Given a symmetric ground cost (e.g., Euclidean distance) $c : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_{\geq 0}$, let
$k\left(x_1, x_2\right) := \exp\left(-\frac{c\left(x_1, x_2\right)}{\varepsilon}\right),$   (14)
and consider the bimarginal Gibbs kernel
$\mathrm{d}\boldsymbol{K}\left(x_1, x_2\right) := \frac{k\left(x_1, x_2\right)\,\mathrm{d}x_1\,\mathrm{d}x_2}{\int_{\mathcal{X} \times \mathcal{X}} k\,\mathrm{d}x_1\,\mathrm{d}x_2}.$   (15)
Then, the bimarginal SBP solves
$\min_{\boldsymbol{P} \in \Pi\left(\mu_1, \mu_2\right)} D_{\mathrm{KL}}\left(\boldsymbol{P} \,\|\, \boldsymbol{K}\right),$   (16)
i.e., it recovers the most likely evolution of the path measure consistent with the observed measure-valued snapshots $\mu_1, \mu_2$.
Under the stated assumptions on the ground cost $c$, the existence of a minimizer for (16) is guaranteed [14, 15]. The uniqueness of the minimizer follows from the strict convexity of the map $\boldsymbol{P} \mapsto D_{\mathrm{KL}}\left(\boldsymbol{P} \,\|\, \boldsymbol{K}\right)$ for fixed $\boldsymbol{K}$.
This relative entropy reformulation, and thereby "the most likely evolution consistent with observed measures" interpretation, also holds for the MSBP (12) with $s \geq 2$ snapshots. Specifically, for $C$ as in (12)-(13), we generalize (14) as
$k\left(x_1, \ldots, x_s\right) := \exp\left(-\frac{C\left(x_1, \ldots, x_s\right)}{\varepsilon}\right),$   (17)
and define the multimarginal Gibbs kernel
$\mathrm{d}\boldsymbol{K}\left(x_1, \ldots, x_s\right) := \frac{k\left(x_1, \ldots, x_s\right)\,\mathrm{d}x_1 \cdots \mathrm{d}x_s}{\int_{\mathcal{X}^{s}} k\,\mathrm{d}x_1 \cdots \mathrm{d}x_s}.$   (18)
Problem (16) then generalizes to
$\min_{\boldsymbol{P} \in \Pi\left(\mu_1, \ldots, \mu_s\right)} D_{\mathrm{KL}}\left(\boldsymbol{P} \,\|\, \boldsymbol{K}\right),$   (19)
where $\Pi\left(\mu_1, \ldots, \mu_s\right)$ denotes the collection of all path measures on $C\left([t_1, t_s]; \mathcal{X}\right)$ with time $t_\sigma$ marginal $\mu_\sigma$ for all $\sigma \in [s]$. The equivalence between (12) and (19) can be verified by direct computation. Thus solving (19), or equivalently (12), yields the most likely evolution of the path measure consistent with the observed measure-valued snapshots $\mu_1, \ldots, \mu_s$.
We propose to solve the MSBP (12) for learning the time-varying statistics of the hardware resource state as in (10). We next detail a discrete formulation to numerically solve the same for the scattered data $\{\boldsymbol{\xi}_\sigma^{i}\}_{i=1}^{n}$, $\sigma \in [s]$, where $n$ is the number of control software execution profiles.
The minimizer $\boldsymbol{M}^{\mathrm{opt}}$ of (12) can be used to compute the optimal coupling between any pair of snapshot indices as
$M^{\sigma_1 \sigma_2}\left(x_{\sigma_1}, x_{\sigma_2}\right) := \int_{\mathcal{X}^{s-2}} \mathrm{d}\boldsymbol{M}^{\mathrm{opt}}\left(x_1, \ldots, x_s\right),$   (20)
where the integration is over all coordinates except $x_{\sigma_1}$ and $x_{\sigma_2}$, and
$\sigma_1, \sigma_2 \in [s],$   (21a)
$\sigma_1 < \sigma_2.$   (21b)
This will be useful for predicting the statistics of $\boldsymbol{\xi}$ at any (out-of-sample) query time $\tau \in [t_1, t_s]$.
Remark 1.
(MSBP and MOT) When the entropic regularization strength $\varepsilon \downarrow 0$, the MSBP (12) reduces to the multimarginal optimal transport (MOT) problem [16, 17] that has found widespread applications in barycenter computation [18], fluid dynamics [19, 20], team matching problems [21], and density functional theory [22, 23]. Further specializing the MOT to an index set of cardinality two yields the (bimarginal) optimal transport [24] problem.
III-D Discrete Formulation of MSBP
With slight abuse of notation, we use the same symbol for the continuum and discrete versions of a tensor. The ground cost in the discrete formulation is represented by an order-$s$ tensor $\boldsymbol{C} \in \left(\mathbb{R}^{n}_{\geq 0}\right)^{\otimes s}$, with components $[\boldsymbol{C}]_{i_1 \ldots i_s}$. The component $[\boldsymbol{C}]_{i_1 \ldots i_s}$ encodes the cost of transporting unit mass for the tuple $\left(\boldsymbol{\xi}_1^{i_1}, \ldots, \boldsymbol{\xi}_s^{i_s}\right)$.
Likewise, consider the discrete mass tensor $\boldsymbol{M} \in \left(\mathbb{R}^{n}_{\geq 0}\right)^{\otimes s}$ with components $[\boldsymbol{M}]_{i_1 \ldots i_s}$. The component $[\boldsymbol{M}]_{i_1 \ldots i_s}$ denotes the amount of transported mass for the tuple $\left(\boldsymbol{\xi}_1^{i_1}, \ldots, \boldsymbol{\xi}_s^{i_s}\right)$.
For any $\sigma \in [s]$, the empirical marginals $\boldsymbol{\mu}_\sigma \in \mathbb{R}^{n}_{\geq 0}$ are supported on the finite sets $\{\boldsymbol{\xi}_\sigma^{i}\}_{i=1}^{n}$. We denote the projection of $\boldsymbol{M}$ on the $\sigma$th marginal as $P_\sigma(\boldsymbol{M})$. Thus $P_\sigma(\boldsymbol{M}) \in \mathbb{R}^{n}_{\geq 0}$, and it is given componentwise as
$\left[P_\sigma(\boldsymbol{M})\right]_{i_\sigma} := \sum_{i_1, \ldots, i_{\sigma-1}, i_{\sigma+1}, \ldots, i_s} [\boldsymbol{M}]_{i_1 \ldots i_s}.$   (22)
Likewise, denote the projection of $\boldsymbol{M}$ on the $(\sigma_1, \sigma_2)$th marginal pair as $P_{\sigma_1 \sigma_2}(\boldsymbol{M})$, i.e., $P_{\sigma_1 \sigma_2}(\boldsymbol{M}) \in \mathbb{R}^{n \times n}_{\geq 0}$, given componentwise as
$\left[P_{\sigma_1 \sigma_2}(\boldsymbol{M})\right]_{i_{\sigma_1} i_{\sigma_2}} := \sum_{\text{all indices except } i_{\sigma_1}, i_{\sigma_2}} [\boldsymbol{M}]_{i_1 \ldots i_s}.$   (23)
We note that (22) and (23) are the discrete versions of the integrals in (12b) and (20), respectively.
With the above notations in place, the discrete version of (12) becomes
$\min_{\boldsymbol{M} \in \left(\mathbb{R}^{n}_{\geq 0}\right)^{\otimes s}} \ \left\langle \boldsymbol{C} + \varepsilon \log \boldsymbol{M},\ \boldsymbol{M} \right\rangle$   (24a)
$\text{subject to} \quad P_\sigma(\boldsymbol{M}) = \boldsymbol{\mu}_\sigma \quad \forall\, \sigma \in [s].$   (24b)
The primal formulation (24) has $n^{s}$ decision variables, and is computationally intractable. Recall that even for the bimarginal ($s = 2$) case, a standard approach [25] is to use Lagrange duality to notice that the optimal mass matrix is a diagonal scaling of the discrete Gibbs kernel $K := \exp(-C/\varepsilon)$, i.e., $M^{\mathrm{opt}} = \mathrm{diag}\left(\boldsymbol{u}_1\right) K\, \mathrm{diag}\left(\boldsymbol{u}_2\right)$, where the elementwise-positive vectors $\boldsymbol{u}_1, \boldsymbol{u}_2$ are determined by the Lagrange multipliers associated with the respective bimarginal constraints $P_1(M) = \boldsymbol{\mu}_1$, $P_2(M) = \boldsymbol{\mu}_2$. The unknowns $\boldsymbol{u}_1, \boldsymbol{u}_2$ can be obtained by performing the Sinkhorn iterations
$\boldsymbol{u}_1 \leftarrow \boldsymbol{\mu}_1 \oslash \left(K \boldsymbol{u}_2\right),$   (25a)
$\boldsymbol{u}_2 \leftarrow \boldsymbol{\mu}_2 \oslash \left(K^{\top} \boldsymbol{u}_1\right),$   (25b)
with guaranteed linear convergence [26] wherein the computational cost is governed by two matrix-vector multiplications.
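For the bimarginal case, the iterations (25) admit the following compact sketch (Python/NumPy; the variable names are ours, and the stopping rule based on the marginal residual is one common choice rather than necessarily the authors' own):

```python
import numpy as np

def sinkhorn_bimarginal(C, mu1, mu2, eps=1.0, tol=1e-9, max_iter=10_000):
    """Entropy-regularized bimarginal transport: returns M = diag(u1) K diag(u2)."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u1 = np.ones_like(mu1)
    u2 = np.ones_like(mu2)
    M = K
    for _ in range(max_iter):
        u1 = mu1 / (K @ u2)              # (25a)
        u2 = mu2 / (K.T @ u1)            # (25b)
        M = u1[:, None] * K * u2[None, :]
        if np.abs(M.sum(axis=1) - mu1).max() < tol:  # marginal residual check
            break
    return M

# Tiny example with uniform marginals on 5 scattered samples per snapshot.
rng = np.random.default_rng(2)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
M = sinkhorn_bimarginal(C, np.full(5, 0.2), np.full(5, 0.2), eps=1.0)
print(M.sum(axis=0), M.sum(axis=1))  # both approx. [0.2 0.2 0.2 0.2 0.2]
```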
The duality result holds for the multimarginal ($s > 2$) case as well. Specifically, the optimal mass tensor in (24) admits the structure $\boldsymbol{M}^{\mathrm{opt}} = \boldsymbol{K} \odot \boldsymbol{U}$, where $\boldsymbol{K} := \exp\left(-\boldsymbol{C}/\varepsilon\right)$, $\boldsymbol{U} := \boldsymbol{u}_1 \otimes \cdots \otimes \boldsymbol{u}_s$, and the elementwise-positive vectors $\boldsymbol{u}_1, \ldots, \boldsymbol{u}_s$ are determined by the Lagrange multipliers associated with the respective multimarginal constraints (24b). The unknowns $\boldsymbol{u}_1, \ldots, \boldsymbol{u}_s$ can, in principle, be obtained from the multimarginal Sinkhorn iterations [27]
$\boldsymbol{u}_\sigma \leftarrow \boldsymbol{u}_\sigma \odot \boldsymbol{\mu}_\sigma \oslash P_\sigma\left(\boldsymbol{K} \odot \boldsymbol{U}\right), \quad \sigma = 1, \ldots, s \ \text{(cyclically)},$   (26)
which generalize (25). However, naively computing the projection $P_\sigma\left(\boldsymbol{K} \odot \boldsymbol{U}\right)$ requires $O(n^{s})$ operations. Before describing how to avoid this exponential complexity (Sec. III-F), we point out the convergence guarantees for (26).
III-E Convergence for Multimarginal Sinkhorn Iterations
The iterations (26) can be interpreted as alternating Bregman projections onto the affine constraint sets in (24b) [27, 28]. For any fixed $\varepsilon > 0$, they converge to the unique solution of (24) [29], and the rate of convergence is linear [26, 30].
III-F Multimarginal Sinkhorn Iterations for Path Structured Cost
We circumvent the exponential complexity of computing $P_\sigma\left(\boldsymbol{K} \odot \boldsymbol{U}\right)$ in (26) by leveraging the path structured ground cost (13). This is enabled by a key result from [6], rephrased and reproved below in slightly generalized form.
Proposition 1.
([6, Prop. 2]) Consider the discrete ground cost tensor $\boldsymbol{C}$ in (24) induced by a path structured cost (13), so that $[\boldsymbol{C}]_{i_1 \ldots i_s} = \sum_{\sigma=1}^{s-1} [C_\sigma]_{i_\sigma i_{\sigma+1}}$, where the matrix $C_\sigma \in \mathbb{R}^{n \times n}_{\geq 0}$ encodes the cost of transporting unit mass between each source-destination pair from the source set $\{\boldsymbol{\xi}_\sigma^{i}\}_{i=1}^{n}$ to the destination set $\{\boldsymbol{\xi}_{\sigma+1}^{j}\}_{j=1}^{n}$.
Let $K_\sigma := \exp\left(-C_\sigma/\varepsilon\right)$, $\boldsymbol{K} := \exp\left(-\boldsymbol{C}/\varepsilon\right)$, $\boldsymbol{U} := \boldsymbol{u}_1 \otimes \cdots \otimes \boldsymbol{u}_s$ for $\boldsymbol{u}_1, \ldots, \boldsymbol{u}_s \in \mathbb{R}^{n}_{>0}$, and define
$\boldsymbol{\Phi}_1 := \boldsymbol{1}_n, \quad \boldsymbol{\Phi}_\sigma := K_{\sigma-1}^{\top}\left(\boldsymbol{u}_{\sigma-1} \odot \boldsymbol{\Phi}_{\sigma-1}\right), \quad \boldsymbol{\Psi}_s := \boldsymbol{1}_n, \quad \boldsymbol{\Psi}_\sigma := K_\sigma\left(\boldsymbol{u}_{\sigma+1} \odot \boldsymbol{\Psi}_{\sigma+1}\right).$   (27)
Then the marginal projections in (26) decompose as
$P_\sigma\left(\boldsymbol{K} \odot \boldsymbol{U}\right) = \boldsymbol{u}_\sigma \odot \boldsymbol{\Phi}_\sigma \odot \boldsymbol{\Psi}_\sigma, \quad \sigma \in [s],$   (28)
and thus can be computed using only matrix-vector multiplications and Hadamard products.
Proof.
The proof strategy is to write the Hilbert-Schmidt inner product $\left\langle \boldsymbol{K} \odot \boldsymbol{U}, \boldsymbol{1}\right\rangle$ in two different ways.
First, recall that $\boldsymbol{K} = \exp\left(-\boldsymbol{C}/\varepsilon\right)$ and $\boldsymbol{U} = \boldsymbol{u}_1 \otimes \cdots \otimes \boldsymbol{u}_s$. So following (1), the path structure of $\boldsymbol{C}$ lets the $s$-fold sum factor into nested matrix-vector products. Regrouping the same sum around the $\sigma$th index isolates the factor $\boldsymbol{u}_\sigma \odot \boldsymbol{\Phi}_\sigma \odot \boldsymbol{\Psi}_\sigma$, which is precisely the $\sigma$th marginal projection, giving (27)-(28).
Remark 2.
Remark 3.
Remark 4.
Remark 5.
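Combining the generic update (26) with the decomposition (27)-(28) of Proposition 1, each marginal projection reduces to forward-backward matrix-vector recursions, i.e., the Sinkhorn recursions referred to as (29) in the sequel. The sketch below (Python/NumPy; all names are our own, and this is an illustrative re-implementation of the idea rather than the authors' MATLAB code) realizes these recursions for a path structured MSBP.

```python
import numpy as np

def msbp_sinkhorn_path(C_list, mu_list, eps=1.0, n_sweeps=500):
    """
    Path structured multimarginal Sinkhorn.
    C_list : list of s-1 cost matrices C_sigma (n x n) between consecutive snapshots.
    mu_list: list of s marginal weight vectors (length n each).
    Returns the scaling vectors u_sigma and the s-1 bimarginal plans M^{sigma, sigma+1}.
    """
    K = [np.exp(-C / eps) for C in C_list]
    s, n = len(mu_list), len(mu_list[0])
    u = [np.ones(n) for _ in range(s)]

    def phi_psi(u):
        # Forward messages Phi and backward messages Psi, cf. (27).
        Phi = [np.ones(n) for _ in range(s)]
        Psi = [np.ones(n) for _ in range(s)]
        for k in range(1, s):
            Phi[k] = K[k - 1].T @ (u[k - 1] * Phi[k - 1])
        for k in range(s - 2, -1, -1):
            Psi[k] = K[k] @ (u[k + 1] * Psi[k + 1])
        return Phi, Psi

    for _ in range(n_sweeps):
        for k in range(s):
            Phi, Psi = phi_psi(u)
            u[k] = mu_list[k] / (Phi[k] * Psi[k])  # enforce P_k(K ⊙ U) = mu_k, cf. (28)

    Phi, Psi = phi_psi(u)
    plans = [np.diag(u[k] * Phi[k]) @ K[k] @ np.diag(u[k + 1] * Psi[k + 1])
             for k in range(s - 1)]
    return u, plans

# Tiny example: 4 snapshots, 6 samples each, uniform marginals.
rng = np.random.default_rng(3)
xi = [rng.normal(size=(6, 3)) for _ in range(4)]
C_list = [((xi[k][:, None, :] - xi[k + 1][None, :, :]) ** 2).sum(-1) for k in range(3)]
mu_list = [np.full(6, 1 / 6) for _ in range(4)]
u, plans = msbp_sinkhorn_path(C_list, mu_list)
print(plans[0].sum(axis=1))  # approx. the first marginal, [1/6, ..., 1/6]
```

Each projection in this sketch costs matrix-vector products with n-by-n matrices, which is what yields the complexity claimed in the contributions: linear in the number of snapshots and quadratic in the number of scattered samples.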
III-G Predicting Most Likely Distribution
For the ground cost (13) resulting from the sequential information structure (Fig. 1), we utilize (28) to decompose the optimal mass tensor $\boldsymbol{M}^{\mathrm{opt}}$ of (24) into the bimarginal transport plans
$M^{\sigma, \sigma+1} := \mathrm{diag}\left(\boldsymbol{u}_\sigma \odot \boldsymbol{\Phi}_\sigma\right) K_\sigma\, \mathrm{diag}\left(\boldsymbol{u}_{\sigma+1} \odot \boldsymbol{\Psi}_{\sigma+1}\right), \quad \sigma \in [s-1].$   (30)
Further, when the ground cost is squared Euclidean, as we consider here, the maximum likelihood estimate for $\mu_\tau$ in (10) at a query time $\tau$ is (see [6, Sec. 2.2])
$\hat{\mu}_\tau = \sum_{i=1}^{n} \sum_{j=1}^{n} \left[M^{\sigma, \sigma+1}\right]_{ij}\, \delta\left(\boldsymbol{\xi} - \hat{\boldsymbol{\xi}}_\tau^{ij}\right),$   (31)
where $\sigma \in [s-1]$ is such that $t_\sigma \leq \tau \leq t_{\sigma+1}$, and
$\lambda := \frac{\tau - t_\sigma}{t_{\sigma+1} - t_\sigma} \in [0, 1],$   (32a)
$\hat{\boldsymbol{\xi}}_\tau^{ij} := (1 - \lambda)\, \boldsymbol{\xi}_\sigma^{i} + \lambda\, \boldsymbol{\xi}_{\sigma+1}^{j}.$   (32b)
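Given the bimarginal plan between the two snapshots bracketing a query time and the squared Euclidean ground cost, the prediction (31)-(32) places mass [M]_{ij} at the linear interpolation of the corresponding sample pair. A sketch (Python/NumPy; the function and variable names are ours, and the construction follows our reading of [6, Sec. 2.2]):

```python
import numpy as np

def predict_at_time(xi_a, xi_b, M, t_a, t_b, tau):
    """
    Distributional prediction at query time tau in [t_a, t_b], cf. (31)-(32).
    xi_a, xi_b : (n, d) sample arrays at the bracketing snapshot times t_a, t_b.
    M          : (n, n) bimarginal plan between those snapshots.
    Returns (locations, weights): n*n interpolated support points and their masses.
    """
    lam = (tau - t_a) / (t_b - t_a)                         # interpolation fraction
    locations = (1.0 - lam) * xi_a[:, None, :] + lam * xi_b[None, :, :]
    return locations.reshape(-1, xi_a.shape[1]), M.reshape(-1)

# Example: predicted mean at tau as the weighted average of the interpolated points.
rng = np.random.default_rng(4)
xi_a, xi_b = rng.normal(size=(5, 3)), rng.normal(size=(5, 3)) + 1.0
M = np.full((5, 5), 1.0 / 25.0)                             # placeholder plan (uniform)
locs, w = predict_at_time(xi_a, xi_b, M, t_a=0.23, t_b=0.35, tau=0.29)
print(w.sum(), (w[:, None] * locs).sum(axis=0))             # total mass 1, predicted mean
```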
IV Overall Algorithm
The methodology proposed in Sec. III comprises the following three overall steps.
Step 1. Given a collection of contexts (Sec. III-A), execute the control software to generate hardware resource state sample snapshots (Sec. III-B), and thereby empirical marginals as in (9) for all snapshot indices, conditional on each of the context samples.
Step 2. Using data from Step 1, construct the squared Euclidean distance matrices $C_\sigma$ from the source set $\{\boldsymbol{\xi}_\sigma^{i}\}_{i=1}^{n}$ to the destination set $\{\boldsymbol{\xi}_{\sigma+1}^{j}\}_{j=1}^{n}$ for all $\sigma \in [s-1]$. Perform recursions (29) until convergence (error within desired tolerance).
Step 3. Given a query context and time $\tau$, return the most likely distribution $\hat{\mu}_\tau$ using (31).
Remark 6.
For the three steps mentioned above, Step 1 is data generation, Step 2 is probabilistic learning using data from Step 1, and Step 3 is prediction using the learnt model.
V Numerical Case Study
In this Section, we illustrate the application of the proposed method for a vehicle path tracking control software. All along, we provide details for the three steps in Sec. IV.
Control Software. We wrote custom software (Git repo: https://github.com/abhishekhalder/CPS-Frontier-Task3-Collaboration) in the C language implementing a path following nonlinear model predictive controller (NMPC) for a kinematic bicycle model (KBM) [32, 33] of a vehicle with four states $(x, y, v, \psi)$ and two control inputs $(a, \delta)$, given by
$\dot{x} = v\cos(\psi + \beta), \quad \dot{y} = v\sin(\psi + \beta), \quad \dot{v} = a, \quad \dot{\psi} = \frac{v}{\ell_r}\sin\beta,$
where the sideslip angle $\beta := \arctan\left(\frac{\ell_r \tan\delta}{\ell_f + \ell_r}\right)$. The state vector comprises the inertial position $(x, y)$ of the vehicle's center-of-mass, its speed $v$, and the vehicle's inertial heading angle $\psi$. The control vector comprises the acceleration $a$ and the front steering wheel angle $\delta$.
The parameters $\ell_f, \ell_r$ are the distances of the vehicle's center-of-mass to the front and rear axles, respectively.
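To make the closed-loop state update concrete, here is a minimal forward-Euler step of the KBM (Python; the parameter values lf, lr, the step size dt, and the explicit Euler scheme are our own illustrative assumptions, not the repository's implementation):

```python
import numpy as np

def kbm_step(state, control, lf=1.2, lr=1.6, dt=0.01):
    """One forward-Euler step of the kinematic bicycle model.
    state = (x, y, v, psi), control = (a, delta)."""
    x, y, v, psi = state
    a, delta = control
    beta = np.arctan(lr * np.tan(delta) / (lf + lr))  # sideslip angle
    x += dt * v * np.cos(psi + beta)
    y += dt * v * np.sin(psi + beta)
    v += dt * a
    psi += dt * (v / lr) * np.sin(beta)
    return np.array([x, y, v, psi])

print(kbm_step(np.array([0.0, 0.0, 5.0, 0.0]), np.array([0.5, 0.1])))
```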
The NMPC was designed to track a desired path given as a sequence of waypoint tuples, i.e., a sequence of desired positions and speeds (the desired speed profile was numerically estimated from the desired waypoint profiles). At every control step (at most every 100 ms), using the IPOPT nonlinear program solver [34], the NMPC solved an optimization problem to minimize the sum of the crosstrack, heading, and speed tracking errors, along with the magnitude and rapidity of change of the control inputs, over a period of time from 0 to the prediction horizon, subject to control magnitude and slew rate constraints. For formulation details and control performance achieved, see [35]. For implementation and parameter values, we refer the readers to the Git repository referenced above.
While closing the loop, i.e., updating the KBM state with the computed control values, requires minimal computational overhead, the NMPC optimization itself is computationally demanding. In the case where multiple vehicle controllers are available, it is of practical interest to predict the hardware resource usage of the NMPC for one to several control cycles, conditional on the CPS context (Sec. III-A) at a given time. For this we 'profile' the NMPC, meaning we run the software many times for different values of the context (4), measuring the time evolution of the hardware resource state (7). We use these profiles to generate the marginals (9), thus completing Step 1 in Sec. IV.
We next provide details on generating control software execution profiles for our specific case study.
Generating Execution Profiles. To gather the execution profiles for our NMPC control software, we used an Ubuntu 16.04.7 Linux machine with an Intel Xeon E5-2683 v4 CPU.
We leveraged Intel’s Cache Allocation Technology (CAT) [36] and Memguard [37] to control allocation of LLC partitions and memory bandwidth available to the control software, respectively. Both LLC partitions and memory bandwidth were allocated in blocks of 2MB.
Utilizing these resource partitioning mechanisms, we ran our application on an isolated CPU and used the Linux perf tool [38], version 4.9.3, to sample the hardware resource state (7) every 10 ms.
For each run of our application, we set the cache and memory bandwidth to a static allocation and pass as input a path for the NMPC to follow, represented as an array of desired coordinates. We then execute the control software for five uninterrupted "control cycles", wherein the NMPC gets the KBM state, makes a control decision, and updates the KBM state.
We profile over 12 unique desired paths to track and 5 unique cyber context vectors (5), comprising 60 samples of the context (4). Conditional on each of these context samples, we run the software for 500 profiles, for a total of 30,000 profiles.
The sample paths in (6) were all generated using a GP with zero mean and variance 10 [31], and are shown in Fig. 2.
Our five vectorial samples of (5) each specify the number of allocated cache and memory bandwidth partitions; these values were selected to broadly cover the range of possible hardware contexts.
TABLE I: Control cycle end time statistics across the execution profiles.
Control cycle | Mean end time [s] | Standard deviation [s]
---|---|---
#1 | 0.1181 | 0.0076 |
#2 | 0.2336 | 0.0106 |
#3 | 0.3495 | 0.0127 |
#4 | 0.4660 | 0.0143 |
#5 | 0.5775 | 0.0159 |
TABLE II: Wasserstein distances (33) between the corresponding predicted and measured distributions.
Applying the Proposed Algorithm. Given a query context, we determine the closest CPS context for which profiling data is available, using the Euclidean distance between cyber context vectors (5), and the Fréchet distance [41] between physical context curves (6). In this case study, we consider a query context whose closest cyber and physical contexts are among the profiled samples described above; profiling data for this closest context is shown in Fig. 3.
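Selecting the closest profiled context can be sketched as follows (Python/NumPy; the discrete Fréchet distance recursion follows [41], while the function names and the independent selection of the closest cyber and physical samples are our own assumptions):

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P, Q given as (m, 2) arrays."""
    m, k = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    ca = np.full((m, k), np.inf)
    ca[0, 0] = D[0, 0]
    for i in range(m):
        for j in range(k):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                ca[i - 1, j] if i > 0 else np.inf,
                ca[i, j - 1] if j > 0 else np.inf,
                ca[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            ca[i, j] = max(best_prev, D[i, j])
    return ca[-1, -1]

def closest_context(query_cyber, query_path, cyber_samples, path_samples):
    """Pick the profiled cyber and physical context samples closest to the query."""
    i = int(np.argmin([np.linalg.norm(query_cyber - c) for c in cyber_samples]))
    j = int(np.argmin([discrete_frechet(query_path, p) for p in path_samples]))
    return i, j
```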
For each of the profiles, we are given the end times of each of the control cycles. We then determine the statistics of the cycle end times (Fig. 4) and compute the empirical distributions of the hardware resource state at the means (Table I) of the control cycle start/end time boundaries. For empirical distributions at times between cycle boundaries, we let $q$ denote the number of marginals equispaced-in-time between each pair of consecutive cycle boundaries. The snapshot times $t_1 < \cdots < t_s$ are then set from the means in Table I, together with the $q$ equispaced instances interpolated between the mean end times of consecutive control cycles.
Our distributions are as per (9), where $\boldsymbol{\xi}_\sigma^{i}$ is the sample of the hardware resource state (7) at time $t_\sigma$ (within 5 ms) for profile $i$, given the context.
We fix the entropic regularization strength $\varepsilon$ and solve the discrete MSBP (24) with squared Euclidean cost using (29). Fig. 5 shows that the Sinkhorn iterations converge linearly (Sec. III-E). We emphasize here that the computational cost of the proposed algorithm is minimal, thanks to the path structure of the information. In particular, we solve the MSBP (24), whose primal formulation has $n^{s}$ decision variables, in approx. 10 s in MATLAB on an Ubuntu 22.04.2 LTS Linux machine with an AMD Ryzen 7 5800X CPU.
Fig. 6 compares predicted versus observed empirical distributions. Specifically, Fig. 6 shows distributional predictions at times temporally equispaced throughout the duration of the 3rd control cycle, i.e., between the mean end times of the 2nd and the 3rd control cycles in Table I. We used (31) with the adjacent snapshot indices bracketing each prediction time, since every such time satisfies $t_\sigma \leq \tau \leq t_{\sigma+1}$ for some $\sigma \in [s-1]$.
From Fig. 6 it is clear that the measure-valued predictions, while largely accurate, are prone to error in cases where the software resource usage behavior changes in bursts too short to appear in our observations. It follows that increasing the number of snapshots should yield an improvement in overall accuracy. In this example, we achieve this by increasing $q$, the number of intermediate marginals per control cycle. Table II reports the Wasserstein distances between the corresponding predicted and measured distributions:
$W\left(\hat{\mu}_\tau, \mu_\tau\right) := \left(\min_{M \geq 0,\ M\boldsymbol{1} = \hat{\boldsymbol{\mu}}_\tau,\ M^{\top}\boldsymbol{1} = \boldsymbol{\mu}_\tau}\ \sum_{i, j} \left\|\hat{\boldsymbol{\xi}}_\tau^{i} - \boldsymbol{\xi}_\tau^{j}\right\|_2^2 \left[M\right]_{ij}\right)^{1/2}.$   (33)
We computed each of these as the square root of the optimal value of the corresponding linear program that results from specializing (24) with $s = 2$ and $\varepsilon = 0$.
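The linear program just described can be written down directly. A sketch using scipy.optimize.linprog (Python; this generic dense LP construction is our own and is not tuned for large sample counts):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein2(X, wx, Y, wy):
    """2-Wasserstein distance between discrete distributions (X, wx) and (Y, wy)."""
    m, k = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
    # Equality constraints: row sums of the plan equal wx, column sums equal wy.
    A_eq = np.zeros((m + k, m * k))
    for i in range(m):
        A_eq[i, i * k:(i + 1) * k] = 1.0
    for j in range(k):
        A_eq[m + j, j::k] = 1.0
    b_eq = np.concatenate([wx, wy])
    res = linprog(C.reshape(-1), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return float(np.sqrt(res.fun))

rng = np.random.default_rng(5)
X, Y = rng.normal(size=(20, 3)), rng.normal(size=(20, 3)) + 0.5
w = np.full(20, 1 / 20)
print(wasserstein2(X, w, Y, w))
```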
VI Concluding Remarks
We apply recent algorithmic advances in solving the MSBP to learn stochastic hardware resource usage by control software. The learnt model demonstrates accurate nonparametric measure-valued predictions for the joint hardware resource state at a desired time, conditioned on the CPS context. The formulation and its solution come with a maximum likelihood guarantee in the space of probability measures, and the algorithm enjoys a guaranteed linear convergence rate.
References
- [1] G. Bernat, A. Colin, and S. Petters, “WCET analysis of probabilistic hard real-time systems,” in 23rd IEEE Real-Time Systems Symposium, 2002 (RTSS), 2002, pp. 279–288.
- [2] M. Lv, N. Guan, Y. Zhang, Q. Deng, G. Yu, and J. Zhang, “A survey of WCET analysis of real-time operating systems,” in 2009 International Conference on Embedded Software and Systems, 2009, pp. 65–72.
- [3] R. Gifford, N. Gandhi, L. T. X. Phan, and A. Haeberlen, "DNA: Dynamic resource allocation for soft real-time multicore systems," in 2021 IEEE 27th Real-Time and Embedded Technology and Applications Symposium (RTAS). IEEE, 2021, pp. 196–209.
- [4] K. Zhang, J. Sprinkle, and R. G. Sanfelice, “Computationally aware switching criteria for hybrid model predictive control of cyber-physical systems,” IEEE Transactions on Automation Science and Engineering, vol. 13, no. 2, pp. 479–490, 2016.
- [5] P. J. Bushell, “Hilbert’s metric and positive contraction mappings in a Banach space,” Archive for Rational Mechanics and Analysis, vol. 52, pp. 330–338, 1973.
- [6] F. Elvander, I. Haasler, A. Jakobsson, and J. Karlsson, “Multi-marginal optimal transport using partial information with applications in robust localization and sensor fusion,” Signal Processing, vol. 171, p. 107474, 2020.
- [7] I. Haasler, A. Ringh, Y. Chen, and J. Karlsson, "Multimarginal optimal transport with a tree-structured cost and the Schrödinger bridge problem," SIAM Journal on Control and Optimization, vol. 59, no. 4, pp. 2428–2453, 2021.
- [8] C. Léonard, “A survey of the Schrödinger problem and some of its connections with optimal transport,” Discrete and Continuous Dynamical Systems-Series A, vol. 34, no. 4, pp. 1533–1574, 2014.
- [9] Y. Chen, T. T. Georgiou, and M. Pavon, "Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge," SIAM Review, vol. 63, no. 2, pp. 249–313, 2021.
- [10] A. Dembo and O. Zeitouni, Large deviations techniques and applications. Springer Science & Business Media, 2009, vol. 38.
- [11] H. Föllmer, "Random fields and diffusion processes," École d'Été de Probabilités de Saint-Flour XV–XVII, 1985–87, 1988.
- [12] I. N. Sanov, On the probability of large deviations of random variables. United States Air Force, Office of Scientific Research, 1958.
- [13] M. Pavon, G. Trigila, and E. G. Tabak, “The data-driven Schrödinger bridge,” Communications on Pure and Applied Mathematics, vol. 74, no. 7, pp. 1545–1573, 2021.
- [14] I. Csiszár, “I-divergence geometry of probability distributions and minimization problems,” The annals of probability, pp. 146–158, 1975.
- [15] J. M. Borwein, A. S. Lewis, and R. D. Nussbaum, “Entropy minimization, DAD problems, and doubly stochastic kernels,” Journal of Functional Analysis, vol. 123, no. 2, pp. 264–307, 1994.
- [16] L. Rüschendorf and L. Uckelmann, “On the n-coupling problem,” Journal of multivariate analysis, vol. 81, no. 2, pp. 242–258, 2002.
- [17] B. Pass, “Multi-marginal optimal transport: theory and applications,” ESAIM: Mathematical Modelling and Numerical Analysis-Modélisation Mathématique et Analyse Numérique, vol. 49, no. 6, pp. 1771–1790, 2015.
- [18] M. Agueh and G. Carlier, “Barycenters in the Wasserstein space,” SIAM Journal on Mathematical Analysis, vol. 43, no. 2, pp. 904–924, 2011.
- [19] Y. Brenier, “Generalized solutions and hydrostatic approximation of the Euler equations,” Physica D: Nonlinear Phenomena, vol. 237, no. 14-17, pp. 1982–1988, 2008.
- [20] J.-D. Benamou, G. Carlier, and L. Nenna, “Generalized incompressible flows, multi-marginal transport and sinkhorn algorithm,” Numerische Mathematik, vol. 142, pp. 33–54, 2019.
- [21] G. Carlier, A. Oberman, and E. Oudet, “Numerical methods for matching for teams and Wasserstein barycenters,” ESAIM: Mathematical Modelling and Numerical Analysis, vol. 49, no. 6, pp. 1621–1642, 2015.
- [22] G. Buttazzo, L. De Pascale, and P. Gori-Giorgi, “Optimal-transport formulation of electronic density-functional theory,” Physical Review A, vol. 85, no. 6, p. 062502, 2012.
- [23] C. Cotar, G. Friesecke, and C. Klüppelberg, “Density functional theory and optimal transportation with Coulomb cost,” Communications on Pure and Applied Mathematics, vol. 66, no. 4, pp. 548–599, 2013.
- [24] G. Peyré and M. Cuturi, “Computational optimal transport: With applications to data science,” Foundations and Trends® in Machine Learning, vol. 11, no. 5-6, pp. 355–607, 2019.
- [25] M. Cuturi, “Sinkhorn distances: Lightspeed computation of optimal transport,” Advances in neural information processing systems, vol. 26, 2013.
- [26] J. Franklin and J. Lorenz, “On the scaling of multidimensional matrices,” Linear Algebra and its applications, vol. 114, pp. 717–735, 1989.
- [27] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré, “Iterative Bregman projections for regularized transportation problems,” SIAM Journal on Scientific Computing, vol. 37, no. 2, pp. A1111–A1138, 2015.
- [28] H. H. Bauschke and A. S. Lewis, “Dykstras algorithm with Bregman projections: A convergence proof,” Optimization, vol. 48, no. 4, pp. 409–427, 2000.
- [29] S. D. Marino and A. Gerolin, “An optimal transport approach for the Schrödinger bridge problem and convergence of Sinkhorn algorithm,” Journal of Scientific Computing, vol. 85, no. 2, p. 27, 2020.
- [30] G. Carlier, “On the linear convergence of the multimarginal Sinkhorn algorithm,” SIAM Journal on Optimization, vol. 32, no. 2, pp. 786–794, 2022.
- [31] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
- [32] J. Kong, M. Pfeiffer, G. Schildbach, and F. Borrelli, “Kinematic and dynamic vehicle models for autonomous driving control design,” in 2015 IEEE intelligent vehicles symposium (IV). IEEE, 2015, pp. 1094–1099.
- [33] S. Haddad, A. Halder, and B. Singh, “Density-based stochastic reachability computation for occupancy prediction in automated driving,” IEEE Transactions on Control Systems Technology, vol. 30, no. 6, pp. 2406–2419, 2022.
- [34] A. Wächter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” Mathematical Programming, vol. 106, no. 1, pp. 25–57, 2006.
- [35] https://github.com/abhishekhalder/CPS-Frontier-Task3-Collaboration/blob/master/Codes/kbm_sim/Documentation_KinematicBicycle_Controllers.pdf, accessed: 2023-09-29.
- [36] Intel Corporation, “Improving real-time performance by utilizing cache allocation technology,” Apr. 2015, White Paper.
- [37] H. Yun, G. Yao, R. Pellizzoni, M. Caccamo, and L. Sha, “Memory bandwidth management for efficient performance isolation in multi-core platforms,” IEEE Transactions on Computers, vol. 65, no. 2, pp. 562–576, Feb 2016.
- [38] “perf(1) — linux manual page,” https://man7.org/linux/man-pages/man1/perf.1.html, accessed: 2023-09-29.
- [39] A. W. Bowman, “An alternative method of cross-validation for the smoothing of density estimates,” Biometrika, vol. 71, no. 2, pp. 353–360, 1984.
- [40] P. Hall, J. Marron, and B. U. Park, “Smoothed cross-validation,” Probability theory and related fields, vol. 92, no. 1, pp. 1–20, 1992.
- [41] T. Eiter and H. Mannila, “Computing discrete Fréchet distance,” 1994. [Online]. Available: https://api.semanticscholar.org/CorpusID:16010565