
MIT CSAIL, Cambridge MA 02139, USA
{jiakai,rinard}@mit.edu

Verifying Low-dimensional Input Neural Networks via Input Quantization

Kai Jia and Martin Rinard
Abstract

Deep neural networks are an attractive tool for compressing the control policy lookup tables in systems such as the Airborne Collision Avoidance System (ACAS). It is vital to ensure the safety of such neural controllers via verification techniques. The problem of analyzing ACAS Xu networks has motivated many successful neural network verifiers. These verifiers typically analyze the internal computation of neural networks to decide whether a property regarding the input/output holds. The intrinsic complexity of neural network computation renders such verifiers slow to run and vulnerable to floating-point error.

This paper revisits the original problem of verifying ACAS Xu networks. The networks take low-dimensional sensory inputs with training data provided by a precomputed lookup table. We propose to prepend an input quantization layer to the network. Quantization allows efficient verification via input state enumeration, whose complexity is bounded by the size of the quantization space. Quantization is equivalent to nearest-neighbor interpolation at run time, which has been shown to provide acceptable accuracy for ACAS in simulation. Moreover, our technique can deliver exact verification results immune to floating-point error if we directly enumerate the network outputs on the target inference implementation or on an accurate simulation of the target implementation.

Keywords:
Neural network verification · Verification by enumeration · ACAS Xu network verification

1 Introduction

The Airborne Collision Avoidance System (ACAS) is crucial for aircraft safety [11]. This system aims to avoid collision with intruding aircraft via automatically controlling the aircraft or advising a human operator to take action. The ACAS typically takes low-dimensional sensory inputs, including distance, direction, and speed for the intruder and ownship aircraft, and provides a control policy which is a valuation for a set of candidate actions such as “weak left” or “strong right”. Recent work has formulated aircraft dynamics under uncertainties such as advisory response delay as a partially observable Markov decision process for which dynamic programming can be used to compute values for different actions [10]. The value function computed via dynamic programming is often stored in a lookup table with millions of entries [12] that require gigabytes of storage. While this table could, in principle, be used to implement the ACAS, the high storage demand makes it too costly to be embedded in practical flight control systems. This situation has motivated the development of table compression techniques, including block compression with reduced floating-point precision [13] and decision trees [7].

Recently, neural networks have emerged as an efficient alternative for compressing the lookup tables in ACAS Xu (ACAS X for unmanned aircraft) by approximating the value function with small neural networks. Specifically, Julian et al. [7] compresses the two-gigabyte lookup table into 45 neural networks with 2.4MB of storage, where each network handles a partition of the input space.

Katz et al. [9] proposes a set of safety properties for the ACAS Xu networks, such as that a “strong right” advisory should be given when a nearby intruder is approaching from the left. These safety properties have served as a valuable benchmark to motivate and evaluate multiple verification algorithms [9, 19, 15, 17, 1]. Such verifiers typically need to perform exact or conservative analysis of the internal neural network computation [14, 18]. Unfortunately, neural network verification is an NP-Complete problem [9], and therefore the verifiers need exponential running time in the worst case and can be very slow in practice. In particular, Bak et al. [1] recently presented the first verifier that is able to analyze the properties $\phi_{1}$ to $\phi_{4}$ in the ACAS Xu benchmarks with a time limit of 10 minutes for each case, but their verifier still needs 1.7 hours to analyze the property $\phi_{7}$.

In summary, previous techniques perform the following steps to obtain and verify their neural network controllers for ACAS:

  1. Compute a lookup table containing the scores of different actions given sensory states via dynamic programming.

  2. Train neural networks to approximate the lookup table.

  3. In deployed systems, use the neural networks to provide control advisories.

     • At run time, the networks give interpolated scores for states not present in the original lookup table.

     • Neural network verifiers that analyze the internal computation of neural networks are used to check whether the networks meet certain safety specifications.

We propose instead to verify neural networks with low-dimensional inputs, such as the ACAS Xu networks, via input quantization and state enumeration. Specifically, we prepend a quantization layer to the network so that all the internal computation is performed on the discretized input space. Our proposed technique performs the following steps to obtain and verify a quantized neural network:

  1. We take a pretrained network and prepend an input quantization layer to the network. The input quantization should be compatible with the original lookup table, i.e., preserving the grid points in the lookup table.

  2. In deployed systems, sensory inputs are first quantized by the input quantization layer. The original network then computes the scores for the quantized input.

     • At run time, the quantization process is equivalent to nearest-neighbor interpolation.

     • To verify the network for any specification, we enumerate all quantized states within the constraint of the specification and check if the network outputs meet the specification.

Our method provides the following desirable features:

  1. Our method provides acceptable runtime accuracy for ACAS Xu. Our input quantization is equivalent to nearest-neighbor interpolation and gives results identical to those of the original continuous network on the table grid points. Julian et al. [7] has shown that nearest-neighbor interpolation on the lookup table for runtime sensory inputs provides effective collision avoidance advisories in simulation.

  2. Our method enables efficient verification. Verifying the input-quantized networks for any safety specification takes nearly constant time, bounded by the cost of evaluating the network on all the grid points in the quantized space. Multiple specifications can be verified simultaneously by evaluating the network on the grid once and checking the input and output conditions of each property. Our method provides a verification speedup of tens of thousands of times over the ReluVal [19] verifier.

  3. Many existing verifiers do not accurately model floating-point arithmetic due to efficiency considerations, thus giving potentially incorrect verification results [5]. For example, Wang et al. [19] reports that Reluplex [9] occasionally produces false adversarial examples due to floating-point error.

     By contrast, our verification result is exact (i.e., sound and complete) and does not suffer from floating-point error, because we combine input quantization with complete enumeration of the effective input space. Moreover, input quantization allows verifying directly on the target implementation or on an accurate simulation of that implementation, and therefore provides trustworthy safety guarantees for a given neural network inference implementation.

  4. Our technique easily extends to more complicated network architectures, such as continuous-depth models [2]. Our verification only needs an efficient inference implementation of the networks. By contrast, extending other neural network verifiers to new network architectures requires significant effort.

We recommend input quantization for neural networks with low-dimensional inputs as long as the quantization provides sufficient accuracy for the target application and the quantization space is small enough to allow efficient enumeration. This technique enables efficient, exact, and robust verification and provides reliable performance on the deployed platform.

2 Method

We formally describe our input-quantization method. This paper uses bold symbols to represent vectors and regular symbols to represent scalars. A superscript denotes either a derived mathematical object or exponentiation, depending on the context.

Let $\bm{f}:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}$ denote the computation of a neural network on an $n$-dimensional input space, with $n$ being a small number. We propose to use a quantized version of the network for both training and inference, defined as

$\bm{f}^{q}(\bm{x}) := \bm{f}(\bm{q}(\bm{x}))$   (1)

where $\bm{q}(\bm{x})$ is the quantization function such that $\bm{q}(\bm{x})\in S$ with $S$ being a finite-sized set. For a specification $\phi:\forall_{\bm{x}}\,P(\bm{x})\implies R(\bm{f}(\bm{x}))$, where $P(\cdot)$ and $R(\cdot)$ are predicates, we verify $\phi$ regarding $\bm{f}^{q}$ by checking:

$\phi^{q}:\quad \forall\,\bm{x}^{q}\in S_{p} \implies R(\bm{f}(\bm{x}^{q}))$   (2)
where $S_{p} := \{\bm{q}(\bm{x}) \,:\, P(\bm{x})\}$

Since $S_{p}\subseteq S$, the complexity of verifying $\phi^{q}$ is bounded by $|S|$.
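
For concreteness, the check in equation (2) reduces to a finite enumeration. The sketch below uses hypothetical names: `f` is the network as a callable, `S_p` is an iterable over the quantized points satisfying the input predicate, and `R` is the output predicate; it illustrates the enumeration idea rather than our exact implementation.

```python
def verify_quantized(f, S_p, R):
    """Check phi^q: R(f(x)) must hold for every quantized input x in S_p.

    Returns (True, None) if the property holds on the quantized space,
    otherwise (False, x) where x is a counterexample found by enumeration.
    """
    for x in S_p:
        if not R(f(x)):
            return False, x   # a concrete quantized input violating the property
    return True, None
```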

We quantize each dimension of $\bm{x}$ independently via $\bm{q}(\bm{x})=[q_{1}(x_{1})\;\ldots\;q_{n}(x_{n})]$. Note that if some of the dimensions are highly correlated in some application, we can quantize them together to avoid a complete Cartesian product and thus reduce the size of the quantized space.

In many cases, the input space is uniformly quantized. Previous work has utilized uniform input quantization for neural network verification [20, 4] and uniform computation quantization for efficient neural network inference [3]. Given a quantization step $s_{i}$ and a bias value $b_{i}$, we define a uniform quantization function $q_{i}(\cdot)$ as:

$q_{i}(x_{i})=\left\lfloor\frac{x_{i}-b_{i}}{s_{i}}\right\rceil s_{i}+b_{i}$   (3)

where $\lfloor\cdot\rceil$ denotes rounding to the nearest integer.
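
As a small illustration (using NumPy; the step and bias below are hypothetical), equation (3) can be implemented directly:

```python
import numpy as np

def uniform_quantize(x, step, bias):
    """q_i(x_i) = round((x_i - b_i) / s_i) * s_i + b_i, as in equation (3).

    np.rint rounds to the nearest integer (ties to even), matching the
    round-to-nearest notation used above.
    """
    return np.rint((x - bias) / step) * step + bias

# Example: snap an angle onto 41 evenly spaced values between -pi and pi.
theta_step = 2 * np.pi / 40
print(uniform_quantize(-0.2, theta_step, -np.pi))  # approx -0.157, the nearest grid value
```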

The values of $q_{i}(\cdot)$ are essentially determined according to prior knowledge about the target application and may thus be nonuniform. Let $Q_{i}=\{v_{i}^{1},\cdots,v_{i}^{k}\}$ denote the range of $q_{i}(\cdot)$. We use nearest neighbor for nonuniform quantization:

$q_{i}(x_{i})=\operatorname{argmin}_{v_{i}^{j}}|v_{i}^{j}-x_{i}|$   (4)

The ACAS Xu networks are trained on a lookup table $\bm{L}:G\mapsto\mathbb{R}^{m}$, where the domain $G\subset\mathbb{R}^{n}$ is a finite set. We choose the quantization scheme so that the quantization preserves grid points, formally $\forall\bm{x}\in G:\bm{q}(\bm{x})=\bm{x}$. In this way, the training processes of $\bm{f}(\cdot)$ and $\bm{f}^{q}(\cdot)$ are identical. In fact, we directly prepend $\bm{q}(\cdot)$ as an input quantization layer to a pretrained network $\bm{f}(\cdot)$ to obtain $\bm{f}^{q}(\cdot)$. Note that we can use a quantization denser than the grid points in $G$, so that prediction accuracy might improve by using the neural network as an interpolator.
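
One way to realize this in PyTorch is a small module that snaps each input dimension to its nearest grid value (equation (4)) and is prepended to the pretrained network. This is a sketch under assumptions: the module name, the grid tensors, and the `nn.Sequential` composition are illustrative, not our exact code.

```python
import torch
import torch.nn as nn

class InputQuantization(nn.Module):
    """Per-dimension nearest-neighbor quantization onto fixed grids."""

    def __init__(self, grids):
        super().__init__()
        # grids: one 1-D tensor of quantized values per input dimension,
        # chosen to contain the lookup-table grid points of that dimension.
        self.grids = [torch.as_tensor(g, dtype=torch.float32) for g in grids]

    def forward(self, x):                       # x: (batch, n)
        cols = []
        for i, grid in enumerate(self.grids):
            # distance from column i to every grid value, then pick the nearest
            dist = (x[:, i:i + 1] - grid.unsqueeze(0)).abs()
            cols.append(grid[dist.argmin(dim=1)])
        return torch.stack(cols, dim=1)

# Prepending the layer to a pretrained network f (grid tensors are placeholders):
# f_q = nn.Sequential(InputQuantization([rho_grid, theta_grid, psi_grid,
#                                         vown_grid, vint_grid]), f)
```

Because each grid contains the lookup-table values of its dimension, such a layer leaves table grid points unchanged, so the pretrained network needs no retraining.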

3 Experiments

We evaluate our method on checking the safety properties for the ACAS Xu networks [9]. Note that the accuracy of input-quantized networks in deployed systems is acceptable, since the quantization is equivalent to nearest-neighbor interpolation, which has been shown to provide effective collision avoidance advisories in simulation [7].

Experiments in this section focus on evaluating the runtime overhead of input quantization and the actual speed of verification by enumerating quantized states. We train two networks of different sizes to evaluate the scalability of the proposed method.

3.1 Experimental Setup

The horizontal CAS problem takes seven inputs as described in Table 1, and generates one of the five possible advisories: COC (clear of conflict), WL (weak left), WR (weak right), SL (strong left), and SR (strong right).

Table 1: Description of horizontal CAS inputs. The last column describes the values used to generate the lookup table, which are taken from the open-source implementation of HorizontalCAS [6] and Appendix VI of Katz et al. [9].
Symbol | Description | Values in the lookup table
$\rho$ (m) | Distance from ownship to intruder | 32 values between 0 and 56000 ¹
$\theta$ (rad) | Angle to intruder ² | 41 evenly spaced values between $-\pi$ and $\pi$
$\psi$ (rad) | Heading angle of intruder ² | 41 evenly spaced values between $-\pi$ and $\pi$
$v_{own}$ (m/s) | Speed of ownship | $\{50, 100, 150, 200\}$
$v_{int}$ (m/s) | Speed of intruder | $\{50, 100, 150, 200\}$
$\tau$ (sec) | Time until loss of vertical separation | $\{0, 1, 5, 10, 20, 40, 60\}$
$\alpha_{prev}$ | Previous advisory | {COC, WL, WR, SL, SR}

Julian et al. [8] proposes to train a collection of neural networks where each network works with a specific pair of $(\tau,\,\alpha_{prev})$ values, takes the remaining five values as network inputs, and approximates the corresponding scores in the lookup table. Although ACAS only needs to suggest the action with the maximal score, the network is still trained to approximate the original scores in the table instead of directly giving the best action, because the numerical scores are used in a Kalman filter to improve system robustness in the face of state measurement uncertainty [8]. In order to maintain the action recommendation of the original table while reducing score approximation error, Julian et al. [8] adopts an asymmetric loss function that imposes a higher penalty if the network and the lookup table give different action advisories.

Katz et al. [9] proposes a few ACAS Xu safety properties as a sanity check for the networks trained by Julian et al. [8]. These properties have also served as a useful benchmark for many neural network verifiers. Although the pretrained networks of Julian et al. [8] are publicly accessible, the authors told us that they could not provide the training data or the source code due to regulatory reasons. They suggested that we use their open-source HorizontalCAS system [6] to generate the lookup tables to train our own networks. However, HorizontalCAS networks differ from the original ACAS Xu networks in that they have only three inputs, fixing $v_{own}=200$ and $v_{int}=185$. We modified the source code of HorizontalCAS to match the input description in Table 1 so that we can directly use the ReluVal [19] verifier.

We evaluate our method by analyzing the property $\phi_{9}$ proposed in Katz et al. [9], which usually takes the longest time to verify among all the properties for many verifiers [9, 19, 16]. Other properties share a similar form but have different input constraints and output requirements. Note that property $\phi_{9}$ is the most compatible with the open-source HorizontalCAS because the input constraints of other properties are beyond the ranges in Table 1. For example, property $\phi_{1}$ has $v_{own}\geq 1145$, but the quantization scheme of $v_{own}$ for the original ACAS Xu networks is not publicly available.

The specification of $\phi_{9}$ is:

  • Description: Even if the previous advisory was “weak right”, the presence of a nearby intruder will cause the network to output a “strong left” advisory instead.

  • Tested on: the network trained on $\tau=5$ and $\alpha_{prev}=\text{WR}$

  • Input constraints: $2000\leq\rho\leq 7000$, $-0.4\leq\theta\leq -0.14$, $-3.141592\leq\psi\leq -3.141592+0.01$, $100\leq v_{own}\leq 150$, $0\leq v_{int}\leq 150$.

We conduct the experiments on a workstation equipped with two GPUs (NVIDIA Titan RTX and NVIDIA GeForce RTX 2070 SUPER), 128 GiB of RAM, and an AMD Ryzen Threadripper 2970WX processor. We train two neural networks for property $\phi_{9}$ (i.e., with $\tau=5$ and $\alpha_{prev}=\text{WR}$) with PyTorch.

Our small network has five hidden layers with 50 neurons in each layer, and our large network has seven hidden layers with 100 neurons in each layer. We use the ReLU activation.

We implement nearest-neighbor quantization for $\rho$ by directly indexing a lookup table. The greatest common divisor of the differences between adjacent quantized $\rho$ values is 5. Therefore, we precompute a lookup table $\bm{U}$ such that $U_{\lfloor\rho/5\rceil}$ is the nearest neighbor of $\rho$ in the set of quantized values. We use the torch.index_select operator provided by PyTorch to gather elements from the lookup table in a batched manner. The other network inputs use uniform quantization as described in Table 1, implemented according to equation (3).
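
A sketch of this indexing scheme is shown below; the quantized $\rho$ values are placeholders rather than the actual 32-entry grid, and the helper names are illustrative.

```python
import torch

# Placeholder grid of quantized rho values (the real grid has 32 nonuniform entries
# between 0 and 56000).
rho_values = torch.tensor([0., 25., 100., 225., 400.])

# Precompute U so that U[round(rho / 5)] is the nearest neighbor of rho,
# exploiting the fact that 5 divides every difference between adjacent values.
max_idx = int(rho_values.max().item()) // 5
candidates = torch.arange(max_idx + 1, dtype=torch.float32) * 5.0   # 0, 5, 10, ...
nearest = (candidates.unsqueeze(1) - rho_values.unsqueeze(0)).abs().argmin(dim=1)
U = rho_values[nearest]                                              # lookup table

def quantize_rho(rho):                      # rho: 1-D tensor of raw distances
    idx = torch.round(rho / 5.0).long().clamp(0, max_idx)
    return torch.index_select(U, 0, idx)    # batched gather from the lookup table

print(quantize_rho(torch.tensor([12.0, 310.0])))   # tensor([  0., 225.])
```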

3.2 Experimental Results

Table 2: Accuracies achieved by the networks evaluated on the lookup table. For comparison, Julian and Kochenderfer [6] reports an accuracy of 97.9% for networks trained only with three out of the five inputs (they fixed $v_{own}=200$ and $v_{int}=185$). This table shows that our network achieves sufficient accuracy for practical use.
Metric | Small network | Large network
Policy accuracy | 96.87% | 98.54%
Score $\ell_{1}$ error | 0.052 | 0.026
Score $\ell_{2}$ error | $1.3\times 10^{-3}$ | $3.3\times 10^{-4}$
Table 3: Comparing verification time (in seconds) for the property $\phi_{9}$ with two methods: the ReluVal verifier [19], which runs on multiple cores, and exhaustive enumeration in the quantized input space on a single CPU core. This table shows that verification by enumerating quantized input states is significantly faster in our case and also more scalable with respect to network size.

Verification method | Small network | Large network
ReluVal [19] | 0.622 | 171.239
Input quantization - specific ¹ | 0.002 | 0.002
Input quantization - all ² | 0.384 | 0.866

  ¹ Network is evaluated on the 60 input states that fall within the input constraint of $\phi_{9}$.

  ² Network is evaluated on all the 860,672 input states in a batched manner. This time is an upper bound for verifying any first-order specification of the form $\forall_{\bm{x}}\,P(\bm{x})\implies R(\bm{f}(\bm{x}))$, ignoring the time spent evaluating the predicates $P(\cdot)$ and $R(\cdot)$.

Let $\bm{y}_{i}\in\mathbb{R}^{5}$ (resp. $\hat{\bm{y}}_{i}\in\mathbb{R}^{5}$) denote the scores given by the network (resp. the original lookup table) for the five candidate actions on the $i^{\text{th}}$ lookup table entry. We consider three accuracy measurements, assuming a uniform distribution of the table index $i$:

  • Policy accuracy is the probability that the network recommends the same action as the original lookup table. Formally, it is $P(\operatorname{argmax}\bm{y}_{i}=\operatorname{argmax}\hat{\bm{y}}_{i})$.

  • Score $\ell_{1}$ error measures the $\ell_{1}$ error of the approximated scores, defined as $E(\|\bm{y}_{i}-\hat{\bm{y}}_{i}\|_{1})$, where $\|\bm{x}\|_{1} := \sum_{i}|x_{i}|$.

  • Score $\ell_{2}$ error measures the $\ell_{2}$ error of the approximated scores, defined as $E(\|\bm{y}_{i}-\hat{\bm{y}}_{i}\|_{2})$, where $\|\bm{x}\|_{2} := \sqrt{\sum_{i}x_{i}^{2}}$.
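
A minimal sketch of these metrics, assuming `y` and `y_hat` are NumPy arrays of shape `(num_entries, 5)` holding the network and lookup-table scores (the variable names are hypothetical):

```python
import numpy as np

def table_metrics(y, y_hat):
    """Policy accuracy and mean l1 / l2 score errors over all lookup-table entries."""
    policy_acc = np.mean(np.argmax(y, axis=1) == np.argmax(y_hat, axis=1))
    l1_err = np.mean(np.abs(y - y_hat).sum(axis=1))
    l2_err = np.mean(np.sqrt(((y - y_hat) ** 2).sum(axis=1)))
    return policy_acc, l1_err, l2_err
```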

Table 2 presents the accuracies achieved by our networks, showing that our training achieves results comparable to those of the HorizontalCAS system [6].

To verify the networks, we prepend them with an input quantization layer that implements the quantization scheme given in Table 1. To verify any specification or set of specifications, we evaluate the network on all the 860,672 points in the quantized space and check whether each input/output pair meets the specification(s). Evaluating the network on the grid points takes 0.384 seconds for the small network and 0.866 seconds for the large one. We evaluate the network on multiple inputs in a batched manner to benefit from the optimized numerical computing routines included in PyTorch. Adding the quantization layer incurs about 2% runtime overhead. We do not perform any performance engineering and use the off-the-shelf implementation provided by PyTorch. Our verification speed can be further improved by using multiple CPU cores or a GPU.
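
The batched enumeration can be sketched as follows; for the grids in Table 1 this amounts to 32 × 41 × 41 × 4 × 4 = 860,672 points. The grid tensors, the batch size, and the output index assumed for “strong left” are placeholders rather than the exact values used in our implementation.

```python
import torch

@torch.no_grad()
def evaluate_grid(f_q, grids, batch_size=65536):
    """Evaluate the quantized network on every point of the quantization grid in batches."""
    points = torch.cartesian_prod(*grids)                 # (num_points, 5)
    scores = torch.cat([f_q(chunk) for chunk in points.split(batch_size)])
    return points, scores

def check_property(points, scores, input_pred, output_pred):
    """Check forall x: P(x) => R(f(x)) by masking the grid points that satisfy P."""
    mask = input_pred(points)                             # Boolean tensor over grid points
    return bool(output_pred(scores[mask]).all())

# Example shape of the phi_9 check (SL_INDEX = 3 is an assumed output ordering):
# holds = check_property(points, scores,
#     lambda x: (x[:, 0] >= 2000) & (x[:, 0] <= 7000)
#               & (x[:, 1] >= -0.4) & (x[:, 1] <= -0.14)
#               & (x[:, 2] >= -3.141592) & (x[:, 2] <= -3.131592)
#               & (x[:, 3] >= 100) & (x[:, 3] <= 150) & (x[:, 4] <= 150),
#     lambda y: y.argmax(dim=1) == SL_INDEX)
```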

We also compare our method with ReluVal [19] on verifying the property $\phi_{9}$. The input constraint of $\phi_{9}$ covers only 60 states in the quantized space. Therefore, we only need to check that the network gives the “strong left” advisory for all 60 states to verify $\phi_{9}$. As shown in Table 3, input quantization significantly reduces the verification time compared to the ReluVal solver.

4 Conclusion

This paper advocates input quantization for the verification of neural networks with low-dimensional inputs. Our experiments on the ACAS Xu networks show that this technique is significantly faster and more scalable than verifiers that analyze the internal computation of neural networks. Moreover, our method does not suffer from floating-point discrepancies between the verifier and the network inference implementation. In general, our method applies to deterministic floating-point programs that take low-dimensional inputs, as long as the target application tolerates input quantization and enumerating all the quantized values takes acceptable time.

References

  • Bak et al. [2020] Bak, S., Tran, H.D., Hobbs, K., Johnson, T.T.: Improved geometric path enumeration for verifying ReLU neural networks. In: International Conference on Computer Aided Verification, pp. 66–96, Springer (2020)
  • Chen et al. [2018] Chen, R.T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 31, Curran Associates, Inc. (2018), URL https://proceedings.neurips.cc/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf
  • Gholami et al. [2021] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M.W., Keutzer, K.: A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630 (2021)
  • Jia and Rinard [2020a] Jia, K., Rinard, M.: Efficient exact verification of binarized neural networks. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1782–1795, Curran Associates, Inc. (2020a), URL https://proceedings.neurips.cc/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Paper.pdf
  • Jia and Rinard [2020b] Jia, K., Rinard, M.: Exploiting verified neural networks via floating point numerical error. arXiv preprint arXiv:2003.03021 (2020b)
  • Julian and Kochenderfer [2019] Julian, K.D., Kochenderfer, M.J.: Guaranteeing safety for neural network-based aircraft collision avoidance systems. In: 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), pp. 1–10, IEEE (2019)
  • Julian et al. [2019] Julian, K.D., Kochenderfer, M.J., Owen, M.P.: Deep neural network compression for aircraft collision avoidance systems. Journal of Guidance, Control, and Dynamics 42(3), 598–608 (2019)
  • Julian et al. [2016] Julian, K.D., Lopez, J., Brush, J.S., Owen, M.P., Kochenderfer, M.J.: Policy compression for aircraft collision avoidance systems. In: 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), pp. 1–10 (2016), doi:10.1109/DASC.2016.7778091
  • Katz et al. [2017] Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: An efficient SMT solver for verifying deep neural networks. In: International Conference on Computer Aided Verification, pp. 97–117, Springer (2017)
  • Kochenderfer et al. [2015] Kochenderfer, M.J., Amato, C., Chowdhary, G., How, J.P., Reynolds, H.J.D., Thornton, J.R., Torres-Carrasquillo, P.A., Ure, N.K., Vian, J.: Optimized Airborne Collision Avoidance, pp. 249–276. MIT Press (2015)
  • Kochenderfer and Chryssanthacopoulos [2011] Kochenderfer, M.J., Chryssanthacopoulos, J.: Robust airborne collision avoidance through dynamic programming. Massachusetts Institute of Technology, Lincoln Laboratory, Project Report ATC-371 130 (2011)
  • Kochenderfer and Chryssanthacopoulos [2010] Kochenderfer, M.J., Chryssanthacopoulos, J.P.: A decision-theoretic approach to developing robust collision avoidance logic. In: 13th International IEEE Conference on Intelligent Transportation Systems, pp. 1837–1842 (2010), doi:10.1109/ITSC.2010.5625063
  • Kochenderfer and Monath [2013] Kochenderfer, M.J., Monath, N.: Compression of optimal value functions for Markov decision processes. In: 2013 Data Compression Conference, pp. 501–501 (2013), doi:10.1109/DCC.2013.81
  • Liu et al. [2019] Liu, C., Arnon, T., Lazarus, C., Strong, C., Barrett, C., Kochenderfer, M.J.: Algorithms for verifying deep neural networks. arXiv preprint arXiv:1903.06758 (2019)
  • Singh et al. [2019a] Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: Boosting robustness certification of neural networks. In: ICLR (Poster) (2019a)
  • Singh et al. [2019b] Singh, G., Gehr, T., Püschel, M., Vechev, M.: Boosting robustness certification of neural networks. In: International Conference on Learning Representations (2019b)
  • Tran et al. [2020] Tran, H.D., Yang, X., Manzanas Lopez, D., Musau, P., Nguyen, L.V., Xiang, W., Bak, S., Johnson, T.T.: NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: Lahiri, S.K., Wang, C. (eds.) Computer Aided Verification, pp. 3–17, Springer International Publishing, Cham (2020), ISBN 978-3-030-53288-8
  • Urban and Miné [2021] Urban, C., Miné, A.: A review of formal methods applied to machine learning. arXiv preprint arXiv:2104.02466 (2021)
  • Wang et al. [2018] Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1599–1614 (2018)
  • Wu and Kwiatkowska [2020] Wu, M., Kwiatkowska, M.: Robustness guarantees for deep neural networks on videos. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 308–317 (2020)