SOAR: Second-Order Adversarial Regularization
Abstract
Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples. In this work, we propose a novel regularization approach as an alternative. To derive the regularizer, we formulate the adversarial robustness problem under the robust optimization framework and approximate the loss function using a second-order Taylor series expansion. Our proposed second-order adversarial regularizer (SOAR) is an upper bound based on the Taylor approximation of the inner-max in the robust optimization objective. We empirically show that the proposed method significantly improves the robustness of networks against $\ell_\infty$ and $\ell_2$ bounded perturbations generated using cross-entropy-based PGD on CIFAR-10 and SVHN.
1 Introduction
Adversarial training (Szegedy et al., 2013) is the standard approach for improving the robustness of deep neural networks (DNN), or any other model, against adversarial examples. It is a data augmentation method that adds adversarial examples to the training set and updates the network with the newly added data points. Intuitively, this procedure encourages the DNN not to make the same mistakes against an adversary. By adding sufficiently many adversarial examples, the network gradually becomes robust to the attack it was trained on. One of the challenges with such a data augmentation approach is the tremendous amount of additional data required for learning a robust model. Schmidt et al. (2018) show that under a Gaussian data model, the sample complexity of robust generalization is $\sqrt{d}$ times larger than that of standard generalization. They further suggest that current datasets (e.g., CIFAR-10) may not be large enough to attain high adversarial accuracy. A data augmentation procedure, however, is an indirect way to improve the robustness of a DNN. Our proposed alternative is to define a regularizer that penalizes DNN parameters prone to attacks. Minimizing the regularized loss function leads to estimators robust to adversarial examples.
Adversarial training and our proposal can both be formulated in terms of the robust optimization framework for adversarial robustness (Ben-Tal et al., 2009; Madry et al., 2018; Wong and Kolter, 2018; Shaham et al., 2018; Sinha et al., 2018). In this formulation, one is seeking to improve the worst-case performance of the model, where the performance is measured by a particular loss function $\ell$. Adversarial training can be understood as approximating such a worst-case loss by finding the corresponding worst-case data point with some specific attack technique. Our proposed method is more direct. It is based on approximating the loss function using its second-order Taylor series expansion, i.e., $\ell(x+\delta) \approx \ell(x) + \nabla_x \ell(x)^\top \delta + \frac{1}{2}\delta^\top \nabla_x^2 \ell(x)\,\delta$, and then upper bounding the worst-case loss using the expansion terms. By considering both the gradient and Hessian of the loss function with respect to (w.r.t.) the input, we can provide a more accurate approximation to the worst-case loss. In our derivations, we consider both $\ell_\infty$ and $\ell_2$ attacks. We call the method Second-Order Adversarial Regularizer (SOAR), not to be confused with the Soar cognitive architecture (Laird, 2012). In the course of the development of SOAR, we make the following contributions:
• We show that an over-parameterized linear regression model can be severely affected by an adversary, even though its population loss is zero. We robustify it with a regularizer that exactly mimics the adversarial training. This suggests that regularization can be used instead of adversarial training (Section 2).
• Inspired by such a possibility, we develop a regularizer which upper bounds the worst-case effect of an adversary under an approximation of the loss. In particular, we derive SOAR, which approximates the inner maximization of the robust optimization formulation based on the second-order Taylor series expansion of the loss function (Section 4).
• We study SOAR in the logistic regression setting and reveal challenges with regularization using Hessian w.r.t. the input. We develop a simple initialization method to circumvent the issue (Section 4.1).
• We empirically show that SOAR significantly improves the robustness of networks against $\ell_\infty$ and $\ell_2$ bounded perturbations generated using cross-entropy-based PGD, under both white-box and black-box settings, on CIFAR-10 and SVHN (Sections 5.1 and 5.2).
• Our result on the state-of-the-art attack method AutoAttack (Croce and Hein, 2020) reveals SOAR's vulnerability. We include a thorough empirical study by investigating how SOAR-regularized models behave under different strengths of AutoAttack (different $\varepsilon$), as well as how they react to different parts of the AutoAttack algorithm. Based on the results, we discuss several hypotheses on SOAR's vulnerability (Section 5.3).
2 Linear Regression with an Over-parametrized Model
This section shows that for over-parameterized linear models, gradient descent (GD) finds a solution that has zero population loss, but is prone to attacks. It also shows that one can avoid this problem by defining an appropriate regularizer. Hence, we do not need adversarial training to robustify such a model. This simple illustration motivates the development of our method in the next sections. We only briefly report the main results here, and defer the derivations to Appendix A.
Consider a linear model $f_w(x) = \langle x, w \rangle$ with $x, w \in \mathbb{R}^d$. Suppose that $w^* = (1, 0, \dots, 0)^\top$ and the distribution of $x$ is such that it is confined on the $1$-dimensional subspace $\mathcal{X}_1 = \{(x_1, 0, \dots, 0) : x_1 \in \mathbb{R}\}$. This setup can be thought of as using an over-parameterized model that has many irrelevant dimensions with data that is only covering the relevant dimension of the input space. This is a simplified model of the situation when the data manifold has a dimension lower than the input space. We consider the squared error pointwise loss $\ell(x; w) = \frac{1}{2}|\langle x, w \rangle - \langle x, w^* \rangle|^2$. Denote the residual by $r(x) = \langle x, w - w^* \rangle$, and the population loss by $L(w) = \mathbb{E}[\ell(X; w)]$.
Suppose that we initialize the weights as $W(0) \sim N(0, \sigma^2 \mathbf{I}_{d\times d})$, and use GD on the population loss, i.e., $W(t+1) = W(t) - \beta \nabla_w L(W(t))$. It is easy to see that the partial derivatives w.r.t. $w_2, \dots, w_d$ are all zero, i.e., no weight adaptation happens in those dimensions. With a proper choice of learning rate $\beta$, we get that the asymptotic solution is $W(\infty) = (1, W_2(0), \dots, W_d(0))$. That is, the initial random weights on dimensions $2, \dots, d$ do not change.
We make two observations. The first is that $L(W(\infty)) = 0$, i.e., the population loss is zero. So from the perspective of training under the original loss, we are finding the optimal solution. The second observation is that this model is vulnerable to adversarial examples. An FGSM-like attack that perturbs $x$ by $\Delta x = \varepsilon\,\mathrm{sign}(r(x))\,(0, \mathrm{sign}(W_2), \dots, \mathrm{sign}(W_d))$ (for $\varepsilon > 0$) has a population loss of $\frac{\varepsilon^2}{2}\big(\sum_{i=2}^{d}|W_i(0)|\big)^2$ under the adversary at the asymptotic solution $W(\infty)$, which is of order $\varepsilon^2 \sigma^2 d^2$ in expectation. When the dimension $d$ is large, this loss is quite significant. The culprit is obviously that GD is not forcing the initial weights to go to zero when there is no data from the irrelevant and unused dimensions. This simple problem illustrates how the optimizer and an over-parameterized model might interact and lead to a solution that is prone to attacks.
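The two observations are easy to verify numerically. The sketch below is our own illustration (the dimension, initialization scale, perturbation size, and the standard normal choice of $x_1$ are assumed values, not taken from the text): GD leaves the irrelevant weights untouched, the clean population loss goes to zero, and the FGSM-like perturbation of the irrelevant dimensions inflates the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, eps, lr, steps = 100, 1.0, 0.1, 0.1, 500   # assumed values for illustration

w_star = np.zeros(d); w_star[0] = 1.0                # only the first dimension is relevant
w = rng.normal(0.0, sigma, size=d)                   # random initialization W(0)

# GD on the population loss: only the first coordinate receives a gradient,
# because the data has no variation along dimensions 2..d.
for _ in range(steps):
    grad = np.zeros(d)
    grad[0] = (w[0] - w_star[0]) * 1.0               # E[X_1^2] = 1 for standard normal x_1
    w -= lr * grad

x1 = rng.normal(0.0, 1.0, size=10_000)               # samples of the 1-d data
X = np.zeros((10_000, d)); X[:, 0] = x1
residual = X @ (w - w_star)
print("clean population loss ~", 0.5 * np.mean(residual ** 2))   # essentially zero

# FGSM-like attack on the irrelevant dimensions only
delta = eps * np.sign(residual)[:, None] * np.sign(w)[None, :]
delta[:, 0] = 0.0
adv_residual = (X + delta) @ w - X @ w_star
print("adversarial loss ~", 0.5 * np.mean(adv_residual ** 2))    # grows roughly like eps^2 d^2
```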
An effective solution is to regularize the loss such that the weights of the irrelevant dimensions go to zero. Generic regularizers such as ridge and Lasso regression lead to a biased estimate of $w^*$, and thus one is motivated to define a regularizer that is specially designed for improving adversarial robustness. Bishop (1995) showed the close connection between training with random perturbations and Tikhonov regularization. Inspired by this idea, we develop a regularizer that mimics the adversary itself. For this FGSM-like adversary, the population loss at the perturbed point is
$L_{\Delta x}(w) \;\triangleq\; \mathbb{E}\big[\ell(X + \Delta X; w)\big] \;=\; L(w) \;+\; \varepsilon\,\mathbb{E}\big[|r(X)|\big]\sum_{i=2}^{d}|w_i| \;+\; \frac{\varepsilon^2}{2}\Big(\sum_{i=2}^{d}|w_i|\Big)^{2}. \qquad (1)$
Minimizing $L_{\Delta x}$ is equivalent to minimizing the loss of the model at the perturbed point $x + \Delta x$. The regularizer incorporates the effect of the adversary in exact form.
Nonetheless, there are two limitations of this approach. The first is that it is designed for a particular choice of attack, an FGSM-like one. We would like a regularizer that is robust to a larger class of attacks. The second is that this regularizer is designed for a linear model and the squared error loss. How can we design a regularizer for more complicated models, such as DNNs? We address these questions by formulating the problem of adversarial robustness within the robust optimization framework (Section 3), and propose an approach to approximately solve it (Section 4).
3 Robust Optimization Formulation
Designing an adversarially robust estimator can be formulated as a robust optimization problem (Huang et al., 2015; Madry et al., 2018; Wong and Kolter, 2018; Shaham et al., 2018). To describe it, let us introduce our notation first. Consider an input space $\mathcal{X} \subset \mathbb{R}^d$, an output space $\mathcal{Y}$, and a parameter (or hypothesis) space $\Theta$, parameterizing a model $f_\theta: \mathcal{X} \to \mathcal{Y}$. In the supervised learning scenario, we are given a data distribution $\mathcal{D}$ over pairs of examples $(x, y)$. Given the prediction $f_\theta(x)$ and a target value $y$, the pointwise loss function of the model is denoted by $\ell(x, y; \theta)$. Given the distribution of data, one can define the population loss as $\mathcal{L}(\theta) = \mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(x, y; \theta)]$. The goal of the standard supervised learning problem is to find a $\theta \in \Theta$ that minimizes the population loss. A generic approach to do this is through empirical risk minimization (ERM). Explicit or implicit regularization is often used to control the complexity of the hypothesis to avoid over- or under-fitting (Hastie et al., 2009).
As shown in the previous section, it is possible to find a parameter that minimizes the loss through ERM, but leads to a model that is vulnerable to adversarial examples. Incorporating a notion of robustness into the model requires reconsidering the training objective. It is also important to formalize and constrain the power of the adversary, so we understand the strength of the attack to which the model is resistant. This can be specified by limiting the adversary to modify any input $x$ to $x + \delta$ with $\delta \in \Delta$. Commonly used constraints are $\epsilon$-balls w.r.t. the $\ell_p$-norms, though other constraint sets have been used too (Wong et al., 2019b). This goal can be formulated as a robust optimization problem where the objective is to minimize the adversarial population loss given some perturbation constraint $\Delta$:
$\min_{\theta \in \Theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\delta \in \Delta}\; \ell(x + \delta, y; \theta)\Big]. \qquad (2)$
We have an interplay between two goals: 1) the inner-max term looks for the worst-case loss around the input, while 2) the outer-min term optimizes the hypothesis by minimizing such a loss.
Note that solving the inner-max problem is often computationally difficult, so one may approximate it with a surrogate loss obtained from a particular attack. Adversarial training and its variants (Szegedy et al., 2013; Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2018; Wong et al., 2019a) can be intuitively understood as approximations of this min-max problem via different surrogate losses.
As shown in Section 2, one can design a regularizer that provides the exact value of the loss function at the attacked point for a particular choice of model, loss function, and adversary, cf. (1). Under the robust optimization framework, the regularizer and adversarial training are two realizations of the inner-max objective in (2), but using such a regularizer relieves us from using a separate inner optimization procedure, as is done in adversarial training. Motivated by that example and the robust optimization framework discussed here, we develop a regularizer that can be understood as an upper bound on the worst-case value of the loss at an attacked point under a second-order approximation of the loss function.
4 Second-Order Adversarial Regularizer (SOAR)
The main idea of SOAR is to approximate the loss function using the second-order Taylor series expansion around an input and then solve the inner maximization term of the robust optimization formulation (2) using the approximated form. We show this for both $\ell_\infty$ and $\ell_2$ attacks; the same idea can be applied to other norms. We describe the crucial steps of the derivation in this section, and defer details to Appendix B.
Assuming that the loss is twice-differentiable, we can approximate the loss function around input $x$ by the second-order Taylor expansion
$\ell(x + \delta) \;\approx\; \tilde\ell_{\text{2nd}}(x + \delta) \;\triangleq\; \ell(x) + \nabla_x \ell(x)^\top \delta + \frac{1}{2}\,\delta^\top \nabla_x^2 \ell(x)\,\delta. \qquad (3)$
For brevity, we drop $y$ and $\theta$, and use $\ell(x)$ to denote $\ell(x, y; \theta)$. Let us focus on the $\ell_\infty$ attacks, where the constraint set in (2) is $\Delta = \{\delta : \|\delta\|_\infty \le \varepsilon\}$ for some $\varepsilon > 0$. We focus on the $\ell_\infty$ attack because of its popularity, but we also derive the formulation for the $\ell_2$ attacks.
As a warm-up, let us solve the inner optimization problem by considering the first-order Taylor series expansion. We have
$\max_{\|\delta\|_\infty \le \varepsilon}\; \ell(x) + \nabla_x \ell(x)^\top \delta \;=\; \ell(x) + \varepsilon\,\|\nabla_x \ell(x)\|_1. \qquad (4)$
The term $\varepsilon\|\nabla_x \ell(x)\|_1$ defines the First-Order Adversarial Regularizer (FOAR). This is similar to the regularizer introduced by Simon-Gabriel et al. (2019) with the $\ell_\infty$ choice of perturbation set. For a general $\ell_p$-attack with $p \ge 1$, we have $\varepsilon\|\nabla_x \ell(x)\|_q$ with $q$ satisfying $\frac{1}{p} + \frac{1}{q} = 1$. We shall empirically evaluate the FOAR-based approach (for the $\ell_\infty$ attack), but our focus is going to be on solving the inner maximization problem based on the second-order Taylor expansion:
$\max_{\|\delta\|_\infty \le \varepsilon}\; \ell(x) + \nabla_x \ell(x)^\top \delta + \frac{1}{2}\,\delta^\top H \delta, \qquad (5)$
for $H \triangleq \nabla_x^2 \ell(x)$. The second-order expansion in (3) can be rewritten as
$\tilde\ell_{\text{2nd}}(x + \delta) \;=\; \ell(x) + \frac{1}{2}\,\delta'^\top H' \delta' - \frac{1}{2}, \qquad (6)$
where $\delta' = [\delta^\top, 1]^\top$ and $H' = \begin{bmatrix} H & \nabla_x\ell(x) \\ \nabla_x\ell(x)^\top & 1 \end{bmatrix}$. This allows us to derive an upper bound on the expansion terms using the characteristics of a single Hessian-like term $H'$. Note that $\delta'$ is a $(d+1)$-dimensional vector and $H'$ is a $(d+1)\times(d+1)$ matrix. We need to find an upper bound on $\delta'^\top H' \delta'$ under the attack constraint.
For the $\ell_\infty$ attack, solving this maximization problem is not as easy as in (4), since the Boolean quadratic programming problem in formulation (5) is NP-hard. But we can relax the constraint set and find an upper bound for the maximum. Note that with $\delta \in \mathbb{R}^d$, an $\ell_\infty$-ball of size $\varepsilon$ is enclosed by an $\ell_2$-ball of size $\sqrt{d}\,\varepsilon$ with the same centre. Therefore, we can upper bound the inner maximization by
$\max_{\|\delta\|_\infty \le \varepsilon}\; \tilde\ell_{\text{2nd}}(x+\delta) \;\le\; \max_{\|\delta\|_2 \le \sqrt{d}\,\varepsilon}\; \tilde\ell_{\text{2nd}}(x+\delta), \qquad (7)$
which, after substituting the second-order Taylor series expansion, leads to an $\ell_2$-constrained quadratic optimization problem
$\max_{\|\delta\|_2 \le \sqrt{d}\,\varepsilon}\; \ell(x) + \frac{1}{2}\,\delta'^\top H' \delta' - \frac{1}{2}, \qquad (8)$
with $\delta'$ and $H'$ as before. The $\ell_2$ version of SOAR does not require this extra relaxation step, and we have $\varepsilon$ instead of $\sqrt{d}\,\varepsilon$ in (8). A more detailed discussion of the above relaxation procedure is included in Appendix B.2.
Proposition 1. Let $\tilde\ell_{\text{2nd}}$ be the second-order Taylor expansion (3) of the loss around $x$, and let $H'$ be defined as in (6). For a random vector $z \sim N(0, \mathbf{I}_{(d+1)\times(d+1)})$, we have
$\max_{\|\delta\|_2 \le \sqrt{d}\,\varepsilon}\; \tilde\ell_{\text{2nd}}(x+\delta) \;\le\; \ell(x) + \frac{d\varepsilon^2 + 1}{2}\sqrt{\mathbb{E}\big[\|H'z\|_2^2\big]} - \frac{1}{2}. \qquad (9)$
This result upper bounds the maximum of the second-order approximation over an $\ell_2$ ball with radius $\sqrt{d}\,\varepsilon$, and relates it to an expectation of a Hessian-vector product. Note that there is a simple correspondence between (1) and the regularized loss in (9). The latter can be understood as an upper bound on the worst-case damage of an adversary under a second-order approximation of the loss. For the $\ell_2$ attack, the same line of argument leads to $\varepsilon^2 + 1$ instead of $d\varepsilon^2 + 1$.
Let us take a closer look at $H'z$. By decomposing $z = [z_{1:d}^\top, z_{d+1}]^\top$, we get
$H'z = \begin{bmatrix} H z_{1:d} + z_{d+1}\,\nabla_x\ell(x) \\ \nabla_x\ell(x)^\top z_{1:d} + z_{d+1} \end{bmatrix}.$
The term $H z_{1:d}$ can be computed using a Finite Difference (FD) approximation. Note that $\mathbb{E}[\|z_{1:d}\|_2] \approx \sqrt{d}$ for our Normally distributed $z$. To ensure that the approximation direction has the same magnitude, we use the normalized $\hat z = z_{1:d}/\|z_{1:d}\|_2$ instead, and use the approximation below
$H\hat z \;\approx\; \frac{\nabla_x\ell(x + h\hat z) - \nabla_x\ell(x)}{h}. \qquad (10)$
To summarize, the SOAR regularizer evaluated at $x$, with a direction $z$, and FD step size $h$ is
$R_{\mathrm{SOAR}}(x; z, h) \;=\; \frac{d\varepsilon^2 + 1}{2}\,\big\|H'z\big\|_2, \qquad (11)$
where the Hessian-vector product inside $H'z$ is computed using the FD approximation (10).
The expectation in (9) can then be approximated by taking multiple samples of $z$ drawn from $N(0, \mathbf{I})$. These samples would be concentrated around the expectation. One can show that $\big\|\,\|H'z\|_2 - \|H'\|_F\,\big\|_{\psi_2} \le C\,\|H'\|_2$, where $C$ is a constant and $\|H'\|_2$ is the $\ell_2$-induced norm (see Theorem 6.3.2 of Vershynin 2018). In practice, we observed that taking more than one sample of $z$ does not provide significant improvement in adversarial robustness, and we include an empirical study on the effect of sample sizes in Appendix E.4.
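As a concrete illustration of the FD approximation in (10), the following PyTorch sketch estimates the Hessian-vector product $H\hat z$ with two gradient evaluations. It is only a sketch under our assumptions: `model`, `loss_fn`, and the step size value are placeholders, not the paper's implementation.

```python
import torch

def fd_hessian_vector_product(model, loss_fn, x, y, h=0.01):
    """Approximate H z_hat = (grad l(x + h*z_hat) - grad l(x)) / h, cf. Eq. (10).
    `model`, `loss_fn`, and `h` are placeholders for illustration."""
    z = torch.randn_like(x)                                   # z_{1:d} ~ N(0, I)
    z_hat = z / z.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))

    def input_grad(inp):
        inp = inp.clone().detach().requires_grad_(True)
        loss = loss_fn(model(inp), y)                         # assumed to return a scalar
        return torch.autograd.grad(loss, inp)[0]

    g_x = input_grad(x)
    g_xh = input_grad(x + h * z_hat)
    hvp = (g_xh - g_x) / h                                    # finite-difference Hessian-vector product
    return g_x, z_hat, hvp
```

During training, the gradient calls would additionally need `create_graph=True` so that a regularizer built from this quantity can itself be differentiated w.r.t. the network parameters (double backpropagation).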
Before we discuss the remaining details, recall that we fully robustified the model with an appropriate regularizer in Section 2. Note that the maximizer of the loss based on formulation (2) is exactly the FGSM direction, and (1) shows the population loss with our FGSM-like choice of $\Delta x$. To further motivate a second-order approach, note that we can obtain the first two terms in (1) with a first-order regularizer such as FOAR, and we recover the exact form with a second-order formulation as in (5).
Next, we study SOAR in the simple logistic regression setting, which shows potential failure of the regularizer and reveals why we might observe gradient masking. Based on that insight, we provide the remaining details of the method afterwards in Section 4.1.
4.1 Avoiding Gradient Masking
Consider a linear classifier $f(x) = \sigma(\langle w, x\rangle)$ with $x, w \in \mathbb{R}^d$, where $x$ and $w$ are the input and the weights, and $\sigma(\cdot)$ is the sigmoid function. Note that the output of $f$ has the interpretation of being a Bernoulli distribution. For the cross-entropy loss function $\ell(x) = -y\log f(x) - (1-y)\log(1 - f(x))$, the gradient w.r.t. the input is $\nabla_x\ell(x) = \big(\sigma(\langle w, x\rangle) - y\big)\,w$ and the Hessian w.r.t. the input is $\nabla_x^2\ell(x) = \sigma(\langle w, x\rangle)\big(1 - \sigma(\langle w, x\rangle)\big)\,w w^\top$.
The second-order Taylor series expansion (3) with the gradient and Hessian evaluated at $x$ is
$\tilde\ell_{\text{2nd}}(x+\delta) \;=\; \ell(x) + r\, w^\top\delta + \frac{\alpha}{2}\,(w^\top\delta)^2, \qquad (12)$
where $r = \sigma(\langle w, x\rangle) - y$ is the residual term describing the difference between the predicted probability and the correct label, and $\alpha = \sigma(\langle w, x\rangle)\big(1 - \sigma(\langle w, x\rangle)\big)$. Note that $\alpha$ can be interpreted as how confident the model is about its prediction (correct or incorrect), and is close to $0$ whenever the classifier is predicting a value close to $0$ or $1$. With this linear model, the maximization (8) becomes
$\max_{\|\delta\|_2 \le \sqrt{d}\,\varepsilon}\; \ell(x) + r\, w^\top\delta + \frac{\alpha}{2}(w^\top\delta)^2 \;=\; \ell(x) + \sqrt{d}\,\varepsilon\,|r|\,\|w\|_2 + \frac{d\varepsilon^2}{2}\,\alpha\,\|w\|_2^2.$
The regularization term is encouraging the norm of $w$ to be small, weighted according to the residual $|r|$ and the uncertainty $\alpha$.
Consider a linear interpolation of the cross-entropy loss from $x$ to a perturbed input $x + \delta$. Specifically, we consider $\ell(x + t\delta)$ for $t \in [0, 1]$. Previous work has empirically shown that the value of the loss behaves logistically as $t$ increases from 0 to 1 (Madry et al., 2018). In such a case, since there is very little curvature at $x$, if we use the Hessian exactly at $x$, it leads to an inaccurate approximation of the loss value at $x + \delta$. Consequently, we have a poor approximation of the inner-max, and the derived regularization will not be effective.
For the approximation in (12), this issue corresponds to the scenario in which the classifier is very confident about the clean input at $x$. Standard training techniques such as minimizing the cross-entropy loss optimize the model such that it returns the correct label with a high confidence. Whenever the classifier is correct with a high confidence, both $r$ and $\alpha$ will be close to zero. As a result, the effect of the regularizer diminishes, i.e., the weights are no longer regularized. In such a case, the Taylor series expansion, computed using the gradient and Hessian evaluated at $x$, becomes an inaccurate approximation to the loss, and hence its maximizer is not a good solution to the inner maximization problem.
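A tiny numerical check of this effect (our own sketch with made-up weights, not from the paper): as the classifier's confidence on a correctly labeled point approaches 1, both the input gradient $r\,w$ and the input Hessian $\alpha\,ww^\top$ of the cross-entropy loss vanish, so any regularizer built from them vanishes as well.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w = np.array([2.0, -1.0, 0.5])     # made-up weights for a linear classifier
x = np.array([1.0, 0.0, 0.0])
y = 1                              # correct label

for scale in [1.0, 5.0, 25.0]:     # scaling w drives the confidence sigma(<w, x>) toward 1
    p = sigmoid(scale * w @ x)
    r = p - y                      # residual
    alpha = p * (1.0 - p)          # "uncertainty" sigma(1 - sigma)
    grad_norm = np.linalg.norm(r * scale * w)                 # ||r w||, input gradient norm
    hess_norm = alpha * np.linalg.norm(scale * w) ** 2        # ||alpha w w^T||_F
    print(f"confidence={p:.4f}  |grad|={grad_norm:.2e}  |Hessian|_F={hess_norm:.2e}")
```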
Note that this does not mean that one cannot use a Taylor series expansion to approximate the loss. In fact, by the mean value theorem, there exists an $\bar x$ on the segment between $x$ and $x + \delta$ such that the second-order Taylor expansion is exact: $\ell(x+\delta) = \ell(x) + \nabla_x\ell(x)^\top\delta + \frac{1}{2}\delta^\top\nabla_x^2\ell(\bar x)\,\delta$. The issue is that if we compute the Hessian at $x$ (instead of at $\bar x$), our approximation might not be very good whenever the curvature profile of the loss function at $x$ is drastically different from the one at $\bar x$.
More importantly, a method relying on gradient masking can be easily circumvented (Athalye et al., 2018). Our early experimental results also indicated that gradient masking occurred with SOAR when the gradient and Hessian were evaluated at $x$. In particular, we observe that SOAR with zero-initialization leads to models with nearly 100% confidence in their predictions, leading to an ineffective regularizer. The result is reported in Table 5 in Appendix D.
This suggests a heuristic to improve the quality of SOAR: evaluate the gradient and Hessian, through the FD approximation (10), at a less confident point in the $\varepsilon$-ball around $x$. We found that evaluating the gradient and Hessian at a 1-step PGD adversary successfully circumvents the issue (see Algorithm 1). We compare other initializations in Table 5 in Appendix D. To ensure that the regularization still corresponds to the original $\varepsilon$-ball around $x$, the perturbation budget is split between the PGD1 initialization and the SOAR regularizer.
Based on this heuristic, the regularized pointwise objective for a data point $(x, y)$ is
$\ell_{\mathrm{SOAR}}(x, y; \theta) \;=\; \ell(x, y; \theta) + R_{\mathrm{SOAR}}(x_{\mathrm{init}}; z, h), \qquad (13)$
where $z \sim N(0, \mathbf{I})$ and the point $x_{\mathrm{init}}$ is initialized at the PGD1 adversary. Algorithm 1 summarizes SOAR for a single training data point. We include the full training procedure in Appendix C. Moreover, we include additional discussions and experiments on gradient masking in Appendix E.11.
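To make the pieces above concrete, the following PyTorch sketch shows one way a SOAR-regularized batch objective could be assembled: a 1-step PGD initialization, the FD Hessian-vector product of (10), and the bound of Proposition 1 with a single sampled direction. The constants (`eps`, `h`, `init_step`, the clipping value) and the choice of evaluating the classification loss at the clean input are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import torch
import torch.nn.functional as F

def input_grad(model, x, y, create_graph=False):
    # Summed cross-entropy so per-example input gradients are not rescaled by the batch size.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y, reduction="sum")
    return torch.autograd.grad(loss, x, create_graph=create_graph)[0]

def soar_regularized_loss(model, x, y, eps=8 / 255, h=0.01, init_step=8 / 255):
    """Sketch of a SOAR-style regularized objective for a batch (x, y); values are placeholders."""
    d = x[0].numel()
    shape = (-1,) + (1,) * (x.dim() - 1)

    # 1-step PGD initialization: evaluate the expansion at a less confident point (Section 4.1).
    x_init = (x + init_step * input_grad(model, x, y).sign()).clamp(0, 1).detach()

    # Sample an augmented Gaussian direction z in R^{d+1}; normalize its first d coordinates.
    z = torch.randn(x.size(0), d, device=x.device)
    z_hat = (z / z.norm(dim=1, keepdim=True)).view_as(x)
    z_last = torch.randn(x.size(0), device=x.device)

    # Finite-difference Hessian-vector product, Eq. (10); create_graph enables double backprop.
    g = input_grad(model, x_init, y, create_graph=True)
    hvp = (input_grad(model, x_init + h * z_hat, y, create_graph=True) - g) / h

    # Assemble H'z = [Hz + z_{d+1} g ; g^T z + z_{d+1}] and its norm, cf. Eq. (11).
    top = (hvp + z_last.view(shape) * g).flatten(1)
    bottom = (g * z_hat).flatten(1).sum(dim=1) + z_last
    hz_norm = torch.sqrt(top.pow(2).sum(dim=1) + bottom.pow(2))

    reg = 0.5 * (d * eps ** 2 + 1) * hz_norm.mean()
    reg = torch.clamp(reg, max=10.0)   # clamp the regularizer value (a placeholder for the clipping in Appendix E.7)

    return F.cross_entropy(model(x), y) + reg
```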
4.2 Related Work
Several regularization-based alternatives to adversarial training have been proposed. In this section, we briefly review them. Simon-Gabriel et al. (2019) studied regularization under the first-order Taylor approximation. Their proposed regularizer for the $\ell_\infty$ perturbation set is the same as FOAR. Qin et al. (2019) propose local linearity regularization (LLR), where the local linearity measure is defined by the maximum error of the first-order Taylor approximation of the loss. LLR minimizes the local linearity measure, as well as the magnitude of the projection of the gradient along the direction corresponding to the local linearity measure. It is motivated by the observation of flat loss surfaces during adversarial training.
CURE (Moosavi-Dezfooli et al., 2019) is the closest to our method. They empirically observed that adversarial training leads to a reduction in the magnitude of the eigenvalues of the Hessian w.r.t. the input. Thus, they proposed directly minimizing the curvature of the loss function to mimic the effect of adversarial training. An important advantage of our proposed method is that SOAR is derived from a complete second-order Taylor approximation of the loss, while CURE exclusively focuses on the second-order term for the estimation of the curvature. Note that the final optimization objective in SOAR, FOAR, LLR and CURE contains derivatives w.r.t. the input of the DNN; such a technique was first introduced to improve generalization by Drucker and Le Cun (1992) as double backpropagation.
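To make the double-backpropagation connection concrete, here is a minimal FOAR-style sketch (our illustration, not the exact objective of any cited method): the $\ell_1$ norm of the input gradient is added to the loss, and `create_graph=True` lets the penalty's gradient flow back to the parameters.

```python
import torch
import torch.nn.functional as F

def foar_style_loss(model, x, y, eps=8 / 255):
    """First-order (input-gradient-norm) regularization via double backpropagation; a sketch only."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True keeps the graph of the input gradient so that the penalty
    # itself can be differentiated w.r.t. the model parameters (double backprop).
    grad_x = torch.autograd.grad(loss, x, create_graph=True)[0]
    penalty = eps * grad_x.flatten(1).abs().sum(dim=1).mean()   # eps * ||grad_x l||_1, cf. Eq. (4)
    return loss + penalty
```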
Another related line of adversarial regularization methods does not involve an approximation of the loss function or robust optimization. TRADES (Zhang et al., 2019) introduces a regularization term that penalizes the difference between the output of the model on a training data point and its corresponding adversarial example. MART (Wang et al., 2020) reformulates the training objective by explicitly differentiating between the misclassified and correctly classified examples. Ding et al. (2019b) present another regularization approach that leverages adaptive margin maximization (MMA) on correctly classified examples to robustify the model.
5 Experiments
In this section, we verify the effectiveness of the proposed regularization method against PGD attacks on CIFAR-10. Our experiments show that training with SOAR leads to significant improvements in adversarial robustness against PGD attacks based on the cross-entropy loss, under both the black-box and white-box settings. We also discover that the SOAR-regularized model is more vulnerable under the state-of-the-art AutoAttack (Croce and Hein, 2020). We focus on $\ell_\infty$ attacks in this section and defer evaluations on $\ell_2$ attacks to Appendix E.5. Additionally, we provide a detailed discussion and evaluations on the SVHN dataset in Appendix E.6.
We train ResNet-10 (He et al., 2016) on the CIFAR-10 dataset. The baseline methods consist of: (1) Standard: training with no adversarially perturbed data; (2) ADV: training with 10-step PGD adversarial examples; (3) TRADES; (4) MART; and (5) MMA. Empirical studies in Madry et al. (2018) and Wang et al. (2020) reveal that their approaches benefit from increased model capacity to achieve higher adversarial robustness; as such, we also include WideResNet (Zagoruyko and Komodakis, 2016) for all baseline methods. We were not able to reproduce the results of two closely related works, CURE and LLR, which we discuss further in Appendix E.1. In Appendix E.13, we compare SOAR and FOAR with different initializations. FOAR achieves the best adversarial robustness using the PGD1 initialization, so we only present this variation of FOAR in this section.
The optimization procedure is described in detail in Appendix E.2. Note that all methods in this section are trained to defend against $\ell_\infty$-norm attacks with $\varepsilon = 8/255$, as this is a popular choice of $\varepsilon$ in the literature. The PGD adversaries discussed in Sections 5.1 and 5.2 are generated with $\varepsilon = 8/255$ and a step size of $2/255$ (pixel values are normalized to $[0, 1]$). PGD20-50 denotes 20-step PGD attacks with 50 restarts. In Section 5.3, we compare SOAR with baseline methods on AutoAttack (Croce and Hein, 2020) adversaries with a varying $\varepsilon$. Additionally, results of all methods on ResNet-10 are obtained by averaging over 3 independently initialized and trained models, where the standard deviations are reported in Appendix E.10. We use the pretrained WideResNet model provided in the public repository of each method. Lastly, discussions on challenges (i.e., difficulty training from scratch, catastrophic overfitting, BatchNorm, etc.) we encountered while implementing SOAR and our solutions (i.e., using a pretrained model, clipping the regularizer gradient, early stopping, etc.) are included in Appendix E.7.
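For reference, the cross-entropy-based $\ell_\infty$ PGD attack used in the white-box evaluation can be written in a few lines of PyTorch. The loop below is a generic sketch (random start, sign-of-gradient steps, projection onto the $\varepsilon$-ball); the actual experiments use the AdverTorch implementation, and `model` here is a placeholder.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step_size=2 / 255, n_iter=20):
    """Generic L-infinity PGD with a random start; eps and step_size follow the setup above."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()                       # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()  # project onto the ball
    return x_adv
```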
Method | Standard | FGSM | PGD20 | PGD100 | PGD200 | PGD1000 | PGD20-50 | |
WideResNet | Standard | |||||||
ADV | ||||||||
TRADES | ||||||||
MART | ||||||||
MMA | ||||||||
ResNet | Standard | |||||||
ADV | ||||||||
TRADES | ||||||||
MART | ||||||||
MMA | ||||||||
FOAR | ||||||||
SOAR | ||||||||
5.1 Robustness against PGD White-Box Attacks
Before making the comparison between SOAR and the baselines in Table 1, note that FOAR achieves a nontrivial robust accuracy against PGD20 attacks. Despite its uncompetitive performance, this shows that approximating the robust optimization formulation based on a Taylor series expansion is a reasonable approach. Furthermore, this justifies our extension to a second-order approximation, as the first-order term alone is not sufficient. Lastly, we observe that training with SOAR significantly improves the adversarial robustness against all PGD attacks, leading to higher robustness under all $k$-step PGD attacks on the ResNet model. SOAR remains competitive compared to baseline methods trained on the high-capacity WideResNet architecture.
Method | SimBA | PGD20-R | PGD20-W | PGD1000-R | PGD1000-W | |
WideResNet | ADV | |||||
TRADES | ||||||
MART | ||||||
MMA | ||||||
ResNet | ADV | |||||
TRADES | ||||||
MART | ||||||
MMA | ||||||
FOAR | ||||||
SOAR | ||||||
5.2 Robustness against Black-Box Attacks
Many defences only reach an illusion of robustness through methods collectively known as gradient masking (Athalye et al., 2018). These methods often fail against attacks generated from an undefended, independently trained model, known as transfer-based black-box attacks. Recent works (Tramèr et al., 2017; Ilyas et al., 2019) have proposed hypotheses for the success of transfer-based black-box attacks. In our evaluation, the transferred attacks are PGD20 and PGD1000 perturbations generated from two source models, ResNet and WideResNet, denoted by the suffixes -R and -W respectively. The source models are trained separately from the defence models on the unperturbed training set. Additionally, Tramer et al. (2020) recommend score-based black-box attacks such as SimBA (Guo et al., 2019). They are more relevant in real-world applications where gradient information is not accessible, and are empirically shown to be more effective than transfer-based attacks. Because they are based solely on the confidence scores of the model, score-based attacks are resistant to gradient masking. All black-box attacks in this section are constrained at $\varepsilon = 8/255$.
SOAR achieves the best robustness among all methods trained on ResNet, as shown in Table 2. Compared with the baselines trained on WideResNet, SOAR remains the most robust model against the transferred PGD20-W and PGD1000-W attacks, approaching its standard accuracy on unperturbed data. Note that all defence methods are substantially more vulnerable to the score-based SimBA attack. The SOAR-regularized model is the most robust method against SimBA.
Method | Untargeted APGD-CE | Targeted APGD-DLR | Targeted FAB | Square Attack |
ADV | ||||
TRADES | ||||
MART | ||||
MMA | ||||
FOAR | ||||
SOAR | ||||
5.3 Robustness against AutoAttack
We also evaluated SOAR against the state-of-the-art attack method AutoAttack (Croce and Hein, 2020). In this section, we focus on the $\ell_\infty$-bounded AutoAttack; similar results with the $\ell_2$-bounded attack are included in Appendix E.5. We noticed that SOAR shows greater vulnerability to AutoAttack compared to the attacks discussed in Sections 5.1 and 5.2. AutoAttack consists of an ensemble of four attacks: two parameter-free versions of the PGD attack (APGD-CE and APGD-DLR), a white-box fast adaptive boundary (FAB) attack (Croce and Hein, 2019), and a score-based black-box Square Attack (Andriushchenko et al., 2020). Notice that the major difference between the two PGD attacks is the loss they are based on: APGD-CE is based on the cross-entropy loss, similar to Madry et al. (2018), and APGD-DLR is based on the logit difference, similar to Carlini and Wagner (2017).
To better understand the source of SOAR's vulnerability, we tested it against the four attacks individually. First, we observed that the result against untargeted APGD-CE is similar to the one shown in Section 5.1. This is expected because both attacks are formulated based on cross-entropy-based PGD. However, there is a considerable degradation in the accuracy of SOAR against targeted APGD-DLR and targeted FAB. At $\varepsilon = 8/255$, SOAR is most vulnerable to targeted APGD-DLR, with a low robust accuracy. To further investigate SOAR's robustness against AutoAttack, we tested with different values of $\varepsilon$ to verify whether SOAR can at least improve robustness against attacks with smaller $\varepsilon$. We observed that at smaller $\varepsilon$ the robustness improvement of SOAR becomes more consistent. Interestingly, we also noticed that a model with better robustness at one value of $\varepsilon$ does not guarantee better robustness at another, as is the case for Square Attack on ADV and SOAR.
Combining the results with the four attacks and with different $\varepsilon$, we provide three hypotheses on the vulnerability of SOAR. First, SOAR might overfit to a particular type of attack: adversarial examples generated based on the cross-entropy loss. APGD-DLR is based on the logit difference and FAB is based on finding minimal perturbation distances, both of which are very different from the cross-entropy loss. Second, SOAR might rely on gradient masking to a certain extent, and thus PGD with the cross-entropy loss has difficulty finding adversaries even though they exist. This also suggests that the results with black-box attacks might be insufficient to conclusively eliminate the possibility of gradient masking. Third, since SOAR provides a more consistent robustness improvement at smaller $\varepsilon$, the techniques discussed in Section 4 may not completely address the problems arising from the second-order approximation. This makes the upper bound on the inner-max problem loose, and hence SOAR improves robustness against attacks with a smaller $\varepsilon$ than the one it was formulated with.
Finally, we emphasize that this should not rule SOAR out as a failed defence. Previous work shows that a mechanism based on gradient masking can be completely circumvented, resulting in near-zero accuracy against non-gradient-based attacks (Athalye et al., 2018). Our results on SimBA and Square Attack show that this is not the case with SOAR, even at $\varepsilon = 8/255$, and thus the robustness improvement cannot be due only to gradient masking. Overall, we think SOAR's vulnerability to AutoAttack is an interesting observation that requires further investigation.
6 Conclusion
This work proposed SOAR, a regularizer that improves the robustness of DNNs to adversarial examples. SOAR was obtained using the second-order Taylor series approximation of the loss function w.r.t. the input, and by approximately solving the inner maximization of the robust optimization formulation. We showed that training with SOAR leads to significant improvements in adversarial robustness under $\ell_\infty$ and $\ell_2$ attacks. This is only one step in designing better regularizers to improve adversarial robustness. Several directions deserve further study, the most prominent being SOAR's vulnerability to AutoAttack. Another future direction is to better understand the loss surface of DNNs in order to select a good point around which an accurate Taylor approximation can be made. This is important for designing regularizers that are not affected by gradient masking.
Acknowledgments
We acknowledge the funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada CIFAR AI Chairs program. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
References
- Andriushchenko et al. (2020) Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, pages 484–501. Springer, 2020.
- Athalye et al. (2018) Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning, pages 274–283, 2018.
- Beasley (1998) John E Beasley. Heuristic algorithms for the unconstrained binary quadratic programming problem. London, England, 1998.
- Ben-Tal et al. (2009) Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust optimization, volume 28. Princeton University Press, 2009.
- Bengio et al. (2009) Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009.
- Bishop (1995) Chris M Bishop. Training with noise is equivalent to tikhonov regularization. Neural computation, 7(1):108–116, 1995.
- Cai et al. (2018) Qi-Zhi Cai, Chang Liu, and Dawn Song. Curriculum adversarial training. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 3740–3747, 2018.
- Carlini and Wagner (2017) Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pages 39–57. IEEE, 2017.
- Croce and Hein (2019) Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. arXiv preprint arXiv:1907.02044, 2019.
- Croce and Hein (2020) Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv preprint arXiv:2003.01690, 2020.
- Ding et al. (2019a) Gavin Weiguang Ding, Kry Yik Chau Lui, Xiaomeng Jin, Luyu Wang, and Ruitong Huang. On the sensitivity of adversarial robustness to input data distributions. In International Conference on Learning Representations, 2019a.
- Ding et al. (2019b) Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Mma training: Direct input space margin maximization through adversarial training. In International Conference on Learning Representations, 2019b.
- Ding et al. (2019c) Gavin Weiguang Ding, Luyu Wang, and Xiaomeng Jin. AdverTorch v0.1: An adversarial robustness toolbox based on pytorch. arXiv preprint arXiv:1902.07623, 2019c.
- Drucker and Le Cun (1992) Harris Drucker and Yann Le Cun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991–997, 1992.
- Galloway et al. (2019) Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, and Graham W Taylor. Batch normalization is a cause of adversarial vulnerability. arXiv preprint arXiv:1905.02161, 2019.
- Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- Guo et al. (2019) Chuan Guo, Jacob Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Weinberger. Simple black-box adversarial attacks. In International Conference on Machine Learning, pages 2484–2493, 2019.
- Hastie et al. (2009) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd edition). Springer, 2009.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- Huang et al. (2015) Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvári. Learning with a strong adversary. arXiv preprint arXiv:1511.03034, 2015.
- Ilyas et al. (2019) Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems, pages 125–136, 2019.
- Ioffe and Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.
- Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
- Laird (2012) John E Laird. The Soar cognitive architecture. MIT press, 2012.
- Lima and Grossmann (2017) Ricardo M Lima and Ignacio E Grossmann. On the solution of nonconvex cardinality boolean quadratic programming problems: a computational study. Computational Optimization and Applications, 66(1):1–37, 2017.
- Madry et al. (2018) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
- Moosavi-Dezfooli et al. (2019) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9078–9086, 2019.
- Qin et al. (2019) Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, and Pushmeet Kohli. Adversarial robustness through local linearization. In Advances in Neural Information Processing Systems, pages 13824–13833, 2019.
- Sabour et al. (2015) Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J Fleet. Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122, 2015.
- Schmidt et al. (2018) Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pages 5014–5026, 2018.
- Shaham et al. (2018) Uri Shaham, Yutaro Yamada, and Sahand Negahban. Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing, 307:195–204, 2018.
- Simon-Gabriel et al. (2019) Carl-Johann Simon-Gabriel, Yann Ollivier, Leon Bottou, Bernhard Schölkopf, and David Lopez-Paz. First-order adversarial vulnerability of neural networks and input dimension. In International Conference on Machine Learning, pages 5809–5817, 2019.
- Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Sinha et al. (2018) Aman Sinha, Hongseok Namkoong, and John Duchi. Certifying some distributional robustness with principled adversarial training. In International Conference on Learning Representations, 2018.
- Sitawarin et al. (2020) Chawin Sitawarin, Supriyo Chakraborty, and David Wagner. Improving adversarial robustness through progressive hardening. arXiv preprint arXiv:2003.09347, 2020.
- Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- Tramèr et al. (2017) Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017.
- Tramer et al. (2020) Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. arXiv preprint arXiv:2002.08347, 2020.
- Vershynin (2018) Roman Vershynin. High-dimensional probability. Cambridge University Press: An Introduction with Applications in Data Science, 2018.
- Wang and Zhang (2019) Jianyu Wang and Haichao Zhang. Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks. In Proceedings of the IEEE International Conference on Computer Vision, pages 6629–6638, 2019.
- Wang et al. (2020) Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In International Conference on Learning Representations, 2020.
- Wong and Kolter (2018) Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, pages 5286–5295, 2018.
- Wong et al. (2019a) Eric Wong, Leslie Rice, and J Zico Kolter. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, 2019a.
- Wong et al. (2019b) Eric Wong, Frank Schmidt, and Zico Kolter. Wasserstein adversarial examples via projected sinkhorn iterations. In International Conference on Machine Learning, pages 6808–6817, 2019b.
- Yang et al. (2020) Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, and Kamalika Chaudhuri. Adversarial robustness through local lipschitzness. arXiv preprint arXiv:2003.02460, 2020.
- Zagoruyko and Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
- Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pages 7472–7482, 2019.
Appendix A Derivations of Section 2: Linear Regression with an Over-parametrized Model
We derive the results reported in Section 2 in more detail here. Recall that we consider a linear model $f_w(x) = \langle x, w \rangle$ with $x, w \in \mathbb{R}^d$. We suppose that $w^* = (1, 0, \dots, 0)^\top$ and the distribution of $x$ is such that it is confined on the $1$-dimensional subspace $\mathcal{X}_1 = \{(x_1, 0, \dots, 0) : x_1 \in \mathbb{R}\}$. So the density of $x$ is $p(x) = p_1(x_1)\,\delta(x_2)\cdots\delta(x_d)$, where $\delta(\cdot)$ is Dirac's delta function.
We initialize the weights at the first time step as $W(0) \sim N(0, \sigma^2\mathbf{I}_{d\times d})$, and use GD to find the minimizer of the population loss. The partial derivatives of the population loss are
$\frac{\partial L(w)}{\partial w_i} = \mathbb{E}\big[r(X)\,X_i\big], \qquad i = 1, \dots, d,$
where $r(x) = \langle x, w - w^*\rangle$. Notice that the gradient in dimension $1$ is non-zero, unless $\mathbb{E}[r(X)X_1] = 0$. Assuming that $\mathbb{E}[X_1^2] > 0$, this implies that the gradient won't be zero unless $w_1 = 1$. On the other hand, the gradients in dimensions $2, \dots, d$ are all zero, so GD does not change the value of $W_i$ for $i = 2, \dots, d$. Therefore, under the proper choice of learning rate $\beta$, we get that the asymptotic solution of GD is $W(\infty) = (1, W_2(0), \dots, W_d(0))$. It is clear that $L(W(\infty)) = 0$, i.e., the population loss is zero, as noted already as our first observation in that section.
Also note that we can easily attack this model by perturbing $x$ by $\Delta x$. The pointwise loss at $x + \Delta x$ is
$\ell(x + \Delta x; w) = \frac{1}{2}\big(r(x) + \langle \Delta x, w\rangle\big)^2 = \ell(x; w) + r(x)\,\langle \Delta x, w\rangle + \frac{1}{2}\langle \Delta x, w\rangle^2.$
With the choice of $\Delta x = \varepsilon\,\mathrm{sign}(r(x))\,(0, \mathrm{sign}(w_2), \dots, \mathrm{sign}(w_d))$ (for $\varepsilon > 0$) and the asymptotic solution $W(\infty)$, an FGSM-like attack (Goodfellow et al., 2014) at the learned weight leads to the pointwise loss of
$\ell(x + \Delta x; W(\infty)) = \frac{\varepsilon^2}{2}\Big(\sum_{i=2}^{d}|W_i(0)|\Big)^2,$
since $r(x) = 0$ at $W(\infty)$.
We comment that our choice of $x + \Delta x$ is not from the same distribution as the training data. This choice aligns with the hypotheses in Ding et al. (2019a); Schmidt et al. (2018) that adversarial examples come from a shifted data distribution; however, techniques such as feature adversaries (Sabour et al., 2015) focus on designing perturbations to be close to the input distribution. We stress that the goal here is to illustrate the loss under this particular attack.
In order to get a better sense of this loss, we compute its expected value w.r.t. the randomness of the weight initialization. We have that (expanding the square and using independence)
$\mathbb{E}_{W(0)}\Big[\Big(\sum_{i=2}^{d}|W_i(0)|\Big)^2\Big] \;=\; \sum_{i=2}^{d}\mathbb{E}\big[W_i(0)^2\big] + \sum_{i\neq j}\mathbb{E}\big[|W_i(0)|\big]\,\mathbb{E}\big[|W_j(0)|\big],$
where we used the independence of the r.v. $W_i(0)$ and $W_j(0)$ when $i \neq j$. The expectation $\mathbb{E}[W_i(0)^2] = \sigma^2$ is the variance of $W_i(0)$. The r.v. $|W_i(0)|$ has a folded normal distribution, and its expectation is $\sigma\sqrt{2/\pi}$. Thus, we get that
$\mathbb{E}_{W(0)}\Big[\Big(\sum_{i=2}^{d}|W_i(0)|\Big)^2\Big] \;=\; (d-1)\,\sigma^2 + (d-1)(d-2)\,\frac{2\sigma^2}{\pi},$
for $d \ge 2$. The expected population loss of the specified attack at the asymptotic solution is
$\mathbb{E}_{W(0)}\big[L_{\Delta x}(W(\infty))\big] \;=\; \frac{\varepsilon^2\sigma^2}{2}\Big[(d-1) + \frac{2(d-1)(d-2)}{\pi}\Big] \;=\; O(\varepsilon^2\sigma^2 d^2).$
The dependence of this loss on dimension $d$ is significant, showing that the learned model is quite vulnerable to attacks. We note that the conclusions would not change much with initial distributions other than the Normal distribution.
An effective solution is to regularize the loss to encourage the weights of the irrelevant dimensions to go to zero. A generic regularizer is to use the $\ell_2$-norm of the weights, i.e., to formulate the problem as a ridge regression. In that case, the regularized population loss is
$L^{\text{ridge}}_\lambda(w) \;=\; L(w) + \frac{\lambda}{2}\,\|w\|_2^2.$
One can see that the solution of $\min_w L^{\text{ridge}}_\lambda(w)$ is $w_1 = \frac{\mathbb{E}[X_1^2]}{\mathbb{E}[X_1^2] + \lambda}$ and $w_i = 0$ for $i = 2, \dots, d$.
The use of this generic regularizer seems reasonable in this example, as it enforces the weights for dimensions $2$ to $d$ to become zero. Its only drawback is that it leads to a biased estimate of $w^*$. The bias, however, can be made small with a small choice for $\lambda$. We can obtain a similar conclusion for the $\ell_1$ regularizer (Lasso).
As such, one has to define a regularizer that is specially designed for improving adversarial robustness. Bishop (1995) showed the strong connection between training with random perturbations and Tikhonov regularization. Inspired by this idea, we develop a regularizer that mimics the adversary itself. Let us assume that a particular adversary attacks the model by adding $\Delta x = \varepsilon\,\mathrm{sign}(r(x))\,(0, \mathrm{sign}(w_2), \dots, \mathrm{sign}(w_d))$. The population loss at the perturbed point is
$L_{\Delta x}(w) \;=\; L(w) + \varepsilon\,\mathbb{E}\big[|r(X)|\big]\sum_{i=2}^{d}|w_i| + \frac{\varepsilon^2}{2}\Big(\sum_{i=2}^{d}|w_i|\Big)^2,$
where $r(x) = \langle x, w - w^*\rangle$. (A similar, but more complicated, result would hold if the adversary could also attack the first dimension.) This is the same objective as (1) reported in Section 2. Note that minimizing $L_{\Delta x}$ is equivalent to minimizing the loss of the model at the perturbed point $x + \Delta x$. The regularizer incorporates the effect of the adversary in exact form. This motivates the possibility of designing a regularizer tailored to prevent attacks.
A.1 Derivation of the population loss under its first- and second-order approximation
First, we show that the FGSM direction is the maximizer of the loss when the perturbation is constrained. Based on the pointwise loss at , we have
We use the Cauchy-Schwarz inequality to obtain
which leads to
Next, we show that the first-order approximation of obtains the first two terms in (1).
Note that the gradient of the loss w.r.t. the input is
$\nabla_x \ell(x; w) = r(x)\,(w - w^*),$
and the Hessian w.r.t. the input is
$\nabla_x^2 \ell(x; w) = (w - w^*)(w - w^*)^\top.$
The first-order Taylor series approximation is
$\ell(x + \Delta x; w) \;\approx\; \ell(x; w) + r(x)\,\langle \Delta x, w - w^*\rangle \;=\; \ell(x; w) + \varepsilon\,|r(x)|\sum_{i=2}^{d}|w_i|.$
Note that $\langle \Delta x, w^*\rangle = 0$ because of our particular choice of $\Delta x$ and $w^*$. Here we obtain the first two terms in (1).
This completes the motivation of using second-order Taylor series approximation with our warm-up toy example.
Appendix B Derivations of Section 4: Second-Order Adversarial Regularizer (SOAR)
B.1 Relaxation
Note that the Boolean quadratic programming (BQP) problem in formulation (5) is NP-hard (Beasley, 1998; Lima and Grossmann, 2017). Even though there exist semi-definite programming (SDP) relaxations, such approaches require the exact Hessian w.r.t. the input, which is computationally expensive to obtain for high-dimensional inputs. And even if we could compute the exact Hessian, SDP itself is a computationally expensive approach, and not suitable to be within the inner loop of DNN training. As such, we relax the $\ell_\infty$ constraint to an $\ell_2$ constraint, which, as we see, leads to a computationally efficient solution.
B.2 The looseness of the bound in Eq (7)
From the perspective of the volume ratio between the two balls, replacing the $\ell_\infty$-ball of radius $\varepsilon$ with the enclosing $\ell_2$-ball of radius $\sqrt{d}\,\varepsilon$ can be problematic, since the volume of the $\ell_\infty$-ball is $(2\varepsilon)^d$ whereas the volume of the $\ell_2$-ball is $\frac{\pi^{d/2}}{\Gamma(d/2+1)}(\sqrt{d}\,\varepsilon)^d$. Their ratio goes to 0 as the dimension increases. The implication is that the search space for the $\ell_\infty$ maximizer is infinitesimal compared to the one for the $\ell_2$ maximizer, leading to a loose upper bound.
As a preliminary study on the tightness of the bound, we evaluated the two sides of (7) by approximating each maximum using PGD attacks. In particular, we approximate $\max_{\|\delta\|_\infty \le \varepsilon}\ell(x+\delta)$ using $\ell(x+\delta^*)$, where $\delta^*$ is generated using 20-iteration $\ell_\infty$-PGD with $\varepsilon = 8/255$. Similarly, we approximate $\max_{\|\delta\|_2 \le \sqrt{d}\varepsilon}\ell(x+\delta)$ using $\ell(x+\delta^*)$, where $\delta^*$ is generated using 100-iteration $\ell_2$-PGD with radius $\sqrt{d}\,\varepsilon$. The reason for this particular configuration of attack parameters is to match the ones used during our previous evaluations.
Checkpoints | ||
Beginning of SOAR | ||
End of SOAR | ||
From this preliminary study, we observe that there is indeed a gap between the approximated LHS and RHS of (7), and thus, we think it is a valuable future research direction to explore other possibilities that allow us to use a second-order approximation to study the worst-case loss subject to an constrained perturbation.
B.3 Unified Objective
We could maximize each term inside (8) separately and upper bound the max by
$\max_{\|\delta\|_2\le\sqrt{d}\varepsilon}\tilde\ell_{\text{2nd}}(x+\delta) \;\le\; \ell(x) + \sqrt{d}\,\varepsilon\,\|\nabla_x\ell(x)\|_2 + \frac{d\varepsilon^2}{2}\,\sigma_{\max}\big(\nabla_x^2\ell(x)\big),$
where $\sigma_{\max}(\nabla_x^2\ell(x))$ is the largest singular value of the Hessian matrix $\nabla_x^2\ell(x)$. Even though the norm of the gradient and the singular value of the Hessian have an intuitive appeal, separately optimizing these terms might lead to a looser upper bound than necessary. The reason is that the maximizer of the gradient term is aligned with $\nabla_x\ell(x)$, while the maximizer of the Hessian term is the direction corresponding to the largest singular value of $\nabla_x^2\ell(x)$. In general, these two directions are not aligned.
B.4 Proof of Proposition 1
Proof.
By the inclusion of the $\ell_\infty$-ball of radius $\varepsilon$ within the $\ell_2$-ball of radius $\sqrt{d}\,\varepsilon$ and the definition of $H'$ in (6), we have
$\max_{\|\delta\|_\infty\le\varepsilon}\tilde\ell_{\text{2nd}}(x+\delta) \;\le\; \max_{\|\delta\|_2\le\sqrt{d}\varepsilon}\; \ell(x) + \frac{1}{2}\,\delta'^\top H'\delta' - \frac{1}{2}.$
It remains to upper bound $\delta'^\top H'\delta'$. We use the Cauchy-Schwarz inequality to obtain
$\delta'^\top H'\delta' \;\le\; \|\delta'\|_2\,\|H'\delta'\|_2 \;\le\; \|\delta'\|_2^2\,\|H'\|_2,$
where the last step uses the definition of the $\ell_2$-induced matrix norm (the spectral norm), together with $\|\delta'\|_2^2 = \|\delta\|_2^2 + 1 \le d\varepsilon^2 + 1$. Since computing $\|H'\|_2$ would again require the exact input Hessian, and we would like to avoid it, we further upper bound the spectral norm by the Frobenius norm as $\|H'\|_2 \le \|H'\|_F$.
The Frobenius norm itself satisfies
$\|H'\|_F^2 \;=\; \mathbb{E}_{z\sim N(0,\mathbf{I})}\big[\|H'z\|_2^2\big], \qquad (14)$
where $z \in \mathbb{R}^{d+1}$. Therefore, we can estimate $\|H'\|_F$ by sampling random vectors $z \sim N(0, \mathbf{I}_{(d+1)\times(d+1)})$ and computing the sample average of $\|H'z\|_2^2$. Combining the three bounds gives (9). ∎
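The identity in (14) is easy to check numerically; the snippet below (with an arbitrary random matrix standing in for $H'$, dimensions chosen only for illustration) compares the Frobenius norm with the Monte-Carlo estimate obtained from Gaussian samples.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(65, 65))                 # stand-in for H' (d + 1 = 65 is an arbitrary choice)

frob_sq = np.sum(H ** 2)                      # ||H'||_F^2
z = rng.normal(size=(65, 100_000))            # z ~ N(0, I), 100k samples
mc_estimate = np.mean(np.sum((H @ z) ** 2, axis=0))   # sample average of ||H'z||_2^2

print(frob_sq, mc_estimate)                   # the two agree up to Monte-Carlo error
```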
Appendix C SOAR Algorithm: A Complete Illustration
In Algorithm 1, we present the inner-loop operation of SOAR using a single data point. Here we summarize the full training procedure with SOAR in Algorithm 2. Note that it is presented as if the optimizer is SGD, but we may use other optimizers as well.
Appendix D Potential Causes of Gradient Masking
Method | Standard | Random | PGD1 |
Standard | |||
ADV | |||
SOAR | |||
- zero init | |||
- random init | |||
- PGD1 init | |||
We summarize the average value of the highest probability output for test set data initialized with zero, random and PGD1 perturbations in Table 5. We notice that training with SOAR using zero or random initialization leads to models with nearly 100% confidence on their predictions. This is aligned with the analysis of SOAR for a linear classifier (Section 4.1), which shows that the regularizer becomes ineffective as the model outputs high confidence predictions. Indeed, results in Table 7 show that those models are vulnerable under black-box attacks.
Results in Table 5 suggest that highly confident predictions could be an indication of gradient masking. We demonstrate this using the gradient-based PGD attack. Recall that we generate PGD attacks by first perturbing the clean data with a randomly chosen $\delta$ within the $\ell_\infty$ ball of size $\varepsilon$, followed by gradient ascent. Suppose that the model makes predictions with 100% confidence on any given input. This leads to a piecewise loss surface that is either zero (correct predictions) or infinite (incorrect predictions). The gradient of this loss function is either zero or undefined, thus making gradient ascent ineffective. Therefore, white-box gradient-based attacks are unable to find adversarial examples.
Appendix E Supplementary Experiments
E.1 Discussion on the Reproducibility of CURE and LLR
We were not able to reproduce the results of two closely related works, CURE (Moosavi-Dezfooli et al., 2019) and LLR (Qin et al., 2019). For CURE, we found the open-source implementation (https://github.com/F-Salehi/CURE_robustness), but were not able to reproduce their reported results using their implementation. We were not able to reproduce the results of CURE with our own implementation either. For LLR, Yang et al. (2020) were not able to reproduce the results; they also provide an open-source implementation (https://github.com/yangarbiter/robust-local-lipschitz). Regardless, we compare SOAR to the results reported by CURE and LLR in Table 6:
Method | Standard Accuracy | PGD | Architecture |
SOAR | ResNet-10 | ||
CURE | ResNet-18 | ||
CURE | WideResNet | ||
LLR | WideResNet | ||
E.2 Training and Evaluation setup
CIFAR-10: Training data is augmented with random crops and horizontal flips.
ResNet: We used an open-source ResNet-10 implementation (https://github.com/kuangliu/pytorch-cifar). More specifically, we initialize the model with ResNet(BasicBlock, [1,1,1,1]). Note that we remove the BatchNorm layers in the ResNet-10 architecture, and we discuss this further in Appendix E.7.
WideResNet: We used the implementation of the WideResNet-34-10 model found in the public repository maintained by the authors of TRADES (https://github.com/yaodongyu/TRADES) (Zhang et al., 2019).
Standard training on ResNet and WideResNet: Both are trained for a total of 200 epochs, with an initial learning rate of 0.1. The learning rate decays by an order of magnitude at epoch 100 and 150. We used a minibatch size of 128 for testing and training. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4.
Adversarial training with PGD10 examples on ResNet: The optimization setting is the same as the one used for standard training. Additionally, to ensure that the final model has the highest adversarial robustness, we save the model at the end of every epoch, and the final evaluation is based on the one with the highest PGD20 accuracy.
SOAR on ResNet: SOAR refers to continuing the training of the Standard model on ResNet. It is trained for a total of 200 epochs with an initial learning rate of 0.004, decayed by an order of magnitude at epoch 100. We used the SGD optimizer with momentum of 0.9 and a weight decay of 2e-4. We use an FD step-size $h$ for the regularizer. Additionally, we apply a clipping of 10 on the regularizer, and we discuss this clipping operation in Appendix E.7.
MART and TRADES on ResNet: We used the same optimization setup as the ones in their respective public repositories (e.g., https://github.com/YisenWang/MART for MART). We briefly summarize it here. The model is trained for a total of 120 epochs, with an initial learning rate of 0.1. The learning rate decays by an order of magnitude at epochs 75, 90, and 100. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4. We performed a hyperparameter sweep on the strength of the regularization term and selected one that resulted in the best performance against PGD20 attacks. A complete result is reported in Appendix E.12.
MMA on ResNet: We used the same optimization setup as the one in its public repository (https://github.com/BorealisAI/mma_training). We briefly summarize it here. The model is trained for a total of 50000 iterations, with an initial learning rate of 0.3. The learning rate changes to 0.09 at the 20000th iteration, 0.03 at the 30000th iteration and lastly 0.009 at the 40000th iteration. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4. We performed a hyperparameter sweep on the margin term and selected the one that resulted in the best performance against PGD20 attacks. A complete result is reported in Appendix E.12.
ADV, TRADES, MART and MMA on WideResNet: We use the pretrained checkpoint provided in their respective repositories. Note that we use the pretrained checkpoint for the PGD10 adversarially trained WideResNet in Madry's CIFAR10 Challenge (https://github.com/MadryLab/cifar10_challenge).
Evaluations: For FGSM and PGD attacks, we use the implementation in AdverTorch (Ding et al., 2019c). For SimBA (Guo et al., 2019), we use the authors' open-source implementation (https://github.com/cg563/simple-blackbox-attack).
E.3 Adversarial Robustness of the Model Trained using SOAR with Different Initializations
Method | Standard accuracy | White-box PGD | Black-box PGD |
SOAR | |||
- zero init | |||
- rand init | |||
- PGD1 init | |||
We report the adversarial robustness of the model trained using SOAR with different initialization techniques in Table 7.
The second column shows the accuracy against white-box PGD adversaries. The third column shows the accuracy against black-box PGD adversaries transferred from an independently initialized and standard-trained ResNet-10 model.
Note that despite the high adversarial accuracy against white-box PGD attacks, models trained using SOAR with zero and random initialization perform poorly against transferred attacks. This suggests the presence of gradient masking when using SOAR with zero and random initializations. Evidently, SOAR with PGD initialization alleviates the gradient masking problem.
E.4 Comparing the values of the SOAR regularized loss computed using different numbers of randomly sampled
Checkpoints | |||
Beginning of SOAR | |||
End of SOAR | |||
Suppose we slightly modify Eq (13) by averaging the regularizer over $m$ randomly sampled vectors $z$, to incorporate the effect of using multiple samples in computing the SOAR regularized loss. Notice that the current implementation is equivalent to using $m = 1$. We observed the model at two checkpoints, at the beginning and at the end of SOAR regularization; the value of the regularized loss remains essentially unchanged as we increase $m$ from 1 to 100.
E.5 Robustness under attacks on CIFAR-10
We evaluate SOAR and two of the baseline methods, ADV and TRADES, against white-box and black-box $\ell_2$ attacks on CIFAR-10 in Table 9. No $\ell_2$ results were reported by MART, and we were not able to reproduce the results using the implementation by MMA, thus those two methods are not included in our evaluation.
In Section 4, we showed that the two norm-ball formulations of SOAR are equivalent. In other words, models trained with SOAR to be robust against attacks under one norm should also obtain improved robustness against attacks under the other. In our evaluation, all adversaries used during the training of ADV and TRADES are generated with 10-step PGD. Note that the goal here is to show the improved robustness of SOAR against such attacks, rather than to achieve state-of-the-art results; thus, the optimization procedures are the same as the ones used in the earlier evaluation.
We observe that training with SOAR improves the robustness of the model against these attacks. Instead of a single fixed perturbation radius, we demonstrate the improved robustness over an increasing range of radii. All attacks use PGD with a fixed number of iterations and step size. In Table 9, we find that training with SOAR leads to a significant increase in robustness against white-box and black-box adversaries. As the radius increases, the SOAR model remains robust against white-box attacks, while the other methods fall off. The last column of Table 9 shows the robustness against transferred attacks, where the source model is a ResNet-10 network trained separately from the defence models on the unperturbed training set. We observe that SOAR achieves the second-highest robustness among the compared methods against transferred attacks. This result empirically supports our earlier claim that the two formulations of SOAR differ only by a multiplicative factor. Moreover, it aligns with the findings of Simon-Gabriel et al. (2019), who empirically showed that adversarial robustness obtained through regularization yields robustness against more than one norm-ball attack at the same time.
Method | Transfer | |||||
ResNet | ADV | |||||
TRADES | ||||||
SOAR | ||||||
Method | Untargeted APGD-CE | Targeted APGD-DLR | Targeted FAB | Square Attack |
ADV | ||||
TRADES | ||||
SOAR | ||||
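The sweep over perturbation radii described above can be sketched as follows, using AdverTorch's L2PGDAttack on a placeholder model. The list of radii, the number of iterations and the step-size rule are assumptions for illustration and do not reproduce the exact settings behind Table 9.

```python
import torch
import torch.nn as nn
from advertorch.attacks import L2PGDAttack

# Placeholder model and batch; in practice these are the defended network and test data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))

for eps in [0.25, 0.5, 1.0, 1.5]:               # increasing L2 radii (assumed values)
    adversary = L2PGDAttack(
        model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
        eps=eps, nb_iter=100, eps_iter=eps / 4,  # assumed iteration count and step rule
        rand_init=True, clip_min=0.0, clip_max=1.0)
    x_adv = adversary.perturb(x, y)
    with torch.no_grad():
        acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"L2 eps={eps}: accuracy={acc:.3f}")
```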
E.6 Additional evaluation on SVHN dataset
We use the same ResNet-10 architecture as the one used in the CIFAR-10 evaluation. Training data is augmented with random crops and horizontal flips. For standard training, we use the same optimization procedure as the one used for CIFAR-10. For SOAR and TRADES, we use the exact same regularization hyperparameters. For SOAR, we use early stopping at epoch 130 to prevent catastrophic overfitting. In addition, the optimization schedules for SOAR and TRADES are identical to the ones used for CIFAR-10.
We emphasize again that the goal of evaluating on SVHN is to demonstrate the improved robustness of SOAR on a different dataset; thus, we did not perform an additional hyperparameter sweep. The optimization procedures are the same as the ones used in the CIFAR-10 evaluation.
For PGD10 adversarial training, we observe that ResNet-10 is not able to learn anything meaningful. Specifically, when trained with PGD10 examples, ResNet-10 does not perform better than a randomly initialized network in either standard or adversarial accuracy. Cai et al. (2018) made a similar observation on ResNet-50, where training accuracy did not improve over a long period of adversarial training with PGD10. They further investigated models with different capacities and found that even ResNet-50 might not be sufficiently deep for PGD10 adversarial training on SVHN. Wang and Zhang (2019) reported PGD10 adversarial training results on SVHN with WideResNet, which we include in Table 11.
For MART, we were not able to translate its CIFAR-10 results to SVHN. We performed the same hyperparameter sweep as the one in Table 18, as well as different optimization settings, but none resulted in a meaningful model; a likely cause is the small capacity of ResNet-10. For MMA, the implementation included in its public repository is specific to the CIFAR-10 dataset, so we did not include it in the comparison.
Overall, we observe similar performance on SVHN and CIFAR-10. Compared to the results in Table 1, we observe a slight increase in standard accuracy and robust accuracy for both SOAR and TRADES. In particular, the standard accuracy increases by and , and the PGD20 accuracy increases by and , for TRADES and SOAR respectively. More notably, on SVHN the SOAR-regularized model gains robustness without significantly sacrificing its standard accuracy.
Table 12 compares SOAR to TRADES on SimBA and on transferred attacks. The evaluation setting for transferred attacks is identical to the one used for CIFAR-10, where an undefended, independently trained ResNet-10 serves as the source model. Despite a smaller gap in accuracy against transferred attacks, the SOAR-regularized model yields a significantly higher accuracy against the stronger SimBA attacks.
Note that we did not perform any extensive hyperparameter sweep on SVHN; we simply reused what worked on CIFAR-10. We stress that the goal is to demonstrate the effectiveness of SOAR and its performance relative to the other baseline methods.
Method | Standard | FGSM | PGD20 | PGD100 | PGD200 | PGD1000 | PGD20-50 | |
Standard | ||||||||
ADV | — | — | — | |||||
TRADES | ||||||||
SOAR | ||||||||
Method | SimBA | PGD20 | PGD1000 |
TRADES | |||
SOAR | |||
Next, we evaluate SOAR and TRADES under bounded white-box and black-box attacks. All PGD adversaries are generated using the same method as in the CIFAR-10 evaluation. We do not include ADV because of the issue discussed above. Our results show that training with SOAR significantly improves robustness against white-box PGD attacks compared to TRADES. Against transferred attacks, TRADES and SOAR perform similarly.
Method | Standard | Transfer | ||||
Standard | ||||||
TRADES | ||||||
SOAR | ||||||
E.7 Challenges
Batch Normalization: We observe that networks with BatchNorm layers do not benefit from SOAR in terms of adversarial robustness. Specifically, we performed an extensive hyperparameter search for SOAR on networks with BatchNorm layers and were not able to achieve a meaningful improvement in adversarial robustness. A related work by Galloway et al. (2019) focuses on the connection between BatchNorm and adversarial robustness. In particular, their results show that on a VGG-based architecture (Simonyan and Zisserman, 2014), there is a significant gap in adversarial robustness between networks with and without BatchNorm layers under standard training. The interaction between SOAR and BatchNorm requires further investigation, and we consider this an important future direction. As such, we use a small-capacity ResNet (ResNet-10) in our experiments and modify it by removing its BatchNorm layers; we likewise removed BatchNorm layers from all models used in the baseline experiments with ResNet. Note that BatchNorm layers make the training process less sensitive to hyperparameters (Ioffe and Szegedy, 2015), and removing them makes it difficult to train a very deep network such as WideResNet. For this reason, we did not train SOAR on WideResNet.
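The BatchNorm removal can be done generically by swapping every BatchNorm module for an identity mapping, as in the sketch below; a torchvision ResNet-18 is used here only as a stand-in for our ResNet-10.

```python
import torch.nn as nn
import torchvision

def remove_batchnorm(module: nn.Module) -> nn.Module:
    """Recursively replace BatchNorm layers with identity mappings."""
    for name, child in module.named_children():
        if isinstance(child, (nn.BatchNorm1d, nn.BatchNorm2d)):
            setattr(module, name, nn.Identity())
        else:
            remove_batchnorm(child)
    return module

# Stand-in architecture: a torchvision ResNet-18 with 10 output classes.
model = remove_batchnorm(torchvision.models.resnet18(num_classes=10))
```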
Starting from a pretrained model: We notice that it is difficult to train SOAR on a newly initialized model. Note that it is common practice to fine-tune a pretrained model for a specific task. In CURE, regularization is applied after the model is first trained with a cross-entropy loss to reach a high accuracy on clean data, a process the authors call adversarial fine-tuning. Cai et al. (2018) and Sitawarin et al. (2020) study the connection between curriculum learning (Bengio et al., 2009) and training with adversarial examples of increasing difficulty. Our approach is similar: the model is first optimized for an easier task (standard training) and then regularized for a related but more difficult task (improving adversarial robustness). Since the model has already been trained to minimize its standard loss, the loss gradient can be very small compared to the regularizer gradient, and thus we apply a clipping of 10 on the regularizer.
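The sketch below illustrates this recipe under the assumption that "a clipping of 10 on the regularizer" means clamping the value of the regularizer term at 10; the regularizer function, the regularization strength and the checkpoint path are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, strength, compute_soar_regularizer):
    """Cross-entropy plus a clipped regularizer; `compute_soar_regularizer`
    is a hypothetical stand-in for the term in Eq. (13)."""
    ce = F.cross_entropy(model(x), y)
    reg = compute_soar_regularizer(model, x, y)
    # Clamp the regularizer so it cannot dominate when the loss gradient is tiny.
    return ce + strength * torch.clamp(reg, max=10.0)

# Fine-tuning starts from a standard-trained checkpoint (hypothetical path):
# model.load_state_dict(torch.load("pretrained_standard.pt"))
```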
Catastrophic Overfitting: We observe that when the model achieves a high adversarial accuracy and training continues for a long period of time, both the standard and adversarial accuracy drop significantly. A similar phenomenon was observed by Cai et al. (2018) and Wong et al. (2019a), who refer to it as catastrophic forgetting and catastrophic overfitting, respectively. Wong et al. (2019a) use early stopping as a simple solution. We observe that with a large learning rate, the model reaches a high adversarial accuracy faster and catastrophic overfitting happens sooner. As such, our solution is to fix the number of epochs to 200 and carefully sweep over learning rates to ensure that catastrophic overfitting does not happen.
Discussion on Computational Complexity: We emphasize that our primary goal is to propose regularization as an alternative approach to improving adversarial robustness. We discussed techniques towards an efficient implementation; however, there is still potential for a faster one. In our current implementation, a single epoch with WideResNet takes 19 minutes with PGD10 adversarial training, 26.5 minutes with SOAR, 29 minutes with MART, and 39.6 minutes with TRADES. Thus, despite being faster than MART and TRADES, SOAR is still slow compared to PGD10 adversarial training. We characterize the computational complexity as the number of forward and backward passes required for a single mini-batch. Standard training: 1 forward and 1 backward pass; adversarial training with k-step PGD: k+1 forward and k+1 backward passes; FOAR: 1 forward and 2 backward passes; SOAR: 3 forward and 4 backward passes.
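The pass counts for adversarial training can be read off a standard k-step PGD adversarial training step, sketched below: k forward/backward passes to construct the perturbation plus one more of each for the parameter update. The model, optimizer and attack parameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_adv_training_step(model, optimizer, x, y, eps, step, k):
    """One mini-batch of k-step PGD adversarial training:
    k+1 forward passes and k+1 backward passes in total."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(k):                                   # k forward + k backward (w.r.t. the input)
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() + step * grad.sign()).clamp(-eps, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)   # +1 forward
    loss.backward()                                                  # +1 backward (w.r.t. the parameters)
    optimizer.step()
    return loss.item()
```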
E.8 Differentiability of ReLU and its effect on SOAR
The SOAR regularizer is derived from the second-order Taylor approximation of the loss, which requires the loss to be twice differentiable. Although ReLU is not differentiable at 0, the probability of its input being exactly 0 is very small; this is also why we can train ReLU networks with backpropagation, and the same argument applies to the Hessian. In addition, note that from a computational viewpoint we never need to compute the exact Hessian, since we approximate it through a first-order (finite-difference) approximation.
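Concretely, a Hessian-vector product can be approximated with two gradient evaluations, as in the sketch below; the model, labels and direction vector are placeholders, and the finite-difference step h is an assumed value.

```python
import torch
import torch.nn.functional as F

def hvp_finite_difference(model, x, y, z, h=1e-2):
    """Approximate H z via (grad L(x + h z) - grad L(x)) / h, so only first-order
    gradients of the ReLU network are ever required."""
    def input_grad(inp):
        inp = inp.clone().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        return torch.autograd.grad(loss, inp)[0]
    return (input_grad(x + h * z) - input_grad(x)) / h
```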
E.9 Potential robustness gain with increasing capacities
Empirical studies in Madry et al. (2018) and Wang et al. (2020) reveal that their approaches benefit from increasing model capacity to achieve higher adversarial robustness. We have a similar observation with SOAR.
Model | Standard | PGD20 | PGD100 | PGD200 |
CNN6 | ||||
CNN8 | ||||
ResNet-10 | ||||
Table 14 compares the performance of SOAR against bounded white-box attacks on networks with different capacities. CNN6 (CNN8) refers to a simple 6-layer (8-layer) convolutional network, and ResNet-10 is the network used in Section 5. As network capacity increases, we observe improvements in both standard and adversarial accuracy. As such, we expect a similar gain in performance with larger-capacity networks such as WideResNet.
E.10 Experiment results on ResNet-10 in Table 1 and Table 2 with standard deviations
All results on ResNet-10 are obtained by averaging over 3 independently initialized and trained models. Here, we report the standard deviations of the results in Table 1 and Table 2. We omit results for PGD100 and PGD200 due to space constraints.
Method | Standard | FGSM | PGD20 | PGD1000 | PGD20-50 | |
ResNet | Standard | |||||
ADV | ||||||
TRADES | ||||||
MART | ||||||
MMA | ||||||
FOAR | ||||||
SOAR | ||||||
Method | SimBA | PGD20-R | PGD20-W | PGD1000-R | PGD1000-W | |
ResNet | ADV | |||||
TRADES | ||||||
MART | ||||||
MMA | ||||||
FOAR | ||||||
SOAR | ||||||
E.11 Additional Experiments on Gradient Masking
To verify that SOAR improves the robustness of the model without gradient masking, we include the following experiments to empirically support our claim.
First, from the results in Appendix E.3, we conclude that SOAR with zero initialization results in gradient masking. This is shown by the high accuracy (close to the standard accuracy) under white-box PGD attacks and the low accuracy under black-box transferred attacks. Next, prior work has verified that adversarial training with PGD20 adversaries (ADV) results in a model without gradient masking (Athalye et al., 2018). Therefore, we use models trained using ADV and SOAR (zero-init) as examples of models without and with gradient masking, respectively.
In the attack setting, PGD uses the sign of the gradient to generate perturbations. As such, one way to assess the usefulness of the gradient is to measure the average number of non-zero elements in the gradient: a model with gradient masking is expected to have far fewer non-zero elements than one without. In our experiment, the average number of non-zero gradient elements is for the ADV-trained model (no gradient masking), for SOAR (PGD1-init) and for SOAR (zero-init, with gradient masking). We observe that SOAR with PGD1-init has a similar number of non-zero gradient elements to ADV, meaning a PGD adversary can use the signs of those non-zero elements to generate meaningful perturbations.
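A minimal sketch of this diagnostic is given below; the model and data are placeholders, and the loss is assumed to be cross-entropy.

```python
import torch
import torch.nn.functional as F

def avg_nonzero_grad_elements(model, x, y):
    """Average number of non-zero elements in the input gradient per example."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (grad != 0).flatten(1).sum(dim=1).float().mean().item()
```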
In Section 5, the 20-iteration PGD adversaries are generated with a fixed step size and perturbation radius. Suppose we instead allow the maximum perturbation to span the entire input range, keeping the other parameters the same, and generate PGD20 attacks. Such attacks result in near black-and-white images; against them, SOAR with PGD1-init has a 0% accuracy, similar to the ADV-trained model. On the other hand, the robust accuracy of SOAR (zero-init) against these attacks remains well above zero, which is consistent with gradient masking.
E.12 Hyperparameter sweep for TRADES, MART and MMA on ResNet
The following tables show the hyperparameter sweeps for TRADES, MART and MMA, respectively. We include the setting with the highest PGD20 accuracy in Section 5.
Standard | FGSM | PGD20 | PGD100 | PGD200 | PGD20-50 | |
15 | ||||||
13 | ||||||
11 | ||||||
9 | ||||||
7 | ||||||
5 | ||||||
3 | ||||||
1 | ||||||
0.5 | ||||||
0.1 | ||||||
Standard | FGSM | PGD20 | PGD100 | PGD200 | PGD20-50 | |
15 | ||||||
13 | ||||||
11 | ||||||
9 | ||||||
7 | ||||||
5 | ||||||
3 | ||||||
1 | ||||||
0.5 | ||||||
0.1 | ||||||
Margin | Standard | FGSM | PGD20 | PGD100 | PGD200 | PGD20-50 |
12 | ||||||
20 | ||||||
32 | ||||||
48 | ||||||
60 | ||||||
72 | ||||||
E.13 Adversarial Robustness of the Model Trained using FOAR with Different Initializations
FOAR achieves its best adversarial robustness with PGD1 initialization, so we only present this variant of FOAR in Section 5.
Initialization | Standard | FGSM | PGD20 | PGD100 | PGD200 | PGD1000 | PGD20-50 |
zero | |||||||
rand | |||||||
PGD1 | |||||||