
Local Reweighting for Adversarial Training


Ruize Gao1,†
  Feng Liu2,†  Kaiwen Zhou1  Gang Niu3  Bo Han4 James Cheng1

1Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China
2DeSI Lab, Australian AI Institute, University of Technology Sydney, Sydney, Australia
3RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
4Department of Computer Science, Hong Kong Baptist University, HKSAR, China
†Equal contribution
    Emails: [email protected], [email protected], [email protected], [email protected],
             [email protected], [email protected]

 

Abstract

Instance-reweighted adversarial training (IRAT) can significantly boost the robustness of trained models, where data that are less/more vulnerable to the given attack are assigned smaller/larger weights during training. However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even below that of no reweighting). In this paper, we study this problem and propose our solution—locally reweighted adversarial training (LRAT). The rationale behind IRAT is that we do not need to pay much attention to an instance that is already safe under the attack. We argue that the safeness should be attack-dependent, so that for the same instance, its weight can change given different attacks based on the same model. Thus, if the attack simulated in training is mis-specified, the weights of IRAT are misleading. To this end, LRAT pairs each instance with its adversarial variants and performs local reweighting inside each pair, while performing no global reweighting—the rationale is to fit the instance itself if it is immune to the attack, but not to skip the pair, in order to passively defend against different attacks in the future. Experiments show that LRAT works better than both IRAT (i.e., global reweighting) and standard AT (i.e., no reweighting) when trained with an attack and tested on different attacks.

 

1 Introduction

A growing body of research shows that neural networks are vulnerable to adversarial examples, i.e., test inputs that are modified slightly yet strategically to cause misclassification [4, 11, 12, 16, 20, 26, 30, 39]. It is crucial to train robust neural networks that defend against such examples in security-critical computer vision systems, such as autonomous driving and medical diagnostics [7, 17, 19, 20, 26]. To mitigate this issue, adversarial training methods have been proposed in recent years [2, 13, 18, 25, 29, 32]. By injecting adversarial examples into training data, adversarial training methods seek to train an adversarially robust deep network whose predictions are locally invariant in a small neighborhood of its inputs [1, 15, 22, 23, 27, 34].

Due to the diversity and complexity of adversarial data, even over-parameterized deep networks have insufficient model capacity in adversarial training (AT) [38]. To obtain a robust model given fixed model capacity, Zhang et al. [38] suggest that we do not need to pay much attention to an instance that is already safe under the attack, and propose instance-reweighted adversarial training (IRAT), which performs global reweighting with a given attack. To identify safe/non-robust instances, they propose a geometric distance between natural data points and the current class boundary. Instances closer to/farther from the class boundary are more/less vulnerable to the given attack, and should be assigned larger/smaller weights during AT. This geometry-aware IRAT (GAIRAT) significantly boosts the robustness of the trained models when facing the given attack.

Figure 1: The extreme reweighting in GAIRAT results in a significant robustness decrease when facing an unseen attack. Panels: (a) the weight distribution; (b) performance of SAT; (c) performance of GAIRAT. Panel (a) illustrates the frequency distribution of weights in the GAIRAT model on the CIFAR-10 training set, where approximately four-fifths of the instances are assigned very low weights (less than 0.2). Panels (b) and (c) illustrate the performance of the SAT-trained and GAIRAT-trained models under different attacks, showing that GAIRAT does improve the robustness when attacked by PGD (used during training), but reduces the robustness when attacked by CW (unseen during training).

However, when tested on attacks that are different from the given attack simulated in IRAT, the robustness of IRAT drops significantly (e.g., even below that of no reweighting). First, we find that a large number of instances are effectively overlooked during IRAT. Figure 1(a) shows that, for approximately four-fifths of the instances, their corresponding adversarial variants are assigned very low weights (less than 0.2). Second, we find that the robustness of the IRAT-trained classifier drops significantly when facing an unseen attack. Figures 1(b) and 1(c) show that the classifier trained by GAIRAT (with the projected gradient descent (PGD) attack [18]) has lower robustness against the unseen Carlini and Wagner attack (CW) [5] than the classifier trained by standard adversarial training (SAT) [18].

In this paper, we investigate the reasons behind this phenomenon. Unlike the common classification setting where the training and testing data are fixed, in AT there are different adversarial variants of the same instance, e.g., PGD-based or CW-based adversarial variants. A natural question follows: are there inconsistent vulnerabilities in the view of different attacks? The answer is affirmative. The eight subfigures in Figure 2 visualize inconsistent vulnerable instances in different views using t-SNE [28]. The red dots in all subfigures represent consistently vulnerable instances between different views, while the blue dots represent inconsistently vulnerable instances. For both the SAT-trained classifier (Figures 2(a)-2(d)) and the GAIRAT-trained classifier (Figures 2(e)-2(h)), we can clearly see that a large number of vulnerable instances are inconsistent (the blue dots dominate in all the subfigures).
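Such a visualization can be reproduced with a short script. The sketch below is illustrative only: it assumes a trained PyTorch classifier exposing penultimate-layer features through a hypothetical model.features(x) call, and per-instance vulnerability scores vuln_a and vuln_b (smaller means more vulnerable, as in Eq. (8) and Eq. (9)); it is not the authors' exact pipeline.

  # Sketch: embed last-layer features with t-SNE and mark instances that are
  # in the top-20% vulnerable set under one attack but not under the other.
  import numpy as np
  import torch
  from sklearn.manifold import TSNE
  import matplotlib.pyplot as plt

  @torch.no_grad()
  def embed_features(model, loader, device="cuda"):
      model.eval()
      feats = []
      for x, _ in loader:
          feats.append(model.features(x.to(device)).flatten(1).cpu())  # hypothetical feature hook
      return TSNE(n_components=2, init="pca").fit_transform(torch.cat(feats).numpy())

  def plot_inconsistency(emb, vuln_a, vuln_b, top=0.2):
      k = int(top * len(vuln_a))
      set_a = set(np.argsort(vuln_a)[:k])              # most vulnerable under attack A
      set_b = set(np.argsort(vuln_b)[:k])              # most vulnerable under attack B
      consistent = np.array(sorted(set_a & set_b))     # red dots
      inconsistent = np.array(sorted(set_a ^ set_b))   # blue dots
      plt.scatter(emb[inconsistent, 0], emb[inconsistent, 1], s=2, c="blue")
      plt.scatter(emb[consistent, 0], emb[consistent, 1], s=2, c="red")
      plt.show()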

Given the above investigation, we argue that the safeness of instances is attack-dependent, that is, for the same instance, its weight can change given different attacks based on the same model. Thus, if the attack simulated in training is mis-specified, the weights of IRAT are misleading. To address this limitation of IRAT, we propose our solution—locally reweighted adversarial training (LRAT). As shown in Figure 3, LRAT pairs each instance with its adversarial variants and performs local reweighting inside each pair, while performing no global reweighting. The rationale of LRAT is to fit the instance itself if it is immune to the attack; and in order to passively defend against different attacks in the future, LRAT does not skip the pair. To realize LRAT, we propose a general vulnerability-based reweighting strategy that is applicable to various attacks, instead of the geometric distance that is only compatible with the PGD attack [38].

Our experimental results show that LRAT works better than both SAT (i.e., no reweighting) and IRAT (i.e., global reweighting) when trained with an attack but tested on different attacks. LRAT also extends to other existing adversarial training methods: for TRADES [36], we design LRAT-TRADES, and our results show that LRAT-TRADES works better than both TRADES and GAIR-TRADES (i.e., IRAT applied to TRADES).

Figure 2: Visualization of inconsistent vulnerable instances in different views using t-SNE. Subfigures (a)-(d) correspond to the SAT-trained classifier and subfigures (e)-(h) to the GAIRAT-trained classifier; in each row, the panels show \mathcal{V}-Union, \mathcal{V}^{\textnormal{GD}}-\mathcal{V}^{\textnormal{CW}}, \mathcal{V}^{\textnormal{GD}}-\mathcal{V}^{\textnormal{PGD}}, and \mathcal{V}^{\textnormal{PGD}}-\mathcal{V}^{\textnormal{CW}}. The dots are outputs of the last layer of the trained classifier, with natural data points from the CIFAR-10 training set as inputs, and represent the top-20% vulnerable instances. Red dots represent consistently vulnerable instances, while blue dots represent inconsistently vulnerable instances in the views of \mathcal{V}^{\textnormal{GD}}, \mathcal{V}^{\textnormal{PGD}}, and \mathcal{V}^{\textnormal{CW}}. \mathcal{V}^{\textnormal{GD}} is the vulnerability w.r.t. the geometric distance (GD) defined by Zhang et al. [38]; \mathcal{V}^{\textnormal{PGD}} and \mathcal{V}^{\textnormal{CW}} are the vulnerability measurement functions w.r.t. PGD and CW defined in Eq. (8) and Eq. (9), respectively. Blue dots are abundant while red dots are few, indicating that a large number of vulnerable instances are inconsistent across views.

2 Adversarial Training

In this section, we briefly review existing adversarial training methods [18, 38]. Let (\mathcal{X},d_{\infty}) be the input feature space \mathcal{X} with metric d_{\infty}(x,x^{\prime})=\|x-x^{\prime}\|_{\infty}, and let \mathcal{B}_{\epsilon}[x]=\{x^{\prime}\mid d_{\infty}(x,x^{\prime})\leq\epsilon\} be the closed ball of radius \epsilon>0 centered at x in \mathcal{X}. The dataset is S=\{(x_{i},y_{i})\}^{n}_{i=1}, where x_{i}\in\mathcal{X} and y_{i}\in\mathcal{Y}=\{0,1,\dots,K-1\}. We use f_{\theta}(x) to denote a deep neural network parameterized by \theta. Specifically, f_{\theta}(x) predicts the label of an input instance x via:

f_{\theta}(x)=\mathop{\operatorname*{arg\,max}}\limits_{k\in\mathcal{Y}}\mathrm{p}_{k}(x;\theta), (1)

where \mathrm{p}_{k}(x;\theta) denotes the predicted probability (softmax on logits) of x belonging to class k.

2.1 Standard Adversarial Training

The objective function of SAT [18] is

\mathop{\min}\limits_{f_{\theta}\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\theta}(\tilde{x}_{i}),y_{i}), (2)

where \tilde{x}_{i}=\mathop{\operatorname*{arg\,max}}_{\tilde{x}\in\mathcal{B}_{\epsilon}[x_{i}]}\ell(f_{\theta}(\tilde{x}),y_{i}). The selected \tilde{x}_{i} is the most adversarial variant within the \epsilon-ball centered at x_{i}. The loss function \ell:\mathbb{R}^{K}\times\mathcal{Y}\to\mathbb{R} is a composition of a base loss \ell_{B}:\bigtriangleup^{K-1}\times\mathcal{Y}\to\mathbb{R} (e.g., the cross-entropy loss) and an inverse link function \ell_{L}:\mathbb{R}^{K}\to\bigtriangleup^{K-1} (e.g., the softmax activation), where \bigtriangleup^{K-1} is the corresponding probability simplex. Namely, \ell(f_{\theta}(\cdot),y)=\ell_{B}(\ell_{L}(f_{\theta}(\cdot)),y). PGD [18] is the most common approximation method for searching for the most adversarial variant. Starting from x^{(0)}\in\mathcal{X}, PGD (with step size \alpha>0) works as follows:

x^{(t+1)}=\Pi_{\mathcal{B}_{\epsilon}[x^{(0)}]}\big(x^{(t)}+\alpha\operatorname{sign}(\nabla_{x^{(t)}}\ell(f_{\theta}(x^{(t)}),y))\big),\quad t\in\mathbb{N}, (3)

where t indexes the iterations; x^{(0)} is the starting point, i.e., a natural instance (or a natural instance perturbed by a small Gaussian or uniformly random noise); y is the corresponding label for x^{(0)}; x^{(t)} is the adversarial variant at step t; and \Pi_{\mathcal{B}_{\epsilon}[x^{(0)}]}(\cdot) is the projection function that projects the adversarial variant back into the \epsilon-ball centered at x^{(0)} if necessary.
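As a concrete illustration, the update in Eq. (3) can be sketched in PyTorch as follows; this is a generic L_{\infty} PGD loop under the above assumptions (cross-entropy loss, optional random start), with illustrative default hyper-parameters rather than the exact settings used later in the paper.

  # Sketch of the PGD update in Eq. (3): a signed-gradient step followed by
  # projection back into the L-infinity epsilon-ball around the natural input.
  import torch
  import torch.nn.functional as F

  def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, random_start=True):
      x_adv = x.clone().detach()
      if random_start:  # uniformly random perturbation in [-eps, +eps]
          x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)
      for _ in range(steps):
          x_adv.requires_grad_(True)
          loss = F.cross_entropy(model(x_adv), y)
          grad = torch.autograd.grad(loss, x_adv)[0]
          with torch.no_grad():
              x_adv = x_adv + alpha * grad.sign()
              x_adv = x + (x_adv - x).clamp(-eps, eps)  # projection onto the epsilon-ball
              x_adv = x_adv.clamp(0, 1)                 # keep a valid image in [0, 1]
      return x_adv.detach()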

2.2 Geometry-Aware Instance-Reweighted Adversarial Training

GAIRAT is a typical IRAT method proposed by Zhang et al. [38]. GAIRAT argues that natural training data farther from/closer to the decision boundary are safe/non-robust, and should be assigned smaller/larger weights. Let \omega(x,y) be the geometry-aware weight assignment function on the loss of the adversarial variant \tilde{x}, where the generation of \tilde{x} follows SAT. GAIRAT aims to solve

\mathop{\min}\limits_{f_{\theta}\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\omega(x_{i},y_{i})\,\ell(f_{\theta}(\tilde{x}_{i}),y_{i}). (4)

Eq. (4) rescales the loss using the function \omega(x,y), which is non-increasing w.r.t. the geometric distance (GD), defined as the least number of steps that PGD needs to successfully attack the natural instance. The method then normalizes \omega to ensure that \omega(x,y)\geq 0 and \frac{1}{n}\sum_{i=1}^{n}\omega(x_{i},y_{i})=1. Finally, GAIRAT employs a bootstrap period in the initial part of training by setting \omega(x_{i},y_{i})=1, thereby performing regular training and ignoring the geometric distance of each input (x_{i},y_{i}).

Figure 3: Illustration of LRAT. LRAT pairs each instance with its adversarial variants and performs local reweighting inside each pair instead of global reweighting. For the same instance, there is inconsistent vulnerability in different views (the difference between red and blue). Thus, LRAT assigns larger/smaller weights to the losses of the more/less vulnerable adversarial variants, respectively.

3 The Limitation of IRAT: Inconsistent Vulnerability in Different Views

The difference between Eq. (2) and Eq. (4) is the addition of the geometry-aware weight \omega(x,y). According to GAIRAT [38], more vulnerable instances should be assigned larger weights. However, the relative vulnerability between instances may vary in different situations, such as for different adversarial variants. As shown in Figure 4, \mathcal{V} denotes the variable selected to measure the vulnerability between the classifier and adversarial variants; the smaller \mathcal{V}, the more vulnerable the instance (formally defined in Section 4.3). The dark yellow and dark blue dots are the top-20% vulnerable instances in the views of PGD and CW, respectively. The frequency distribution of dark yellow in Figure 4(b) clearly differs from that of dark blue in Figure 4(c), and the distribution of dark blue in Figure 4(e) differs from that of dark yellow in Figure 4(f). Namely, the 10,000 most vulnerable PGD adversarial variants and the 10,000 most vulnerable CW adversarial variants do not come from the same 10,000 instances. As a consequence of this inconsistency, if the attack simulated in training is mis-specified, the weights of IRAT are misleading.

Figure 4: The inconsistent vulnerability of instances under different adversarial variants. Panels: (a) all PGD; (b) vulnerable PGD under all PGD; (c) vulnerable CW under all PGD; (d) all CW; (e) vulnerable CW under all CW; (f) vulnerable PGD under all CW. The six subfigures illustrate the frequency distribution of adversarial variants in the GAIRAT-trained model on the CIFAR-10 training set (50,000 instances). The x-coordinate in subfigures (a)-(c) is \mathcal{V} in the view of PGD; the x-coordinate in subfigures (d)-(f) is \mathcal{V} in the view of CW. Light yellow and light blue show all instances (50,000) in the views of PGD and CW, while dark yellow and dark blue show the top-20% vulnerable instances (10,000) in the views of PGD and CW, respectively. The different distributions of dark yellow and dark blue between subfigures (b) and (c) (or between subfigures (e) and (f)) show that the vulnerable instances in the views of different attacks are not the same.

4 Locally Reweighted Adversarial Training

To overcome the limitation of IRAT and train a robust classifier against various attacks, in this section we propose LRAT, which performs local reweighting instead of global or no reweighting.

4.1 Motivation of LRAT

Reweighting is Beneficial. As shown by GAIRAT [38], global reweighting indeed improves the robustness when tested on the given attack simulated in training. Figures 1(b) and 1(c) also show that, as a global reweighting method, GAIRAT improves the robustness against PGD (when PGD is simulated in training). Thus, when training and testing on the same attack, the rationale that we do not need to pay much attention to an already-safe instance under the attack is sound.

Local Reweighting can Take Care of Various Attacks Simultaneously. As introduced in Section 3, there is inconsistent vulnerability between instances in different views. Thus, we should perform local reweighting inside each pair, while performing no global reweighting—the rationale is to fit the instance itself if it is immune to the attack, but not to skip the pair, in order to passively defend against different attacks in the future. In addition, it is inefficient (practically impossible) to simulate all attacks in training, so a gentle lower bound on the instance weights is necessary to defend against potentially adaptive adversarial attacks.

4.2 Learning Objective of LRAT

Let \omega(\tilde{x},y) be the weight assignment function on the loss of the adversarial variant \tilde{x}. The inner optimization for generating \tilde{x} depends on the attack, such as PGD (Eq. (2)). The outer minimization is:

\mathop{\min}\limits_{f_{\theta}\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\left(\left[\mathcal{C}-\sum_{j=1}^{m}\omega(\tilde{x}_{ij},y_{i})\right]_{+}\ell(f_{\theta}(x_{i}),y_{i})+\sum_{j=1}^{m}\omega(\tilde{x}_{ij},y_{i})\,\ell(f_{\theta}(\tilde{x}_{ij}),y_{i})\right), (5)

where n is the number of instances in one mini-batch; m is the number of attacks used; \mathcal{C} is a constant representing the minimum weight sum of each instance; and the notation [a]_{+} stands for \max\{a,0\}. We impose two constraints on our objective Eq. (5): the first ensures that \omega(\tilde{x},y)>0 and the second ensures that \mathcal{C}>0. The non-negative coefficient [\mathcal{C}-\sum_{j=1}^{m}\omega(\tilde{x}_{ij},y_{i})]_{+} assigns some weight to the natural data term, which serves as a gentle lower bound to avoid discarding instances during training. Different weights are assigned to the different adversarial variants: LRAT pairs each instance with its adversarial variants and performs local reweighting inside each pair. Figure 3 provides an illustrative schematic of the learning objective of LRAT. If \omega(\tilde{x},y)=1, \mathcal{C}<1, m=1, and \tilde{x} is generated by PGD, LRAT recovers SAT [18], which assigns equal weights to the losses of the PGD adversarial variants.
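To make the outer objective concrete, the following PyTorch sketch computes Eq. (5) for one mini-batch, assuming the adversarial variants and their weights have already been obtained; the names adv_batches and weight_batches are illustrative, not part of the original formulation.

  # Sketch of the LRAT outer objective (Eq. (5)) for one mini-batch.
  # adv_batches:    list of m tensors of shape (n, ...), one per simulated attack
  # weight_batches: list of m tensors of shape (n,), the weights omega(x_ij, y_i)
  import torch
  import torch.nn.functional as F

  def lrat_loss(model, x_nat, y, adv_batches, weight_batches, C=0.1):
      weight_sum = torch.stack(weight_batches, dim=0).sum(dim=0)       # shape (n,)
      nat_coeff = (C - weight_sum).clamp(min=0.0)                      # [C - sum_j w_ij]_+
      total = nat_coeff * F.cross_entropy(model(x_nat), y, reduction="none")
      for x_adv, w in zip(adv_batches, weight_batches):
          total = total + w * F.cross_entropy(model(x_adv), y, reduction="none")
      return total.mean()                                              # the 1/n factor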

4.3 Realization of LRAT

The objective in Eq. (5) implies the optimization process of an adversarially robust network, alternating between one step that generates adversarial variants from their natural counterparts and reweights the loss on them, and one step that minimizes the reweighted loss w.r.t. the model parameters \theta.

It is still an open question how to calculate the optimal \omega for different variants. Zhang et al. [38] heuristically design non-increasing functions \omega, such as:

\omega(x,y)=\frac{1+\tanh(\lambda+5\times(1-2\times\kappa(x,y)/K))}{2}, (6)

where \kappa/K\in[0,1], K\in\mathbb{N}^{+}, and \lambda\in\mathbb{R}. Here \kappa(x,y) is the GD, defined as the least number of steps that PGD needs to successfully attack the natural instance, and K is the maximum number of allowed steps.
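For reference, Eq. (6) together with the normalization described in Section 2.2 can be written as a small function; a minimal sketch, where the per-instance least PGD step count kappa is assumed to have been recorded during the attack, and lam stands for \lambda in Eq. (6).

  # Sketch of the GAIRAT weight function in Eq. (6): a non-increasing function of
  # the geometric distance kappa(x, y), i.e., the least PGD steps to a successful attack.
  import torch

  def gairat_weight(kappa, K=10, lam=0.0):
      # kappa: float tensor of per-instance least PGD steps, values in [0, K]
      return (1.0 + torch.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / K))) / 2.0

  def normalize_weights(w):
      # normalize so that the non-negative weights average to 1 over the mini-batch
      return w / w.mean()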

Figure 5: The limitation of Eq. (6). The figure illustrates the performance of the GAIRAT-trained model when CW is simulated in training.

However, the efficacy of this heuristic reweighting function is limited to PGD. When CW is simulated in training, Figure 5 shows the same decrease against CW as in Figure 1(c).

Therefore, in this section, we propose a general vulnerability-based and attack-dependent reweighting strategy to calculate the weights of the corresponding variants:

\omega(\tilde{x},y)\mathrel{\mathop{:}}=g(\mathcal{V}_{(\tilde{x},y)}), (7)

where \mathcal{V} is a predetermined function that measures the vulnerability between the classifier and a certain adversarial variant, and g is a decreasing function of the variable \mathcal{V}.

The generations of adversarial variants follow different rules under different attacks. For example, the adversarial variant generated by PGD misleads the classifier with the lowest probability of predicting the correct label [18]. In contrast, the adversarial variant generated by CW misleads the classifier with the highest probability of predicting a wrong label [5]. Motivated by these different generating processes, we define the vulnerability in the view of PGD and CW in the following.

Algorithm 1 Locally Reweighted Adversarial Training (LRAT)
  Input: network architecture parametrized by \theta, training dataset S, learning rate \eta, number of epochs T, batch size n, number of batches N, number of attacks m;
  Output: adversarially robust network f_{\theta};
  for epoch = 1,2,\dots,T do
     for mini-batch = 1,2,\dots,N do
        Sample a mini-batch \{(x_{i},y_{i})\}^{n}_{i=1} from S;
        for i = 1,2,\dots,n do
           for j = 1,2,\dots,m do
              Obtain adversarial data \tilde{x}_{ij} of x_{i} (e.g., PGD by Algorithm 2);
              Calculate w_{ij} according to \mathcal{V}_{(\tilde{x}_{ij},y_{i})} by Eq. (7);
           end for
        end for
        \theta\leftarrow\theta-\eta\sum_{i=1}^{n}\nabla_{\theta}\left[\left[\mathcal{C}-\sum_{j=1}^{m}w_{ij}\right]_{+}\ell(f_{\theta}(x_{i}),y_{i})+\sum_{j=1}^{m}w_{ij}\,\ell(f_{\theta}(\tilde{x}_{ij}),y_{i})\right]/n;
     end for
  end for

Definition 1 (Vulnerability in the view of PGD).

In the view of PGD, the vulnerability \mathcal{V}^{\textnormal{PGD}} regarding \tilde{x} (generated by PGD) is defined as

\mathcal{V}^{\textnormal{PGD}}_{(\tilde{x},y)}\mathrel{\mathop{:}}=\mathrm{p}_{y}(\tilde{x}), (8)

where \mathrm{p}_{y} denotes the predicted probability (softmax on logits) of \tilde{x} belonging to the true class y.

For a PGD-based adversarial variant, the lower the predicted probability on the true class, the smaller \mathcal{V}^{\textnormal{PGD}} in Eq. (8), and hence the more vulnerable the instance.

Definition 2 (Vulnerability in the view of CW).

In the view of CW, the vulnerability \mathcal{V}^{\textnormal{CW}} regarding \tilde{x} (generated by CW) is defined as

\mathcal{V}^{\textnormal{CW}}_{(\tilde{x},y)}\mathrel{\mathop{:}}=\mathrm{p}_{y}(\tilde{x})-\max[\mathrm{p}_{i}(\tilde{x}):i\neq y], (9)

where \mathrm{p}_{y} denotes the predicted probability (softmax on logits) of \tilde{x} (CW) belonging to the true class y, and \max[\mathrm{p}_{i}(\tilde{x}):i\neq y] denotes the maximum predicted probability (softmax on logits) of \tilde{x} (CW) belonging to a wrong class i (i\neq y).

For a CW-based adversarial variant, the higher the predicted probability on a wrong class relative to the true class, the smaller \mathcal{V}^{\textnormal{CW}} in Eq. (9), and hence the more vulnerable the instance. In this paper, we consider the decreasing function g of the form

g(\mathcal{V})\mathrel{\mathop{:}}=\alpha(1-\mathcal{V})^{\beta}, (10)

where \alpha>0 and \beta>0 are hyper-parameters of each attack. In Eq. (10), adversarial variants with higher \mathcal{V} are given lower weights. We present locally reweighted adversarial training in Algorithm 1. LRAT uses different given attacks (e.g., PGD, Algorithm 2 in Appendix A) to obtain different adversarial variants, and leverages the attack-dependent reweighting strategy to obtain their corresponding weights. For each mini-batch, LRAT reweights the losses of the different adversarial variants according to our vulnerability-based reweighting strategy, and then updates the model parameters by minimizing the sum of the reweighted losses over the instances.
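The reweighting strategy in Eqs. (7)-(10) reduces to a few lines of code; the sketch below computes \mathcal{V}^{\textnormal{PGD}}, \mathcal{V}^{\textnormal{CW}}, and the resulting weights from the model logits, with the clamp added only as a numerical safeguard (an implementation choice, not part of the original definitions).

  # Sketch of the vulnerability-based reweighting strategy (Eqs. (7)-(10)).
  import torch
  import torch.nn.functional as F

  @torch.no_grad()
  def vulnerability_pgd(model, x_adv, y):
      # Eq. (8): predicted probability of the true class for the PGD variant
      probs = F.softmax(model(x_adv), dim=1)
      return probs.gather(1, y.unsqueeze(1)).squeeze(1)

  @torch.no_grad()
  def vulnerability_cw(model, x_adv, y):
      # Eq. (9): true-class probability minus the largest wrong-class probability
      probs = F.softmax(model(x_adv), dim=1)
      p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)
      p_wrong = probs.scatter(1, y.unsqueeze(1), 0.0).max(dim=1).values  # zero out the true class
      return p_true - p_wrong

  def weight_from_vulnerability(v, alpha=2.0, beta=0.5):
      # Eq. (10): g(V) = alpha * (1 - V)^beta, decreasing in V
      return alpha * (1.0 - v).clamp(min=0.0) ** beta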

Table 1: Test accuracy (%) of LRAT and other methods.

ResNet-18
Methods    Natural    PGD             CW              AA
AT         82.88      51.16 ± 0.13    49.74 ± 0.17    48.57 ± 0.11
GAIRAT     80.97      56.29 ± 0.19    45.77 ± 0.13    32.57 ± 0.15
LRAT       82.80      53.01 ± 0.13    50.49 ± 0.16    48.60 ± 0.20

WRN-32-10
Methods    Natural    PGD             CW              AA
AT         83.42      53.13 ± 0.18    52.26 ± 0.14    46.21 ± 0.14
GAIRAT     82.11      62.74 ± 0.08    44.63 ± 0.18    44.63 ± 0.18
LRAT       83.02      55.01 ± 0.19    53.72 ± 0.26    46.13 ± 0.15
Table 2: Test accuracy (%) of LRAT-TRADES and other methods. For each metric, results at the 60th epoch (best checkpoint) and the 100th epoch (last checkpoint) are reported as 60th / 100th.

ResNet-18
Methods        Natural          PGD                            CW                             AA
TRADES         78.90 / 83.15    53.26 ± 0.17 / 52.62 ± 0.19    51.74 ± 0.16 / 50.38 ± 0.18    48.15 ± 0.12 / 47.80 ± 0.11
GAIR-TRADES    78.55 / 82.19    60.90 ± 0.18 / 58.17 ± 0.13    43.39 ± 0.16 / 41.27 ± 0.19    35.29 ± 0.14 / 33.24 ± 0.18
LRAT-TRADES    78.83 / 82.91    55.21 ± 0.15 / 54.77 ± 0.17    52.97 ± 0.23 / 52.09 ± 0.14    48.24 ± 0.19 / 47.81 ± 0.20

WRN-32-10
Methods        Natural          PGD                            CW                             AA
TRADES         82.96 / 86.21    55.11 ± 0.16 / 54.27 ± 0.18    54.19 ± 0.19 / 53.09 ± 0.13    52.14 ± 0.12 / 51.70 ± 0.15
GAIR-TRADES    82.20 / 85.35    63.34 ± 0.17 / 61.27 ± 0.19    45.31 ± 0.12 / 43.32 ± 0.14    37.82 ± 0.13 / 35.88 ± 0.09
LRAT-TRADES    82.74 / 85.99    57.68 ± 0.13 / 56.89 ± 0.19    55.12 ± 0.23 / 54.27 ± 0.17    52.17 ± 0.14 / 51.78 ± 0.24

5 Experiments

In this section, we justify the efficacy of LRAT using networks with various model capacities. In the experiments, we consider L_{\infty}-norm bounded perturbations \|\tilde{x}-x\|_{\infty}\leq\epsilon in both training and evaluation. All images of CIFAR-10 are normalized into [0,1].

5.1 Baselines

We compare LRAT with the no-reweighting strategy (i.e., SAT [18]) and the global-reweighting strategy (i.e., GAIRAT [38]). Rice et al. [24] show that, unlike in standard training, overfitting in adversarially robust training degrades test set performance during training. Thus, as suggested by Rice et al. [24], we compare the different methods on the best checkpoint model (the early-stopping result at epoch 60). Besides, we also design LRAT for TRADES [36], denoted as LRAT-TRADES, with the details of the algorithm given in Appendix B. Accordingly, we also compare LRAT-TRADES with TRADES [36] and GAIR-TRADES [38]. TRADES-based methods effectively mitigate this overfitting [24], so we compare the different methods on both the best checkpoint model and the last checkpoint model (used by Madry et al. [18]).

5.2 Experimental Setup

We employ a small-capacity network, ResNet-18 [14], and a large-capacity network, Wide ResNet (WRN-32-10) [35]. Our experimental setup follows previous works [18, 31, 38]. All networks are trained for 100 epochs using SGD with 0.9 momentum. The initial learning rate is 0.01, divided by 10 at epochs 60 and 90. The weight decay is 0.0035. For generating the PGD adversarial data used to update the network, the L_{\infty}-norm bounded perturbation is \epsilon_{train}=8/255, the maximum PGD step is K=10, and the step size is \alpha=\epsilon_{train}/10. For generating the CW adversarial data [5], we follow the setup in [3, 37], where the confidence is \kappa=50 and the other hyper-parameters are the same as those of PGD above. Robustness to adversarial data is the main evaluation indicator in adversarial training [6, 8, 9, 10, 21, 33, 40]. Thus, we evaluate the robust models with four metrics: standard test accuracy on natural data (Natural), and robust test accuracy on adversarial data generated by the projected gradient descent attack (PGD) [18], the Carlini and Wagner attack (CW) [5], and AutoAttack (AA). In testing, the L_{\infty}-norm bounded perturbation is \epsilon_{test}=8/255, the maximum PGD step is K=20, and the step size is \alpha=\epsilon/4. There is a random start in both training and testing, i.e., uniformly random perturbations (in [-\epsilon_{train},+\epsilon_{train}] and [-\epsilon_{test},+\epsilon_{test}], respectively) are added to the natural instances. Due to the random start, we test our methods and the baselines five times with different random seeds.
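For concreteness, the optimizer and learning-rate schedule above can be configured as in the following sketch; the constructors and constants mirror the stated setup but are illustrative rather than the authors' exact code.

  # Sketch of the training setup: SGD with momentum 0.9, weight decay 0.0035,
  # initial learning rate 0.01 divided by 10 at epochs 60 and 90.
  import torch

  def make_optimizer_and_scheduler(model):
      optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                  momentum=0.9, weight_decay=0.0035)
      scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                       milestones=[60, 90], gamma=0.1)
      return optimizer, scheduler

  EPS_TRAIN = 8 / 255          # L-infinity perturbation bound in training
  PGD_STEPS = 10               # maximum PGD step K in training
  STEP_SIZE = EPS_TRAIN / 10   # PGD step size alpha in training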

5.3 Performance Evaluation

Tables 1 and 2 report the medians and standard deviations of the results. In our experiments with LRAT, we simulate PGD and CW in training (m=2 in Eq. (5)). For the adversarial variants, we use our vulnerability-based reweighting strategy (Eq. (7)) to obtain their corresponding weights, where \mathcal{V}^{\textnormal{PGD}} follows Eq. (8) and \mathcal{V}^{\textnormal{CW}} follows Eq. (9). We set the three hyper-parameters to \alpha=2 and \beta=0.5 (in Eq. (10)) and \mathcal{C}=0.1 (in Eq. (5)), and analyze this choice in Appendix C.

Compared with SAT, LRAT significantly boosts adversarial robustness under PGD and CW, and the efficacy of LRAT does not diminish under AA. The results show that our local-reweighting strategy is superior to the no-reweighting strategy. Compared with GAIRAT, LRAT achieves a large improvement under CW and AA. Although GAIRAT improves the performance under PGD, this alone is not enough for an effective adversarial defense. Adversarial defense can be described by a barrel effect: if one stave is short, the barrel leaks. Similarly, if there is a weakness in defending against some attacks, the classifier fails. In contrast, LRAT reduces the threat of any potential attack, as an effective defense should. Thus, the results also show that our local-reweighting strategy is superior to the global-reweighting strategy. In general, the results affirmatively confirm the efficacy of LRAT. We admit that LRAT yields only a minor improvement under AA (which is not simulated in training), and the reason behind the limited improvement is the inconsistent vulnerability in different views. Since it is impractical to simulate all attacks in training, we recommend that practitioners simulate multiple attacks and assign some weight to the natural (or PGD adversarial) data term for each instance during training.

Table 3: Objective functions of the ablation study.

(P)      \omega(\tilde{x}^{p},y)\,\ell(f_{\theta}(\tilde{x}^{p}),y)
(C)      \omega(\tilde{x}^{c},y)\,\ell(f_{\theta}(\tilde{x}^{c}),y)
([N])    \left[\mathcal{C}-\omega(\tilde{x}^{p},y)-\omega(\tilde{x}^{c},y)\right]_{+}\ell(f_{\theta}(x),y)
([P])    \left[\mathcal{C}-\omega(\tilde{x}^{p},y)-\omega(\tilde{x}^{c},y)\right]_{+}\ell(f_{\theta}(\tilde{x}^{p}),y)

Table 4: Ablation study of LRAT: test accuracy (%) using ResNet-18.

Ablation      Natural    PGD             CW              AA
(P)           80.21      52.66 ± 0.14    49.20 ± 0.22    47.72 ± 0.09
(C)           79.55      50.57 ± 0.10    51.13 ± 0.16    47.81 ± 0.12
(P+C)         82.40      53.52 ± 0.15    50.71 ± 0.08    47.80 ± 0.17
([N]+P+C)     82.80      53.01 ± 0.13    50.49 ± 0.16    48.60 ± 0.20
([P]+P+C)     82.08      53.33 ± 0.18    50.60 ± 0.19    48.90 ± 0.12

5.4 Ablation Study

This subsection validates that each component of LRAT improves adversarial robustness. P (or C) denotes that only PGD (or CW) with its corresponding reweighting strategy is simulated during training. [N] (or [P]) denotes assigning some weight to the natural (or PGD adversarial) data, which serves as a gentle lower bound to avoid discarding instances during training. The objective functions are listed in Table 3, where \tilde{x}^{p} is the adversarial data under PGD and \tilde{x}^{c} is the adversarial data under CW. The results are reported in Table 4. The results show that the robustness of the global-reweighting variants (P) and (C) is lower than that of the three local-reweighting variants when tested on attacks different from the given attack simulated during training. The results also show that the robustness of ([N]+P+C) and ([P]+P+C) is higher than that of the three variants without a lower bound when tested on AA, which confirms that a gentle lower bound on instance weights is useful for defending against potentially adaptive adversarial attacks.

6 Conclusion

Reweighting adversarial variants during AT has shown great potential for improving adversarial robustness. This paper provides a new perspective on this promising direction and aims to train a robust classifier that defends against various attacks. Our proposal, locally reweighted adversarial training (LRAT), pairs each instance with its adversarial variants and performs local reweighting inside each pair. LRAT does not skip any pair during adversarial training, so that it can passively defend against different attacks in the future. Experiments show that LRAT works better than both IRAT (i.e., global reweighting) and standard AT (i.e., no reweighting) when trained with an attack and tested on different attacks. As a general framework, LRAT provides insights on how to design powerful reweighted adversarial training under any potential adversarial attack.

References

  • Bai et al. [2019] Y. Bai, Y. Feng, Y. Wang, T. Dai, S.-T. Xia, and Y. Jiang. Hilbert-based generative defense for adversarial examples. In ICCV, 2019.
  • Balunovic and Vechev [2019] M. Balunovic and M. Vechev. Adversarial training and provable defenses: Bridging the gap. In ICLR, 2019.
  • Cai et al. [2018] Q.-Z. Cai, M. Du, C. Liu, and D. Song. Curriculum adversarial training. In IJCAI, 2018.
  • Carlini and Wagner [2017a] N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017a.
  • Carlini and Wagner [2017b] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In CVPR, 2017b.
  • Carlini et al. [2019] N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, A. Madry, and A. Kurakin. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
  • Chen et al. [2015] C. Chen, A. Seff, A. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In ICCV, 2015.
  • Chen et al. [2020] T. Chen, S. Liu, S. Chang, Y. Cheng, L. Amini, and Z. Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In CVPR, 2020.
  • Cohen et al. [2019] J. Cohen, E. Rosenfeld, and Z. Kolter. Certified adversarial robustness via randomized smoothing. In ICML, 2019.
  • Du et al. [2021] X. Du, J. Zhang, B. Han, T. Liu, Y. Rong, G. Niu, J. Huang, and M. Sugiyama. Learning diverse-structured networks for adversarial robustness. In ICML, 2021.
  • Finlayson et al. [2019] S. G. Finlayson, J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane. Adversarial attacks on medical machine learning. Science, 2019.
  • Gao et al. [2021] R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu, and M. Sugiyama. Maximum mean discrepancy is aware of adversarial attacks. In ICML, 2021.
  • Goodfellow et al. [2015] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
  • He et al. [2016] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • He et al. [2018] W. He, B. Li, and D. Song. Decision boundary analysis of adversarial examples. In ICLR, 2018.
  • Kurakin et al. [2017] A. Kurakin, I. Goodfellow, S. Bengio, et al. Adversarial examples in the physical world. In ICLR, 2017.
  • Ma et al. [2021] X. Ma, Y. Niu, L. Gu, Y. Wang, Y. Zhao, J. Bailey, and F. Lu. Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognition, 2021.
  • Madry et al. [2018] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
  • Miyato et al. [2017] T. Miyato, A. M. Dai, and I. Goodfellow. Adversarial training methods for semi-supervised text classification. In ICLR, 2017.
  • Nguyen et al. [2015] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.
  • Pang et al. [2019] T. Pang, K. Xu, C. Du, N. Chen, and J. Zhu. Improving adversarial robustness via promoting ensemble diversity. In ICML, 2019.
  • Papernot et al. [2016] N. Papernot, P. McDaniel, A. Sinha, and M. Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.
  • Raghunathan et al. [2020] A. Raghunathan, S. M. Xie, F. Yang, J. Duchi, and P. Liang. Understanding and mitigating the tradeoff between robustness and accuracy. In ICML, 2020.
  • Rice et al. [2020] L. Rice, E. Wong, and Z. Kolter. Overfitting in adversarially robust deep learning. In ICML, 2020.
  • Shafahi et al. [2020] A. Shafahi, M. Najibi, Z. Xu, J. Dickerson, L. S. Davis, and T. Goldstein. Universal adversarial training. In AAAI, 2020.
  • Szegedy et al. [2014] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
  • Tsipras et al. [2019] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. Robustness may be at odds with accuracy. In ICLR, 2019.
  • Van der Maaten and Hinton [2008] L. Van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learning research, 2008.
  • Wang et al. [2020a] H. Wang, T. Chen, S. Gui, T.-K. Hu, J. Liu, and Z. Wang. Once-for-all adversarial training: In-situ tradeoff between robustness and accuracy for free. In NeurIPS, 2020a.
  • Wang et al. [2019] Y. Wang, X. Ma, J. Bailey, J. Yi, B. Zhou, and Q. Gu. On the convergence and robustness of adversarial training. In ICML, 2019.
  • Wang et al. [2020b] Y. Wang, D. Zou, J. Yi, J. Bailey, X. Ma, and Q. Gu. Improving adversarial robustness requires revisiting misclassified examples. In ICLR, 2020b.
  • Wu et al. [2020] D. Wu, S.-T. Xia, and Y. Wang. Adversarial weight perturbation helps robust generalization. In NeurIPS, 2020.
  • Yang et al. [2021] S. Yang, T. Guo, Y. Wang, and C. Xu. Adversarial robustness through disentangled representations. In AAAI, 2021.
  • Yang et al. [2020] Y.-Y. Yang, C. Rashtchian, H. Zhang, R. Salakhutdinov, and K. Chaudhuri. A closer look at accuracy vs. robustness. In NeurIPS, 2020.
  • Zagoruyko and Komodakis [2016] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016.
  • Zhang et al. [2019] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan. Theoretically principled trade-off between robustness and accuracy. In ICML, 2019.
  • Zhang et al. [2020a] J. Zhang, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli. Attacks which do not kill training make adversarial learning stronger. In ICML, 2020a.
  • Zhang et al. [2021] J. Zhang, J. Zhu, G. Niu, B. Han, M. Sugiyama, and M. Kankanhalli. Geometry-aware instance-reweighted adversarial training. In ICLR, 2021.
  • Zhang et al. [2020b] Y. Zhang, Y. Li, T. Liu, and X. Tian. Dual-path distillation: A unified framework to improve black-box attacks. In ICML, 2020b.
  • Zhu et al. [2021] J. Zhu, J. Zhang, B. Han, T. Liu, G. Niu, H. Yang, M. Kankanhalli, and M. Sugiyama. Understanding the interaction of adversarial training with noisy labels. arXiv preprint arXiv:2102.03482, 2021.

Appendix A Adversarial Attack

Algorithm 2 and Algorithm 3 describe the adversarial data generation of PGD and CW, respectively. The loss function in PGD is:

\ell_{CE}\mathrel{\mathop{:}}=-\log\mathrm{p}_{y}(\tilde{x}), (11)

where \mathrm{p}_{y} denotes the predicted probability (softmax on logits) of \tilde{x} belonging to the true class y.

The loss function in CW is:

\ell_{CW}\mathrel{\mathop{:}}=-\mathrm{Z}_{y}(\tilde{x})+\max[\mathrm{Z}_{i}(\tilde{x}):i\neq y]-\kappa, (12)

where \mathrm{Z}_{y} denotes the prediction (logit, before softmax) of \tilde{x} for the true class y, and \kappa=50 (following the setup in [3, 37]) is a hyper-parameter denoting the confidence.
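The two attack losses in Eqs. (11) and (12) can be sketched directly from the model logits; this is a minimal illustration under the definitions above, not the authors' exact implementation.

  # Sketch of the PGD loss (Eq. (11), cross-entropy) and the CW loss (Eq. (12),
  # a logit margin with confidence kappa), both computed from the logits Z.
  import torch
  import torch.nn.functional as F

  def ce_loss(logits, y):
      # Eq. (11): -log p_y(x~)
      return F.cross_entropy(logits, y)

  def cw_loss(logits, y, kappa=50.0):
      # Eq. (12): -Z_y + max_{i != y} Z_i - kappa
      z_true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
      z_wrong = logits.scatter(1, y.unsqueeze(1), float("-inf")).max(dim=1).values
      return (-z_true + z_wrong - kappa).mean()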

Appendix B TRadeoff-inspired Adversarial DEfense via Surrogate-loss Minimization

The objective function of TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization (TRADES) [36] is

\mathop{\min}\limits_{f_{\theta}\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\left(\ell(f_{\theta}(x_{i}),y_{i})+\frac{1}{\lambda}\ell_{KL}(f_{\theta}(\tilde{x}_{i}),f_{\theta}(x_{i}))\right), (13)

where

\tilde{x}_{i}=\mathop{\operatorname*{arg\,max}}\limits_{\tilde{x}\in\mathcal{B}_{\epsilon}[x_{i}]}\ell_{KL}(f_{\theta}(\tilde{x}),f_{\theta}(x_{i})). (14)

Here \lambda>0 is a regularization parameter; other settings remain the same as in standard adversarial training. The approximation method for searching for adversarial data in TRADES is:

x^{(t+1)}=\Pi_{\mathcal{B}_{\epsilon}[x^{(0)}]}\big(x^{(t)}+\alpha\operatorname{sign}(\nabla_{x^{(t)}}\ell_{KL}(f_{\theta}(x^{(t)}),f_{\theta}(x^{(0)})))\big),\quad t\in\mathbb{N}. (15)

Instead of the loss function \ell:\mathbb{R}^{K}\times\mathcal{Y}\to\mathbb{R} in Eq. (3), TRADES uses \ell_{KL}, the KL divergence between the predictions on natural data and their adversarial variants, i.e.:

\mathcal{R}(x,\delta;\theta)=\mathcal{L}_{KL}[\mathrm{p}(x;\theta)\parallel\mathrm{p}(\tilde{x};\theta)]=\sum_{k}\mathrm{p}_{k}(x;\theta)\log\frac{\mathrm{p}_{k}(x;\theta)}{\mathrm{p}_{k}(\tilde{x};\theta)}. (16)

TRADES generates adversarial data whose predicted probability distribution differs maximally from that of the natural data, instead of generating the most adversarial data as in AT. We also design LRAT for TRADES, i.e., LRAT-TRADES, in Algorithm 4.
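A minimal sketch of the KL term in Eq. (16) and the outer LRAT-TRADES loss (the bracketed term inside the update of Algorithm 4), assuming the adversarial variants and their weights have already been computed; the tensor names are illustrative.

  # Sketch of the KL regularizer in Eq. (16) and the outer LRAT-TRADES loss
  # of Algorithm 4 for one mini-batch.
  import torch
  import torch.nn.functional as F

  def kl_term(logits_adv, logits_nat):
      # KL( p(x) || p(x~) ) as in Eq. (16), summed over classes, per instance
      p_nat = F.softmax(logits_nat, dim=1)
      return (p_nat * (F.log_softmax(logits_nat, dim=1) - F.log_softmax(logits_adv, dim=1))).sum(dim=1)

  def lrat_trades_loss(model, x_nat, y, adv_batches, weight_batches, C=0.1):
      logits_nat = model(x_nat)
      weight_sum = torch.stack(weight_batches, dim=0).sum(dim=0)
      nat_coeff = (C - weight_sum).clamp(min=0.0)                 # [C - sum_j w_ij]_+
      total = nat_coeff * F.cross_entropy(logits_nat, y, reduction="none")
      for x_adv, w in zip(adv_batches, weight_batches):
          total = total + w * kl_term(model(x_adv), logits_nat)
      return total.mean()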

Appendix C Experimental Details

Weighting Normalization. In our experiments, we impose another constraint on our objective Eq. (5):

\frac{1}{n}\sum_{i=1}^{n}\left(\left[\mathcal{C}-\sum_{j=1}^{m}\omega(\tilde{x}_{ij},y_{i})\right]_{+}+\sum_{j=1}^{m}\omega(\tilde{x}_{ij},y_{i})\right)=1, (17)

to ensure a fair comparison with the baselines.
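One way to implement the normalization in Eq. (17) is to rescale the per-instance coefficients by their mean total weight over the mini-batch, as in the sketch below; this is an illustrative realization under the stated constraint, not necessarily the exact procedure used in the released code.

  # Sketch of the weight normalization in Eq. (17): rescale the natural-term
  # coefficient and the attack weights so that their per-instance sum averages to 1.
  import torch

  def normalize_lrat_weights(weight_batches, C=0.1):
      # weight_batches: list of m tensors of shape (n,), one per simulated attack
      weights = torch.stack(weight_batches, dim=0)            # shape (m, n)
      nat_coeff = (C - weights.sum(dim=0)).clamp(min=0.0)     # [C - sum_j w_ij]_+, shape (n,)
      scale = (nat_coeff + weights.sum(dim=0)).mean()         # mean total weight per instance
      return nat_coeff / scale, [w / scale for w in weights]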

Algorithm 2 Adversarial Data Generation in the Projected Gradient Descent Attack (PGD)
  Input: natural data x\in\mathcal{X}, label y\in\mathcal{Y}, model f, loss function \ell_{CE}, maximum PGD step K, perturbation bound \epsilon, step size \alpha;
  Output: adversarial data \tilde{x};
  \tilde{x}\leftarrow x;
  while K>0 do
     \tilde{x}\leftarrow\Pi_{\mathcal{B}_{\epsilon}[x]}(\tilde{x}+\alpha\operatorname{sign}(\nabla_{\tilde{x}}\ell_{CE}(f(\tilde{x}),y)));
     K\leftarrow K-1;
  end while

Algorithm 3 Adversarial Data Generation in the Carlini and Wagner Attack (CW)
  Input: natural data x\in\mathcal{X}, label y\in\mathcal{Y}, model f, loss function \ell_{CW}, maximum PGD step K, perturbation bound \epsilon, step size \alpha;
  Output: adversarial data \tilde{x};
  \tilde{x}\leftarrow x;
  while K>0 do
     \tilde{x}\leftarrow\Pi_{\mathcal{B}_{\epsilon}[x]}(\tilde{x}+\alpha\operatorname{sign}(\nabla_{\tilde{x}}\ell_{CW}(f(\tilde{x}),y)));
     K\leftarrow K-1;
  end while

Algorithm 4 Locally Reweighted Adversarial Training for TRADES (LRAT-TRADES)
  Input: network architecture parametrized by \theta, training dataset S, learning rate \eta, number of epochs T, batch size n, number of batches N, number of attacks m;
  Output: adversarially robust network f_{\theta};
  for epoch = 1,2,\dots,T do
     for mini-batch = 1,2,\dots,N do
        Sample a mini-batch \{(x_{i},y_{i})\}^{n}_{i=1} from S;
        for i = 1,2,\dots,n do
           for j = 1,2,\dots,m do
              Obtain adversarial data \tilde{x}_{ij} of x_{i} (e.g., PGD by Algorithm 2);
              Calculate w_{ij} according to \mathcal{V}_{(\tilde{x}_{ij},y_{i})} by Eq. (7);
           end for
        end for
        \theta\leftarrow\theta-\eta\sum_{i=1}^{n}\nabla_{\theta}\left[\left[\mathcal{C}-\sum_{j=1}^{m}w_{ij}\right]_{+}\ell_{CE}(f_{\theta}(x_{i}),y_{i})+\sum_{j=1}^{m}w_{ij}\,\ell_{KL}(f_{\theta}(\tilde{x}_{ij}),f_{\theta}(x_{i}))\right]/n;
     end for
  end for

Selection of Hyper-parameters. It is an open question how to define the decreasing function g in Eq. (7) (or, given the form of g in Eq. (10), how to select its hyper-parameters in different situations). In our experiments, given the alternative value set \{0.25,0.5,1.5,2,4\}, we choose \alpha=2 and \beta=0.5 in Eq. (10), and given the alternative value set \{0.1,0.2,0.4,0.6\}, we choose \mathcal{C}=0.1 in Eq. (5). Our experiments show that the performance varies only slightly across different hyper-parameters, and we choose the hyper-parameters whose robustness against PGD is the best.

Natural Data Term. To avoid discarding instances during training, we impose the non-negative coefficient [\mathcal{C}-\sum_{j=1}^{m}\omega(\tilde{x}_{ij},y_{i})]_{+} in Eq. (5) to assign some weight to the natural data term. The definition of the natural data term can vary as required by different tasks. When practitioners only focus on the robustness under attacks simulated in training, this term can be eliminated. When practitioners focus on the robustness under attacks simulated in training and the accuracy on natural data, this term can be defined as the loss of the natural instance. When practitioners focus on the robustness under attacks both simulated and unseen in training, this term can be defined as the loss of an adversarial variant. In our experiments, the natural data term in LRAT (i.e., ([N]+P+C) in our ablation study) is the sum of the losses of the natural instance and the PGD adversarial variant, which aims to achieve both robustness and accuracy.

Appendix D Experimental Resources

We implement all methods in Python 3.7 (PyTorch 1.7.1) on an NVIDIA GeForce RTX 3090 GPU with an AMD Ryzen Threadripper 3960X 24-core processor. The CIFAR-10 dataset and the SVHN dataset can be downloaded via PyTorch. See the submitted code. Given the 50,000 images from the CIFAR-10 training set and the 73,257 digits from the SVHN training set, we conduct adversarial training on ResNet-18 and WRN-32-10 for classification. The DNNs are trained using SGD with 0.9 momentum, an initial learning rate of 0.01, and a batch size of 128 for 100 epochs.

Appendix E Additional Experiments on the SVHN

We also justify the efficacy of LRAT on SVHN. In the experiments, we employ ResNet-18 and consider L_{\infty}-norm bounded perturbations \|\tilde{x}-x\|_{\infty}\leq\epsilon in both training and evaluation. All images of SVHN are normalized into [0,1]. Table 5 reports the medians and standard deviations of the results. Compared with SAT, LRAT significantly boosts adversarial robustness under PGD and CW, and the efficacy of LRAT does not diminish under AA. Compared with GAIRAT, LRAT achieves a large improvement under CW and AA.

Table 5: Test accuracy (%) of LRAT and other methods on SVHN.

Methods    Natural    PGD             CW              AA
AT         92.38      55.97 ± 0.18    52.90 ± 0.21    47.84 ± 0.17
GAIRAT     90.31      62.96 ± 0.10    47.17 ± 0.17    38.74 ± 0.19
LRAT       92.30      59.76 ± 0.14    55.47 ± 0.18    47.65 ± 0.27

Appendix F Discussions on the Defense against Unseen Attacks

As a general framework, LRAT provides insights on how to design powerful reweighted adversarial training under different adversarial attacks. Due to the inconsistent vulnerability in different views, reweighted adversarial training carries the risk of weakening the ability to defend against unseen attacks. Thus, we recommend that practitioners simulate diverse attacks during training. Note that this does not mean that practitioners should use different attacks indiscriminately—for instance, during standard adversarial training, mixing some weak adversarial data into PGD adversarial data will, on the contrary, weaken the robustness. The recommended diversity refers to the diversity of information focused on during adversarial data generation, such as the difference between misleading the classifier with the lowest probability of predicting the correct label (PGD) and misleading the classifier with the highest probability of predicting a wrong label (CW).