Robust Physical-World Attacks on Face Recognition
Abstract
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications. However, recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition. In this work, we study sticker-based physical attacks on face recognition to better understand its adversarial robustness. To this end, we first analyze in depth the complicated physical-world conditions confronted when attacking face recognition, including the different variations of stickers, faces, and environmental conditions. Then, we propose a novel robust physical attack framework, dubbed PadvFace, to specifically model these challenging variations. Furthermore, considering the difference in attack complexity, we propose an efficient Curriculum Adversarial Attack (CAA) algorithm that gradually adapts adversarial stickers to environmental variations from easy to complex. Finally, we construct a standardized testing protocol to facilitate the fair evaluation of physical attacks on face recognition, and extensive experiments on both dodging and impersonation attacks demonstrate the superior performance of the proposed method.
1 Introduction
Face recognition has achieved substantial success with the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications, such as video surveillance and face authentication (Huang et al. 2020b; Deng et al. 2019; Wang et al. 2018). However, recent works demonstrate that DNN-based face recognition models are very vulnerable to adversarial examples, where even small, maliciously crafted perturbations can cause incorrect predictions (Sharif et al. 2016, 2019; Dong et al. 2019; Komkov and Petiushko 2020; Xiao et al. 2021). For instance, when wearing an adversarial eyeglass frame, an attacker can deceive a face recognition system into recognizing them as another identity (Sharif et al. 2016). Such an adversarial phenomenon has raised serious concerns about the security of face recognition, and it is imperative to understand its adversarial robustness.
Adversarial attack has been the most commonly adopted surrogate for evaluating adversarial robustness. Existing attack methods on face recognition can be categorized into two types: (1) digital attacks, where an attacker can perturb input images of face recognition directly in the digital domain (Goodfellow, Shlens, and Szegedy 2015; Dong et al. 2019; Qiu et al. 2020), and (2) physical attacks, realized by imposing adversarial perturbations on real faces in the physical world, e.g., wearable adversarial stickers (Sharif et al. 2016, 2019; Komkov and Petiushko 2020). As attackers usually cannot access or modify the digital input of physical-world face recognition systems, physical attacks are more practical for evaluating their adversarial robustness. However, in contrast to the plethora of digital attack methods, few works have addressed physical attacks on face recognition, which remain challenging due to the complicated physical-world condition variations.
In this work, we study sticker-based physical attacks that aim to generate wearable adversarial stickers to deceive state-of-the-art face recognition, for a better understanding of its adversarial robustness. For robust physical attacks, an adversarial sticker should survive complicated physical-world conditions. To this end, we first provide an in-depth analysis of the different physical-world conditions encountered when attacking face recognition, including sticker and face variations, as well as environmental variations such as lighting conditions, camera angles, etc. Then, we propose a novel robust physical attack framework, dubbed PadvFace, that specifically considers and models these physical-world condition variations. Although some prior works show the possibility of performing physical attacks on face recognition (Sharif et al. 2016, 2019; Komkov and Petiushko 2020), their performance is still unsatisfactory because only part of the environmental variations is considered. For instance, the work of (Komkov and Petiushko 2020) proposed Advhat to attack physical-world face recognition systems, yet it did not address the chromatic aberration of stickers, facial variations, or a sufficient range of environmental conditions. In this work, we demonstrate that these physical-world variations also influence attack performance significantly.
In terms of the optimization in physical attacks, Expectation Over Transformation (EOT) (Athalye et al. 2018) is a standard optimizer that aggregates different physical-world condition variations to generate robust perturbations but simply treats each of them equally. However, we have the following observations: (1) the attack complexity of an adversarial sticker varies with different physical-world conditions, and (2) the optimization of physical attacks generally leads to a non-convex optimization problem due to the high non-linearity of DNNs. Thus, simply adapting the adversarial sticker to all kinds of physical-world variations equally could make the optimization difficult and lead to inferior solutions.
To alleviate these issues, we propose a novel Curriculum Adversarial Attack (CAA) algorithm that explores the difference in attack complexity across physical-world conditions and gradually aggregates these conditions from easy to complex during the optimization. CAA adheres to the principles of curriculum learning (Bengio et al. 2009), which has been shown to yield better local minima and superior generalization for non-convex optimization. Finally, we build a standardized testing protocol for physical attacks on face recognition and conduct a comprehensive experimental study on the adversarial robustness of state-of-the-art face recognition models, under both dodging and impersonation attacks. Extensive experimental results demonstrate the superior performance of the proposed method.
The contributions of our work are four-fold:
•
We propose a novel physical attack method, dubbed PadvFace, that models complicated physical-world condition variations in attacking face recognition.
•
We explore how attack complexity varies with physical-world conditions and propose an efficient Curriculum Adversarial Attack (CAA) algorithm.
•
We build a standardized testing protocol for facilitating the fair evaluation of physical attacks on face recognition.
•
We conduct a comprehensive experimental study and demonstrate the superior performance of the proposed physical attacks.
2 Related Work
Physical-world attacks aim to deceive deep neural networks by perturbing objects in the physical world (Athalye et al. 2018; Wang et al. 2019; Xu et al. 2020). They are usually realized by first generating adversarial perturbations in the digital space and then fabricating them and launching the attack in the physical world. Due to the complicated physical-world variations, there are inevitable distortions when directly imposing digital perturbations in the physical world. Hence, the research focus of current physical attacks lies in how to efficiently model and incorporate complicated physical-world conditions.
Currently, most physical attacks focus on image classification (Duan et al. 2020; Eykholt et al. 2018; Zhao and Stamm 2020; Jan et al. 2019; Li, Schmidt, and Kolter 2019) and object detection (Huang et al. 2020a; Zhang et al. 2018; Chen et al. 2018; Zhao et al. 2019; Zolfi et al. 2021). However, face recognition is quite a different task, with different model properties and objectives (Huang et al. 2020b; Deng et al. 2019; Wang et al. 2018). This makes physical attacks on face recognition distinct, since facial variations must be modeled in addition to general environmental conditions. Recently, there have been some attempts at physical attacks on face recognition (Sharif et al. 2016, 2019; Komkov and Petiushko 2020; Pautov et al. 2019; Yin et al. 2021). Due to better reproducibility and being harmless to human beings, sticker-based attacks have been the mainstream approach.
For sticker-based adversarial attacks, Sharif et al. (2016; 2019) explored adversarial eyeglass frames for physical attacks. They demonstrated that it was possible to deceive face recognition models by wearing such adversarial eyeglass frames. However, they did not consider illumination or facial variations. Furthermore, they only considered limited-scale face recognition models that were trained to recognize up to 143 identities and conducted attacks by perturbing face classification scores. In contrast, state-of-the-art (SOTA) face recognition models, such as ArcFace (Deng et al. 2019) or CosFace (Wang et al. 2018), are based on pair-wise (cosine) similarity and are usually trained on tens of thousands of identities and millions of training images. Thus, such eyeglass-based attack methods fail to fool SOTA face recognition models, as verified by Komkov and Petiushko (2020). The work of (Komkov and Petiushko 2020) proposed Advhat to deceive an ArcFace model trained on the large-scale MS1MV2 dataset (Deng et al. 2019). They verified that the adversarial hat could reduce the cosine similarity of two facial images from the same person. However, they did not consider the facial variations of attackers, nor the chromatic aberration of adversarial stickers induced by printers and cameras.
Meanwhile, existing physical attacks commonly adopt the EOT optimizer (Athalye et al. 2018), which treats different environmental variations equally during the optimization. However, we demonstrate in this work that attack complexity varies with physical-world conditions and that exploiting this characteristic yields more robust attacks.

3 Proposed Method
3.1 Preliminary
Let $x$ be an input facial image and $x^{a}$ be an anchor facial image, let $f(\cdot)$ be the face recognition model, and let $f(x)$ denote the learned feature embedding of $x$. State-of-the-art face recognition is generally realized based on the similarity between $f(x)$ and $f(x^{a})$. There are two types of adversarial attacks on face recognition: dodging attacks that aim to reduce the similarity between facial images from the same identity, and impersonation attacks that aim to increase the similarity between facial images from different identities. For dodging attacks, where $x$ and $x^{a}$ are captured from the same identity, the optimization of the adversarial sticker $\delta$ can be formulated as
$\min_{\delta}\ \mathcal{L}_{adv}\big(x_{adv}(\delta),\, x^{a}\big) \qquad (1)$
where $x_{adv}(\delta)$ denotes the attacker's facial image wearing the adversarial sticker $\delta$ and $\mathcal{L}_{adv}$ denotes the attack loss, e.g., the cosine loss $\mathcal{L}_{adv}=\cos\big\langle f(x_{adv}), f(x^{a})\big\rangle$. In contrast, impersonation attacks, where $x$ and $x^{a}$ are sampled from two different identities, can be optimized by minimizing $-\mathcal{L}_{adv}$.
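For illustration, a minimal sketch of this attack loss is given below; it is our own example (not the released implementation), where `emb_adv` and `emb_anchor` stand for the embeddings $f(x_{adv})$ and $f(x^{a})$:

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two feature embeddings."""
    u, v = np.ravel(u), np.ravel(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def attack_loss(emb_adv, emb_anchor, impersonation=False):
    """Dodging: minimize cos<f(x_adv), f(x^a)> for a same-identity anchor.
    Impersonation: minimize -cos<f(x_adv), f(x^a)> for a different-identity anchor."""
    cos = cosine_sim(emb_adv, emb_anchor)
    return -cos if impersonation else cos
```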
3.2 Physical Attack Challenges
The physical-world condition variations are challenging when attacking face recognition. Firstly, there are physical-world variations w.r.t. the sticker: 1) spatial constraints, since the sticker cannot cover all parts of the face, such as facial organs; 2) inevitable deformation and position disturbance when wearing the sticker and fitting it to the real face; 3) chromatic aberration of the sticker caused by printers and cameras. The sticker is first fabricated by a printer and then worn and photographed by a camera for attacking face recognition. Due to the limitation of printer resolution and the different shooting conditions during photographing, there is chromatic aberration between the sticker in the digital space and its counterpart in the physical world.
Secondly, there are physical-world variations w.r.t. the adversarial face: 1) photographing variations such as camera angles, head poses, and lighting conditions; 2) internal facial variations of attackers, such as different facial expressions and movements.
3.3 Robust PadvFace Framework
In this section, we propose a robust physical attack framework on face recognition, dubbed PadvFace, which considers and models the challenging physical-world conditions. Specifically, we adopt a rectangular sticker pasted on the forehead of an attacker without covering facial organs. The overall framework of the proposed PadvFace is illustrated in Fig. 1.
The rectangular sticker $\delta$ is first fed into a Digital-to-Physical (D2P) module to model the chromatic aberration induced by printers and cameras. Then, a sticker transformation module $T_s$ is introduced to simulate variations w.r.t. the sticker when pasting it on a real-world face. In the meantime, an initial mask $M$ is also fed into $T_s$, sharing the transformation applied to the sticker, which generates the blending mask $\hat{M}$. After these, the sticker is blended with a randomly selected facial image $x$ according to the blending mask $\hat{M}$, resulting in an initial adversarial image. This initial adversarial image is further fed into a transformation module $T_f$ to simulate environmental variations such as different poses and lighting conditions, leading to the ultimate adversarial facial image $x_{adv}$ for deceiving face recognition, i.e.,
$x_{adv} = T_f\Big(\big(1-\hat{M}\big)\odot x + \hat{M}\odot T_s\big(\mathrm{D2P}(\delta)+\epsilon\big)\Big) \qquad (2)$
where $\epsilon$ is a random Gaussian noise and $x\in X$ is a randomly selected facial image. More details of each module are introduced as follows.
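For illustration, a minimal sketch of this composition is shown below, assuming the D2P-mapped sticker, the mask, and the face are arrays of the same spatial size; the helper callables `sticker_transform` and `face_transform` stand in for one sampled instance of $T_s$ and $T_f$ and are not the released implementation:

```python
import numpy as np

def compose_adv_face(delta_phys, mask, face, sticker_transform, face_transform, eps):
    """Sketch of Eq. (2): paste the D2P-mapped sticker onto a sampled face
    and apply an environmental transformation."""
    sticker = sticker_transform(delta_phys + eps)   # T_s applied to D2P(delta) + Gaussian noise
    m = sticker_transform(mask)                     # the mask shares the same T_s parameters
    blended = (1.0 - m) * face + m * sticker        # blend the sticker onto the selected face x
    return face_transform(blended)                  # T_f: pose, scale, and lighting variations
```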
Digital-to-Physical (D2P) Module. There are two types of chromatic aberration: 1) the fabrication error induced by printers, which mainly refers to the color deviation between a digital color and its printed version due to the limitation of printer resolution; 2) the photographing error caused by cameras, such as sensor noise and different lighting conditions. To alleviate these issues, we develop a Digital-to-Physical (D2P) module to simulate the chromatic aberration of the sticker, inspired by (Xu et al. 2020). Specifically, the proposed D2P module is realized by training a multi-layer perceptron (MLP) to learn a 1:1 mapping from a customized digital color palette to its physically printed and photographed version, as illustrated in Fig. 2 (a-b), where Fig. 2 (c) presents the color palette learned by the proposed D2P module. Note that only parts of these color palettes are shown for brevity; the full color palettes and details of D2P are provided in the Appendix.

Sticker Transformation Module. This module, denoted $T_s$, contains: 1) sticker deformation when pasting the sticker on a real-world face, including off-plane bending and 3D rotations; following (Komkov and Petiushko 2020), we adopt a parabolic transformation operator to simulate the off-plane bending and a 3D transformation for rotations; 2) position disturbance, since it is hard to paste the sticker precisely at the designed position, simulated by random rotations and translations.
Face Transformation Module. As analyzed above, the attacker also undergoes a set of physical-world variations induced by different poses, lighting conditions, internal facial variations, etc. To model these variations, we sample facial images from both physical and synthetic transformations. Specifically, for internal facial variations, we capture real-world facial images with different facial expressions and movements, leading to a set of facial images $X$. To simulate the variations induced by different camera angles, poses, and lighting conditions, we consider a transformation module $T_f$ that applies random rotation, scaling, translation, contrast, and brightness changes to the adversarial facial images.
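For illustration, one draw of the $T_f$ parameters could look like the sketch below; the rotation range is an assumption of ours, while the remaining ranges follow Appendix Table A9:

```python
import numpy as np

def sample_face_transform(rng=None):
    """Sample one set of T_f parameters (illustrative; see Appendix Table A9)."""
    if rng is None:
        rng = np.random.default_rng()
    return {
        "rotation_deg": rng.uniform(-5.0, 5.0),        # assumed range
        "scale": rng.uniform(0.94, 1.06),
        "translation_px": rng.integers(-2, 3, size=2),
        "contrast": rng.uniform(0.5, 1.1),
        "brightness": rng.uniform(0.05, 0.1),
    }
```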
Module | Variations
D2P | Chromatic aberration from printers and cameras
$T_s$ | Parabolic transformation, rotation, translation
$T_f$ | Rotation, scaling, translation, contrast, brightness
$X$ | Facial expressions, facial movements
$\epsilon$ | Random Gaussian noise
The overall physical-world variations considered by the proposed PadvFace are summarized in Table 1. To address these challenging variations, the EOT algorithm (Athalye et al. 2018) is commonly used to implement robust physical attacks. Denote by $T$ the distribution over all physical-world variations in Table 1. Specifically, EOT first samples $N$ transformations $\{t_i\}_{i=1}^{N}$ from $T$ and then optimizes the following objective:
$\min_{\delta}\ \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}_i(\delta) + \kappa\,\mathcal{L}_{TV}(\delta) \qquad (3)$
where we adopt the shorthand $\mathcal{L}_i(\delta)=\mathcal{L}_{adv}\big(x_{adv}^{t_i}(\delta),\, x^{a}\big)$ and $x_{adv}^{t_i}(\delta)$ is the corresponding adversarial image generated by Eq. (2) under transformation $t_i$. $\mathcal{L}_{TV}$ is the total-variation loss introduced to enhance the smoothness of the sticker and $\kappa$ is a regularization parameter. The D2P module and the synthetic transformations in $T_s$ and $T_f$ are all differentiable, so model (3) can be solved by the stochastic gradient descent algorithm.
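A minimal TensorFlow sketch of this EOT optimization (dodging setting) is given below; `embed`, `compose`, and `sample_transform` are stand-ins for the face recognition model, Eq. (2), and the transformation sampler, and the hyperparameters are illustrative rather than the exact settings used in our experiments:

```python
import tensorflow as tf

def eot_attack(delta_init, x_anchor_emb, embed, compose, sample_transform,
               steps=2000, batch=32, kappa=1e-4, lr=0.02):
    """EOT sketch for model (3): minimize the expected cosine similarity to the
    anchor embedding plus a total-variation penalty on the sticker."""
    delta = tf.Variable(delta_init, dtype=tf.float32)          # sticker, shape [H, W, 3]
    anchor = tf.nn.l2_normalize(tf.cast(x_anchor_emb, tf.float32), -1)
    opt = tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.95)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            cos_losses = []
            for _ in range(batch):
                t = sample_transform()                          # one sampled condition t_i
                x_adv = compose(delta, t)                       # Eq. (2), D2P included
                emb = tf.nn.l2_normalize(embed(x_adv), -1)
                cos_losses.append(tf.reduce_sum(emb * anchor))  # cosine similarity
            loss = tf.add_n(cos_losses) / batch \
                   + kappa * tf.reduce_sum(tf.image.total_variation(delta[tf.newaxis]))
        grads = tape.gradient(loss, [delta])
        opt.apply_gradients(zip(grads, [delta]))
        delta.assign(tf.clip_by_value(delta, 0.0, 1.0))         # keep a printable image
    return delta
```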
3.4 Curriculum Adversarial Attack
We have the following observations for model (3). Firstly, for generating an adversarial sticker, the attack complexity varies with different physical-world conditions. In Fig. 3, we show the dodging attack performance of a fixed sticker under various facial variations, illumination variations, and 100 randomly sampled transformations from $T$. Higher adversarial cosine similarity indicates lower attack performance and thus higher attack complexity. The results show that the difficulty of physical attacks varies with different conditions. Secondly, due to the high nonlinearity of DNNs, model (3) generally leads to a non-convex optimization problem. Hence, directly fitting the adversarial sticker to all kinds of physical-world conditions in model (3) could make the optimization difficult, resulting in inferior solutions.

In light of these, we propose an efficient curriculum adversarial attack (CAA) algorithm to gradually optimize adversarial stickers from easy to complex physical-world conditions. Given an adversarial sticker $\delta$, a larger attack loss $\mathcal{L}_i(\delta)$ indicates higher attack complexity under the condition $t_i$. Thus, $\mathcal{L}_i(\delta)$ can serve as an appropriate surrogate for the measurement of the complexity of $t_i$. Based on this, we assign a learnable weight $w_i$ to each transformation $t_i$ and formulate the objective of CAA as
$\min_{\delta}\ \min_{w\in[0,1]^{N}}\ \frac{1}{N}\sum_{i=1}^{N}\big[\, w_i\,\mathcal{L}_i(\delta) + g(w_i;\lambda)\,\big] + \kappa\,\mathcal{L}_{TV}(\delta) \qquad (4)$
where $g(w_i;\lambda)=\frac{\lambda}{2}w_i^{2}-\lambda w_i$ is a regularizer and $\lambda>0$ is a curriculum parameter.
Let $w=[w_1,\ldots,w_N]^{\top}$. Model (4) can be solved by alternately optimizing $\delta$ and $w$ while keeping the other fixed. Firstly, given $w$, the optimization w.r.t. $\delta$ reduces to
$\min_{\delta}\ \frac{1}{N}\sum_{i=1}^{N} w_i\,\mathcal{L}_i(\delta) + \kappa\,\mathcal{L}_{TV}(\delta) \qquad (5)$
which can be solved by the stochastic gradient descent method. Secondly, given $\delta$, the optimal $w$ is determined by
$\min_{w\in[0,1]^{N}}\ \frac{1}{N}\sum_{i=1}^{N}\big[\, w_i\,\mathcal{L}_i(\delta) + g(w_i;\lambda)\,\big] \qquad (6)$
leading to the closed-form solution $w_i^{*}=\max\!\big(0,\ \min(1,\ 1-\mathcal{L}_i(\delta)/\lambda)\big)$. Thus, easier transformations with lower attack loss are assigned larger weights and dominate the update of $\delta$ in the following step. On the other side, the value of $\lambda$ is monotonically increased to involve more and more complex transformations during the optimization. As a result, the proposed CAA algorithm adapts the sticker to transformations from easy to complex gradually. The overall procedure of CAA is summarized in Algorithm 1.
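A minimal sketch of this alternation is given below; it reflects our reading of Eqs. (5)-(6), with `sgd_update_sticker` standing in for one (or a few) SGD steps on the weighted objective (5) and the closed-form weight update following the soft self-paced form written above, which is an assumption rather than the exact released code:

```python
import numpy as np

def caa_weights(losses, lam):
    """Closed-form solution of Eq. (6): easy conditions (low loss) get larger
    weights; weights are clipped to [0, 1]."""
    return np.clip(1.0 - np.asarray(losses, dtype=float) / lam, 0.0, 1.0)

def caa(delta, transforms, attack_losses, sgd_update_sticker, lambdas, inner_steps):
    """Alternate between the sticker update (Eq. (5)) and the weight update
    (Eq. (6)) while the curriculum parameter lambda grows stage by stage."""
    for lam, steps in zip(lambdas, inner_steps):             # easy -> complex stages
        for _ in range(steps):
            losses = attack_losses(delta, transforms)        # per-transformation losses L_i(delta)
            w = caa_weights(losses, lam)                     # Eq. (6)
            delta = sgd_update_sticker(delta, transforms, w) # one step on Eq. (5)
    return delta
```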
CAA adheres to the principle of curriculum learning (Bengio et al. 2009), which learns from easy to complex tasks and has been shown to yield better local minima and superior generalization for many non-convex problems (Huang et al. 2020b; Kumar, Packer, and Koller 2010; Fan et al. 2017; Cai et al. 2018). To the best of our knowledge, we are the first to explore the complexity of physical-world conditions in adversarial attacks and to aggregate them via curriculum learning toward robust attack performance.
4 Experiments
4.1 Testing Protocol
The constructed testing protocol is shown in Figure 4, which contains two phases: (1) attack launching stage for collecting facial images and generating adversarial stickers, and (2) attack evaluation stage for evaluating attacking performance under various physical conditions.
Methods | Experimenters | D | I | Illus | FaceVars | Poses | Images |
Advhat | 10 | 3 | 1 | 8 | 128 | ||
Ours | 10 | 3 | 4 | 35 | 5880 |
Attack Launching Stage. For each experimenter, we first take 4 videos under the normal light with 4 different facial variations: happy, sad, neutral, and mouth-open. We use an iPhone 12 to take 1920×1080 resolution videos and each video lasts about 3-5 seconds. The camera is placed in front of the experimenter at a distance of 50 cm and the experimenter is asked to sit steady without pose variations. Then, we randomly sample a single frame from each video, forming the facial image set $X$, which is taken as the input of the attack model for generating robust adversarial stickers.
Attack Evaluation Stage. The stickers generated in the launching stage are first printed by Canon Generic Plus PCL6 and then worn to deceive face recognition systems. We consider complicated environmental variations in this stage, including: (1) different head poses. Existing methods usually acquire these by asking the experimenter to make certain pose changes, which is hard to control and cannot lead to a fair comparison between different methods due to poor repeatability. To alleviate this issue, we customize a cruciform rail to make accurate movements, reducing the potential effects of uncontrolled experimenter movements. The experimenter is asked to sit in front of the cruciform rail at a distance of 50 cm, and the cruciform rail carries the camera and moves in four directions (up, down, left, and right) sequentially to capture facial images with different poses. As a result, we can obtain facial images with fine-controlled pose variations. (2) different lighting conditions. To imitate illumination variations, we select a room without any external window and use a KN-18C annular light as the light source, which is also placed in front of the experimenter at a distance of 50 cm. We select three different light intensities, referred to as dark/normal/light respectively; examples of face images under different lighting conditions are provided in the Appendix. (3) facial variations. As in the attack launching stage, we also involve four facial variations: happy/sad/neutral/mouth-open.
For each experimenter with a certain sticker, we take six videos: (a) three videos under the dark/normal/light lighting conditions with the neutral expression, and (b) three videos with the happy/sad/mouth-open variations under the normal illumination. The movement ranges of the cruciform rail, i.e., the left-right and up-down angles, are kept the same for each video. Afterwards, about 35 frames are captured from each video as the adversarial images. For a better evaluation, we also collect six videos for each experimenter without any stickers, i.e., benign faces.
We take the ‘neutral’ facial image captured in the attack launching stage as the anchor $x^{a}$, and calculate the benign cosine similarity $\cos_{b}=\cos\langle f(x_{b}), f(x^{a})\rangle$ for benign faces $x_{b}$ and the adversarial cosine similarity $\cos_{adv}=\cos\langle f(x_{adv}), f(x^{a})\rangle$ for adversarial faces $x_{adv}$. Compared with the attack success rate, taking the cosine similarity as the metric leads to a more precise evaluation that does not depend on the choice of the decision threshold.
The statistics of the physical evaluation cases are presented in Table 2, where we also tabulate those of Advhat (Komkov and Petiushko 2020) for comparison. It is worth noting that the evaluation of physical attacks is much more challenging than that of digital attacks, making the number of experimenters or attack cases relatively small. Yet our evaluations are already on the largest scale compared to existing works.

Experimental Setting. We take the state-of-the-art ArcFace model (https://github.com/deepinsight/insightface/wiki/Model-Zoo) trained on the large-scale dataset MS1MV2 as the attacked face recognition model, which has 99.77% benign recognition accuracy on the LFW benchmark (Huang et al. 2008). The default sticker size is 400×900 pixels, corresponding to 13 cm × 5.8 cm in the physical world. All experiments are conducted with the TensorFlow platform and an NVIDIA Tesla P40 GPU. The detailed setting of the proposed CAA algorithm is provided in the Appendix.
In the following, we denote the proposed method that considers all physical variations with the CAA optimizer as PadvFace-F and introduce its two variants: PadvFace-B, which does not involve facial variations in $X$ or illumination transformations in $T_f$, and PadvFace-S, realized by further substituting the standard EOT optimizer for the CAA optimizer in PadvFace-B.
4.2 Evaluations of Physical Attacks
In this section, we evaluate the proposed PadvFace with dodging and impersonation attacks in the physical world. We take Advhat (Komkov and Petiushko 2020) as the main baseline, which is the most competitive sticker-based attack method for our evaluations on large-scale face recognition. Since Advhat does not capture internal facial variations or illumination variations during the optimization, we align this setting and adopt PadvFace-B for a fair comparison. Nevertheless, our PadvFace-B still keeps the D2P module and the CAA optimization algorithm, which are the two main differences from Advhat. Consequently, we evaluate both methods with the ‘neutral’ expression under the ‘normal’ light, and Fig. 5 provides some attack examples. The numerical results of dodging and impersonation attacks on 10 experimenters are reported in Table 3, where $\cos_{b}$ and $\cos_{adv}$ denote the benign and the adversarial cosine similarity, respectively. For dodging attacks, lower $\cos_{adv}$ denotes better performance, while for impersonation attacks, higher $\cos_{adv}$ denotes better performance.
Dodging Attack ($\cos_{adv}\downarrow$) | Impersonation Attack ($\cos_{adv}\uparrow$)
ID | $\cos_{b}$ | Advhat | Ours | ID pair | $\cos_{b}$ | Advhat | Ours
01 | 0.91 | 0.29 | 0.27 | 01→02 | 0.16 | 0.28 | 0.46
02 | 0.94 | 0.43 | 0.33 | 02→03 | 0.09 | 0.26 | 0.35
03 | 0.91 | 0.33 | 0.33 | 03→04 | -0.08 | 0.17 | 0.21
04 | 0.88 | 0.42 | 0.25 | 04→09 | 0.12 | 0.19 | 0.21
05 | 0.94 | 0.60 | 0.51 | 05→04 | 0.04 | 0.21 | 0.24
06 | 0.93 | 0.38 | 0.32 | 06→07 | 0.10 | 0.26 | 0.29
07 | 0.93 | 0.34 | 0.31 | 07→08 | 0.20 | 0.33 | 0.37
08 | 0.95 | 0.32 | 0.26 | 08→09 | 0.12 | 0.24 | 0.33
09 | 0.93 | 0.37 | 0.28 | 09→01 | 0.09 | 0.19 | 0.19
10 | 0.89 | 0.28 | 0.20 | 10→03 | 0.03 | 0.32 | 0.41
Average | 0.92 | 0.38 | 0.31 | Average | 0.09 | 0.24 | 0.30
The results of dodging attacks are shown in the left part of Table 3. Compared to the benign similarity without any sticker, our proposed method achieves significantly lower adversarial cosine similarities, demonstrating its superior dodging attack performance. For example, for ID=01, after wearing the adversarial sticker, the cosine similarity drops from 0.91 to 0.27, posing a serious threat to physical-world face recognition systems. Furthermore, compared with Advhat, we also obtain significant performance improvements in most cases. Averaged over all 10 experimenters, our method achieves an adversarial cosine similarity of 0.31 while that of Advhat is 0.38, an 18.4% relative improvement.
As for impersonation attacks, we randomly select a single victim anchor from the 10 experimenters for each attacker, obtaining 10 ‘attacker→victim’ pairs. The evaluations are shown in the right part of Table 3. As shown in the table, our proposed method can significantly increase the cosine similarity between two different identities by wearing the adversarial sticker in all cases. For instance, for the attacking pair (10→03), the adversarial cosine similarity increases from 0.03 to 0.41. Furthermore, we also obtain better attack performance than Advhat on most attacking pairs. Averaged over all 10 cases, our method achieves a cosine similarity of 0.30 while that of Advhat is 0.24, a 25% relative improvement over Advhat. In addition, compared with dodging attacks, impersonation attacks that aim to deceive a specific identity are generally more difficult.
We further conduct a paired t-test and observe that our PadvFace-B is significantly better than Advhat with p=0.005 for both dodging and impersonation attacks. In summary, the superior performance of PadvFace-B over Advhat on both dodging and impersonation attacks demonstrates the effectiveness of the developed D2P module and CAA optimizer.

Dodging Attack ($\cos_{adv}\downarrow$); each cell: $\cos_{b}$ / PadvFace-B / PadvFace-F
ID | 04 | 07 | 09
Dark | 0.81 / 0.48 / 0.38 | 0.92 / 0.38 / 0.34 | 0.91 / 0.56 / 0.48
Normal | 0.83 / 0.50 / 0.38 | 0.95 / 0.44 / 0.38 | 0.93 / 0.50 / 0.46
Light | 0.81 / 0.49 / 0.36 | 0.96 / 0.39 / 0.29 | 0.90 / 0.49 / 0.43
Average | 0.82 / 0.49 / 0.37 | 0.94 / 0.40 / 0.34 | 0.92 / 0.52 / 0.45
Impersonation Attack ($\cos_{adv}\uparrow$); each cell: $\cos_{b}$ / PadvFace-B / PadvFace-F
ID | 07→08 | 08→09 | 09→10
Dark | 0.21 / 0.36 / 0.37 | 0.10 / 0.33 / 0.39 | 0.03 / 0.32 / 0.40
Normal | 0.20 / 0.37 / 0.40 | 0.12 / 0.33 / 0.37 | 0.03 / 0.33 / 0.41
Light | 0.22 / 0.36 / 0.39 | 0.14 / 0.37 / 0.42 | 0.04 / 0.37 / 0.41
Average | 0.21 / 0.36 / 0.39 | 0.12 / 0.35 / 0.40 | 0.03 / 0.34 / 0.41
4.3 Experiments of Environmental Variations
With the proposed standardized testing protocol, we can make more quantitative analyses of environmental variations. In this section, we analyze in depth the impact of internal facial variations and illumination variations by comparing the performance of PadvFace-B and PadvFace-F under both dodging and impersonation attacks.
The evaluation results under different lighting conditions are given in Table 4, where all experimenters are asked to keep the neutral expression during the attack evaluation stage. We have the following observations: (1) the attack performance varies under different illumination conditions, and (2) PadvFace-F, which incorporates the illumination transformations in $T_f$, consistently outperforms PadvFace-B, which is learned under a fixed normal illumination, in all cases. Specifically, for ID=09, PadvFace-F achieves a 12.5% performance improvement over PadvFace-B (0.52 vs. 0.45) for dodging attacks, and a 19.9% performance improvement (0.34 vs. 0.41) for impersonation attacks.
The evaluations under internal facial variations are presented in Table 5, where all adversarial images are collected under the normal illumination. It can be observed that when attackers make different facial variations, the attack performance varies as well. This is further verified by the superior performance of PadvFace-F over PadvFace-B, where PadvFace-F involves facial variations during the optimization while PadvFace-B learns only from the neutral expression. Specifically, for ID=09, PadvFace-F achieves a 9.4% dodging attack performance improvement (0.53 vs. 0.48) and a 13.0% impersonation attack performance improvement (0.31 vs. 0.35) compared with PadvFace-B. These experimental results demonstrate that considering internal facial variations and illumination variations during attack generation can largely boost the robustness of the learned adversarial stickers.
Dodging Attack ($\cos_{adv}\downarrow$); each cell: $\cos_{b}$ / PadvFace-B / PadvFace-F
ID | 04 | 07 | 09
Happy | 0.80 / 0.41 / 0.32 | 0.94 / 0.40 / 0.34 | 0.90 / 0.51 / 0.49
Sad | 0.66 / 0.29 / 0.18 | 0.93 / 0.43 / 0.35 | 0.79 / 0.47 / 0.43
Neutral | 0.83 / 0.50 / 0.38 | 0.95 / 0.44 / 0.38 | 0.93 / 0.50 / 0.46
Mouth-open | 0.78 / 0.47 / 0.32 | 0.91 / 0.45 / 0.35 | 0.82 / 0.65 / 0.54
Average | 0.77 / 0.42 / 0.30 | 0.93 / 0.43 / 0.35 | 0.86 / 0.53 / 0.48
Impersonation Attack ($\cos_{adv}\uparrow$); each cell: $\cos_{b}$ / PadvFace-B / PadvFace-F
ID | 07→08 | 08→09 | 09→10
Happy | 0.22 / 0.36 / 0.39 | 0.10 / 0.32 / 0.39 | 0.02 / 0.34 / 0.37
Sad | 0.18 / 0.22 / 0.31 | 0.12 / 0.36 / 0.40 | 0.07 / 0.32 / 0.35
Neutral | 0.20 / 0.37 / 0.40 | 0.12 / 0.33 / 0.37 | 0.03 / 0.33 / 0.41
Mouth-open | 0.21 / 0.34 / 0.38 | 0.15 / 0.34 / 0.39 | 0.06 / 0.26 / 0.29
Average | 0.20 / 0.32 / 0.37 | 0.12 / 0.34 / 0.39 | 0.04 / 0.31 / 0.35
4.4 Ablation Study

D2P Module. We utilize PadvFace-S to evaluate the effectiveness of the proposed D2P module. We randomly select three experimenters and conduct dodging attacks in the physical world. The evaluation results are presented in Table 6, where ‘w/ D2P’ refers to PadvFace-S and ‘w/o D2P’ refers to the variant of PadvFace-S without the D2P module. The benign and adversarial images are captured under the ‘neutral’ expression and ‘normal’ illumination condition. As can be observed, the D2P module significantly benefits the performance of physical attacks for all three experimenters, demonstrating the effectiveness of modeling the chromatic aberration induced by printers and cameras.
ID | benign | w/o D2P | w/ D2P |
01 | 0.87 | 0.29 | 0.22 |
02 | 0.95 | 0.38 | 0.32 |
10 | 0.89 | 0.19 | 0.16 |
CAA Algorithm. In Fig. 6, we plot the tendency curves of the cosine similarity loss for a certain experimenter to explore the difference in the optimization process between CAA and EOT. CAA refers to PadvFace-F and EOT refers to the variant of PadvFace-F obtained by replacing the CAA optimizer with the EOT optimizer. The cosine similarity loss at each iteration in Fig. 6 is calculated as the average of the adversarial cosine losses over 400 randomly sampled transformations from $T$. As can be expected, learning with easy physical-world conditions first causes a relatively slower convergence rate in the early optimization stage compared with the EOT optimizer. However, as the iterations increase, more and more complex physical-world conditions are involved and the proposed CAA algorithm achieves better performance, with a lower cosine similarity loss at the end of the learning process.
4.5 Discussions
Inconspicuousness. For physical attacks, inconspicuousness aims to make adversarial perturbations unnoticeable. However, based on our experiments, attacking physical-world face recognition is intrinsically hard, and the current attack performance is far from satisfactory even without the inconspicuousness constraint. Thus, in this paper, we primarily focus on efficiently modeling the complicated physical-world conditions in attacking face recognition and leave the inconspicuousness of adversarial stickers to future work.
Transferability. We evaluate the attack robustness of the proposed PadvFace when transferring to other face recognition models. The average cosine similarities over 10 experimenters are provided in Table 7, with ArcFace as the source model. As can be observed, our proposed method achieves consistently better performance than Advhat on all target models. Note that we do not specifically impose constraints on the transferability of PadvFace. Nevertheless, we believe it is possible to introduce advances from another branch of adversarial attacks, i.e., transfer attacks, to further improve the transferability.
5 Conclusion
In this work, we study the adversarial vulnerability of physical-world face recognition by sticker-based adversarial attacks. For robust adversarial attacks, we analyze in detail the complicated physical-world condition variations in attacking face recognition and propose a novel physical attack method that considers and models these variations. We further propose an efficient curriculum adversarial attack algorithm that gradually learns the sticker from easy to complex physical-world variations. We construct a standardized testing protocol for facilitating the fair evaluation of physical attacks on face recognition. Extensive experimental results demonstrate the effectiveness of the proposed method for dodging and impersonation physical attacks.
References
- Athalye et al. (2018) Athalye, A.; Engstrom, L.; Ilyas, A.; and Kwok, K. 2018. Synthesizing robust adversarial examples. In ICML, 284–293. PMLR.
- Bengio et al. (2009) Bengio, Y.; Louradour, J.; Collobert, R.; and Weston, J. 2009. Curriculum learning. In ICML, 41–48.
- Cai et al. (2018) Cai, Q.-Z.; Du, M.; Liu, C.; and Song, D. 2018. Curriculum adversarial training. IJCAI.
- Chen et al. (2018) Chen, S.-T.; Cornelius, C.; Martin, J.; and Chau, D. H. P. 2018. Shapeshifter: Robust physical adversarial attack on faster r-cnn object detector. In MLKDD, 52–68. Springer.
- Deng et al. (2019) Deng, J.; Guo, J.; Xue, N.; and Zafeiriou, S. 2019. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 4690–4699.
- Dong et al. (2019) Dong, Y.; Su, H.; Wu, B.; Li, Z.; Liu, W.; Zhang, T.; and Zhu, J. 2019. Efficient decision-based black-box adversarial attacks on face recognition. In CVPR, 7714–7722.
- Duan et al. (2020) Duan, R.; Ma, X.; Wang, Y.; Bailey, J.; Qin, A. K.; and Yang, Y. 2020. Adversarial camouflage: Hiding physical-world attacks with natural styles. In CVPR, 1000–1008.
- Eykholt et al. (2018) Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Xiao, C.; Prakash, A.; Kohno, T.; and Song, D. 2018. Robust physical-world attacks on deep learning visual classification. In CVPR, 1625–1634.
- Fan et al. (2017) Fan, Y.; He, R.; Liang, J.; and Hu, B. 2017. Self-paced learning: an implicit regularization perspective. In AAAI, 1877–1883.
- Goodfellow, Shlens, and Szegedy (2015) Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and harnessing adversarial examples. In ICLR.
- Huang et al. (2008) Huang, G. B.; Mattar, M.; Berg, T.; and Learned-Miller, E. 2008. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition.
- Huang et al. (2020a) Huang, L.; Gao, C.; Zhou, Y.; Xie, C.; Yuille, A. L.; Zou, C.; and Liu, N. 2020a. Universal physical camouflage attacks on object detectors. In CVPR, 720–729.
- Huang et al. (2020b) Huang, Y.; Wang, Y.; Tai, Y.; Liu, X.; Shen, P.; Li, S.; Li, J.; and Huang, F. 2020b. Curricularface: adaptive curriculum learning loss for deep face recognition. In CVPR, 5901–5910.
- Jan et al. (2019) Jan, S. T.; Messou, J.; Lin, Y.-C.; Huang, J.-B.; and Wang, G. 2019. Connecting the digital and physical world: Improving the robustness of adversarial attacks. In AAAI, 962–969.
- Komkov and Petiushko (2020) Komkov, S.; and Petiushko, A. 2020. Advhat: Real-world adversarial attack on arcface face id system. In ICPR, 819–826. IEEE.
- Kumar, Packer, and Koller (2010) Kumar, M. P.; Packer, B.; and Koller, D. 2010. Self-Paced Learning for Latent Variable Models. In NeurIPS, volume 1, 2.
- Li, Schmidt, and Kolter (2019) Li, J.; Schmidt, F.; and Kolter, Z. 2019. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In ICML, 3896–3904. PMLR.
- Pautov et al. (2019) Pautov, M.; Melnikov, G.; Kaziakhmedov, E.; Kireev, K.; and Petiushko, A. 2019. On adversarial patches: real-world attack on arcface-100 face recognition system. In SIBIRCON, 0391–0396. IEEE.
- Qiu et al. (2020) Qiu, H.; Xiao, C.; Yang, L.; Yan, X.; Lee, H.; and Li, B. 2020. SemanticAdv: Generating Adversarial Examples via Attribute-conditioned Image Editing. In ECCV, 19–37. Springer.
- Sharif et al. (2016) Sharif, M.; Bhagavatula, S.; Bauer, L.; and Reiter, M. K. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In ACM CCS, 1528–1540.
- Sharif et al. (2019) Sharif, M.; Bhagavatula, S.; Bauer, L.; and Reiter, M. K. 2019. A general framework for adversarial examples with objectives. ACM Transactions on Privacy and Security (TOPS), 22(3): 1–30.
- Wang et al. (2018) Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; and Liu, W. 2018. Cosface: Large margin cosine loss for deep face recognition. In CVPR, 5265–5274.
- Wang et al. (2019) Wang, Z.; Zheng, S.; Song, M.; Wang, Q.; Rahimpour, A.; and Qi, H. 2019. advPattern: physical-world attacks on deep person re-identification via adversarially transformable patterns. In ICCV, 8341–8350.
- Xiao et al. (2021) Xiao, Z.; Gao, X.; Fu, C.; Dong, Y.; Gao, W.; Zhang, X.; Zhou, J.; and Zhu, J. 2021. Improving Transferability of Adversarial Patches on Face Recognition With Generative Models. In CVPR.
- Xu et al. (2020) Xu, K.; Zhang, G.; Liu, S.; Fan, Q.; Sun, M.; Chen, H.; Chen, P.-Y.; Wang, Y.; and Lin, X. 2020. Adversarial t-shirt! evading person detectors in a physical world. In ECCV, 665–681. Springer.
- Yang et al. (2020) Yang, X.; Yang, D.; Dong, Y.; Yu, W.; Su, H.; and Zhu, J. 2020. Delving into the adversarial robustness on face recognition. arXiv preprint arXiv:2007.04118.
- Yin et al. (2021) Yin, B.; Wang, W.; Yao, T.; Guo, J.; Kong, Z.; Ding, S.; Li, J.; and Liu, C. 2021. Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. IJCAI.
- Zhang et al. (2018) Zhang, Y.; Foroosh, H.; David, P.; and Gong, B. 2018. CAMOU: Learning physical vehicle camouflages to adversarially attack detectors in the wild. In ICLR.
- Zhao and Stamm (2020) Zhao, X.; and Stamm, M. C. 2020. Defenses Against Multi-sticker Physical Domain Attacks on Classifiers. In ECCV, 202–219. Springer.
- Zhao et al. (2019) Zhao, Y.; Zhu, H.; Liang, R.; Shen, Q.; Zhang, S.; and Chen, K. 2019. Seeing isn’t believing: Towards more robust adversarial attack against real world object detectors. In ACM CCS, 1989–2004.
- Zolfi et al. (2021) Zolfi, A.; Kravchik, M.; Elovici, Y.; and Shabtai, A. 2021. The Translucent Patch: A Physical and Universal Attack on Object Detectors. In CVPR, 15232–15241.
Appendix A Appendix
A.1 Details of Digital-to-Physical (D2P) Module
As analyzed in Sec. 3.3, the D2P module aims to imitate the chromatic aberration induced by printers and cameras during physical attacks. It is realized by a two-layer MLP with 100 hidden nodes to learn a 1:1 color mapping from the digital space to the physical space. In the following, we first report the details of producing color palettes and then provide the training details of the MLP. Furthermore, we provide the evaluation results of the D2P module on adversarial stickers.

Color Palettes. To train the MLP, we need to capture both RGB colors in the digital space and their corresponding values after printing and photographing in the physical world. To this end, we first construct a set of color anchors in the digital space. Since we cannot enumerate all colors in the RGB space, we use only a subset of it. Specifically, our color anchors consist of 512 colors generated by sampling the Red, Green, and Blue channels.
To better capture the corresponding colors after printing and photographing in the physical world, we reshape the color anchors into a 16×32 grid and replicate each color anchor to a 40×40 pixel square, leading to the digital color palette of 640×1280 pixels shown in Fig. A7 (a). Then, we print the digital color palette and photograph it under the normal illumination at a distance of 50 cm in our testing environment (see Sec. 4.1), obtaining the physical color palette in Fig. A7 (b). In addition, we average each pixel square of the physical palette in (b), resulting in a 16×32 array of colors as the physical-world counterparts of the digital anchors.
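A sketch of how such a palette can be assembled is given below; the uniform 8-levels-per-channel sampling is an assumption of ours (it is the uniform choice that yields 8³ = 512 anchors):

```python
import numpy as np

levels = np.linspace(0, 255, 8).astype(np.uint8)   # 8 values per channel (assumed) -> 8^3 = 512 anchors
anchors = np.stack(np.meshgrid(levels, levels, levels, indexing="ij"), axis=-1).reshape(-1, 3)
palette = anchors.reshape(16, 32, 3)               # arrange the 512 anchors on a 16x32 grid
palette = np.repeat(np.repeat(palette, 40, axis=0), 40, axis=1)  # 40x40 px per anchor -> 640x1280x3 image
```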
MLP Training Details. Taking the digital color anchors as the input and their physical counterparts as the ground truth, we train the MLP with the Adam optimizer for 100,000 epochs. The initial learning rate is 0.01 and decays by a factor of 10 at epochs 50,000 and 70,000, respectively. As a result, we obtain the color palette learned by D2P in Fig. A7 (c), which has a mean squared error (MSE) of only 0.0001 w.r.t. the ground truth in (b).
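A minimal TensorFlow sketch of this training setup follows; the activation functions and the step-based decay (we treat one full-batch epoch as one optimizer step here) are assumptions rather than the exact released configuration:

```python
import tensorflow as tf

# Step decay: lr 0.01 -> 0.001 -> 0.0001 at steps 50k and 70k
# (one full-batch epoch per optimizer step is assumed).
schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[50_000, 70_000], values=[0.01, 0.001, 0.0001])

d2p = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu", input_shape=(3,)),  # hidden layer, 100 nodes
    tf.keras.layers.Dense(3, activation="sigmoid"),                   # predicted physical RGB in [0, 1]
])
d2p.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")
# d2p.fit(digital_anchors, physical_anchors, batch_size=512, epochs=100_000, verbose=0)
```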

Further Evaluation. This part further evaluates the performance of the learned D2P module on adversarial stickers. As shown in Fig. A8, given an arbitrary digital adversarial sticker in (a), we first print and photograph it in our testing environment, obtaining the physical sticker in (b). In the meantime, the digital sticker is also fed into the D2P module to obtain the learned D2P sticker in (c). To evaluate the performance of the trained MLP, we compute the Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity (MSSIM), and MSE between the ‘Digital-to-Physical’ pair, i.e., (a) and (b), and between the ‘D2P-to-Physical’ pair, i.e., (c) and (b), in Fig. A8. The corresponding results are reported in Table A8. Note that higher PSNR and MSSIM and lower MSE denote closer stickers. As can be observed, the adversarial sticker learned by our D2P is closer to the physical adversarial sticker than the digital one. Therefore, the proposed D2P module can effectively address the chromatic aberration induced by printers and cameras during physical attacks.
Sticker Metrics | PSNR (dB) | MSSIM | MSE |
Digital-to-Physical | 18.27 | 0.55 | 0.014 |
D2P-to-Physical | 21.58 | 0.62 | 0.006 |
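These image-similarity metrics can be reproduced, for instance, with scikit-image; the sketch below assumes RGB stickers scaled to [0, 1] and is purely illustrative:

```python
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

def sticker_metrics(img_a, img_b):
    """Compare two RGB stickers in [0, 1]: higher PSNR/MSSIM and lower MSE
    indicate that the two stickers are closer."""
    return {
        "psnr_db": peak_signal_noise_ratio(img_a, img_b, data_range=1.0),
        "mssim": structural_similarity(img_a, img_b, channel_axis=-1, data_range=1.0),
        "mse": mean_squared_error(img_a, img_b),
    }
```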
Module | Variations | Min | Max
$T_s$ | Parabolic angle | - | +
$T_s$ | Parabolic rate | -2 | +2
$T_s$ | Rotation | - | +
$T_s$ | Translation | -1 | +1
$T_f$ | Rotation | - | +
$T_f$ | Scaling | 0.94 | 1.06
$T_f$ | Translation | -2 | +2
$T_f$ | Contrast | 0.5 | 1.1
$T_f$ | Brightness | 0.05 | 0.1
A.2 Details of Curriculum Adversarial Attack (CAA) Algorithm
For the proposed curriculum adversarial attack method, the weight $\kappa$ of the TV loss in model (4) is set to a fixed constant. More specifically, in Algorithm 1, for the update of $\delta$, the learning rate is set to 0.02 with a momentum of 0.95. We adopt three curriculum learning stages in Algorithm 1, and the number of inner iterations is set to 2000/2000/3000 for the three stages, respectively. Moreover, at the beginning of each curriculum learning stage, we determine the curriculum parameter $\lambda$ from a curriculum proportion $p$, which controls the fraction of transformations involved in the current stage; the proportion increases over the stages and reaches 0.8 and 1.0 in the last two stages, respectively. Besides, each inner update (Steps 3 and 4 of Algorithm 1) is performed on a randomly sampled mini-batch of 32 transformations for training efficiency.
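One way to read this proportion-to-$\lambda$ mapping (our interpretation, not necessarily the exact rule used) is to choose $\lambda$ as the corresponding quantile of the current per-transformation losses, so that roughly that proportion of transformations receives a non-zero weight in Eq. (6):

```python
import numpy as np

def curriculum_lambda(losses, proportion):
    """Pick lambda so that about `proportion` of the transformations have an
    attack loss below lambda and hence a positive weight (interpretation only)."""
    return float(np.quantile(np.asarray(losses, dtype=float), proportion))
```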
In addition, we provide the settings of the transformation parameters of $T_s$ and $T_f$. Note that $X$ contains four facial variations (i.e., happy/neutral/sad/mouth-open) and $\epsilon$ is a random Gaussian noise. The transformation parameters of $T_s$ and $T_f$ are given in Table A9.
A.3 Examples of Three Light Conditions
We provide examples of adversarial faces under three real-world light conditions (i.e., dark/normal/light) used in our experiments in Fig. A9. All facial images come from our collected dataset following the proposed testing protocol.
