Rethinking Impersonation and Dodging Attacks on Face Recognition Systems
Abstract.
Face Recognition (FR) systems can be easily deceived by adversarial examples that manipulate benign face images through imperceptible perturbations. Adversarial attacks on FR encompass two types: impersonation (targeted) attacks and dodging (untargeted) attacks. Previous methods often achieve successful impersonation attacks on FR; however, a successful impersonation attack does not necessarily guarantee a successful dodging attack in the black-box setting. In this paper, our key insight is that the generation of adversarial examples should perform both impersonation and dodging attacks simultaneously. To this end, we propose a novel attack method, termed Adversarial Pruning (Adv-Pruning), to fine-tune existing adversarial examples to enhance their dodging capabilities while preserving their impersonation capabilities. Adv-Pruning consists of Priming, Pruning, and Restoration stages. Concretely, we propose Adversarial Priority Quantification to measure the region-wise priority of the original adversarial perturbations, identifying and releasing those with minimal impact on absolute model output variances. Then, Biased Gradient Adaptation is presented to adapt the adversarial examples to traverse the decision boundaries of both the attacker and the victim by adding perturbations favoring dodging attacks on the vacated regions, preserving the prioritized features of the original perturbations while boosting dodging performance. As a result, we can maintain the impersonation capabilities of the original adversarial examples while effectively enhancing their dodging capabilities. Comprehensive experiments demonstrate the superiority of our method compared with state-of-the-art adversarial attack methods.
1. Introduction
Thanks to ceaseless advances in deep learning, Face Recognition (FR) has achieved exceptional performance (Schroff et al., 2015; Wang et al., 2018; Deng et al., 2022; An et al., 2021; Boutros et al., 2023; Li et al., 2023a). However, the vulnerability of existing FR models poses a significant threat to their security (Das et al., 2021; Zhou et al., 2022a, b, 2023c; Narayan et al., 2023; Wang et al., 2024b; Zhou et al., 2024b), with adversarial attacks being one of the key concerns. Hence, there is an urgent need to enhance the performance of adversarial face examples to expose more blind spots in FR models. As a result, several research efforts have been directed toward this realm. A multitude of adversarial attacks have been developed to create adversarial face examples with characteristics such as stealthiness (Qiu et al., 2020; Yang et al., 2021; Cherepanova et al., 2021; Hu et al., 2022; Shamshad et al., 2023), transferability (Zhong and Deng, 2021; Li et al., 2023d; Zhou et al., 2023b, 2024a), and physical attack capability (Yin et al., 2021; Yang et al., 2023; Li et al., 2023c). These efforts contribute to enhancing the effectiveness of adversarial attacks on FR. Nevertheless, these studies primarily concentrate on bolstering either impersonation or dodging attacks, overlooking whether adversarial face examples crafted by impersonation attacks are also effective as dodging attacks.
In real-world deployment contexts, individuals with malicious intent are likely to create adversarial face examples incorporating their own facial features to manipulate FR systems into mistakenly identifying them as pre-defined victims during impersonation attacks. Concurrently, these individuals strive to evade accurate identification as perpetrators, thereby circumventing detection and avoiding legal accountability. This requires the creation of adversarial examples capable of executing both impersonation and dodging attacks simultaneously. In the realm of adversarial attacks on image classification, a successful impersonation attack typically implies a successful dodging attack. However, FR is an open-set task (Wang et al., 2018; Deng et al., 2022), which is quite different from image classification. In the real-world deployment of FR systems, accurately predicting the class probability of identities is extremely challenging. Instead, the FR model extracts embeddings from two face images, and the distance between the two embeddings is used to determine whether the images belong to the same identity. If the distance falls below a predefined threshold, the two images are recognized as belonging to the same identity; otherwise, they are classified as different identities. Under this measurement, there are two decision boundaries for each FR model when crafting adversarial examples, as shown in Figure 1. As a result, there exist benign samples that can, in theory, be classified as two different identities. We denote these samples as multi-identity samples.
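To make this verification rule concrete, the following sketch (ours, not from the paper; the PyTorch model, images, and threshold are placeholder assumptions) shows how an open-set FR system decides identity from embedding distances, and how a single image can in principle be matched to two different identities at once, i.e., be a multi-identity sample:

```python
import torch
import torch.nn.functional as F

def same_identity(model, img_a, img_b, threshold):
    """Open-set FR verification: two images match if their embedding distance is below a threshold."""
    with torch.no_grad():
        emb_a = F.normalize(model(img_a), dim=-1)  # unit-norm embedding of the first image
        emb_b = F.normalize(model(img_b), dim=-1)  # unit-norm embedding of the second image
    dist = torch.norm(emb_a - emb_b, p=2, dim=-1)  # L2 distance between the two embeddings
    return bool((dist < threshold).all())          # below the threshold => same identity

def is_multi_identity(model, img, attacker_img, victim_img, threshold):
    """A multi-identity sample is matched to both the attacker and the victim identities."""
    return (same_identity(model, img, attacker_img, threshold) and
            same_identity(model, img, victim_img, threshold))
```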
The existence of multi-identity samples implies that a successful impersonation attack on FR does not necessarily guarantee a successful dodging attack. Existing research indicates that adversarial examples tend to lie near the decision boundary (Heo et al., 2019; Cao and Gong, 2017). Suppose we generate adversarial face examples using previous methods. In the white-box setting, both the structure and parameters of the victim model are known, enabling the generation of adversarial face examples that cross the decision boundaries of both the attacker and the victim, as shown in Figure 1. However, in the black-box setting, the decision boundaries of black-box models differ from those of the surrogate models. Consequently, adversarial examples generated on the surrogate model often lie only near the victim decision boundary of the black-box model and fail to cross the attacker decision boundary. As such, the majority of adversarial face examples crafted by previous methods, which successfully perform impersonation attacks, fail to perform dodging attacks in the black-box setting.
In this paper, we propose a novel attack method termed Adversarial Pruning (Adv-Pruning). In the realm of adversarial attacks on FR, previous impersonation methods have reached a significant level of sophistication, yet there remains a pressing need to bolster the efficacy of adversarial face examples in dodging attacks. Consequently, our research is directed toward enhancing the dodging attack performance of adversarial face examples while maintaining their impersonation attack performance. Specifically, we introduce an attack consisting of three stages: Priming, Pruning, and Restoration. In the Priming stage, we optimize the adversarial examples to ensure adequate attack potential. In the Pruning stage, drawing on the pruning concept in model compression, we propose Adversarial Priority Quantification to measure the region-wise priority of the original adversarial perturbations using a priority measure that is directly proportional to the supremum of the absolute model output variances. Guided by this quantification, we prune the adversarial face examples to free up the less prioritized adversarial perturbations. In the Restoration stage, we propose Biased Gradient Adaptation, which adds gradient perturbations biased toward dodging attacks on the pruned regions. This adapts the adversarial face examples into the space that is classified as the victim while remaining unidentifiable as the attacker, thereby enhancing dodging performance without compromising the prioritized features of the original adversarial perturbations. As illustrated at the top of Figure 1, after undergoing these stages, adversarial face examples generated by our method can successfully traverse the decision boundaries of both the attacker and the victim of the black-box model, achieving successful black-box impersonation and dodging attacks.
Our main contributions are summarized as follows:
- We offer a new perspective for adversarial attacks on FR models: the generation of adversarial examples should perform both impersonation and dodging attacks simultaneously. To the best of our knowledge, this is the first work that studies the universality of multi-identity samples among adversarial face examples crafted by impersonation attacks.
- We propose a novel adversarial attack method called Adversarial Pruning (Adv-Pruning). Adversarial Priority Quantification is presented to quantify the priority of the adversarial perturbations with minimal impact on absolute model output variances. Biased Gradient Adaptation is designed to adapt the adversarial examples to traverse the decision boundaries of both the attacker and the victim using biased gradients.
- Extensive experiments demonstrate that our proposed method achieves superior performance compared to state-of-the-art adversarial attack methods. Moreover, our method can be plugged into various FR systems and adversarial attack methods.
2. Related Work
2.1. Adversarial Attacks
The primary objective of adversarial attacks is to introduce imperceptible perturbations to benign images that deceive machine learning systems and cause them to make mistakes (Szegedy et al., 2014; Goodfellow et al., 2015). The existence of adversarial examples poses a significant threat to the security of current machine learning systems. Many efforts have been dedicated to researching adversarial attacks in order to enhance the robustness of these systems (Long et al., 2022; Liang et al., 2023; Lu et al., 2023; Shayegani et al., 2024; Ge et al., 2023a; Zhou et al., 2023a; Zhang et al., 2022b; Mingxing et al., 2021). To improve the performance of black-box adversarial attacks, DI (Xie et al., 2019) applies random transformations to adversarial examples in each iteration to achieve a data augmentation effect. VMI-FGSM (Wang and He, 2021) employs gradient variance to stabilize the updating process of adversarial examples, boosting black-box performance. SSA (Long et al., 2022) transforms adversarial examples into the frequency domain and uses spectrum transformation to augment them. SIA (Wang et al., 2023) applies a random image transformation to each image block, generating a varied collection of images that are then employed for gradient calculation. BSR (Wang et al., 2024a) divides the input image into multiple blocks, then randomly shuffles and rotates these blocks to create a collection of new images for gradient calculation. DA (Haleta et al., 2021) utilizes dispersion amplification to enhance the multi-task attack capability of adversarial attacks. Despite this encouraging progress, these studies do not consider pruning adversarial examples, i.e., introducing pruning methods into the realm of adversarial attacks. In our research, we propose a novel pruning method capable of identifying and freeing up the adversarial perturbations with minimal impact on absolute model output variances, thereby sparsifying regions in which adversarial perturbations can be added to improve dodging capabilities.
2.2. Adversarial Attacks on Face Recognition
Based on the restriction imposed on the adversarial perturbations, adversarial attacks on FR can be classified into two categories: restricted attacks and unrestricted attacks. Restricted attacks on FR generate adversarial examples within a restricted bound (e.g., an $L_p$ norm bound). To enhance the transferability of adversarial attacks on FR, Zhong and Deng (Zhong and Deng, 2021) propose DFANet, which applies dropout on the feature maps of the convolutional layers to achieve ensemble-like effects. In addition, Zhou et al. (Zhou et al., 2023b) introduce BPFA, which further improves the transferability of adversarial attacks on FR by incorporating beneficial perturbations (Wen and Itti, 2020) on the feature maps of the FR models, resulting in hard model augmentation effects. Li et al. (Li et al., 2023d) leverage extra information from FR-related tasks and use a multi-task optimization framework to enhance the transferability of crafted adversarial examples. Zhou et al. (Zhou et al., 2024c) propose an adversarial attack that can attack both FR and Face Anti-Spoofing (FAS) models simultaneously, aiming to enhance the practicability of adversarial attacks on FR systems. Unrestricted adversarial attacks on FR generate adversarial examples without the restriction of a predefined perturbation bound. They mainly focus on physical attacks (Xiao et al., 2021; Yang et al., 2023; Li et al., 2023c), attribute editing (Qiu et al., 2020; Jia et al., 2022), and generating adversarial examples based on makeup transfer (Yin et al., 2021; Hu et al., 2022; Shamshad et al., 2023). The existing literature on both restricted and unrestricted adversarial attacks on FR systems has successfully enhanced the performance of these attacks. Nevertheless, the correlation between impersonation and dodging attacks on FR remains under-explored. This paper addresses this gap by investigating the correlation between impersonation and dodging attacks and introducing a novel attack method that bolsters dodging capabilities while preserving the impersonation capabilities of previous methods.
3. Methodology
3.1. Problem Formulation
Let $\mathcal{F}^{vct}$ denote the FR model used by the victim to extract the embedding from a face image $x$. We refer to $x^{att}$ and $x^{vct}$ as the attacker and victim images, respectively. The objective of the impersonation attacks explored in our research is to manipulate $\mathcal{F}^{vct}$ into misclassifying the adversarial example $x^{adv}$ as $x^{vct}$, while ensuring that $x^{adv}$ bears a close visual resemblance to $x^{att}$. By contrast, the objective of the dodging attacks proposed in this study is to render $\mathcal{F}^{vct}$ unable to identify $x^{adv}$ as $x^{att}$, while simultaneously ensuring that $x^{adv}$ bears a visual resemblance to $x^{att}$. For the sake of clarity and conciseness, the detailed optimization objectives for both impersonation and dodging attacks are provided in the supplementary material.
Few works explore the correlation between impersonation and dodging attacks on FR. In the following, we delve into the correlation between these two types of attacks and propose a novel method to enhance dodging attacks while maintaining impersonation attacks. An overview of the proposed method is illustrated in Figure 2. As depicted in Figure 2, our proposed method is structured into three stages: Priming, Pruning, and Restoration. Through the sequential application of these stages, we are able to generate adversarial examples that exhibit a potent combination of impersonation and dodging attack capabilities.
3.2. Exploring the Impersonation and Dodging Attack on Face Recognition
In most cases, the victim model is not accessible to the attacker, making it extremely challenging to optimize the objectives for black-box attacks directly. To circumvent this issue, a common approach is to leverage a surrogate model accessible to the attacker to generate adversarial examples that can be transferred to the victim model for an effective attack (Yuan et al., 2022; Zhang et al., 2022a; Naseer et al., 2023; Wang et al., 2023, 2024a; kanth Nakka and Salzmann, 2021; Gao et al., 2023; Ge et al., 2023b; Li et al., 2023b; Chen et al., 2023).
For impersonation attacks, the loss on the surrogate model $\mathcal{F}$ can be formulated as follows:

$$\mathcal{L}_{i} = \left\| \phi\left(\mathcal{F}\left(x^{adv}\right)\right) - \phi\left(\mathcal{F}\left(x^{vct}\right)\right) \right\|_{2}^{2} \quad (1)$$

where $\phi$ represents the operation that normalizes the embedding, and $x^{adv}$ is the adversarial example, which is initialized with the same value as $x^{att}$. The loss function of dodging attacks can be formulated as:

$$\mathcal{L}_{d} = -\left\| \phi\left(\mathcal{F}\left(x^{adv}\right)\right) - \phi\left(\mathcal{F}\left(x^{att}\right)\right) \right\|_{2}^{2} \quad (2)$$
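As a minimal sketch of the two losses above (assuming squared L2 distances between normalized embeddings of a PyTorch surrogate model, matching Equations (1) and (2)), they can be written as:

```python
import torch
import torch.nn.functional as F

def impersonation_loss(model, x_adv, x_vct):
    """Eq. (1) sketch: distance between normalized embeddings of the adversarial and victim images."""
    e_adv = F.normalize(model(x_adv), dim=-1)
    e_vct = F.normalize(model(x_vct), dim=-1)
    return ((e_adv - e_vct) ** 2).sum(dim=-1).mean()   # minimized to impersonate the victim

def dodging_loss(model, x_adv, x_att):
    """Eq. (2) sketch: negative distance to the attacker's own embedding."""
    e_adv = F.normalize(model(x_adv), dim=-1)
    e_att = F.normalize(model(x_att), dim=-1)
    return -((e_adv - e_att) ** 2).sum(dim=-1).mean()  # minimized to dodge the attacker identity
```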
As FR is an open-set task, it is impractical to predict the classes of users during the real-world deployment of the FR model. Therefore, we need to compare the distance between two face images to discern whether they depict the same identity. Based on this identification method, multi-identity samples exist theoretically. Our experiments verify the existence of such samples among benign face images (see the supplementary material). The existence of multi-identity samples raises a question:
Does the success of an impersonation attack imply the success of dodging attacks on FR systems?
To this end, we generate adversarial face examples using previous impersonation attacks and evaluate their dodging Attack Success Rate (ASR). Our experiment confirms that the majority of adversarial examples crafted through previous methods, which successfully perform impersonation attacks, fail to execute dodging attacks in the black-box setting.
Nonetheless, in real-world adversarial attacks, attackers do not want the adversarial face examples to be recognized as themselves, as this may lead to legal consequences. Hence, it is crucial to research attack techniques that can execute both impersonation and dodging attacks simultaneously. Previous methods on FR systems have shown a remarkably high level of impersonation ASR in black-box settings. Therefore, our objective is to enhance the dodging performance while maintaining the impersonation effectiveness of previous attack methods.
To accomplish this objective, a straightforward approach is to generate adversarial face examples using a multi-task attack strategy. In the following, we take the Lagrangian attack strategy as an example for its simplicity. The Lagrangian attack strategy utilizes the following loss function to craft adversarial examples:

$$\mathcal{L} = \mathcal{L}_{d} + \lambda \mathcal{L}_{i} \quad (3)$$

where $\lambda$ is a weight that balances the impersonation loss against the dodging loss. However, due to the conflict between the optimization of $\mathcal{L}_{i}$ and $\mathcal{L}_{d}$, there exists a trade-off between impersonation and dodging performance, leading to subpar results (see Section 4.4). If we can mitigate this trade-off, we can achieve better dodging performance while maintaining impersonation performance.
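Reusing the loss helpers sketched above, a hedged illustration of the Lagrangian loss in Equation (3) is:

```python
def lagrangian_loss(model, x_adv, x_att, x_vct, lam):
    """Eq. (3) sketch: dodging loss plus lambda-weighted impersonation loss."""
    return dodging_loss(model, x_adv, x_att) + lam * impersonation_loss(model, x_adv, x_vct)
```

A larger `lam` biases the optimization toward impersonation; the Restoration stage later reuses this form with a smaller weight.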
3.3. Adversarial Pruning Attack
To accomplish this objective, a straightforward approach is to fine-tune the adversarial face examples generated by the Lagrangian attack with a loss weighting that favors dodging attacks. However, this approach cannot enhance the dodging attack performance without compromising the impersonation attack performance (see Fine-tuning in Table 3). We contend that this issue arises because the newly introduced adversarial perturbation that favors dodging attacks disrupts the prioritized features of the existing adversarial perturbation: while it may improve the performance of dodging attacks, it inevitably diminishes the performance of impersonation attacks. To address this, we introduce new perturbations favoring dodging attacks only in regions where the original perturbations are not added. Nevertheless, identifying suitable areas for these new perturbations is challenging because such unperturbed regions are scarce. Therefore, we propose a novel pruning method to release the less prioritized adversarial perturbations with minimal impact on the absolute model output variances, thereby creating space to introduce perturbations that facilitate dodging attacks.
Our proposed Adv-Pruning can be combined with various adversarial attacks. In the following, we introduce our proposed Adv-Pruning based on the Lagrangian attack in detail. In the Priming stage, we utilize Equation (3) as the Priming loss $\mathcal{L}^{pri}$ to craft the adversarial face examples:

$$x^{adv}_{t+1} = \Pi_{\epsilon}\left( x^{adv}_{t} - \beta^{pri}\,\mathrm{sign}\left( \nabla_{x^{adv}_{t}} \mathcal{L}^{pri} \right) \right) \quad (4)$$

where $t$ is the iteration index of the optimization process of adversarial examples, $\beta^{pri}$ is the step size when optimizing the adversarial face examples in the Priming stage, and $\Pi_{\epsilon}$ is the projection function that projects the adversarial example onto the $\epsilon$-bounded $L_{\infty}$ norm ball.
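The Priming-stage update in Equation (4) can be sketched as the following sign-gradient loop (our illustration; the [0, 255] pixel range and the $L_{\infty}$ projection are assumptions consistent with the attack setting in Section 4.1, and `lagrangian_loss` is the sketch from Equation (3) above):

```python
import torch

def priming_stage(model, x_att, x_vct, lam, eps, step_size, n_iters):
    """Priming stage sketch (Eq. (4)): sign-gradient descent on the priming loss with L_inf projection."""
    x_adv = x_att.clone().detach()                         # initialize from the attacker image
    for _ in range(n_iters):
        x_adv.requires_grad_(True)
        loss = lagrangian_loss(model, x_adv, x_att, x_vct, lam)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()            # descend the priming loss
            x_adv = x_att + (x_adv - x_att).clamp(-eps, eps)   # project onto the L_inf ball
            x_adv = x_adv.clamp(0, 255)                        # assumed valid pixel range
    return x_adv.detach()
```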
Adversarial Priority Quantification. After completing the Priming stage, we obtain an adversarial example with varying magnitudes of adversarial perturbations across different regions. Following this, we proceed to the Pruning stage to process the crafted adversarial example. In order to prune the adversarial perturbation, we first need to assess its priority, for which we propose Adversarial Priority Quantification. Specifically, Adversarial Priority Quantification utilizes the magnitude of the adversarial perturbation as the measure: a lower magnitude implies a smaller impact on the performance of the adversarial examples generated after sparsification, since the supremum of the absolute model output variances is directly proportional to the magnitude of the adversarial perturbations. The proof is in the supplementary material.
Let the adversarial example be $x^{adv}$. The formula to calculate the priority can be expressed as:

$$\mathcal{P} = \left| x^{adv} - x^{att} \right| \quad (5)$$

where $\mathcal{P} \in \mathbb{R}^{C \times H \times W}$, and $C$, $H$, and $W$ are the channel number, height, and width of the face images, respectively.
Once the priority values of the adversarial perturbations are quantified, we employ them to release the less prioritized adversarial perturbations. Let $a$ be the sparsity ratio for pruning the adversarial face examples, which measures the ratio of perturbation elements to be set to zero, and let $n = C \times H \times W$ be the number of adversarial perturbation elements. We arrange the elements of the flattened priority map $\mathcal{P}$ in ascending order (from the lowest to the highest):

$$v = \mathrm{sort}\left( \mathrm{flat}\left( \mathcal{P} \right) \right) \quad (6)$$

where $\mathrm{flat}$ is the flatten operation.

Let $\mathcal{S}$ be the set of the elements of the adversarial perturbations to be pruned. Given the priority calculation method for pruning, $\mathcal{S}$ can be calculated as follows:

$$\mathcal{S} = v\left[ \, : \lfloor a n \rfloor \, \right] \quad (7)$$

where the colon denotes the slice operation that obtains the first $\lfloor a n \rfloor$ elements. The pruning mask $\mathcal{M}$, which has the same shape as $x^{adv}$, can be obtained by utilizing $\mathcal{S}$:

$$\mathcal{M}_{c,h,w} = \begin{cases} 0, & \text{if } \mathcal{P}_{c,h,w} \in \mathcal{S} \\ 1, & \text{otherwise} \end{cases} \quad (8)$$

By utilizing the mask, we can apply the following formula to prune the adversarial example:

$$x^{prn} = x^{att} + \mathcal{M} \odot \left( x^{adv} - x^{att} \right) \quad (9)$$

where $x^{prn}$ is the adversarial face example after pruning and $\odot$ denotes element-wise multiplication.
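A sketch of Adversarial Priority Quantification and the pruning step in Equations (5)-(9); the (C, H, W) tensor layout and the exact-count selection of the lowest-priority elements are our assumptions:

```python
import torch

def prune_adversarial_example(x_adv, x_att, sparsity_ratio):
    """Zero out the fraction of perturbation elements with the lowest priority (Eqs. (5)-(9) sketch)."""
    perturbation = x_adv - x_att
    priority = perturbation.abs()                    # Eq. (5): priority = |perturbation|
    n = priority.numel()
    k = int(sparsity_ratio * n)                      # number of elements to release
    flat = priority.flatten()
    prune_idx = torch.argsort(flat)[:k]              # Eqs. (6)-(7): k lowest-priority positions
    mask = torch.ones(n, device=x_adv.device)
    mask[prune_idx] = 0.0                            # Eq. (8): 0 where pruned, 1 elsewhere
    mask = mask.view_as(x_adv)
    x_pruned = x_att + mask * perturbation           # Eq. (9): revert pruned regions to x_att
    return x_pruned, mask
```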
Biased Gradient Adaptation. During the Restoration stage, we restore the adversarial face examples in the previously pruned regions using our proposed Biased Gradient Adaptation. Biased Gradient Adaptation uses the following loss function to craft gradients biased toward dodging attacks, adapting the crafted adversarial examples into the space that favors dodging attacks:

$$\mathcal{L}^{res} = \mathcal{L}_{d} + \lambda^{res} \mathcal{L}_{i} \quad (10)$$

where $\lambda^{res}$ is a weight lower than $\lambda$, whose objective is to craft adversarial face examples that favor dodging attacks. The mask representing the regions for restoring the adversarial examples can be denoted as:

$$\bar{\mathcal{M}} = 1 - \mathcal{M} \quad (11)$$

Subsequently, we utilize the following formula to restore the pruned adversarial face examples:

$$x^{adv}_{t+1} = \Pi_{\epsilon}\left( x^{adv}_{t} - \beta^{res}\, \bar{\mathcal{M}} \odot \mathrm{sign}\left( g^{bia} \right) \right), \qquad x^{adv}_{0} = x^{prn} \quad (12)$$

where $\beta^{res}$ is the step size when optimizing the adversarial face examples in the Restoration stage, $x^{prn}$ is the pruned version of the adversarial example crafted by the Priming stage, and $g^{bia} = \nabla_{x^{adv}_{t}} \mathcal{L}^{res}$ is the biased gradient. The pseudo-code of our proposed method based on the Lagrangian attack is provided in the supplementary material.
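The Restoration stage can then be sketched as follows, reusing `lagrangian_loss` with the smaller weight and updating only the regions released by pruning (initializing from the pruned example and the projection details are our assumptions):

```python
import torch

def restoration_stage(model, x_pruned, x_att, x_vct, mask, lam_res, eps, step_size, n_iters):
    """Restoration sketch (Eqs. (10)-(12)): dodging-biased updates only on pruned (mask == 0) regions."""
    restore_mask = 1.0 - mask                        # Eq. (11): regions freed by pruning
    x_adv = x_pruned.clone().detach()                # start from the pruned adversarial example
    for _ in range(n_iters):
        x_adv.requires_grad_(True)
        loss = lagrangian_loss(model, x_adv, x_att, x_vct, lam_res)   # Eq. (10), lam_res < lam
        biased_grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step_size * restore_mask * biased_grad.sign()  # update released regions only
            x_adv = x_att + (x_adv - x_att).clamp(-eps, eps)               # project onto the L_inf ball
            x_adv = x_adv.clamp(0, 255)                                    # assumed pixel range
    return x_adv.detach()
```

Chaining the three stages then gives a minimal Adv-Pruning pipeline (the iteration split and the value of `lam_res` below are illustrative, not the paper's settings):

```python
x_pri = priming_stage(model, x_att, x_vct, lam=6.0, eps=10.0, step_size=1.0, n_iters=100)
x_prn, mask = prune_adversarial_example(x_pri, x_att, sparsity_ratio=0.2)
x_adv = restoration_stage(model, x_prn, x_att, x_vct, mask,
                          lam_res=3.0, eps=10.0, step_size=1.0, n_iters=100)
```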
4. Experiments
4.1. Experimental Setting
Datasets. Face images play a pivotal role in multimedia processing applications; therefore, research on adversarial attacks on FR has a significant impact on security and privacy in multimedia processing. We opt to use the LFW (Huang et al., 2007), CelebA-HQ (Karras et al., 2018), FFHQ (Karras et al., 2019), and BUPT-Balancedface (Wang et al., 2019) datasets for our experiments. LFW serves as an unconstrained face dataset for FR. CelebA-HQ and FFHQ consist of high-quality images. The BUPT-Balancedface dataset is designed to address the racial imbalance present in existing face datasets. The LFW and CelebA-HQ subsets utilized in our experiments are identical to those employed in (Zhou et al., 2023b, 2024a), while FFHQ is the corresponding dataset provided by the Sibling-Attack official page, ensuring consistency for analysis. For BUPT-Balancedface, we randomly select 1,000 pairs from the dataset to evaluate the performance of the adversarial examples.
Table 1. Attack performance (%) on the LFW and CelebA-HQ datasets.

| Surrogate Model | Attack | LFW: IR152 | LFW: FaceNet | LFW: MF | CelebA-HQ: IR152 | CelebA-HQ: FaceNet | CelebA-HQ: MF |
|---|---|---|---|---|---|---|---|
| IR152 (He et al., 2016) | DI | 95.4 / 100.0 | 5.8 / 11.3 | 0.2 / 0.8 | 87.9 / 100.0 | 8.4 / 16.4 | 0.4 / 1.2 |
| | VMI | 92.7 / 100.0 | 17.3 / 32.5 | 1.2 / 9.5 | 92.2 / 100.0 | 14.3 / 27.9 | 1.2 / 4.1 |
| | SSA | 78.8 / 100.0 | 5.7 / 22.5 | 0.9 / 10.6 | 83.8 / 99.9 | 7.2 / 19.6 | 0.4 / 5.6 |
| | DFANet | 98.9 / 100.0 | 1.4 / 4.2 | 0.0 / 0.3 | 98.9 / 100.0 | 2.3 / 6.0 | 0.0 / 0.4 |
| | SIA | 81.7 / 100.0 | 13.0 / 37.5 | 0.8 / 8.9 | 78.4 / 100.0 | 13.2 / 35.6 | 0.7 / 7.4 |
| | BSR | 52.4 / 100.0 | 5.3 / 17.6 | 0.1 / 1.5 | 48.5 / 99.9 | 5.4 / 18.0 | 0.3 / 1.9 |
| | BPFA | 92.6 / 100.0 | 1.7 / 7.3 | 0.0 / 1.2 | 90.4 / 100.0 | 2.1 / 8.1 | 0.1 / 0.8 |
| FaceNet (Schroff et al., 2015) | DI | 5.3 / 10.3 | 99.8 / 99.9 | 3.1 / 10.3 | 1.5 / 3.1 | 99.4 / 99.9 | 1.8 / 4.7 |
| | VMI | 9.7 / 14.3 | 99.8 / 99.9 | 6.2 / 13.2 | 3.1 / 7.1 | 99.3 / 99.8 | 3.6 / 9.3 |
| | SSA | 6.0 / 14.0 | 97.5 / 99.9 | 6.6 / 26.2 | 2.0 / 5.5 | 96.9 / 99.7 | 4.2 / 14.6 |
| | DFANet | 1.6 / 3.3 | 99.8 / 99.9 | 0.4 / 2.7 | 0.5 / 2.6 | 99.1 / 100.0 | 0.8 / 4.1 |
| | SIA | 11.2 / 20.6 | 99.5 / 99.9 | 8.7 / 21.2 | 4.0 / 8.9 | 99.4 / 99.9 | 5.4 / 13.7 |
| | BSR | 12.2 / 19.2 | 98.6 / 99.9 | 9.0 / 17.8 | 4.6 / 10.1 | 98.8 / 99.9 | 5.3 / 14.1 |
| | BPFA | 4.7 / 16.8 | 98.6 / 100.0 | 1.6 / 15.0 | 1.1 / 4.2 | 99.0 / 100.0 | 0.6 / 5.1 |
| MF (Deng et al., 2022) | DI | 2.2 / 7.3 | 18.2 / 36.4 | 99.2 / 100.0 | 0.1 / 2.5 | 12.1 / 31.3 | 95.2 / 100.0 |
| | VMI | 1.0 / 2.8 | 8.4 / 20.9 | 99.7 / 100.0 | 0.2 / 0.4 | 5.2 / 15.0 | 98.2 / 100.0 |
| | SSA | 0.7 / 4.1 | 6.1 / 23.5 | 98.3 / 100.0 | 0.0 / 0.6 | 3.9 / 18.5 | 93.3 / 100.0 |
| | DFANet | 0.2 / 1.0 | 1.5 / 5.8 | 99.6 / 100.0 | 0.0 / 0.2 | 1.1 / 7.7 | 99.1 / 100.0 |
| | SIA | 1.0 / 5.9 | 10.6 / 36.6 | 98.4 / 100.0 | 0.1 / 2.4 | 9.0 / 24.4 | 96.3 / 100.0 |
| | BSR | 0.4 / 1.5 | 3.7 / 14.7 | 84.9 / 100.0 | 0.1 / 0.6 | 2.9 / 12.6 | 77.6 / 100.0 |
| | BPFA | 0.9 / 4.1 | 4.6 / 20.4 | 97.7 / 100.0 | 0.0 / 2.3 | 4.0 / 20.4 | 96.2 / 100.0 |
Face Recognition Models. The normally trained FR models employed in our experiments include IR152 (He et al., 2016), FaceNet (Schroff et al., 2015), MobileFace (abbreviated as MF) (Deng et al., 2022), ArcFace (Deng et al., 2022), CircleLoss (Sun et al., 2020), CurricularFace (Huang et al., 2020), MagFace (Meng et al., 2021), MV-Softmax (Wang et al., 2020), and NPCFace (Zeng et al., 2020). IR152, FaceNet, and MF are identical to those used in (Yin et al., 2021; Hu et al., 2022; Zhou et al., 2023b, 2024a). ArcFace, CircleLoss, CurricularFace, MagFace, MV-Softmax, and NPCFace are the official models available in FaceX-ZOO (Wang et al., 2021). Additionally, we incorporate adversarially robust FR models in our experiments, denoted as IR152adv, FaceNetadv, and MFadv, which are identical to those used in (Zhou et al., 2023b). For calculating the ASR of impersonation and dodging attacks, we choose the thresholds corresponding to FAR@0.001 on the entire LFW dataset.
Attack Setting. Unless otherwise specified, we set the maximum allowable perturbation magnitude to 10 under the $L_{\infty}$ norm bound and utilize the Lagrangian attack method as the attack in both the Priming and Restoration stages. Additionally, we set the maximum number of iterative steps to 200. For both the Priming and Restoration stages, the step size is uniformly set to 1.0.
Evaluation Metrics. We employ Attack Success Rate (ASR) to evaluate the performance of various attacks. ASR signifies the proportion of successfully attacked adversarial examples out of all the adversarial examples. We use ASRi and ASRd to denote impersonation and dodging ASR, respectively. The detailed calculation methods for ASRi and ASRd are provided in the supplementary.
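As an illustration of how ASRi and ASRd can be computed under the threshold-based verification protocol (the exact definitions are in the supplementary material; the distance and threshold conventions here are our assumptions):

```python
import torch
import torch.nn.functional as F

def attack_success_rates(model, x_advs, x_atts, x_vcts, threshold):
    """Fraction of adversarial examples matched to the victim (ASRi) and rejected as the attacker (ASRd)."""
    with torch.no_grad():
        e_adv = F.normalize(model(x_advs), dim=-1)
        e_att = F.normalize(model(x_atts), dim=-1)
        e_vct = F.normalize(model(x_vcts), dim=-1)
    d_att = torch.norm(e_adv - e_att, dim=-1)            # distance to the attacker's image
    d_vct = torch.norm(e_adv - e_vct, dim=-1)            # distance to the victim's image
    asr_i = (d_vct < threshold).float().mean().item()    # impersonation: recognized as the victim
    asr_d = (d_att >= threshold).float().mean().item()   # dodging: not recognized as the attacker
    return asr_i, asr_d
```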
Compared Methods. Our proposed attack is a restricted attack method that aims to maliciously attack FR systems to expose more of their blind spots. It would be unfair to compare our method with unrestricted attacks that do not limit the magnitude of the adversarial perturbations. Therefore, we choose restricted attacks on FR that aim to maliciously attack FR systems (Zhong and Deng, 2021; Zhou et al., 2023b; Li et al., 2023d; Yang et al., 2020) and state-of-the-art transfer attacks (Xie et al., 2019; Long et al., 2022; Wang et al., 2023; Wang et al., 2024a) as our baselines.
Table 2. Dodging ASR (ASRd) on each victim model and average black-box impersonation ASR (ASRi) of multi-task attacks with and without our method (surrogate model: MF, dataset: LFW, based on DI).

| Attack | ASRd: IR152 | ASRd: FaceNet | ASRd: MF | ASRi |
|---|---|---|---|---|
| Lagrangian | 3.9 | 26.5 | 100.0 | 26.0 |
| Lagrangian + Ours | 7.3 | 36.4 | 100.0 | 26.6 |
| DA | 11.0 | 35.6 | 99.1 | 37.4 |
| DA + Ours | 17.5 | 44.9 | 99.4 | 37.8 |
4.2. Comparison Study
We compare our proposed attack method with state-of-the-art attacks on multiple FR models and datasets. Several adversarial examples are illustrated in Figure 5, and the attack performance results are shown in Table 1. Table 1 illustrates that the incorporation of our proposed attack method significantly enhances the dodging ASR of adversarial attacks. It is worth noting that the average black-box impersonation ASRs of the baseline attacks in Table 1 also increase after integrating our proposed attack method. This demonstrates the effectiveness of our proposed method in improving the dodging attack performance while simultaneously maintaining the impersonation attack performance. Furthermore, we conduct a comparison between our proposed Adv-Pruning and multi-task attacks using MF as the surrogate model on LFW based on DI. For our proposed method, we choose the corresponding multi-task attack as the attack in both the Priming and Restoration stages. The dodging ASR and average black-box impersonation ASR results are shown in Table 2. Table 2 underscores the effectiveness of our method in enhancing the dodging performance of multi-task attacks while maintaining the impersonation performance. To further validate our proposed attack method on additional FR models, we select SIA (Wang et al., 2023) as the baseline attack and IR152 as the surrogate model. The experimental settings are consistent with those described in Table 1. The dodging ASR across multiple FR models is shown in Figure 3. As depicted in Figure 3, the dodging ASR improves on multiple FR models after integrating our proposed method, further confirming the effectiveness of our attack.
In practical application scenarios, victims can employ adversarially robust models to defend against adversarial attacks. Consequently, it becomes crucial to evaluate the performance of adversarial attacks on these robust models. In this study, we generate adversarial examples on the LFW dataset using MF as the surrogate model and assess the performance of various attacks on the adversarially robust models. The results are presented in Figure 4. The letters following the en dash denote the robust victim models, with ‘I’, ‘F’, and ‘M’ corresponding to IR152adv, FaceNetadv, and MFadv, respectively. Figure 4 illustrates that the inclusion of our proposed method leads to improvements in both dodging and impersonation performance. These results serve as evidence of the effectiveness of our proposed method on adversarially robust models.
JPEG compression is a widely adopted method for image compression during transmission, concurrently acting as a defense mechanism against adversarial examples. To assess the effectiveness of our proposed attack under JPEG compression, we utilize DI as the baseline attack and MF as the surrogate model, evaluating the attack performance on ArcFace and CurricularFace models with experimental settings consistent with those described in Table 1. The results are illustrated in Figure 6. These results demonstrate that across varying levels of JPEG compression, our proposed attack method consistently outperforms the baseline attack, thereby highlighting its effectiveness under JPEG compression.
The experimental results on negative cosine similarity loss, Sibling-Attack, and LGC are presented in the supplementary.
4.3. Ablation Study
Table 3. Ablation study: dodging ASR (ASRd) on each victim model and average black-box impersonation ASR (ASRi), using DI as the Baseline attack and MF as the surrogate model on LFW.

| Attack | ASRd: IR152 | ASRd: FaceNet | ASRd: MF | ASRi |
|---|---|---|---|---|
| Baseline | 2.2 | 18.2 | 99.2 | 25.6 |
| Lagrangian | 3.9 | 26.5 | 100.0 | 26.0 |
| Fine-tuning | 4.2 | 26.3 | 100.0 | 25.6 |
| RZ | 0.6 | 4.4 | 94.8 | 15.1 |
| Pruning | 3.4 | 24.8 | 100.0 | 25.4 |
| Adv-Pruning | 5.4 | 32.5 | 100.0 | 26.3 |
To delve into the properties of our proposed attack method, we conducted an ablation experiment using DI as the Baseline attack, with MF serving as the surrogate model on the LFW dataset. To confirm the effectiveness of our pruning method, we employed the Random Zeroing (RZ) method, which randomly sets adversarial perturbations to zero. We applied this method and our pruning method to free up 20% of the adversarial perturbations crafted by the Lagrangian attack. For Fine-tuning, we utilized the Lagrangian attack, setting the weight of the impersonation loss to 12.0, as a method to further optimize the Lagrangian adversarial examples that were initially crafted with an impersonation loss weight of 6.0.
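For reference, a minimal sketch of the RZ baseline, which releases the same fraction of perturbation elements as our pruning method but selects them uniformly at random (the exact-count selection mirrors the pruning sketch above and is our assumption):

```python
import torch

def random_zeroing(x_adv, x_att, sparsity_ratio):
    """RZ baseline sketch: zero a random fraction of perturbation elements, ignoring priority."""
    perturbation = x_adv - x_att
    n = perturbation.numel()
    k = int(sparsity_ratio * n)                          # number of elements to release
    idx = torch.randperm(n, device=x_adv.device)[:k]     # random positions instead of low-priority ones
    mask = torch.ones(n, device=x_adv.device)
    mask[idx] = 0.0
    mask = mask.view_as(perturbation)
    return x_att + mask * perturbation, mask
```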
The dodging ASR and average black-box impersonation ASR results are shown in Table 3. Table 3 demonstrates that our proposed pruning method achieves a significantly smaller decrease in ASR than RZ after pruning 20% of the adversarial perturbations, indicating the effectiveness of our pruning method. After being processed by the Pruning and Restoration stages of our proposed Adv-Pruning, both the impersonation and dodging ASR of the crafted adversarial face examples recover and exceed those of the Baseline attack. These results demonstrate the effectiveness of our proposed Adv-Pruning in improving the dodging performance of adversarial attacks on FR without compromising the impersonation attack performance.
The sparsity ratio $a$ quantifies the proportion of adversarial perturbations that are allowed to be discarded during the Pruning stage. This ratio greatly impacts the performance of our proposed attack method. Hence, we conduct a sensitivity study on the sparsity ratio to analyze its effect on the performance of the algorithm. The Lagrangian attack method based on DI is selected as the Baseline. We conduct the hyperparameter sensitivity study on LFW using FaceNet as the surrogate model, and adjust the loss weight to ensure that the average black-box impersonation ASR results are within a 0.4% absolute difference compared to the Baseline. The dodging ASR results are illustrated in the right plot of Figure 7. The results show that the dodging ASR of our proposed method initially increases and then decreases as the sparsity ratio increases. When the sparsity ratio increases, a greater number of adversarial perturbations are pruned, creating more empty regions for the adversarial perturbations that favor dodging attacks in the Restoration stage. However, if the sparsity ratio is set too high, an excessive number of adversarial perturbations are freed up, degrading the adversarial examples crafted by the Priming stage and consequently decreasing the overall performance of the adversarial face examples.
More ablation studies are in the supplementary.
4.4. Analytical Study
Owing to the inherent conflict in the optimization of the impersonation and dodging losses in the black-box setting, there exists a trade-off between impersonation and dodging performance. We craft adversarial face examples using the Lagrangian attack and our proposed Adv-Pruning on LFW based on DI. The average black-box results are shown in the left plot of Figure 7.
The results illustrate that our proposed method can reduce the trade-off between impersonation and dodging performance in the black-box setting. The pruning operation of our proposed Adv-Pruning serves to sparsify the adversarial perturbations while preserving the impersonation performance. On the other hand, the restoration operation tends to introduce adversarial perturbations in the pruned areas, specifically favoring dodging attacks. These operations effectively enhance the dodging attack performance while maintaining the impersonation attack performance, ultimately mitigating the trade-off.
The analytical studies of multi-identity samples among benign face images and of the universality of multi-identity samples among adversarial face examples are provided in the supplementary material.
5. Conclusion
In this paper, we delve into the issue of multi-identity samples among adversarial face examples. Our research reveals the universality of multi-identity samples among adversarial face examples crafted by previous impersonation attacks, showing that the success of an impersonation attack does not necessarily imply the success of a dodging attack on FR systems in the black-box setting. To improve dodging performance without compromising impersonation performance, we propose a novel attack, namely Adv-Pruning, which comprises Priming, Pruning, and Restoration stages. Leveraging our proposed Adversarial Priority Quantification, we identify less prioritized adversarial perturbations with minimal impact on absolute model output variances. Through our proposed Biased Gradient Adaptation, biased gradient perturbations are applied to the sparsified regions, adapting adversarial face examples into a space favoring dodging attacks. Extensive experiments demonstrate the effectiveness of our proposed method.
Acknowledgements.
This work was supported in part by the Natural Science Foundation of China under Grants 61972169, 62372203, and 62302186, in part by the National Key Research and Development Program of China (2022YFB2601802), in part by the Major Scientific and Technological Project of Hubei Province (2022BAA046, 2022BAA042), in part by the Knowledge Innovation Program of Wuhan-Basic Research, and in part by the China Postdoctoral Science Foundation (2022M711251).

References
- An et al. (2021) Xiang An, Xuhan Zhu, Yuan Gao, Yang Xiao, Yongle Zhao, Ziyong Feng, Lan Wu, Bin Qin, Ming Zhang, Debing Zhang, and Ying Fu. 2021. Partial FC: Training 10 Million Identities on a Single Machine. In IEEE/CVF International Conference on Computer Vision Workshops. 1445–1449.
- Boutros et al. (2023) Fadi Boutros, Jonas Henry Grebe, Arjan Kuijper, and Naser Damer. 2023. IDiff-Face: Synthetic-based Face Recognition through Fizzy Identity-Conditioned Diffusion Models. In IEEE/CVF International Conference on Computer Vision. 19593–19604.
- Cao and Gong (2017) Xiaoyu Cao and Neil Zhenqiang Gong. 2017. Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification. Proceedings of the 33rd Annual Computer Security Applications Conference (2017).
- Chen et al. (2023) Bin Chen, Jia-Li Yin, Shukai Chen, Bohao Chen, and Ximeng Liu. 2023. An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability. In International Conference on Computer Vision. 4466–4475.
- Cherepanova et al. (2021) Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P Dickerson, Gavin Taylor, and Tom Goldstein. 2021. LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition. In International Conference on Learning Representations.
- Das et al. (2021) Sowmen Das, Selim Seferbekov, Arup Datta, Md Saiful Islam, and Md Ruhul Amin. 2021. Towards solving the deepfake problem: An analysis on improving deepfake detection using dynamic face augmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 3776–3785.
- Deng et al. (2022) Jiankang Deng, Jia Guo, Jing Yang, Niannan Xue, Irene Kotsia, and Stefanos Zafeiriou. 2022. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 10 (2022), 5962–5979.
- Gao et al. (2023) Junqi Gao, Biqing Qi, Yao Li, Zhichang Guo, Dong Li, Yuming Xing, and Dazhi Zhang. 2023. Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability. In Advances in Neural Information Processing Systems.
- Ge et al. (2023a) Zhijin Ge, Fanhua Shang, Hongying Liu, Yuanyuan Liu, Liang Wan, Wei Feng, and Xiaosen Wang. 2023a. Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. In Proceedings of the 31st ACM International Conference on Multimedia. 4440–4449.
- Ge et al. (2023b) Zhijin Ge, Xiaosen Wang, Hongying Liu, Fanhua Shang, and Yuanyuan Liu. 2023b. Boosting Adversarial Transferability by Achieving Flat Local Maxima. In Advances in Neural Information Processing Systems.
- Goodfellow et al. (2015) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
- Haleta et al. (2021) Pavlo Haleta, Dmytro Likhomanov, and Oleksandra Sokol. 2021. Multitask adversarial attack with dispersion amplification. EURASIP Journal on Information Security 1 (2021), 10.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
- Heo et al. (2019) Byeongho Heo, Minsik Lee, Sangdoo Yun, and Jin Young Choi. 2019. Knowledge distillation with adversarial samples supporting decision boundary. In Proceedings of the AAAI conference on artificial intelligence, Vol. 33. 3771–3778.
- Hu et al. (2022) Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, and Libing Wu. 2022. Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer. In Proceedings of the IEEE conference on computer vision and pattern recognition. 14994–15003.
- Huang et al. (2007) Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. 2007. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. Technical Report 07-49. University of Massachusetts, Amherst.
- Huang et al. (2020) Yuge Huang, Yuhan Wang, Ying Tai, Xiaoming Liu, Pengcheng Shen, Shaoxin Li, Jilin Li, and Feiyue Huang. 2020. Curricularface: adaptive curriculum learning loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5901–5910.
- Jia et al. (2022) Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, and Chao Ma. 2022. Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition. In Advances in Neural Information Processing Systems, Vol. 35. 34136–34147.
- kanth Nakka and Salzmann (2021) Krishna kanth Nakka and Mathieu Salzmann. 2021. Learning Transferable Adversarial Perturbations. In Advances in Neural Information Processing Systems, Vol. 34. 13950–13962.
- Karras et al. (2018) Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In International Conference on Learning Representation.
- Karras et al. (2019) Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4401–4410.
- Li et al. (2023a) Jingzhi Li, Zidong Guo, Hui Li, Seungju Han, Ji-Won Baek, Min Yang, Ran Yang, and Sungjoo Suh. 2023a. Rethinking feature-based knowledge distillation for face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20156–20165.
- Li et al. (2023b) Qizhang Li, Yiwen Guo, Wangmeng Zuo, and Hao Chen. 2023b. Improving Adversarial Transferability via Intermediate-level Perturbation Decay. In Advances in Neural Information Processing Systems, Vol. 36. 32900–32912.
- Li et al. (2023c) Yanjie Li, Yiquan Li, Xuelong Dai, Songtao Guo, and Bin Xiao. 2023c. Physical-world optical adversarial attacks on 3d face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 24699–24708.
- Li et al. (2023d) Zexin Li, Bangjie Yin, Taiping Yao, Junfeng Guo, Shouhong Ding, Simin Chen, and Cong Liu. 2023d. Sibling-attack: Rethinking transferable adversarial attacks against face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 24626–24637.
- Liang et al. (2023) Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song, Zhengui Xue, Ruhui Ma, and Haibing Guan. 2023. Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples. In Proceedings of the 40th International Conference on Machine Learning. 20763–20786.
- Long et al. (2022) Yuyang Long, Qilong Zhang, Boheng Zeng, Lianli Gao, Xianglong Liu, Jian Zhang, and Jingkuan Song. 2022. Frequency Domain Model Augmentation for Adversarial Attack. In European Conference on Computer Vision, Vol. 13664. 549–566.
- Lu et al. (2023) Dong Lu, Zhiqiang Wang, Teng Wang, Weili Guan, Hongchang Gao, and Feng Zheng. 2023. Set-level guidance attack: Boosting adversarial transferability of vision-language pre-training models. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 102–111.
- Meng et al. (2021) Qiang Meng, Shichao Zhao, Zhida Huang, and Feng Zhou. 2021. Magface: A universal representation for face recognition and quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14225–14234.
- Mingxing et al. (2021) Duan Mingxing, Kenli Li, Lingxi Xie, Qi Tian, and Bin Xiao. 2021. Towards multiple black-boxes attack via adversarial example generation network. In Proceedings of the 29th ACM International Conference on Multimedia. 264–272.
- Narayan et al. (2023) Kartik Narayan, Harsh Agarwal, Kartik Thakral, Surbhi Mittal, Mayank Vatsa, and Richa Singh. 2023. Df-platter: Multi-face heterogeneous deepfake dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9739–9748.
- Naseer et al. (2023) Muzammal Naseer, Ahmad Mahmood, Salman Khan, and Fahad Khan. 2023. Boosting Adversarial Transferability using Dynamic Cues. In International Conference on Learning Representation.
- Qiu et al. (2020) Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, and Bo Li. 2020. Semanticadv: Generating adversarial examples via attribute-conditioned image editing. In Proceedings of the European Conference on Computer Vision. 19–37.
- Schroff et al. (2015) Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 815–823.
- Shamshad et al. (2023) Fahad Shamshad, Muzammal Naseer, and Karthik Nandakumar. 2023. CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20595–20605.
- Shayegani et al. (2024) Erfan Shayegani, Yue Dong, and Nael Abu-Ghazaleh. 2024. Jailbreak in pieces: Compositional adversarial attacks on multi-modal language models. In International Conference on Learning Representation.
- Sun et al. (2020) Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, and Yichen Wei. 2020. Circle loss: A unified perspective of pair similarity optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 6398–6407.
- Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
- Wang et al. (2018) Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. 2018. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5265–5274.
- Wang et al. (2021) Jun Wang, Yinglu Liu, Yibo Hu, Hailin Shi, and Tao Mei. 2021. Facex-zoo: A pytorch toolbox for face recognition. In Proceedings of the 29th ACM International Conference on Multimedia. 3779–3782.
- Wang et al. (2024a) Kunyu Wang, Xuanran He, Wenxuan Wang, and Xiaosen Wang. 2024a. Boosting Adversarial Transferability by Block Shuffle and Rotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
- Wang et al. (2019) Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. 2019. Racial faces in the wild: Reducing racial bias by information maximization adaptation network. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 692–702.
- Wang and He (2021) Xiaosen Wang and Kun He. 2021. Enhancing the Transferability of Adversarial Attacks Through Variance Tuning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1924–1933.
- Wang et al. (2024b) Xudong Wang, Ke-Yue Zhang, Taiping Yao, Qianyu Zhou, Shouhong Ding, Pingyang Dai, and Rongrong Ji. 2024b. TF-FAS: Twofold-Element Fine-Grained Semantic Guidance for Generalizable Face Anti-Spoofing. In European Conference on Computer Vision.
- Wang et al. (2020) Xiaobo Wang, Shifeng Zhang, Shuo Wang, Tianyu Fu, Hailin Shi, and Tao Mei. 2020. Mis-classified vector guided softmax loss for face recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 12241–12248.
- Wang et al. (2023) Xiaosen Wang, Zeliang Zhang, and Jianping Zhang. 2023. Structure invariant transformation for better adversarial transferability. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 4607–4619.
- Wen and Itti (2020) Shixian Wen and Laurent Itti. 2020. Beneficial Perturbations Network for Defending Adversarial Examples. arXiv preprint arXiv:2009.12724 (2020).
- Xiao et al. (2021) Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, and Jun Zhu. 2021. Improving Transferability of Adversarial Patches on Face Recognition with Generative Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 11840–11849.
- Xie et al. (2019) Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L. Yuille. 2019. Improving Transferability of Adversarial Examples With Input Diversity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2730–2739.
- Yang et al. (2021) Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, and Hui Xue. 2021. Towards Face Encryption by Generating Adversarial Identity Masks. In Proceedings of the IEEE International Conference on Computer Vision. 3877–3887.
- Yang et al. (2023) Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning Chen, Hang Su, and Jun Zhu. 2023. Towards effective adversarial textured 3d meshes on physical face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4119–4128.
- Yang et al. (2020) Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, and Jun Zhu. 2020. Robfr: Benchmarking adversarial robustness on face recognition. arXiv preprint arXiv:2007.04118 (2020).
- Yin et al. (2021) Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li, and Cong Liu. 2021. Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. In International Joint Conference on Artificial Intelligence. 1252–1258.
- Yuan et al. (2022) Zheng Yuan, Jie Zhang, and Shiguang Shan. 2022. Adaptive image transformations for transfer-based adversarial attack. In European Conference on Computer Vision. 1–17.
- Zeng et al. (2020) Dan Zeng, Hailin Shi, Hang Du, Jun Wang, Zhen Lei, and Tao Mei. 2020. Npcface: Negative-positive collaborative training for large-scale face recognition. arXiv preprint arXiv:2007.10172 (2020).
- Zhang et al. (2022a) Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, and Michael R. Lyu. 2022a. Improving Adversarial Transferability via Neuron Attribution-based Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14973–14982.
- Zhang et al. (2022b) Jiaming Zhang, Qi Yi, and Jitao Sang. 2022b. Towards adversarial attack on vision-language pre-training models. In Proceedings of the 30th ACM International Conference on Multimedia. 5005–5013.
- Zhong and Deng (2021) Yaoyao Zhong and Weihong Deng. 2021. Towards Transferable Adversarial Attack Against Deep Face Recognition. IEEE Transactions on Information Forensics and Security 16 (2021), 1452–1466.
- Zhou et al. (2024a) Fengfan Zhou, Hefei Ling, Yuxuan Shi, Jiazhong Chen, and Ping Li. 2024a. Improving visual quality and transferability of adversarial attacks on face recognition simultaneously with adversarial restoration. In IEEE International Conference on Acoustics, Speech and Signal Processing. 4540–4544.
- Zhou et al. (2023b) Fengfan Zhou, Hefei Ling, Yuxuan Shi, Jiazhong Chen, Zongyi Li, and Ping Li. 2023b. Improving the transferability of adversarial attacks on face recognition with beneficial perturbation feature augmentation. IEEE Transactions on Computational Social Systems (2023).
- Zhou et al. (2024c) Fengfan Zhou, Qianyu Zhou, Xiangtai Li, Xuequan Lu, Lizhuang Ma, and Hefei Ling. 2024c. Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models. arXiv preprint arXiv:2405.16940 (2024).
- Zhou et al. (2024b) Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Xuequan Lu, Shouhong Ding, and Lizhuang Ma. 2024b. Test-Time Domain Generalization for Face Anti-Spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
- Zhou et al. (2023c) Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Xuequan Lu, Ran Yi, Shouhong Ding, and Lizhuang Ma. 2023c. Instance-Aware Domain Generalization for Face Anti-Spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20453–20463.
- Zhou et al. (2022a) Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Ran Yi, Shouhong Ding, and Lizhuang Ma. 2022a. Adaptive mixture of experts learning for generalizable face anti-spoofing. In Proceedings of the 30th ACM International Conference on Multimedia. 6009–6018.
- Zhou et al. (2022b) Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Ran Yi, Kekai Sheng, Shouhong Ding, and Lizhuang Ma. 2022b. Generative domain adaptation for face anti-spoofing. In European Conference on Computer Vision. 335–356.
- Zhou et al. (2023a) Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, and Hai Jin. 2023a. Advclip: Downstream-agnostic adversarial examples in multimodal contrastive learning. In Proceedings of the 31st ACM International Conference on Multimedia. 6311–6320.