
Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection

1 Evaluated attacks.

Label flipping (LF) attack: The attackers flip the label $l$ of each training example to $L-l-1$, where $L$ is the total number of label classes.
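
For concreteness, a minimal NumPy sketch of the flip, assuming integer labels in $\{0,\dots,L-1\}$ (the function name is ours):

```python
import numpy as np

def flip_labels(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Label-flipping attack: map each label l to L - l - 1."""
    return num_classes - labels - 1

# With L = 10 (e.g., MNIST), label 0 becomes 9 and label 3 becomes 6.
print(flip_labels(np.array([0, 3, 9]), num_classes=10))  # [9 6 0]
```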

LIE attack: The LIE (A Little Is Enough) attack is a coordinate-wise model poisoning attack. The attacker first calculates the mean and standard deviation of all benign updates in each dimension, then adds to the mean a noise term scaled by the standard deviation and the number of attackers. Finally, the attacker uploads the resulting malicious update to the server.
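
A minimal sketch of this perturbation, assuming the benign updates are stacked row-wise in a NumPy array and the noise coefficient `z` has already been derived from the number of attackers (names are ours):

```python
import numpy as np

def lie_attack(benign_updates: np.ndarray, z: float) -> np.ndarray:
    """LIE attack: shift the coordinate-wise mean of the benign updates
    by z standard deviations in every dimension."""
    mu = benign_updates.mean(axis=0)     # coordinate-wise mean
    sigma = benign_updates.std(axis=0)   # coordinate-wise standard deviation
    return mu + z * sigma                # malicious update uploaded by each attacker
```

Because the perturbation stays within a few standard deviations of the benign mean, the malicious update is hard to distinguish from honest ones while still biasing the aggregate.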

AGR-tailored (AGRT) attack: To construct a poisoned local update, the adversary solves the following optimization problem:

\begin{equation}
\begin{aligned}
\underset{\gamma}{\operatorname{argmax}}\quad & \left\|\boldsymbol{g}-\mathscr{A}\left(\widetilde{\boldsymbol{g}}_{\{i\in[n]\}}\cup\boldsymbol{g}_{\{i\in[n,K]\}}\right)\right\| \\
& \widetilde{\boldsymbol{g}}_{\{i\in[n]\}}=\boldsymbol{g}+\gamma\widetilde{\boldsymbol{\Delta}};\qquad \boldsymbol{g}=\textit{FedAvg}\left(\boldsymbol{g}_{\{i\in[K]\}}\right),
\end{aligned}
\tag{1}
\end{equation}

where $\mathscr{A}$ is the known defense method, $\boldsymbol{g}_{\{i\in[n]\}}$ are the benign updates of the $n$ compromised clients that the adversary knows, $\gamma$ is the scaling coefficient being optimized, $\widetilde{\boldsymbol{\Delta}}$ is a fixed perturbation direction, and $\boldsymbol{g}$ is a reference benign aggregation obtained by FedAvg, which averages all the benign updates that the attacker knows.
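
A simplified sketch of one common way to solve Eq. (1), using a halving search over $\gamma$ (the search schedule, the callable `defense`, and the perturbation `delta` are our assumptions, not the paper's specification):

```python
import numpy as np

def agr_tailored(benign_known: np.ndarray, defense, delta: np.ndarray,
                 n_attackers: int, gamma_init: float = 10.0,
                 tol: float = 1e-3) -> np.ndarray:
    """AGRT attack sketch: search for the scaling gamma that maximizes the
    deviation between the benign reference aggregate and the defended one."""
    g_ref = benign_known.mean(axis=0)  # FedAvg over the known benign updates
    gamma, step = gamma_init, gamma_init / 2
    best_gamma, best_dev = 0.0, -np.inf
    while step > tol:
        malicious = g_ref + gamma * delta
        pool = np.vstack([np.tile(malicious, (n_attackers, 1)), benign_known])
        dev = np.linalg.norm(g_ref - defense(pool))
        if dev > best_dev:                    # keep the best gamma found so far
            best_dev, best_gamma = dev, gamma
            gamma += step
        else:
            gamma -= step
        step /= 2
    return g_ref + best_gamma * delta

# Example: attack coordinate-wise Median with the negative mean as direction.
# benign = np.random.randn(40, 100)
# bad = agr_tailored(benign, lambda u: np.median(u, axis=0),
#                    -benign.mean(axis=0), n_attackers=10)
```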

2 Evaluated defenses.

Krum: Krum calculates the Euclidean distance between every pair of local gradients and selects the one that is closest to its $K-n-2$ nearest neighboring local gradients.
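
A minimal NumPy sketch of this selection rule, assuming the server stacks the $K$ local gradients row-wise and knows the attacker count $n$:

```python
import numpy as np

def krum(updates: np.ndarray, n_attackers: int) -> np.ndarray:
    """Krum: score each update by the summed squared distances to its
    K - n - 2 nearest neighbors; return the lowest-scoring update."""
    K = len(updates)
    # Pairwise squared Euclidean distances between all local updates.
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1) ** 2
    k = K - n_attackers - 2
    # Sort each row; skip the zero self-distance, sum the k closest neighbors.
    scores = np.sort(dists, axis=1)[:, 1:k + 1].sum(axis=1)
    return updates[np.argmin(scores)]
```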

FABA: FABA repeatedly removes the local update that is farthest from the average of the remaining local updates until the number of eliminated updates reaches a predefined threshold.
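
A minimal sketch of this elimination loop (the threshold parameter name is ours):

```python
import numpy as np

def faba(updates: np.ndarray, n_remove: int) -> np.ndarray:
    """FABA: repeatedly drop the update farthest from the current average
    until n_remove updates are eliminated, then average the rest."""
    kept = list(range(len(updates)))
    for _ in range(n_remove):
        avg = updates[kept].mean(axis=0)
        dists = np.linalg.norm(updates[kept] - avg, axis=1)
        kept.pop(int(np.argmax(dists)))   # eliminate the farthest update
    return updates[kept].mean(axis=0)
```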

Median: Median takes the coordinate-wise median of all local gradient vectors as the new global gradient vector.
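
In NumPy this aggregation rule is a one-liner:

```python
import numpy as np

def median_aggregate(updates: np.ndarray) -> np.ndarray:
    """Median: the coordinate-wise median of all local gradients
    becomes the new global gradient."""
    return np.median(updates, axis=0)
```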

DnC: DnC leverages singular value decomposition (SVD) to separate benign from poisoned gradients. It randomly samples a subset of the parameters of each local gradient as its substitution and projects the substitutions along their top right singular vector. An outlier score is then obtained by computing the inner product of each substitution with the projection, and the $\beta\cdot n$ local gradients with the highest scores are removed; here, $\beta$ refers to a filtering fraction.
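
A minimal sketch of one round of this filtering, assuming a single sub-sampling pass and NumPy inputs (the sub-sample size and seed are our choices):

```python
import numpy as np

def dnc(updates: np.ndarray, n_attackers: int, beta: float = 1.0,
        sample_dim: int = 1000, seed: int = 0) -> np.ndarray:
    """DnC sketch: score randomly sub-sampled, centered updates by their
    projection onto the top right singular vector and drop the
    beta * n highest-scoring updates before averaging."""
    rng = np.random.default_rng(seed)
    dim = updates.shape[1]
    idx = rng.choice(dim, size=min(sample_dim, dim), replace=False)
    sub = updates[:, idx]                 # random parameter sub-sampling
    centered = sub - sub.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = (centered @ vt[0]) ** 2      # outlier score per update
    n_remove = int(beta * n_attackers)
    kept = np.argsort(scores)[:len(updates) - n_remove]
    return updates[kept].mean(axis=0)
```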

CC: CC clips the local updates with large magnitudes, with the intuition that attackers may upload such updates to dominate the global model.
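
A minimal sketch of centered clipping, assuming deviations are measured from the previous round's aggregate and clipped to a radius `tau` (both parameter names are ours):

```python
import numpy as np

def centered_clip(updates: np.ndarray, prev_agg: np.ndarray,
                  tau: float = 10.0, n_iters: int = 1) -> np.ndarray:
    """CC: clip each update's deviation from the previous aggregate to norm
    at most tau, then move the aggregate by the mean clipped deviation."""
    v = prev_agg.copy()
    for _ in range(n_iters):
        diffs = updates - v
        norms = np.linalg.norm(diffs, axis=1, keepdims=True)
        scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))  # clip factor
        v = v + (diffs * scale).mean(axis=0)
    return v
```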

3 Parameter settings.

We set the number of clients $K=50$ for both datasets. To reduce the total number of communication rounds between clients and the server, we set the number of local epochs of each client to 3. The total number of iterations is $T=100$, and the weight of historical information is $\lambda=0.1$. For MNIST, we set the estimated maximum cosine similarity $c_{max}=0.7$, the minimum cosine similarity $c_{min}=0.3$, and the acceptable difference between clusters $\alpha=-0.1$. For CIFAR-10, we set $c_{max}=0.3$, $c_{min}=0.1$, and $\alpha=0$.
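
For reproducibility, the settings above can be collected in a single configuration sketch (variable names are ours):

```python
# Shared hyperparameters: K, local epochs, T, and lambda from the paper.
COMMON = dict(num_clients=50, local_epochs=3, total_rounds=100,
              history_weight=0.1)
# Dataset-specific estimates: c_max, c_min, and the cluster difference alpha.
MNIST_CFG   = dict(c_max=0.7, c_min=0.3, alpha=-0.1, **COMMON)
CIFAR10_CFG = dict(c_max=0.3, c_min=0.1, alpha=0.0, **COMMON)
```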