
Double-Private Distributed Estimation Algorithm Using Differential Privacy and a Key-Like Proportionate Matrix with Its Performance Analysis

Mehdi Korki, Fatemehsadat Hosseiniamin, Hadi Zayyani, and Mehdi Bekrani M. Korki is with the School of Science, Computing, and Engineering Technologies, Swinburne University of Technology, Melbourne, Australia (e-mail: [email protected]).F. Hosseiniamin, H. Zayyani, and M. Bekrani are with the Department of Electrical and Computer Engineering, Qom University of Technology (QUT), Qom, Iran (e-mails: [email protected], [email protected], [email protected]).
Abstract

In this brief, we present an enhanced privacy-preserving distributed estimation algorithm, referred to as the “Double-Private Algorithm,” which combines the principles of differential privacy (DP) and cryptography. The proposed algorithm enhances privacy by introducing DP noise into the intermediate estimations of neighboring nodes. Additionally, we employ the inverse of a closed-form, reproducible proportionate gain matrix as the cryptographic key matrix to fortify the privacy protection within the proposed double-private algorithm. We further improve the algorithm by transmitting alternative variable vectors instead of raw measurements, resulting in enhanced key matrix reconstruction performance. This modification mitigates the impact of noise and enhances the overall effectiveness of the algorithm. We also establish an upper bound on the norm of the error between the non-private Diffusion Least Mean Square (DLMS) algorithm and our double-private algorithm, and we determine a sufficient condition on the step-size to ensure the mean convergence of the proposed algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm, particularly its ability to attain a final Mean Square Deviation (MSD) comparable to that of the non-private DLMS.

Index Terms:
Distributed estimation, privacy, proportionate, diffusion LMS.

I Introduction

Distributed estimation finds applications in a diverse range of multi-agent network scenarios, such as Wireless Sensor Networks (WSN), communication networks, and biological networks [1]. The multiple agents, or nodes, of the network can collaborate to generate estimates of an unknown vector. Collaboration can be achieved through various strategies, including incremental, consensus, and diffusion methods. Among these, the diffusion approach stands out for its versatility, scalability, minimal storage requirements, and ease of implementation [1], [2].

In the context of the Adapt-Then-Combine (ATC) diffusion algorithms [3]-[13], the process unfolds in two distinct steps. Initially, agents update their individual estimates based on prior estimations and current measurements, guided by a local cost function, an operation referred to as the adaptation step. During this phase, each local agent accesses only its own data, preserving node-level privacy. Subsequently, in the combination step, neighboring agents collaborate with the local agent by sharing their own estimations. This collaborative effort refines the overall estimate; however, it also introduces privacy concerns. In the presence of malicious or adversarial agents, there is a potential for unauthorized access to the global estimation of the network. Consequently, a robust privacy-preserving strategy is essential to protect against eavesdropping. In essence, a solution is required to ensure secure transmission between network agents.

A secure solution may involve a cryptographic method that requires key exchange. This key exchange process can significantly increase the communication load over the network, demanding considerable power for both communication and computation, as noted in [14]. An alternative approach that does not require constant communication for key exchange can therefore be highly effective in preventing eavesdropping by adversaries. Another viable class of solutions comprises simple noise-injecting algorithms such as those in [15]-[18]. Among these methods, differential privacy (DP) techniques [17], [18] are widely employed; they inject uncorrelated noise into the shared information signal to ensure privacy. Various noise types, including Gaussian, Laplacian, and offset-symmetric Gaussian tails (OSGT), are commonly used in this context, as discussed in [18].
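As a minimal illustration of such noise injection, the sketch below (ours, not taken from any of the cited works) perturbs a vector with zero-mean Gaussian or Laplacian noise before it is shared; the `scale` parameter is a hypothetical stand-in for the noise level that a given privacy budget would dictate.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_perturb(phi, scale=0.1, mechanism="gaussian"):
    """Inject zero-mean DP-style noise into a vector before sharing it.

    `scale` is illustrative; calibrating it to an (epsilon, delta)
    privacy budget is the job of the chosen DP mechanism.
    """
    if mechanism == "gaussian":
        noise = rng.normal(0.0, scale, size=phi.shape)
    elif mechanism == "laplacian":
        noise = rng.laplace(0.0, scale, size=phi.shape)
    else:
        raise ValueError("unknown mechanism")
    return phi + noise

print(dp_perturb(np.ones(4), mechanism="laplacian"))
```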

Furthermore, the literature suggests several privacy-preserving diffusion algorithms [19]-[21]. In [19], a private-partial distributed LMS is proposed, where the combination step is replaced by an average consensus method using perturbed noise. More recently, the authors in [20] introduced DP schemes that incorporate noise at all stages of the diffusion algorithm. Further, in [21], the authors developed a privacy-preserving distributed projection least mean squares (LMS) strategy for linear multitask networks, where agents aim to enhance their local inference performance while protecting individual task privacy; the strategy involves sending noisy estimates to neighbors, with the noise level optimized to balance accuracy and privacy.

In this brief, we aim to enhance privacy through the simultaneous application of both aforementioned techniques, yielding what we call a double-private algorithm. Initially, we employ a differential privacy technique by introducing noise into the intermediate estimations. Subsequently, in the second phase, we leverage a cryptography-like approach, utilizing a key matrix that multiplies the intermediate estimations. This approach offers a dual layer of security: even if an adversary gains access to the differential-privacy noise, they would still require knowledge of the key matrix; conversely, if the adversary manages to obtain the key, they would additionally need to decipher the noise sequence. The algorithm proposed in [20] emphasizes both guaranteed performance and privacy in distributed learning, while our double-private algorithm enhances privacy with simultaneous privacy-preserving mechanisms, albeit without explicit performance guarantees.

Another noteworthy advantage of the proposed double private scheme lies in its simplicity when it comes to encryption and decryption. This simplicity arises from the fact that both the proportionate gain matrix and its inverse are diagonal matrices. We have also implemented a novel approach to mitigate the impact of noise on the reconstruction of the key matrix. Specifically, we propose sending an alternative variable vector instead of raw measurements and regression vectors. Through this innovative modification, we have observed a notable enhancement in the performance of matrix reconstruction, consequently leading to improved overall performance of the proposed algorithm.

Furthermore, the paper provides two essential mathematical analyses of the proposed method. The first calculates an upper bound for the $\ell_{2}$-norm of the error between the non-private estimation and the double-private estimation. The second establishes a sufficient condition on the step-size value to ensure the mean convergence of the proposed algorithm. Simulation results demonstrate that the proposed double-private algorithm can attain a final mean square deviation (MSD) comparable to that of the non-private DLMS algorithm, with a delay.

II System Model and Problem Formulation

A network of $N$ agents (nodes) is assumed, in which the $k$'th agent collaborates with its neighboring nodes, collected in the set $\mathcal{N}_{k}$ that includes agent $k$ itself. The $k$'th agent observes a linear measurement $d_{k,i}$ of an unknown $L\times 1$ vector $\boldsymbol{\omega}^{o}$ as $d_{k,i}=\mathbf{u}_{k,i}^{T}\boldsymbol{\omega}^{o}+v_{k,i}$, where $i$ is the discrete time index, $\mathbf{u}_{k,i}$ is the known $L\times 1$ regression vector, and $v_{k,i}$ is the measurement noise of the $k$'th agent at time index $i$.

In a privacy-preserving ATC diffusion algorithm, there are two steps: adaptation and combination. In the adaptation step, the intermediate estimations are computed as $\boldsymbol{\phi}_{k,i}=\boldsymbol{\omega}_{k,i}+\mu_{k}\mathbf{p}_{k,i}$, where $\boldsymbol{\omega}_{k,i}$ is the estimation of node $k$ at the end of index $i$, and $\mathbf{p}_{k,i}\triangleq\sum_{l\in\mathcal{N}_{k}}c_{l,k}\mathbf{u}_{l,i}\mathrm{f}(d_{l,i}-\mathbf{u}^{T}_{l,i}\boldsymbol{\omega}_{k,i})$, where the function $\mathrm{f}(\cdot)$ depends on the local cost function defined for the algorithm. For example, in the classical DLMS algorithm, this function is $\mathrm{f}(x)=x$. In an uncooperative scenario, adversarial agents may inject false data to disrupt the process of distributed estimation, or at least eavesdrop on the intermediate estimations and thereby recover the final estimate of the unknown vector. A privacy-preserving distributed estimation algorithm aims to prevent them from accessing the true estimations. So, to preserve privacy, a perturbed (or encrypted) version of $\boldsymbol{\phi}_{k,i}$, denoted by $\tilde{\boldsymbol{\phi}}_{k,i}$, is transmitted to the neighbors, assuming AWGN channels between nodes. At the combination step, the perturbed intermediate estimation $\tilde{\boldsymbol{\phi}}_{k,i}$ plus channel noise is received by the neighbors. Then, after de-perturbing (or decryption), agent $k$ obtains the de-perturbed intermediate estimations $\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}$, which are combined as $\boldsymbol{\omega}_{k,i+1}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}$. The aim of a privacy-preserving diffusion algorithm is to devise a well-performing distributed algorithm with as much privacy as possible.
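For concreteness, the following sketch (ours; all variable names are illustrative) implements one privacy-free ATC iteration over the whole network with $\mathrm{f}(x)=x$, i.e., classical DLMS; the privacy mechanisms of Section III would perturb `phi` before the combination step.

```python
import numpy as np

def atc_iteration(omega, U, d, C, A, mu, f=lambda x: x):
    """One ATC diffusion iteration over the network (f(x)=x gives DLMS).

    omega : (N, L) current estimates, one row per node
    U     : (N, L) regression vectors u_{l,i};  d : (N,) measurements d_{l,i}
    C, A  : (N, N) combination coefficients c_{l,k} and a_{l,k}
            (zero for non-neighbors, columns summing to one)
    """
    N, _ = omega.shape
    phi = np.empty_like(omega)
    for k in range(N):  # adaptation step
        p_k = sum(C[l, k] * U[l] * f(d[l] - U[l] @ omega[k]) for l in range(N))
        phi[k] = omega[k] + mu * p_k
    # combination step: omega_{k,i+1} = sum_l a_{l,k} phi_{l,i}
    # (a privacy-preserving variant would share a perturbed phi instead)
    return A.T @ phi
```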

III The Proposed Double-Private Proportionate Generalized Correntropy-Based Diffusion LMS Algorithm

In this section, we first explain the proportionate generalized correntropy-based diffusion LMS algorithm (PGCDLMS) since the proposed double-private algorithm is an extension of this algorithm. Then, the proposed double-private PGCDLMS (DP-PGCDLMS) is explained.

III-A Proportionate generalized correntropy-based Diffusion LMS Algorithm

The PGCDLMS is essentially a proportionate DLMS (PDLMS) algorithm in which the proportionate gain matrix is obtained in closed form [13]. The adaptation step of PGCDLMS is as follows:

$\boldsymbol{\phi}_{k,i}=\boldsymbol{\omega}_{k,i}+\mu_{k}\mathbf{G}_{k,i}\mathbf{p}_{k,i},$  (1)

where $\mathbf{p}_{k,i}=[p_{k,i,1},\ldots,p_{k,i,L}]^{T}=\sum_{l\in\mathcal{N}_{k}}c_{l,k}\mathbf{u}_{l,i}(d_{l,i}-\mathbf{u}^{T}_{l,i}\boldsymbol{\omega}_{k,i})$, and the gain matrix $\mathbf{G}_{k,i}$ is obtained by optimizing a cost function defined by the generalized correntropy [13]. The advantage of the PGCDLMS algorithm is that there is an optimum closed-form formula for the gain matrix $\mathbf{G}_{k,i}=\mathrm{diag}(g^{*}_{k,i,r})$, which is

$g^{*}_{k,i,r}=\Big[\frac{\beta}{\mu_{k}}\Big(-\mathrm{Ln}\big(\frac{\lambda_{k,i}\beta^{\alpha}}{\mu_{k}^{\alpha}A}|p_{k,i,r}|^{-\alpha}\big)\Big)^{1/\alpha}p_{k,i,r}^{-1}\Big],$  (2)

where $\alpha$ is the exponent parameter of the generalized correntropy kernel of the algorithm, $\beta$ is the bandwidth parameter of the correntropy, and $\lambda_{k,i}=\lambda^{*}_{k,i}$ is given by

$\lambda^{*}_{k,i}=\exp\Big(\frac{1+\sum_{r}\frac{\beta^{\alpha}}{\mu_{k}^{\alpha}}\mathrm{Ln}\big(\frac{\beta^{\alpha}}{\mu_{k}^{\alpha}A}|p_{k,i,r}|^{-\alpha}\big)p_{k,i,r}^{-\alpha}}{\sum_{r}\big(-\frac{\beta^{\alpha}}{\mu_{k}^{\alpha}}p_{k,i,r}^{-\alpha}\big)}\Big).$  (3)
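The closed form (2)-(3) can be transcribed almost literally into code. The sketch below is ours and makes three assumptions not spelled out in the text: fractional powers of the possibly negative entries of $\mathbf{p}_{k,i}$ are taken on their magnitudes, the kernel normalization constant $A$ is set to 1, and arguments of the fractional power that come out negative are floored at a small positive value so that $\mathbf{G}_{k,i}$ stays invertible, as the key mechanism of Section III-B requires.

```python
import numpy as np

def pgc_gain(p, mu, alpha=1.5, beta=10.0, A=1.0):
    """Closed-form proportionate gains per Eqs. (2)-(3), a sketch."""
    c = (beta / mu) ** alpha                       # beta^alpha / mu^alpha
    ap = np.abs(p) ** (-alpha)                     # |p_{k,i,r}|^{-alpha}
    lam = np.exp((1.0 + np.sum(c * np.log(c * ap / A) * ap))
                 / np.sum(-c * ap))                # lambda*_{k,i}, Eq. (3)
    arg = np.maximum(-np.log(lam * c * ap / A), 1e-12)  # floor keeps G invertible
    g = (beta / mu) * arg ** (1.0 / alpha) / p     # g*_{k,i,r}, Eq. (2)
    return np.diag(g)                              # G_{k,i} = diag(g*_{k,i,r})

G = pgc_gain(np.array([0.5, -1.2, 2.0]), mu=0.1)
```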

III-B The proposed double-private version of PGCDLMS

The basic idea of DP-PGCDLMS is to use $\mathbf{G}^{-1}_{k,i}$ as a perturbation matrix that multiplies the intermediate estimation vector, after which the differential privacy noise is added. So, we have

$\tilde{\boldsymbol{\phi}}_{k,i}=\mathbf{G}^{-1}_{k,i}\boldsymbol{\phi}_{k,i}+\boldsymbol{\eta}_{k,i},$  (4)

where $\tilde{\boldsymbol{\phi}}_{k,i}$ is the double-private intermediate vector, $\boldsymbol{\eta}_{k,i}$ is the differential privacy noise, and $\mathbf{G}^{-1}_{k,i}$ is the inverse of the proportionate diagonal gain matrix, used as a key matrix to perturb the intermediate estimation. In fact, the perturbation is an encryption mechanism that hides the intermediate estimation from unauthorized adversarial agents that want to access the estimation. The perturbed vector $\tilde{\boldsymbol{\phi}}_{l,i}$ of each neighbor node is transmitted to the local node $k$. So, the perturbed vector received at node $k$ from node $l$, assuming an AWGN channel between nodes, is $\mathbf{r}_{l,i}=\tilde{\boldsymbol{\phi}}_{l,i}+\mathbf{h}_{l,i}$, where $\mathbf{h}_{l,i}$ is the channel noise vector. Then, the decrypted intermediate estimation is defined as

$\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}=\tilde{\mathbf{G}}_{l,i}(\mathbf{r}_{l,i}-\boldsymbol{\eta}_{l,i}),$  (5)

where $\tilde{\mathbf{G}}_{l,i}$ is the reconstructed key matrix, and it is assumed that the differential privacy noise $\boldsymbol{\eta}_{l,i}$ is known to all honest agents. The noise generators are often implemented by a linear feedback shift register (LFSR) started from an initial condition, and it is not difficult to equip all honest agents with the same LFSR and the same initial condition. The point is that adding a noise sequence unknown to adversaries increases the privacy of the algorithm; this is one aspect of the privacy-preserving mechanism in our proposed double-private scheme. The other aspect is the use of a key-like gain matrix to perturb the intermediate estimation, so that even if the privacy noise is eavesdropped, a second privacy-preserving mechanism remains. There are three cases for the reconstructed key matrix $\tilde{\mathbf{G}}_{l,i}=\mathbf{G}_{l,i}+\mathbf{V}_{l,i}$, where $\mathbf{V}_{l,i}$ is the key error matrix. In the first case, the reconstructed key matrix is approximately the true gain matrix, which can be obtained by, for example, sharing the key matrix beforehand ($\mathbf{V}_{l,i}=\mathbf{0}$) or transmitting the key matrix between nodes. As expected, this case is not practical, because of the high communication load of transmitting the key matrix and the danger of eavesdropping by adversaries. So this case, whose corresponding algorithm we nominate oracle-DP-PGCDLMS, is not practically feasible, but we use it as a reference for comparison. In the second case, the reconstructed key matrix $\tilde{\mathbf{G}}_{l,i}=\hat{\mathbf{G}}_{l,i}=\mathrm{diag}(\hat{g}_{l,i,r})$ is obtained by (2) using $\hat{\mathbf{p}}_{l,i}=\sum_{l^{\prime}\in\mathcal{N}_{k}}c_{l^{\prime},k}\mathbf{u}_{l^{\prime},i}(d_{l^{\prime},i}-\mathbf{u}^{T}_{l^{\prime},i}\boldsymbol{\omega}_{l,i})$. In this scenario, the vector $\mathbf{p}_{l,i}$ is reconstructed from the noisy exchanges of $d_{l^{\prime},i}$ and $\mathbf{u}_{l^{\prime},i}$; consequently, the noisy nature of these variables results in a noisy version of $\hat{\mathbf{p}}_{l,i}$, further amplifying the noise in the reconstructed key matrix. We designate this approach as DP-PGCDLMS-Version1. Alternatively, in the third case, we propose a direct exchange of $\mathbf{p}_{l,i}$ to the $k$-th node. The noisy version $\hat{\mathbf{p}}_{l,i}=\mathbf{p}_{l,i}+\boldsymbol{\nu}_{l,i}$ then solely impacts the reconstruction of the key matrix, leading to a denoised version of the proposed algorithm, which we identify as DP-PGCDLMS-Version2. The order of computational complexity and the extra communication load of the proposed DP-PGCDLMS algorithms (Version1 and Version2), in comparison to some other algorithms, are shown in Table I. The proposed algorithms are slightly more complex (the order of complexity is the same) than the other algorithms and need more communication load; in addition, DP-PGCDLMS-Version2 needs more communication load than DP-PGCDLMS-Version1. A toy sketch of the perturb/de-perturb round trip of (4) and (5) is given after Table I.

TABLE I: Computational complexity per node $k$ and per iteration of the algorithms ($N_k=\mathrm{Card}\{\mathcal{N}_k\}$)
Algorithm | Additions | Multiplications + computation of $\mathbf{G}$ | Extra comm. load
DLMS [4] | $O(LN_k)$ | $O(LN_k)$ | 0
PR-DLMS [10] | $O(LN_k)$ | $O(LN_k)+O(L^2)$ | 0
PGCDLMS [13] | $O(LN_k)$ | $O(LN_k)+O(L)$ | 0
DP-PGCDLMS-Version1 | $O(LN_k)+O(L)$ | $O(LN_k)+O(2L)$ | $O(LN_k)$
DP-PGCDLMS-Version2 | $O(LN_k)+O(L)$ | $O(LN_k)+O(2L)$ | $O(2LN_k)$
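To make the encryption/decryption mechanics of (4) and (5) concrete, here is a toy round trip (ours) in the oracle case $\tilde{\mathbf{G}}_{l,i}=\mathbf{G}_{l,i}$, i.e., $\mathbf{V}_{l,i}=\mathbf{0}$; the dimensions and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4

def encrypt(phi, G, eta):
    """Sender side, Eq. (4): apply the key G^{-1}, then add the DP noise."""
    return np.linalg.solve(G, phi) + eta

def decrypt(r, G_tilde, eta):
    """Receiver side, Eq. (5): subtract the shared noise, re-apply the key."""
    return G_tilde @ (r - eta)

phi = rng.normal(size=L)               # intermediate estimate phi_{l,i}
G = np.diag(rng.uniform(0.5, 1.5, L))  # diagonal proportionate gain matrix
eta = rng.normal(0.0, 0.1, L)          # DP noise (same LFSR seed at honest nodes)
h = rng.normal(0.0, 0.05, L)           # AWGN channel noise

r = encrypt(phi, G, eta) + h           # received vector r_{l,i}
phi_hat = decrypt(r, G, eta)           # oracle case: G_tilde = G
print(np.linalg.norm(phi_hat - phi))   # small: only the channel term G @ h remains
```

An adversary observing `r` alone faces both the unknown noise `eta` and the unknown diagonal key `G`, which is the double layer of protection described above.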

IV Mathematical analysis

Two mathematical analyses are provided in this section: the first calculates an upper bound for a defined error, and the second investigates the mean convergence of the proposed algorithm.

IV-A Upper bound for the error

In this part, to ensure that the performance of the privacy-preserving DP-PGCDLMS algorithm is close to that of the non-private PGCDLMS, we calculate an upper bound on the $\ell_{2}$-norm of the error vector $\boldsymbol{\Delta\omega}=\bar{\boldsymbol{\omega}}_{k,i}-\boldsymbol{\omega}_{k,i}$, where $\bar{\boldsymbol{\omega}}_{k,i}$ is the estimate of DP-PGCDLMS and $\boldsymbol{\omega}_{k,i}$ is the estimate of the PGCDLMS algorithm. That is, we want to find the upper bound $\mathrm{D}_{2,\mathrm{max}}$ such that $\mathrm{D}_{2}=\|\boldsymbol{\Delta\omega}\|^{2}_{2}=\|\bar{\boldsymbol{\omega}}_{k,i}-\boldsymbol{\omega}_{k,i}\|^{2}_{2}\leq\mathrm{D}_{2,\mathrm{max}}$. In this regard, from (5), we can write

$\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}=\tilde{\mathbf{G}}_{l,i}(\tilde{\boldsymbol{\phi}}_{l,i}-\boldsymbol{\eta}_{l,i}+\mathbf{h}_{l,i}).$  (6)

Then, substituting (4) into (6), we have $\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}=\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}\boldsymbol{\phi}_{l,i}+\mathbf{r}_{l,i}$, where $\mathbf{r}_{l,i}=\tilde{\mathbf{G}}_{l,i}\mathbf{h}_{l,i}$. Then, since we have

$\bar{\boldsymbol{\omega}}_{k,i}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}\boldsymbol{\phi}_{l,i}+\mathbf{n}_{k,i},$  (7)

where $\mathbf{n}_{k,i}\triangleq\sum_{l\in\mathcal{N}_{k}}a_{l,k}\mathbf{r}_{l,i}$ and $\boldsymbol{\omega}_{k,i}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\boldsymbol{\phi}_{l,i}$, the error vector $\boldsymbol{\Delta\omega}$ can be written as

$\sum_{l\in\mathcal{N}_{k}}a_{l,k}(\tilde{\tilde{\boldsymbol{\phi}}}_{l,i}-\boldsymbol{\phi}_{l,i})=\sum_{l\in\mathcal{N}_{k}}a_{l,k}(\tilde{\mathbf{I}}_{l,i}-\mathbf{I}_{L})\boldsymbol{\phi}_{l,i}+\mathbf{n}_{k,i},$  (8)

where $\mathbf{I}_{L}$ is the $L\times L$ identity matrix and $\tilde{\mathbf{I}}_{l,i}\triangleq\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}$. It is easy to write $\tilde{\mathbf{I}}_{l,i}=(\mathbf{G}_{l,i}+\mathbf{V}_{l,i})\mathbf{G}^{-1}_{l,i}=\mathbf{I}_{L}+\mathbf{V}_{l,i}\mathbf{G}^{-1}_{l,i}$. Putting everything together, and neglecting the channel-noise term $\mathbf{n}_{k,i}$ (as is also done in the mean analysis of Section IV-B), the error vector $\boldsymbol{\Delta\omega}$ simplifies to

$\boldsymbol{\Delta\omega}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\mathbf{V}_{l,i}\mathbf{G}^{-1}_{l,i}\boldsymbol{\phi}_{l,i}.$  (9)

Hence, from (9), $\mathrm{D}_{2}=\|\boldsymbol{\Delta\omega}\|^{2}_{2}=(\boldsymbol{\Delta\omega})^{T}\boldsymbol{\Delta\omega}$ can be expanded as $\mathrm{D}_{2}=\sum_{l\in\mathcal{N}_{k}}\sum_{l^{\prime}\in\mathcal{N}_{k}}a_{l,k}a_{l^{\prime},k}\boldsymbol{\phi}^{T}_{l,i}\mathbf{G}^{-1}_{l,i}\mathbf{V}^{T}_{l,i}\mathbf{V}_{l^{\prime},i}\mathbf{G}^{-1}_{l^{\prime},i}\boldsymbol{\phi}_{l^{\prime},i}$ (note that all the matrices involved are diagonal). To find an upper bound, we can use the triangle inequality, which gives

$\mathrm{D}_{2}=|\mathrm{D}_{2}|\leq\sum_{l\in\mathcal{N}_{k}}\sum_{l^{\prime}\in\mathcal{N}_{k}}a_{l,k}a_{l^{\prime},k}|\boldsymbol{\phi}^{T}_{l,i}\mathbf{G}^{-1}_{l,i}\mathbf{V}^{T}_{l,i}\mathbf{V}_{l^{\prime},i}\mathbf{G}^{-1}_{l^{\prime},i}\boldsymbol{\phi}_{l^{\prime},i}|.$  (10)

If we define $\mathbf{f}^{T}\triangleq\boldsymbol{\phi}^{T}_{l,i}\mathbf{G}^{-1}_{l,i}$ and $\mathbf{g}\triangleq\mathbf{V}^{T}_{l,i}\mathbf{V}_{l^{\prime},i}\mathbf{G}^{-1}_{l^{\prime},i}\boldsymbol{\phi}_{l^{\prime},i}$, then using the Cauchy-Schwarz inequality of vector norms, i.e., $|\mathbf{f}^{T}\mathbf{g}|\leq\|\mathbf{f}\|\,\|\mathbf{g}\|$, and the submultiplicative property $\|\mathbf{A}\mathbf{g}\|\leq\|\mathbf{A}\|\,\|\mathbf{g}\|$, we have

$\|\mathbf{f}\|=\|\mathbf{G}^{-1}_{l,i}\boldsymbol{\phi}_{l,i}\|\leq\|\mathbf{G}^{-1}_{l,i}\|\,\|\boldsymbol{\phi}_{l,i}\|=\|\mathbf{G}^{-1}_{l,i}\|,$  (11)

and

$\|\mathbf{g}\|=\|\mathbf{V}^{T}_{l,i}\mathbf{V}_{l^{\prime},i}\mathbf{G}^{-1}_{l^{\prime},i}\boldsymbol{\phi}_{l^{\prime},i}\|\leq\|\mathbf{V}_{l,i}\|\,\|\mathbf{V}_{l^{\prime},i}\|\,\|\mathbf{G}^{-1}_{l^{\prime},i}\|,$  (12)

where it is assumed for simplicity that $\|\boldsymbol{\phi}_{l,i}\|=1$, which is ensured by a normalization step before transmission. Putting (10), (11), and (12) together, we conclude that

$\mathrm{D}_{2}\leq\sum_{l\in\mathcal{N}_{k}}\sum_{l^{\prime}\in\mathcal{N}_{k}}a_{l,k}a_{l^{\prime},k}\|\mathbf{G}^{-1}_{l,i}\|\,\|\mathbf{V}_{l,i}\|\,\|\mathbf{V}_{l^{\prime},i}\|\,\|\mathbf{G}^{-1}_{l^{\prime},i}\|=\Big(\sum_{l\in\mathcal{N}_{k}}a_{l,k}\|\mathbf{G}^{-1}_{l,i}\|\,\|\mathbf{V}_{l,i}\|\Big)^{2}=\mathrm{D}_{2,\mathrm{max}}.$  (13)
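Since (13) follows only from the triangle and Cauchy-Schwarz inequalities, it can be sanity-checked numerically. The sketch below (ours; all distributions are illustrative choices) draws random diagonal keys, key errors, and unit-norm intermediate estimates, and verifies $\mathrm{D}_2\leq\mathrm{D}_{2,\mathrm{max}}$.

```python
import numpy as np

rng = np.random.default_rng(2)
L, Nk = 5, 4
a = np.full(Nk, 1.0 / Nk)                   # uniform combination weights a_{l,k}

Ginv = [np.diag(rng.uniform(0.5, 2.0, L)) for _ in range(Nk)]   # G_{l,i}^{-1}
V = [np.diag(rng.normal(0.0, 0.1, L)) for _ in range(Nk)]       # key errors V_{l,i}
phi = [rng.normal(size=L) for _ in range(Nk)]
phi = [v / np.linalg.norm(v) for v in phi]  # ||phi_{l,i}|| = 1, as assumed

dw = sum(a[l] * V[l] @ Ginv[l] @ phi[l] for l in range(Nk))     # Eq. (9)
D2 = float(dw @ dw)
D2_max = sum(a[l] * np.linalg.norm(Ginv[l], 2) * np.linalg.norm(V[l], 2)
             for l in range(Nk)) ** 2                           # Eq. (13)
assert D2 <= D2_max
```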

IV-B Mean convergence performance

In this subsection, the mean convergence of the proposed DP-PGCDLMS is investigated under the assumptions presented below. First a general sufficient condition is derived, and then a simpler sufficient condition on the step-size $\mu_{k}$. The assumptions, which are examined and verified experimentally in Section V, are:

  • Assumption 1: $\mathbf{V}_{l,i}$ and $\mathbf{G}^{-1}_{l,i}\bar{\boldsymbol{\omega}}_{l,i}$ are uncorrelated.

  • Assumption 2: $\mathbf{V}_{l,i}$ and $\mathbf{G}^{-1}_{l,i}\mathbf{p}_{l,i}$ are uncorrelated, and $\mathrm{E}\{\mathbf{V}_{l,i}\}=\mathbf{0}$.

We define the error vector as

$\tilde{\boldsymbol{\omega}}_{k,i}=\bar{\boldsymbol{\omega}}_{k,i}-\boldsymbol{\omega}^{o}.$  (14)

Neglecting the noise term $\mathbf{n}_{k,i}$ in (7), using $\boldsymbol{\phi}_{l,i}=\bar{\boldsymbol{\omega}}_{l,i}+\mu_{l}\mathbf{p}_{l,i}$, and substituting (7) into (14), we have

$\tilde{\boldsymbol{\omega}}_{k,i+1}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}(\bar{\boldsymbol{\omega}}_{l,i}+\mu_{l}\mathbf{p}_{l,i})-\boldsymbol{\omega}^{o}.$  (15)

Now, writing $\boldsymbol{\omega}^{o}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\boldsymbol{\omega}^{o}$ (the coefficients $a_{l,k}$ sum to one), expanding (15), and using $\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}=\mathbf{I}_{L}+\mathbf{V}_{l,i}\mathbf{G}^{-1}_{l,i}$, we derive

$\tilde{\boldsymbol{\omega}}_{k,i+1}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\boldsymbol{\omega}}_{l,i}+\sum_{l\in\mathcal{N}_{k}}a_{l,k}\mathbf{V}_{l,i}\mathbf{G}^{-1}_{l,i}\bar{\boldsymbol{\omega}}_{l,i}+\mu_{k}\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}\mathbf{p}_{l,i}.$  (16)

Now, taking the expectation $\mathrm{E}\{\cdot\}$ of both sides of (16), we have

$\tilde{\tilde{\boldsymbol{\omega}}}_{k,i+1}\triangleq\mathrm{E}\{\tilde{\boldsymbol{\omega}}_{k,i+1}\}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\tilde{\boldsymbol{\omega}}}_{l,i}+\sum_{l\in\mathcal{N}_{k}}a_{l,k}\mathrm{E}\{\mathbf{V}_{l,i}\}\mathrm{E}\{\mathbf{G}^{-1}_{l,i}\bar{\boldsymbol{\omega}}_{l,i}\}+\mu_{k}\sum_{l\in\mathcal{N}_{k}}a_{l,k}\mathrm{E}\{\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}\mathbf{p}_{l,i}\},$  (17)

where Assumption 1 is used. Defining the expectation in the third term of (17) as $\mathrm{T}_{3}=\mathrm{E}\{\tilde{\mathbf{G}}_{l,i}\mathbf{G}^{-1}_{l,i}\mathbf{p}_{l,i}\}$, we obtain

$\mathrm{T}_{3}=\mathrm{E}\{(\mathbf{I}_{L}+\mathbf{V}_{l,i}\mathbf{G}^{-1}_{l,i})\mathbf{p}_{l,i}\}=\mathrm{E}\{\mathbf{p}_{l,i}\}+\mathrm{E}\{\mathbf{V}_{l,i}\}\mathrm{E}\{\mathbf{G}^{-1}_{l,i}\mathbf{p}_{l,i}\}=\mathrm{E}\{\mathbf{p}_{l,i}\},$  (18)

where Assumption 2 is used. To calculate $\mathrm{E}\{\mathbf{p}_{l,i}\}$, we write

$\mathbf{p}_{l,i}=\sum_{l^{\prime}\in\mathcal{N}_{k}}a_{l^{\prime},l}\mathbf{u}_{l^{\prime},i}(d_{l^{\prime},i}-\mathbf{u}^{T}_{l^{\prime},i}\bar{\boldsymbol{\omega}}_{l^{\prime},i}).$  (19)

Replacing $d_{l^{\prime},i}=\mathbf{u}^{T}_{l^{\prime},i}\boldsymbol{\omega}^{o}+v_{l^{\prime},i}$ into (19), and dropping the zero-mean measurement-noise term (which vanishes under the expectation taken below), we then have

$\mathbf{p}_{l,i}=\sum_{l^{\prime}\in\mathcal{N}_{k}}a_{l^{\prime},l}\mathbf{u}_{l^{\prime},i}\mathbf{u}^{T}_{l^{\prime},i}(\boldsymbol{\omega}^{o}-\bar{\boldsymbol{\omega}}_{l^{\prime},i}).$  (20)

Taking the expectation of both sides of (20), under the standard independence assumption between the regressors and the estimation errors, we obtain

$\mathrm{E}\{\mathbf{p}_{l,i}\}=-\sum_{l^{\prime}\in\mathcal{N}_{k}}a_{l^{\prime},l}\mathbf{R}_{l^{\prime},l}\tilde{\tilde{\boldsymbol{\omega}}}_{l^{\prime},i},$  (21)

where the covariance matrix is $\mathbf{R}_{l^{\prime},l}\triangleq\mathrm{E}\{\mathbf{u}_{l^{\prime},i}\mathbf{u}^{T}_{l^{\prime},i}\}$. Now, substituting (21) and (18) into (17), and using the zero-mean assumption on $\mathbf{V}_{l,i}$, we find that

$\tilde{\tilde{\boldsymbol{\omega}}}_{k,i+1}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\tilde{\tilde{\boldsymbol{\omega}}}_{l,i}-\mu_{k}\sum_{l\in\mathcal{N}_{k}}a_{l,k}\sum_{l^{\prime}\in\mathcal{N}_{l}}a_{l^{\prime},l}\mathbf{R}_{l^{\prime},l}\tilde{\tilde{\boldsymbol{\omega}}}_{l,i}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}\Big(\mathbf{I}_{L}-\mu_{k}\sum_{l^{\prime}\in\mathcal{N}_{l}}a_{l^{\prime},l}\mathbf{R}_{l^{\prime},l}\Big)\tilde{\tilde{\boldsymbol{\omega}}}_{l,i}.$  (22)

If we define $\mathbf{B}_{l}\triangleq\sum_{l^{\prime}\in\mathcal{N}_{l}}a_{l^{\prime},l}\mathbf{R}_{l^{\prime},l}$, then we have the following recursion for $\tilde{\tilde{\boldsymbol{\omega}}}_{k,i+1}$:

$\tilde{\tilde{\boldsymbol{\omega}}}_{k,i+1}=\sum_{l\in\mathcal{N}_{k}}a_{l,k}(\mathbf{I}_{L}-\mu_{k}\mathbf{B}_{l})\tilde{\tilde{\boldsymbol{\omega}}}_{l,i}.$  (23)

Let us define the following global quantities: $\tilde{\tilde{\boldsymbol{\omega}}}_{i}=\mathrm{col}\{\tilde{\tilde{\boldsymbol{\omega}}}_{1,i},\ldots,\tilde{\tilde{\boldsymbol{\omega}}}_{N,i}\}$, $\mathcal{B}=\mathrm{diag}\{\mathbf{B}_{1},\ldots,\mathbf{B}_{N}\}$, $\mathcal{M}=\mathrm{diag}\{\mu_{1}\mathbf{I}_{L},\ldots,\mu_{N}\mathbf{I}_{L}\}$, and $\mathcal{A}=\mathbf{A}\otimes\mathbf{I}_{L}$, where $(\mathbf{A})_{l,k}=a_{l,k}$. The operators $\mathrm{col}\{\cdot\}$ and $\otimes$ denote column stacking and the Kronecker product, respectively. Then (23) can be rewritten as:

$\tilde{\tilde{\boldsymbol{\omega}}}_{i+1}=\mathcal{A}^{T}(\mathbf{I}_{LN}-\mathcal{M}\mathcal{B})\tilde{\tilde{\boldsymbol{\omega}}}_{i}.$  (24)

It is seen from (24) that the combination matrix $\mathcal{A}^{T}$ pre-multiplies the block-diagonal matrix $(\mathbf{I}_{LN}-\mathcal{M}\mathcal{B})$. Employing the block maximum norm [1] with blocks of size $L\times L$, we conclude that $\rho(\mathcal{F})\leq\rho(\mathbf{I}_{LN}-\mathcal{M}\mathcal{B})$, where $\mathcal{F}=\mathcal{A}^{T}(\mathbf{I}_{LN}-\mathcal{M}\mathcal{B})$ and $\rho(\cdot)$ denotes the spectral radius of the matrix therein. Therefore, the matrix $\mathcal{F}$ is stable whenever the block-diagonal matrix $(\mathbf{I}_{LN}-\mathcal{M}\mathcal{B})$ is stable, which is guaranteed for step-sizes $\mu_{k}$ satisfying $0<\mu_{k}<\frac{2}{\rho(\mathbf{B}_{l})}$ for $k,l=1,2,\ldots,N$, or simply $0<\mu_{k}<\frac{2}{\lambda_{\max}(\mathbf{B}_{l})}$. Since $\mathbf{B}_{l}=\sum_{l^{\prime}\in\mathcal{N}_{l}}a_{l^{\prime},l}\mathbf{R}_{l^{\prime},l}$ is a convex combination of covariance matrices (the coefficients $a_{l^{\prime},l}$ sum to one), $\lambda_{\max}(\mathbf{B}_{l})\leq\max_{l^{\prime}}\lambda_{\max}(\mathbf{R}_{l^{\prime},l})$, and the convergence condition simplifies to

$0<\mu_{k}<\frac{2}{\max_{l^{\prime}=1,\ldots,N}\lambda_{\max}(\mathbf{R}_{l^{\prime},l})}.$  (25)

For the special case where the regression vectors are white, i.e., $\mathbf{R}_{l^{\prime},l}=\sigma^{2}_{u}\mathbf{I}_{L}$, (25) reduces to $0<\mu_{k}<\frac{2}{\sigma^{2}_{u}}$.
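The recursion (24) and the condition (25) can be checked numerically. The sketch below (ours) builds $\mathcal{F}$ for a random left-stochastic $\mathbf{A}$ and white unit-variance regressors, for which the bound reduces to $\mu_k<2$, and confirms $\rho(\mathcal{F})<1$; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, mu = 6, 3, 0.5                       # mu satisfies mu < 2 / sigma_u^2 = 2
A = rng.uniform(size=(N, N))
A /= A.sum(axis=0)                         # left-stochastic: sum_l a_{l,k} = 1

# White unit-variance regressors: R_{l',l} = I_L, hence B_l = I_L.
R = [np.eye(L) for _ in range(N)]
B = [sum(A[lp, l] * R[lp] for lp in range(N)) for l in range(N)]

calB = np.zeros((N * L, N * L))
for l in range(N):                         # block-diagonal matrix of the B_l
    calB[l*L:(l+1)*L, l*L:(l+1)*L] = B[l]
M = mu * np.eye(N * L)                     # equal step-sizes for simplicity
F = np.kron(A, np.eye(L)).T @ (np.eye(N * L) - M @ calB)   # Eq. (24)
rho = max(abs(np.linalg.eigvals(F)))
print(rho < 1)                             # True: the mean recursion is stable
```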

V Simulation Results

In this section, the simulation results are presented. The network used in the simulation has $N=16$ agents and is similar to that used in [12]. The size of the unknown vector is $L=20$, and its elements are drawn from a zero-mean unit normal random variable. The regression vector elements are also zero-mean white unit normals. The measurement noises are zero-mean white Gaussian random variables with variance $\sigma^{2}_{v}=0.05$, and the AWGN channels between nodes add zero-mean Gaussian noise with variance $\sigma^{2}_{h}=0.05$. The combination coefficients $a_{l,k}$ and $c_{l,k}$ are selected based on the uniform policy [1]. For the performance metric, the MSD is used, defined as $\mathrm{MSD}(\mathrm{dB})=20\,\mathrm{log}_{10}(\|\boldsymbol{\omega}-\boldsymbol{\omega}^{o}\|_{2})$.

We examined the assumptions of the mean convergence analysis via simulation. The correlation coefficient between the random variables $A=\mathbf{V}_{l,i}$ and $B=\mathbf{G}^{-1}_{l,i}\bar{\boldsymbol{\omega}}_{l,i}$ is $r(A,B)=0.183$, and that between $C=\mathbf{V}_{l,i}$ and $D=\mathbf{G}^{-1}_{l,i}\mathbf{p}_{l,i}$ is $r(C,D)=0.148$. We also computed the mean value of the key reconstruction error, which is $\mathrm{E}_{l,i}\{\mathbf{V}_{l,i}\}=0.057\approx 0$. Hence, although the assumptions are not satisfied exactly, they hold approximately. Simulations with different values of $\alpha$ and $\beta$ show that the best value of $\alpha$ for attaining the minimum final MSD is $\alpha=1.5$, and that the proposed algorithm is not sensitive to the value of $\beta$. Hence, we use $\alpha=1.5$ and $\beta=10$ in the simulations.

In the first experiment, the oracle DP-PGCDLMS, DP-PGCDLMS-Version1, DP-PGCDLMS-Version2, PGCDLMS, and DLMS algorithms are compared, with step-sizes selected as 1, 0.1, 0.1, 0.05, and 0.01, respectively. To investigate the tracking performance, the unknown parameter vector is changed abruptly at iteration index 1000. The resulting MSD versus iteration number is depicted in Fig. 1. It is seen that the oracle DP-PGCDLMS, the non-private PGCDLMS, and the non-private DLMS have almost the same performance, and the tracking capability of the proposed algorithm is acceptable.

In the second experiment, the proposed DP-PGCDLMS algorithms are compared with the non-private RPRDLMS [11] and the Partial-Private DLMS (PPDLMS) [19] in the two cases of $M=0.8L$ and $M=L$, where $M$ is the compressed length. For the PPDLMS, the step-size is selected as $\mu=0.01$. The results are shown in Fig. 2. It is observed that the proposed DP-PGCDLMS-Version2 exhibits a faster convergence rate than DP-PGCDLMS-Version1. Additionally, both versions attain a lower final MSD and a faster convergence rate than PPDLMS. Furthermore, the non-private RPRDLMS exhibits the lowest final MSD among all compared methods.
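For reference, the MSD metric and the data model used in these experiments can be sketched as follows (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 16, 20
omega_o = rng.normal(size=L)               # unknown vector, zero-mean unit normal
sigma_v2 = 0.05                            # measurement-noise variance

def msd_db(omega, omega_o):
    """MSD(dB) = 20 log10(||omega_k - omega^o||_2), averaged over the network."""
    return float(np.mean(20 * np.log10(np.linalg.norm(omega - omega_o, axis=-1))))

# per-iteration data at one node:
u = rng.normal(size=L)                     # white unit-normal regressor u_{k,i}
d = u @ omega_o + np.sqrt(sigma_v2) * rng.normal()   # measurement d_{k,i}
```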

Figure 1: MSD versus iteration number in the tracking case.
Figure 2: MSD versus iteration number for performance comparison of various algorithms.

VI Conclusion and future work

In this paper, a privacy-preserving distributed estimation algorithm is proposed that uses both a cryptography-like method and differential privacy (DP). The inverse of the proportionate gain matrix of PGCDLMS is used as a key matrix that perturbs the intermediate estimation to enhance privacy, and DP noise is added to yield even more privacy. At the receiving local node, the noise is subtracted and the gain matrix is used as the key matrix to recover the intermediate estimation as a message. The benefit of using the proportionate gain matrix of PGCDLMS is that it has a closed form, which enables the receiver to reconstruct the key matrix without the key being shared. Mathematical analysis of the proposed DP-PGCDLMS is provided in the paper, and simulation results show the effectiveness of the proposed algorithm. While we recognize the lack of explicit performance guarantees in the current version of the algorithm, we are dedicated to exploring methods to integrate such assurances in future work.

References

  • [1] A. H. Sayed, Adaptation, Learning and Optimization over Networks, Foundations and Trends in Machine Learning, 2014.
  • [2] S. Y. Tu and A. H. Sayed, “Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation Over Adaptive Networks,” IEEE Trans. on Signal Proc., vol. 60, no. 12, pp. 6217–6234, Dec. 2012.
  • [3] K. Kumar et al., “Robust and sparsity-aware adaptive filters: A review,” Elsevier Signal Processing, vol. 189, Dec. 2021.
  • [4] C. G. Lopes and A. H. Sayed, “Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis,” IEEE Trans. on Signal Proc., vol. 56, pp. 3122–3136, 2008.
  • [5] F. S. Cattivelli and A. H. Sayed, “Diffusion LMS Strategies for Distributed Estimation,” IEEE Trans. on Signal Proc., vol. 58, pp. 1035–1048, 2010.
  • [6] F. Wen, “Diffusion Least Mean P-power Algorithms for Distributed Estimation in Alpha-Stable Noise Environments,” Electron. Lett., vol. 49, no. 21, pp. 1355–1356, 2013.
  • [7] M. Korki et al., “Weighted Diffusion Continuous Mixed p-norm Algorithm for Distributed Estimation in Non-uniform Noise Environment,” Elsevier Signal Processing, vol. 164, pp. 225–233, Nov. 2019.
  • [8] W. Ma, B. Chen, J. Duan, and H. Zhao, “Diffusion maximum correntropy criterion algorithms for robust distributed estimation,” Digital Signal Processing, vol. 58, pp. 10–16, 2016.
  • [9] H. Zayyani, “Robust minimum disturbance diffusion LMS for distributed estimation,” IEEE Trans. on Circuits and Systems II: Express Briefs, vol. 68, no. 1, pp. 521–525, 2020.
  • [10] S. H. Yim, H. S. Lee, and W. J. Song, “A proportionate diffusion LMS algorithm for sparse distributed estimation,” IEEE Trans. on Circuits and Systems II: Express Briefs, vol. 62, no. 10, pp. 992–996, Oct. 2015.
  • [11] H. Zayyani et al., “A Robust Generalized Proportionate Diffusion LMS Algorithm for Distributed Estimation,” IEEE Trans. on Circuits and Systems II: Express Briefs, vol. 68, no. 4, pp. 1552–1556, Apr. 2021.
  • [12] H. Zayyani, F. Oruji, and I. Fijalkow, “An Adversary-Resilient Doubly Compressed Diffusion LMS Algorithm for Distributed Estimation,” Circuits, Systems, and Signal Processing, vol. 41, pp. 6182–6205, 2022.
  • [13] F. Hosseiniamin, H. Zayyani, M. Korki, and M. Bekrani, “A Low Complexity Proportionate Generalized Correntropy-based Diffusion LMS Algorithm with Closed-form Gain Coefficients,” IEEE Trans. on Circuits and Systems II: Express Briefs, early access, 2023.
  • [14] R. L. Lagendijk, Z. Erkin, and M. Barni, “Encrypted signal processing for privacy protection: Conveying the utility of homomorphic encryption and multiparty computation,” IEEE Signal Process. Mag., vol. 30, no. 1, pp. 82–105, Jan. 2013.
  • [15] J. He et al., “Privacy-preserving average-consensus: privacy analysis and algorithm design,” IEEE Trans. Signal Inf. Process. Netw., vol. 5, no. 1, pp. 127–138, 2019.
  • [16] A. Moradi et al., “Distributed Kalman filtering with privacy against honest-but-curious adversaries,” in Proc. 55th IEEE Asilomar Conf. Signals, Syst., Computers, pp. 790–794, 2021.
  • [17] J. He et al., “Differential private noise adding mechanism and its application on consensus algorithm,” IEEE Trans. on Signal Proc., vol. 68, pp. 4069–4082, Jul. 2020.
  • [18] P. Sadeghi and M. Korki, “Offset-Symmetric Gaussians for Differential Privacy,” IEEE Trans. on Information Forensics and Security, vol. 17, pp. 2394–2409, Jun. 2022.
  • [19] V. C. Gogineni et al., “Communication-Efficient and Privacy-Aware Distributed LMS Algorithm,” in Proc. 25th International Conference on Information Fusion (FUSION), Jul. 2022.
  • [20] E. Rizk, S. Vlaski, and A. H. Sayed, “Enforcing Privacy in Distributed Learning With Performance Guarantees,” IEEE Trans. on Signal Proc., vol. 71, pp. 3385–3398, 2023.
  • [21] C. Wang, W. P. Tay, Y. Wei, and Y. Wang, “Privacy-Preserving Distributed Projection LMS for Linear Multitask Networks,” IEEE Trans. on Signal Proc., vol. 69, pp. 6530–6545, 2021.