Note that our original loss function is given by
(2)
where . Since the Markov chain , we have the following inequality
(3)
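The inequality above is an instance of the data processing inequality. As a minimal reminder, stated with generic placeholder variables rather than the paper's own symbols: if $A \to B \to C$ forms a Markov chain, then
\[
I(A; C) \le I(A; B)
\qquad \text{and} \qquad
I(A; C) \le I(B; C),
\]
i.e., processing $B$ into $C$ cannot increase the information that the result carries about $A$.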
Given that
(4)
where represents the information entropy and mutual information has the property that . Considering that and are independent of each other, and using the property of mutual information that , we have
(5)
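The two facts invoked here are standard properties of mutual information. With generic placeholder variables $A$ and $B$ (the concrete symbols are those defined in the equations above):
\[
I(A; B) = H(A) - H(A \mid B) = H(B) - H(B \mid A),
\qquad
A \perp B \;\Longrightarrow\; I(A; B) = 0,
\]
where $H(\cdot)$ denotes the information entropy and $H(\cdot \mid \cdot)$ the conditional entropy.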
Combining Equation (1) and Inequality (3), we obtain the following inequality
(6)
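To illustrate how such an inequality is typically used (the symbols below are placeholders for illustration only, not the paper's notation): if the loss contains a mutual information term $I(Z; S)$ that is hard to estimate, while a Markov chain $S \to X \to Z$ gives $I(Z; S) \le I(Z; X)$, then for any coefficient $\alpha \ge 0$
\[
\alpha\, I(Z; S) + \mathcal{L}_{\mathrm{rest}}
\;\le\;
\alpha\, I(Z; X) + \mathcal{L}_{\mathrm{rest}},
\]
so minimizing the right-hand side minimizes an upper bound on the original objective, and the intractable term never has to be computed directly.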
In this way, we can eliminate the terms in the original loss function that are hard to calculate.
(7)
where . Our target is then to find a bound for . Sensitive information can be deduced from the sensitive attributes, so there is another Markov chain . Then . Combining this with the assumption , the upper bound of is . We have already discussed the necessity of in the text. Therefore, we can prove that
(8)
where , , .
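The step that relies on the second Markov chain is again the data processing inequality. As a hedged sketch with placeholder symbols (writing $U$ for the sensitive information that can be deduced from the sensitive attributes $S$): if $U$ depends on the representation $Z$ only through $S$, i.e., $Z \to S \to U$ forms a Markov chain, then
\[
I(Z; U) \le I(Z; S),
\]
so any upper bound on $I(Z; S)$ also bounds the information that $Z$ leaks about $U$.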
∎