A Blockchain-Based Approach for Saving and Tracking Differential-Privacy Cost
Abstract
An increasing amount of users' sensitive information is now being collected for analytics purposes. Differential privacy has been widely studied in the literature to protect the privacy of users' information, with privacy parameters bounding the information about the dataset leaked by the noisy output. Oftentimes, a dataset needs to be used for answering multiple queries, so the level of privacy protection may degrade as more queries are answered. Thus, it is crucial to keep track of the privacy spending, which should not exceed the given privacy budget. Moreover, if a query has been answered before and is asked again on the same dataset, we may reuse the previous noisy response for the current query to save privacy cost. In view of the above, we design an algorithm to reuse previous noisy responses if the same query is asked repeatedly. In particular, considering that different requests of the same query may have different privacy requirements, our algorithm can set the optimal reuse fraction of the old noisy response and add new noise (if necessary) to minimize the accumulated privacy cost. Furthermore, we design and implement a blockchain-based system for tracking and saving differential-privacy cost. As a result, the owner of the dataset has full knowledge of how the dataset has been used and can be confident that no new privacy cost will be incurred for answering queries once the specified privacy budget is exhausted.
Index Terms: Blockchain, differential privacy, data analytics, Gaussian mechanism.

I Introduction
Massive volumes of users' sensitive information are being collected for data analytics and machine learning, such as large-scale Internet of Things (IoT) data. Some IoT data contain users' confidential information, for example, energy consumption or location data, which may expose a family's habits [1, 2, 3, 4, 5]. To protect personal privacy, many countries have strict policies about how technology companies collect and process users' data. However, these companies need to analyze users' data to improve service quality. To preserve privacy while revealing useful information about datasets, differential privacy (DP) has been proposed [6, 7, 8]. Intuitively, by incorporating some noise, the output of an algorithm under DP will not change significantly due to the presence or absence of one user's information in the dataset. Since its introduction [6, 7], DP has attracted much interest from both academia [9, 10, 11, 12, 13] and industry [14, 15, 16]. For example, Apple has incorporated DP into its mobile operating system iOS [14], and Google has implemented a DP tool called RAPPOR in the Chrome browser to collect information [15].
Roughly speaking, a randomized mechanism achieving $(\epsilon, \delta)$-DP [6] means that except with a (typically small) probability $\delta$, altering a record in a database cannot change the probability that an output is seen by more than a multiplicative factor $e^{\epsilon}$. Thus, the information about the dataset leaked by the noisy output of an $(\epsilon, \delta)$-DP algorithm is bounded by the privacy parameters $\epsilon$ and $\delta$. Smaller $\epsilon$ and $\delta$ mean stronger privacy protection and less information leakage; note that non-zero information leakage is necessary to achieve non-zero utility. Usually, a dataset may be used for answering multiple queries (e.g., for multiple analytics tasks), thus accumulating the information leakage and degrading the privacy protection level, which can be intuitively understood as an increase of privacy spending. Therefore, it is necessary to record the privacy cost to prevent it from exceeding the privacy budget, which quantifies the privacy risk when a differentially private scheme is applied to real-world applications. Besides, we reduce the privacy cost by reusing old noisy responses to answer the current query if the query was answered before.
Traditionally, the privacy cost incurred by answering queries on a dataset is claimed by the dataset holder. Users whose information is in the dataset are unaware of this usage, and it is possible that the privacy consumption has exceeded the privacy budget. To solve this problem, the emerging blockchain technology provides a new way to manage the privacy cost. A blockchain is a chain of blocks storing cryptographic and tamper-resistant transaction records without using a centralized server [17, 18]. With the blockchain recording how the dataset is used for answering queries, users have full knowledge of how their information is analyzed and can easily check the consumption of the privacy budget. The dataset holder has the motivation to adopt our blockchain-based approach to provide the following accountability guarantee to users whose information is in the dataset: if the dataset holder uses the dataset beyond the set of queries recorded by the blockchain, measures can be taken to catch the dataset holder cheating, because transactions written into the blockchain are tamper-resistant. Yang et al. [19] propose to leverage blockchain to track the differential privacy budget, but they do not propose a mechanism to reuse noise. In contrast, we design a DP mechanism that effectively reuses previous queries' results to save privacy cost. In Section VI-A, we present more detailed comparisons.
In view of the above, we propose a blockchain-based Algorithm 1 and implement it to track and manage differential-privacy cost, using the blockchain to make the privacy spending transparent to the data owner. Consequently, the data owner can track how the dataset is used by checking blockchain transactions' information, including each query's type, the noisy response used to answer each query, the associated noise level added to the true query result, and the remaining privacy budget. In addition to providing transparency of privacy management, another advantage of our blockchain-based system is as follows. Once the specified privacy budget is exhausted, a smart contract implemented on the blockchain ensures that no new privacy cost will be incurred, and this can be verified. Furthermore, since the blockchain stores the noisy response used to answer each query, we also design an algorithm to minimize the accumulated privacy cost by reusing previous noisy responses if the same query is asked again. Via a rigorous proof, our algorithm is able to set the optimal reuse fraction of the old noisy response and add new noise (if necessary), considering that different requests of the same query may be sent with different privacy requirements. In our blockchain-based system, reusing noisy responses not only saves privacy cost but also reduces communication overhead when the noisy response is generated without contacting the server hosting the dataset.
Contributions. The major contributions of this paper are summarized as follows:
- First, a novel privacy-preserving algorithm with a rigorous mathematical proof is designed to minimize the accumulated privacy cost under a limited privacy budget by reusing previous noisy responses when the same query is received. Thus, a dataset can be used to answer more queries while preventing privacy leakage, which is essential for datasets with frequent queries, e.g., medical record datasets.
- Second, our approach significantly reduces the number of times the server is requested by taking advantage of recorded noisy results.
- Third, we implement the proposed system and algorithm according to a detailed sequence diagram and conduct experiments using a real-world dataset. Numerical results demonstrate that our proposed system and algorithm are effective in saving privacy cost while keeping accuracy.
Organization. The rest of the paper is organized as follows. Section II introduces preliminaries about differential privacy and blockchains. Section III presents the system design, including our proposed noise reuse algorithm. Section IV describes challenges in implementing our system. In Section V, we discuss experimental results to validate the effectiveness of our system. Section VI surveys related work. Section VII discusses the privacy parameters, the smart contract's privacy, and query privacy. Section VIII concludes this paper and identifies future directions.
Notation. Throughout the paper, $\mathbb{P}$ denotes the probability, and $f$ stands for the probability density function. The notation $\mathcal{N}(0, \sigma^2)$ denotes a Gaussian random variable with zero mean and variance $\sigma^2$, and it means a fresh Gaussian noise when it is used to generate a noisy query response. Notations used in the rest of the paper are summarized in Table I.
TABLE I: Summary of notations.
$\epsilon, \delta$: privacy parameters
$\mathbb{P}$: probability
$f$: probability density function
$D$: dataset
$D'$: neighbouring dataset of $D$
$\Delta_Q$: $\ell_2$-sensitivity of query $Q$
$\sigma$: standard deviation of the Gaussian noise
$Y$: randomized mechanism
$\theta$: the optimal reuse fraction
$\mathcal{L}$: privacy loss
$\sigma^2$: variance
II Preliminaries
We organize this section on preliminaries as follows. In Section II-A, we introduce the formal definition of differential privacy. In Section II-B, we explain the concepts of blockchain, Ethereum and smart contract.
II-A Differential Privacy
Differential privacy intuitively means that an adversary cannot determine with high confidence whether the randomized output comes from a dataset $D$ or its neighboring dataset $D'$, which differs from $D$ by one record. The formal definition of $(\epsilon, \delta)$-differential privacy is given in Definition 1, and the notion of neighboring datasets is discussed in Remark 2.
Definition 1 ($(\epsilon, \delta)$-Differential privacy [20]).
A randomized mechanism $Y$, which generates a randomized output given a dataset $D$ as the input, achieves $(\epsilon, \delta)$-differential privacy if
$$\mathbb{P}[Y(D) \in \mathcal{Y}] \le e^{\epsilon}\, \mathbb{P}[Y(D') \in \mathcal{Y}] + \delta \qquad (1)$$
for $D$ and $D'$ iterating through all pairs of neighboring datasets, and for any subset $\mathcal{Y}$ of the output range,
where $\mathbb{P}$ denotes the probability, and the probability space is over the coin flips of the randomized mechanism $Y$.
Remark 1.
Remark 2 (Notion of neighboring datasets).
Two datasets $D$ and $D'$ are called neighboring if they differ in only one tuple. There are several variants of this notion. In the first case, the sizes of $D$ and $D'$ differ by one, so that $D'$ is obtained by adding one record to or deleting one record from $D$. In the second case, $D$ and $D'$ have the same size (say $n$) and have different records at only one of the $n$ positions. Finally, the notion of neighboring datasets can also be defined to include both of the cases above. Our results in this paper apply to all of the above cases.
Among various mechanisms to achieve DP, the Gaussian mechanism for real-valued queries proposed in [6] has received much attention. An improved result, given by [20], is stated in Lemma 1.
Lemma 1 (Theorem A.1 by Dwork and Roth [20]).
To answer a query $Q$ with $\ell_2$-sensitivity $\Delta_Q$, adding a zero-mean Gaussian noise with standard deviation $\sigma = \frac{\Delta_Q \sqrt{2\ln(1.25/\delta)}}{\epsilon}$ (denoted by $\sigma(\epsilon, \delta, \Delta_Q)$ hereafter in this paper) to each dimension of the true query result achieves $(\epsilon, \delta)$-differential privacy. The above $\ell_2$-sensitivity of a query $Q$ is defined as the maximal $\ell_2$ distance between the true query results for any two neighboring datasets $D$ and $D'$ that differ in one record; i.e., $\Delta_Q = \max_{D, D'} \|Q(D) - Q(D')\|_2$.
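To make Lemma 1 concrete, the following is a minimal Python sketch of the Gaussian mechanism; the function names and example values are ours for illustration, not part of the paper's implementation.

```python
import math
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    """Noise standard deviation from Lemma 1: Delta * sqrt(2 ln(1.25/delta)) / epsilon."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def gaussian_mechanism(true_result, epsilon, delta, sensitivity,
                       rng=np.random.default_rng()):
    """Add zero-mean Gaussian noise to each dimension of the true query result."""
    sigma = gaussian_sigma(epsilon, delta, sensitivity)
    return true_result + rng.normal(0.0, sigma, size=np.shape(true_result))

# Example: an average-income query over 5000 users whose incomes lie in
# [0, 1010000], so its sensitivity is 1010000 / 5000 = 202 (see Section III-H).
noisy_avg = gaussian_mechanism(48213.7, epsilon=0.5, delta=1e-5, sensitivity=202)
```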
II-B Blockchain, Ethereum and Smart Contracts
Blockchain. The blockchain technology is popularly used in systems requiring high security and transparency, such as Bitcoin and Ethereum [21]. The blockchain can effectively solve the double-spending problem in Bitcoin transactions by using a peer-to-peer network. The solution hashes transaction information into a chain of hash-based Proof-of-Work (PoW); PoW, used by Bitcoin, is the consensus algorithm that confirms transactions and appends new blocks to the chain. Once a record is formed, it cannot be changed without redoing the Proof-of-Work.
Besides, the blockchain constantly grows as 'completed' blocks are appended. Blocks containing the most recent transactions are added to the chain in chronological order [22]. Each blockchain node can keep a copy of the blockchain, and the blockchain allows participants to track their transactions without centralized control.
Ethereum. Ethereum is a blockchain platform that allows users to create decentralized end-to-end applications [23]. The miners in Ethereum use the Proof-of-Work consensus algorithm to complete transaction verification and synchronization. Besides, Ethereum can run smart contracts, elaborated below.
Smart Contract. The smart contract was first proposed by Nick Szabo as a computerized transaction protocol that can execute terms of a contract automatically [24]. It intends to make contracts digital and allows credible transactions to be maintained without a third party. With the development of blockchains such as Ethereum, smart contracts are stored in the blockchain as scripts. A blockchain with a Turing-complete programming language allows everyone to customize smart contract scripts for transactions [25]. Smart contracts are triggered when transactions are created on the blockchain to finish specific tasks or services.
III System Description
Our blockchain-based system provides differentially private responses to queries while minimizing the privacy cost via noise reuse. We design a web application to implement our Algorithm 1, which generates noisy responses to queries with the minimal privacy cost by setting the optimal reuse fraction of the old noisy response and adding new noise (if necessary). For clarity, we defer Algorithm 1 and its discussion to Section III-D. The design of the system is illustrated in Fig. 1, and we discuss the details in the following. In Section V, we will discuss the implementation and experiments of our blockchain-based system and present more figures about the implementation. In particular, Fig. 3 shows a screenshot of our blockchain-based privacy management system [26], while Fig. 4 presents outputs while using the system.
III-A System Architecture
Our system includes the client, the blockchain smart contract, and the server, with details given below.
Client: The primary function of the client is to transfer users' queries to the blockchain smart contract. The client computes, from the privacy parameters $\epsilon$ and $\delta$, the standard deviation $\sigma$ required for the server to generate the Gaussian noise, and forwards the query to the blockchain. Also, the client displays the query result to the analyst after getting the noisy response to the query.
Blockchain Smart Contract: The blockchain serves as a middleware between the client and the server. It decides which query should be submitted to the server. The blockchain records the remaining privacy budget, query type, the noisy response to answer the query, the privacy parameters, and the amount of corresponding noise. If the remaining privacy budget is enough, the smart contract will execute the query match function with the recorded history. Otherwise, the smart contract will reject this query. If the current query does not match with any query in the history, the smart contract will call the server to calculate the result. If the query has been received before, the blockchain smart contract will not call the server if the noisy response can be completely generated by old noisy answers and will call the server if access to the dataset is still needed to generate the noisy response.
Server: The data provider hosts the server. The server provides APIs to answer analysts’ queries. When the API is called, the server will query the dataset to calculate the respective answer. After the true value is calculated, the server will add noise to perturb the answer. Then the server returns the noisy answer to the blockchain.
In the rest of the paper, we use Blockchain, Client, and Server to denote the blockchain, client, and server, respectively.
III-B System Functionality
Match query with query history and generate noisy response: Blockchain compares the current query type with saved query types to retrieve previous query results. If it is the first time for Blockchain to see the query, Blockchain will forward the query to the server, and Server will return the perturbed result which satisfies differential privacy to Blockchain. If the current query type matches previous answers’ query type, Blockchain will compare the computed amount of noise with all previously saved amounts of noise under the same query type. Based on the comparison result, Blockchain will completely reuse old responses or call Server.
Manage privacy budget: Blockchain updates the privacy budget as queries are answered and ensures that no new privacy cost will be incurred for answering queries once the specified privacy budget is exhausted. A minimal sketch of this decision logic follows.
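The following Python sketch models the smart contract's matching and budget-checking behavior described above (the on-chain implementation is in Solidity; all names here are illustrative, and the four cases are detailed in Section III-D):

```python
def handle_query(query_type, sigma_new, ledger, remaining_budget):
    """Decide how Blockchain answers a query.

    ledger: dict mapping a query type to the list of noise standard deviations
    already recorded on the blockchain for that type;
    remaining_budget: remaining privacy budget (e.g., remaining epsilon).
    """
    if remaining_budget <= 0:
        return "REJECT"                      # budget exhausted: incur no new cost
    history = ledger.get(query_type, [])
    if not history:
        return "CALL_SERVER"                 # Case 1: first time; fresh noisy answer
    if sigma_new in history:
        return "REUSE_EXACT"                 # Case 2A: return a stored response as-is
    if sigma_new < min(history):
        return "CALL_SERVER_PARTIAL_REUSE"   # Case 2B: partial reuse + dataset access
    return "REUSE_PLUS_FRESH_NOISE"          # Case 2C: top up a stored response

# Example usage with a hypothetical ledger:
ledger = {"avg_income": [300.0, 350.0]}
print(handle_query("avg_income", 250.0, ledger, remaining_budget=1.0))  # Case 2B
```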
[TABLE II: An example sequence of queries, listing each query's type, the noise amount used to answer it, and the corresponding case of Algorithm 1.]
III-C Adversary Model
The adversary model for our system is similar to that of [19]. We assume that there are two kinds of adversaries:
First, adversaries can obtain perturbed query results and may try to infer users' real information from them.
Second, adversaries may attempt to modify the privacy budget. For example, they may decrease the recorded privacy spending so that the actual consumption can exceed the privacy budget, leaking privacy. However, in our case, the privacy budget is recorded on the blockchain, and adversaries cannot tamper with it once it is stored there.
III-D Our Algorithm 1 based on Reusing Noise
We present our solution for reusing noise in Algorithm 1 and explain its rules in Section III-E. We consider real-valued queries so that the Gaussian mechanism can be used. Extensions to non-real-valued queries are left as future work, where we can apply the exponential mechanism of [12].
To clarify the notation, we note that $Q_i$ denotes the $i$-th query (ordered chronologically) and is answered by a randomized algorithm $Y_i$. A type-$t$ query means that the query's type is $t$. Queries asked at different times can have the same query type; this is the reason that we can reuse noise in Algorithm 1.
Suppose a dataset $D$ has been used to answer queries $Q_1, \ldots, Q_{i-1}$, where the $j$-th query $Q_j$ for $j \in \{1, \ldots, i-1\}$ is answered under $(\epsilon_j, \delta_j)$-differential privacy (by reusing noise, or generating fresh noise, or combining both). For each $j$, we define $\sigma_j := \frac{\Delta_{Q_j}\sqrt{2\ln(1.25/\delta_j)}}{\epsilon_j}$, where $\Delta_{Q_j}$ denotes the $\ell_2$-sensitivity of $Q_j$; we defer the discussion of sensitivities to Section III-H. As presented in Algorithm 1, we have several cases discussed below. For better understanding of these cases, we later discuss an example given in Table II.
- Case 1): If query $Q_i$ has not been received before (i.e., no previous query has $Q_i$'s type), Blockchain forwards $Q_i$ to Server, which computes the true result $Q_i(D)$ and returns the noisy response $\widetilde{r}_i = Q_i(D) + \mathcal{N}(0, \sigma_i^2)$.
- Case 2): If $Q_i$ has been received before, suppose $Q_i$ is a type-$t$ query, and among the previous queries $Q_1, \ldots, Q_{i-1}$, let $S$ consist of the corresponding noise amounts for previous instances of type-$t$ queries; i.e., $\sigma_j \in S$ if $\sigma_j$ has been recorded in Blockchain and $Q_j$ is a type-$t$ query. Blockchain compares $\sigma_i$ and the values in $S$, resulting in the following subcases.
  - Case 2A): If there exists $\sigma_j \in S$ such that $\sigma_j = \sigma_i$, then $\widetilde{r}_i$ is set as $\widetilde{r}_j$; i.e., the old noisy response is fully reused without accessing the dataset.
  - Case 2B): This case considers that $\sigma_i$ is less than $\sigma_{\min}$, which denotes the minimum in $S$. Let $\widetilde{r}_{\min}$ denote the noisy response (kept in Blockchain) corresponding to $\sigma_{\min}$; specifically, if $\sigma_{\min} = \sigma_j$ for some $j$, then $\widetilde{r}_{\min} = \widetilde{r}_j$. Under $\sigma_i < \sigma_{\min}$, to minimize the privacy cost, we reuse a fraction $\theta = \sigma_i^2/\sigma_{\min}^2$ of the noise in $\widetilde{r}_{\min}$ to generate $\widetilde{r}_i$ (if the privacy budget allows). This $\theta$ will be obtained by Theorem 1's Result (ii) to be presented in Section III-E. Specifically, under $\sigma_i < \sigma_{\min}$, as given by Line 22 of Algorithm 1, $\widetilde{r}_i$ is set by $\widetilde{r}_i = Q_i(D) + \theta\,(\widetilde{r}_{\min} - Q_i(D)) + \mathcal{N}(0,\ \sigma_i^2 - \theta^2\sigma_{\min}^2)$. Note that if $Q_i$ is multidimensional, independent Gaussian noise will be added to each dimension according to the above formula. This also applies to other places of this paper.
  - Case 2C): If $\sigma_i$ is greater than $\sigma_{\min}$ and no value in $S$ equals $\sigma_i$, then, as given by Line 30 of Algorithm 1, $\widetilde{r}_i$ is set by $\widetilde{r}_i = \widetilde{r}_{\min} + \mathcal{N}(0,\ \sigma_i^2 - \sigma_{\min}^2)$; i.e., the old noisy response is fully reused and only topped up with fresh noise, without accessing the dataset.
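To illustrate, here is a minimal Python sketch of how the noisy responses in Cases 2B) and 2C) could be computed (the variable and function names are ours for illustration, not Algorithm 1's actual line-by-line code):

```python
import math
import numpy as np

rng = np.random.default_rng()

def reuse_partial(true_result, old_response, sigma_old, sigma_new):
    """Case 2B (sigma_new < sigma_old): reuse theta = sigma_new^2 / sigma_old^2
    of the old noise and add fresh noise so the total noise std is sigma_new."""
    theta = sigma_new**2 / sigma_old**2
    fresh_std = math.sqrt(sigma_new**2 - theta**2 * sigma_old**2)
    return (true_result + theta * (old_response - true_result)
            + rng.normal(0.0, fresh_std))

def reuse_topup(old_response, sigma_old, sigma_new):
    """Case 2C (sigma_new > sigma_old): full reuse (theta = 1) plus fresh top-up
    noise; no access to the dataset, hence no additional privacy cost."""
    return old_response + rng.normal(0.0, math.sqrt(sigma_new**2 - sigma_old**2))
```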
III-E Explaining the Noise Reuse Rules of Algorithm 1
Our noise-reuse rules in Algorithm 1 are designed to minimize the accumulated privacy cost. To explain this, inspired by [13], we define the privacy loss to quantify the privacy cost. We analyze the privacy loss to characterize how privacy degrades in a fine-grained manner, instead of using the composition theorem of Kairouz et al. [27]. Although [27] gives the state-of-the-art results for the composition of differentially private algorithms, those results do not assume anything about the underlying mechanisms used to achieve differential privacy. In our analysis, by analyzing the privacy loss of Gaussian mechanisms specifically, we obtain a smaller privacy cost.
For a randomized algorithm $Y$, neighboring datasets $D$ and $D'$, and an output $y$, the privacy loss $\mathcal{L}_{Y,D,D'}(y)$ represents the multiplicative difference between the probabilities that the same output $y$ is observed when the randomized algorithm is applied to $D$ and to $D'$. Specifically, we define
$$\mathcal{L}_{Y,D,D'}(y) := \ln \frac{f_{Y(D)}(y)}{f_{Y(D')}(y)}, \qquad (2)$$
where $f$ denotes the probability density function. For simplicity, we use the probability density function in Eq. (2) above by assuming that the randomized algorithm $Y$ has a continuous output. If $Y$ has a discrete output, we replace $f$ by the probability mass function $\mathbb{P}$.
When $y$ follows the probability distribution of the random variable $Y(D)$, $\mathcal{L}_{Y,D,D'}(y)$ follows the probability distribution of a random variable, which we write as $\mathcal{L}_{Y,D,D'}$ for simplicity.
We denote the composition of randomized mechanisms $Y_1, \ldots, Y_k$ for a positive integer $k$ by $(Y_1, \ldots, Y_k)$. For the composition, the privacy loss with respect to neighboring datasets $D$ and $D'$ when the outputs of the randomized mechanisms $Y_1, \ldots, Y_k$ are $y_1, \ldots, y_k$ is defined by
$$\mathcal{L}_{(Y_1,\ldots,Y_k),D,D'}(y_1,\ldots,y_k) := \ln \frac{f_{(Y_1(D),\ldots,Y_k(D))}(y_1,\ldots,y_k)}{f_{(Y_1(D'),\ldots,Y_k(D'))}(y_1,\ldots,y_k)}.$$
When $y_\ell$ follows the probability distribution of the random variable $Y_\ell(D)$ for each $\ell \in \{1, \ldots, k\}$, $\mathcal{L}_{(Y_1,\ldots,Y_k),D,D'}(y_1,\ldots,y_k)$ clearly follows the probability distribution of a random variable, which we write as $\mathcal{L}_{(Y_1,\ldots,Y_k),D,D'}$ for simplicity.
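For intuition, the privacy loss of a single run of the Gaussian mechanism can be computed directly from Eq. (2); the following short calculation (a sketch for a one-dimensional query) underlies Corollary 1 below:

```latex
% Y(D) = Q(D) + \mathcal{N}(0, \sigma^2); write d = Q(D) - Q(D'), with |d| \le \Delta_Q.
\mathcal{L}_{Y,D,D'}(y)
  = \ln \frac{\exp\big(-(y - Q(D))^2 / (2\sigma^2)\big)}
             {\exp\big(-(y - Q(D'))^2 / (2\sigma^2)\big)}
  = \frac{(y - Q(D'))^2 - (y - Q(D))^2}{2\sigma^2}.
% Substituting y = Q(D) + z with z \sim \mathcal{N}(0, \sigma^2) gives
\mathcal{L}_{Y,D,D'} = \frac{2 z d + d^2}{2\sigma^2}
  \sim \mathcal{N}\Big(\frac{d^2}{2\sigma^2},\ \frac{d^2}{\sigma^2}\Big)
  = \mathcal{N}(\alpha, 2\alpha), \qquad \alpha = \frac{d^2}{2\sigma^2}.
```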
With the privacy loss defined above, we now analyze how to reuse noise when a series of queries is answered under differential privacy. To this end, we present Theorem 1, which gives the optimal ratio of reusing noise to minimize the privacy cost.
Theorem 1 (Optimal ratio of reusing noise to minimize privacy cost).
Suppose that before answering query $Q_i$ and after answering $Q_1, \ldots, Q_{i-1}$, the privacy loss is given by $\mathcal{N}(\alpha, 2\alpha)$ for some $\alpha \ge 0$. For the $i$-th query $Q_i$, suppose that $Q_i$ is the same as $Q_j$ for some $j < i$, and we reuse a fraction $\theta$ of the noise in $\widetilde{r}_j$ to generate $\widetilde{r}_i$ for $\sigma_i$ satisfying $\sigma_i \ge \theta\sigma_j$, where $\theta \in [0, 1]$ is a constant to be decided. If $\widetilde{r}_j - Q_j(D)$ follows a Gaussian probability distribution with mean $0$ and standard deviation $\sigma_j$, we generate the noisy response $\widetilde{r}_i$ to answer query $Q_i$ as follows:
$$\widetilde{r}_i = Q_i(D) + \theta\,\big(\widetilde{r}_j - Q_j(D)\big) + \mathcal{N}\big(0,\ \sigma_i^2 - \theta^2\sigma_j^2\big), \qquad (3)$$
so that $\widetilde{r}_i - Q_i(D)$ follows a Gaussian probability distribution with mean $0$ and standard deviation $\sigma_i$.
Note that $Q_i(D)$ and $Q_j(D)$ are the same since $Q_i$ and $Q_j$ are the same. Then we have the following results.
- (i) After answering the queries $Q_1, \ldots, Q_i$, the privacy loss will be $\mathcal{N}(\alpha', 2\alpha')$ for $\alpha' = \alpha + \frac{(1-\theta)^2\,\Delta_{Q_i}^2}{2(\sigma_i^2 - \theta^2\sigma_j^2)}$.
- (ii) The privacy loss in Result (i) is minimized by setting
$$\theta = \begin{cases} 1, & \text{if } \sigma_i \ge \sigma_j, \\ \sigma_i^2/\sigma_j^2, & \text{if } \sigma_i < \sigma_j. \end{cases} \qquad (4)$$
Proof.
The proof is in Appendix -A. ∎
Eq. (4) of Theorem 1 gives the noise reuse ratio $\theta = \sigma_i^2/\sigma_{\min}^2$ of Case 2B) in Algorithm 1 (see Line 22 of Algorithm 1), and the noise reuse ratio $\theta = 1$ of Cases 2A) and 2C) in Algorithm 1 (see Lines 15 and 30 of Algorithm 1).
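The optimization behind Eq. (4) can be sketched as follows, using the incremental loss term from Result (i) (a derivation sketch, not the full proof in Appendix -A):

```latex
% Minimize the additional privacy-loss mean over theta in [0, \sigma_i / \sigma_j]:
g(\theta) = \frac{(1-\theta)^2 \Delta_{Q_i}^2}{2(\sigma_i^2 - \theta^2 \sigma_j^2)},
\qquad
g'(\theta) = \frac{\Delta_{Q_i}^2 (1-\theta)\,(\theta \sigma_j^2 - \sigma_i^2)}
                  {(\sigma_i^2 - \theta^2 \sigma_j^2)^2}.
% For \sigma_i < \sigma_j, the stationary point \theta^\ast = \sigma_i^2/\sigma_j^2
% is feasible and minimizing; substituting it back gives the telescoping increment
g(\theta^\ast) = \frac{\Delta_{Q_i}^2}{2\sigma_i^2} - \frac{\Delta_{Q_i}^2}{2\sigma_j^2}.
% For \sigma_i \ge \sigma_j, g is decreasing on [0, 1], so \theta^\ast = 1 and g(1) = 0.
```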
By considering $\alpha = 0$ and $\theta = 0$ in Result (i) of Theorem 1, we obtain Corollary 1, which presents the classical result on the privacy loss of a single run of the Gaussian mechanism.
Corollary 1.
By considering $\alpha = 0$ and $\theta = 0$ in Result (i) of Theorem 1, we have that for a randomized algorithm $Y$ which adds Gaussian noise $\mathcal{N}(0, \sigma^2)$ to a query $Q$, the privacy loss with respect to neighboring datasets $D$ and $D'$ is given by $\mathcal{N}(\alpha, 2\alpha)$ for $\alpha = \frac{\Delta_Q^2}{2\sigma^2}$.
Corollary 1 has been shown in many prior studies [20, 9, 10] on the Gaussian mechanism for differential privacy.
By considering $\theta = 0$ for every query in Result (i) of Theorem 1, we obtain Corollary 2, which presents the privacy loss of the naive algorithm where the noisy response to each query is generated independently using fresh noise.
Corollary 2 (Privacy loss of the naive algorithm where each query is answered independently).
Suppose a dataset $D$ has been used to answer $k$ queries $Q_1, \ldots, Q_k$ under differential privacy. Specifically, for $i \in \{1, \ldots, k\}$, to answer the $i$-th query $Q_i$ under $(\epsilon_i, \delta_i)$-differential privacy, a noisy response $\widetilde{r}_i$ is generated by adding independent Gaussian noise $\mathcal{N}(0, \sigma_i^2)$ with $\sigma_i = \frac{\Delta_{Q_i}\sqrt{2\ln(1.25/\delta_i)}}{\epsilon_i}$ to the true query result $Q_i(D)$, where $\Delta_{Q_i}$ is the $\ell_2$-sensitivity of $Q_i$. Then after answering the $k$ queries independently as above, the privacy loss with respect to neighboring datasets $D$ and $D'$ is given by $\mathcal{N}(\alpha, 2\alpha)$ for $\alpha = \sum_{i=1}^{k} \frac{\Delta_{Q_i}^2}{2\sigma_i^2}$.
III-F Explaining Privacy Cost Update in Algorithm 1
Among the above cases, Cases 2A) and 2C) do not incur additional privacy cost since they just use previous noisy results and generate fresh Gaussian noise, without access to the dataset $D$. In contrast, Cases 1) and 2B) incur additional privacy cost since they need to access the dataset $D$ to compute the true query result $Q_i(D)$. Hence, in Algorithm 1, the privacy cost is updated in Cases 1) and 2B), but not in Cases 2A) and 2C). In this section, we explain the reason that the privacy cost is updated in Algorithm 1 according to Lines 3 and 5 for Case 1), and Lines 18 and 19 for Case 2B).
When our Algorithm 1 is used, we let the randomized mechanism $Y_i$ be our noisy response function for query $Q_i$; if the generation of $\widetilde{r}_i$ on dataset $D$ uses $\widetilde{r}_j$ for some $j < i$, then the auxiliary information in the input to $Y_i$ contains $\widetilde{r}_j$. For the consecutive use of our Algorithm 1, it will become clear that the privacy loss, defined by Eq. (2), follows a Gaussian probability distribution with mean $\alpha$ and variance $2\alpha$ for some $\alpha \ge 0$, denoted by $\mathcal{N}(\alpha, 2\alpha)$. For privacy loss of this form, the corresponding differential-privacy level is given by the following lemma.
Lemma 2.
If the privacy loss of a randomized mechanism $Y$ with respect to neighboring datasets $D$ and $D'$ is given by $\mathcal{N}(\alpha, 2\alpha)$ for some $\alpha \ge 0$, then $Y$ achieves $(\epsilon, \delta)$-differential privacy for $\epsilon$ and $\delta$ satisfying $\alpha \le \frac{\epsilon^2}{4\ln(1.25/\delta)}$.
Proof.
The proof details are in Appendix -B. ∎
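Converting between the accumulated loss parameter $\alpha$ and an $(\epsilon, \delta)$ guarantee is then a one-line computation. The following Python sketch assumes the sufficient condition of Lemma 2 in the form $\alpha \le \epsilon^2 / (4\ln(1.25/\delta))$; the function names are illustrative:

```python
import math

def epsilon_from_alpha(alpha, delta):
    """Smallest epsilon certified by Lemma 2 for a privacy loss N(alpha, 2*alpha),
    under the condition alpha <= epsilon^2 / (4 ln(1.25/delta))."""
    return math.sqrt(4 * alpha * math.log(1.25 / delta))

def alpha_from_epsilon(epsilon, delta):
    """Loss parameter alpha of one Gaussian-mechanism run at (epsilon, delta),
    i.e., alpha = Delta^2 / (2 sigma^2) with sigma from Lemma 1."""
    return epsilon**2 / (4 * math.log(1.25 / delta))

# Example: two fresh answers at (0.5, 1e-5) compose to alpha = 2 * alpha_1,
# which converts back to a single epsilon of sqrt(2) * 0.5 ~ 0.707.
a = 2 * alpha_from_epsilon(0.5, 1e-5)
print(epsilon_from_alpha(a, 1e-5))
```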
Based on the privacy loss defined above, we have the following theorem which explains the rules to update the privacy cost in our Algorithm 1.
Theorem 2.
We consider the consecutive use of Algorithm 1 here. Suppose that after answering $Q_1, \ldots, Q_{i-1}$ and before answering query $Q_i$, the privacy loss with respect to neighboring datasets $D$ and $D'$ is given by $\mathcal{N}(\alpha, 2\alpha)$ for some $\alpha \ge 0$, and the corresponding privacy level can be given by $(\epsilon, \delta)$-differential privacy. Then in Algorithm 1, after answering all queries $Q_1, \ldots, Q_i$, we have:
- the privacy loss with respect to neighboring datasets $D$ and $D'$
  - ① will still be $\mathcal{N}(\alpha, 2\alpha)$ in Cases 2A) and 2C),
  - ② will be $\mathcal{N}(\alpha', 2\alpha')$ in Case 1) for $\alpha' = \alpha + \frac{\Delta_{Q_i}^2}{2\sigma_i^2}$,
  - ③ will be $\mathcal{N}(\alpha', 2\alpha')$ in Case 2B) for $\alpha' = \alpha + \frac{\Delta_{Q_i}^2}{2}\left(\frac{1}{\sigma_i^2} - \frac{1}{\sigma_{\min}^2}\right)$;
- the corresponding privacy level can be given by $(\epsilon', \delta)$-differential privacy with the following $\epsilon'$:
  - ④ $\epsilon' = \epsilon$ in Cases 2A) and 2C),
  - ⑤ $\epsilon'$ in Case 1) satisfying $\frac{(\epsilon')^2}{4\ln(1.25/\delta)} = \alpha + \frac{\Delta_{Q_i}^2}{2\sigma_i^2}$,
  - ⑥ $\epsilon'$ in Case 2B) satisfying $\frac{(\epsilon')^2}{4\ln(1.25/\delta)} = \alpha + \frac{\Delta_{Q_i}^2}{2}\left(\frac{1}{\sigma_i^2} - \frac{1}{\sigma_{\min}^2}\right)$.
Theorem 2 explains the rules to update the privacy cost in Algorithm 1. Specifically, Result ⑤ gives Lines 3 and 5 for Case 1), and Result ⑥ gives Lines 18 and 19 for Case 2B).
Proof.
The proof is in Appendix -C. ∎
III-G Analyzing the Total Privacy Cost
Based on Theorem 2, we now analyze the total privacy cost when our system calls Algorithm 1 consecutively.
At the beginning, when no query has been answered, the privacy loss is $\mathcal{N}(\alpha, 2\alpha)$ with $\alpha = 0$ (note that $\mathcal{N}(0, 0)$ is the constant $0$). Then by induction via Corollary 1 and Theorem 2, for the consecutive use of Algorithm 1, the privacy loss is always of the form $\mathcal{N}(\alpha, 2\alpha)$ for some $\alpha \ge 0$. In our Algorithm 1, the privacy loss changes only when the query being answered belongs to Cases 1) and 2B). More formally, we have the following theorem.
Theorem 3.
Among queries $Q_1, \ldots, Q_k$, let $\mathcal{C}_1$, $\mathcal{C}_{2A}$, $\mathcal{C}_{2B}$, and $\mathcal{C}_{2C}$ be the sets of indices $i$ such that $Q_i$ is in Cases 1), 2A), 2B), and 2C), respectively. For queries in Case 2B), let $\mathcal{T}$ be the set of their query types. In Case 2B), for query type $t \in \mathcal{T}$, suppose the number of type-$t$ queries is $n_t$, and let these type-$t$ queries be $Q_{i_{t,1}}, \ldots, Q_{i_{t,n_t}}$ for indices $i_{t,1} < \cdots < i_{t,n_t}$ (ordered chronologically) all belonging to $\mathcal{C}_{2B}$. From Case 2B) of Algorithm 1, we have $\sigma_{i_{t,1}} > \sigma_{i_{t,2}} > \cdots > \sigma_{i_{t,n_t}}$, and for $m \in \{2, \ldots, n_t\}$, $Q_{i_{t,m}}$ is answered by reusing a fraction $\theta = \sigma_{i_{t,m}}^2/\sigma_{i_{t,m-1}}^2$ of the old noise in $\widetilde{r}_{i_{t,m-1}}$; more specifically, $\widetilde{r}_{i_{t,m}}$ is generated from Line 22 of Algorithm 1, as discussed in Section III-E for Case 2B). We also consider that $Q_{i_{t,1}}$ is answered by reusing a fraction of the old noise in the response of the earlier type-$t$ query whose noise amount, denoted by $\sigma_{t,0}$, was the minimum among type-$t$ queries at that time. Let the $\ell_2$-sensitivity of a type-$t$ query be $\Delta_t$.
In the example provided in Table II, each query falls into one of the four cases; in particular, for every query type appearing in Case 2B), the noise amounts of its instances decrease chronologically, and each instance reuses the noisy response of the instance with the previously smallest noise amount.
Then after Algorithm 1 is used to answer all queries $Q_1, \ldots, Q_k$, with query $Q_i$ being answered under $(\epsilon_i, \delta_i)$-differential privacy, we have:
- The total privacy loss with respect to neighboring datasets $D$ and $D'$ is given by $\mathcal{N}(\alpha, 2\alpha)$, where
$$\alpha = \sum_{i \in \mathcal{C}_1} \frac{\|Q_i(D) - Q_i(D')\|_2^2}{2\sigma_i^2} + \sum_{t \in \mathcal{T}} \sum_{m=1}^{n_t} \frac{\|Q_{i_{t,m}}(D) - Q_{i_{t,m}}(D')\|_2^2}{2}\left(\frac{1}{\sigma_{i_{t,m}}^2} - \frac{1}{\sigma_{i_{t,m-1}}^2}\right), \qquad (6)$$
with $\sigma_{i_{t,0}} := \sigma_{t,0}$, where the first summation is the contribution from queries in Case 1), and the second summation is the contribution from queries in Case 2B). When $D$ and $D'$ iterate through the space of neighboring datasets, the maximum of $\|Q_i(D) - Q_i(D')\|_2$ is $Q_i$'s $\ell_2$-sensitivity $\Delta_{Q_i}$, and the maximum of both $\|Q_{i_{t,m}}(D) - Q_{i_{t,m}}(D')\|_2$ and $\|Q_{i_{t,m-1}}(D) - Q_{i_{t,m-1}}(D')\|_2$ is $\Delta_t$ since $Q_{i_{t,m}}$ and $Q_{i_{t,m-1}}$ are both type-$t$ queries; the summation over $m$ then telescopes, and we obtain
$$\alpha \le \sum_{i \in \mathcal{C}_1} \frac{\Delta_{Q_i}^2}{2\sigma_i^2} + \sum_{t \in \mathcal{T}} \frac{\Delta_t^2}{2}\left(\frac{1}{\sigma_{i_{t,n_t}}^2} - \frac{1}{\sigma_{t,0}^2}\right). \qquad (7)$$
- The total privacy cost can be given by $(\epsilon, \delta)$-differential privacy for $\epsilon$ and $\delta$ satisfying
$$\alpha \le \frac{\epsilon^2}{4\ln(1.25/\delta)}. \qquad (8)$$
Proof.
The proof is in Appendix -D. ∎
Remark 3.
Theorem 3 can be used to see that our Algorithm 1 incurs less privacy cost than the naive algorithm where queries are answered independently. As given in Corollary 2, the privacy loss of the naive algorithm with respect to neighboring datasets $D$ and $D'$ is $\mathcal{N}(\alpha_{\mathrm{naive}}, 2\alpha_{\mathrm{naive}})$ for $\alpha_{\mathrm{naive}} = \sum_{i=1}^{k} \frac{\Delta_{Q_i}^2}{2\sigma_i^2}$. Clearly, $\alpha_{\mathrm{naive}} \ge \alpha$ for $\alpha$ given by Eq. (6) above. From Lemma 2, the privacy cost of the naive algorithm can be given by $(\epsilon_{\mathrm{naive}}, \delta)$-differential privacy for $\epsilon_{\mathrm{naive}}$ satisfying $\alpha_{\mathrm{naive}} \le \frac{\epsilon_{\mathrm{naive}}^2}{4\ln(1.25/\delta)}$, which with Eq. (8) in Theorem 3 and the expression of $\sigma$ in Lemma 1 implies $\epsilon_{\mathrm{naive}} \ge \epsilon$, where equality holds only when all queries are different so that no noise reuse occurs in our Algorithm 1.
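The comparison in Remark 3 can also be checked numerically; the following Python sketch computes both totals under our notation, where each query is a (type, sigma) pair and the query list and names are illustrative:

```python
def naive_alpha(queries, sensitivity):
    """Corollary 2: every query is answered with fresh noise."""
    return sum(sensitivity[t]**2 / (2 * s**2) for t, s in queries)

def reuse_alpha(queries, sensitivity):
    """Theorem 3: per type, only improvements over the current minimum sigma
    cost anything; repeats and larger sigmas are free."""
    alpha, best = 0.0, {}
    for t, s in queries:
        if t not in best:                    # Case 1: first occurrence
            alpha += sensitivity[t]**2 / (2 * s**2)
            best[t] = s
        elif s < best[t]:                    # Case 2B: pay only the difference
            alpha += sensitivity[t]**2 / 2 * (1 / s**2 - 1 / best[t]**2)
            best[t] = s
        # Cases 2A/2C (s >= best[t]): no additional privacy cost
    return alpha

queries = [("avg_income", 300.0), ("avg_income", 250.0), ("avg_income", 400.0)]
sens = {"avg_income": 202.0}
print(naive_alpha(queries, sens), reuse_alpha(queries, sens))  # ~0.681 vs ~0.326
```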
III-H Computing the $\ell_2$-Sensitivity of a Query
The $\ell_2$-sensitivity $\Delta_Q$ of a query $Q$ is defined as the maximal $\ell_2$ distance between the (true) query results for any neighboring datasets $D$ and $D'$ that differ in one record: $\Delta_Q = \max_{D, D'} \|Q(D) - Q(D')\|_2$. For a one-dimensional real-valued query $Q$, $\Delta_Q$ is simply the maximal absolute difference between $Q(D)$ and $Q(D')$ for any neighboring datasets $D$ and $D'$. In Section V for performance evaluation, we define neighboring datasets by considering the modification of an entry. Then if the dataset has $n$ users' information, and the domain of each user's income is within the interval $[a, b]$, then $\Delta_Q$ for the query $Q$ being the average income of all users is $\frac{b-a}{n}$, since this is the maximal variation in the output when a user's record changes. Similarly, $\Delta_Q$ for the query $Q$ being the percentage of female users is $\frac{1}{n}$.
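For example, under this entry-modification notion of neighboring datasets, the sensitivities can be computed as in the following Python sketch (the domain bounds are assumptions for illustration):

```python
def avg_query_sensitivity(domain_min, domain_max, n):
    """Sensitivity of an average query when one of n records may change
    within [domain_min, domain_max]."""
    return (domain_max - domain_min) / n

def frequency_query_sensitivity(n):
    """Sensitivity of a frequency (proportion) query: one changed record moves
    the count by at most 1, hence the proportion by 1/n."""
    return 1.0 / n

n = 5000
print(avg_query_sensitivity(0, 1_010_000, n))   # 202.0  ("average personal income")
print(frequency_query_sensitivity(n))           # 0.0002 ("frequency of US citizens")
```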
IV Implementation Challenges of Our Blockchain-Based System
We now discuss challenges and countermeasures during the design and implementation of our blockchain-based system.
Smart Contract fetches external data. Ethereum blockchain applications, such as Bitcoin scripts and smart contracts, are unable to directly access and fetch the external data they need. However, in our application, Blockchain needs to fetch data from Server and then return them to Client, which requires the smart contract to send HTTP POST requests. Hence, we use Provable, a service integrated with a number of blockchain protocols that can be accessed by non-blockchain applications as well. It guarantees that data fetched from the original data source is genuine and untampered.
By using Provable, smart contracts can directly access data from websites or APIs. In our case, Blockchain can send HTTP requests with parameters to Server, and then process and store data after Server responds successfully.
Mathematical operations with Solidity. Blockchain is written in the Solidity language, which is designed to target the Ethereum Virtual Machine. However, the current Solidity language does not have built-in functions for complex mathematical operations, such as taking the square root or logarithm. We write a function to implement the square root operation. To avoid computing the logarithm of Lemma 1 in Blockchain, we generate the Gaussian noise parameter in Client and pass the value to Blockchain as one of the parameters of the function QueryMatch. Besides, the current Solidity version cannot operate on float or double type data. To keep precision, we scale up the noise amount during calculation, and then scale down the value before returning the noisy data to analysts.
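For illustration, the following Python snippet models these two workarounds: a Babylonian-method integer square root, as commonly written in Solidity, and fixed-point scaling. The names and the scaling factor are ours, not the exact code of our contract:

```python
SCALE = 10**6   # fixed-point scaling factor (illustrative)

def isqrt(y):
    """Babylonian-method integer square root, as typically implemented in Solidity."""
    if y == 0:
        return 0
    z, x = y, y // 2 + 1
    while x < z:
        z, x = x, (y // x + x) // 2
    return z

def scaled_sqrt(value_scaled):
    """Square root of a fixed-point number: isqrt(v * SCALE^2) = sqrt(v) * SCALE,
    so the result stays scaled by SCALE."""
    return isqrt(value_scaled * SCALE)

# Example: sqrt(2) in fixed point is ~1414213, i.e., 1.414213 * SCALE.
print(scaled_sqrt(2 * SCALE))
```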
V Implementation and Experiments
In this section, we perform experiments to validate that the proposed system and algorithm are effective in saving privacy cost according to the system flow shown in Fig. 2. More specifically, a user sends a query through the UI, and then Client receives the query and forwards it to Blockchain smart contract. After the smart contract checks with stored data, it will decide whether to return the noisy response to Client directly or forward the request to Server. If Server receives the request, it will generate and return a noisy response to the smart contract.
V-A Experiment Setup
We prototype a web application based on the system description in Section III. We use JavaScript to write Client, whereas the Solidity language is used for the Blockchain smart contract. Besides, Web3 is used as the JavaScript API to exchange information between Client and the Blockchain smart contract, and Node.js and the Express web framework are leveraged to set up Server. In addition, MongoDB is used as the database to host the real-world dataset. Our designed smart contracts are deployed on the Ropsten [28] testnet with the MetaMask extension of the Chrome browser. The Ropsten testnet is a testing blockchain environment maintained by Ethereum, and it implements the same Proof-of-Work protocol as the main Ethereum network. Fig. 3 shows a screenshot of our blockchain-based privacy management system. Fig. 4 presents outputs while sending queries using the system.
We evaluate the performance of the proposed differential privacy mechanism on a real-world dataset containing American community survey samples extracted from the Integrated Public Use Microdata Series at https://www.ipums.org. There are 5000 records in the dataset. Each record includes the numerical attributes "Total personal income", "Total family income", and "Age", and the categorical attributes "Race" and "Citizenship status". We set the privacy budget to values of $\epsilon$ and $\delta$ that are commonly used to protect the privacy of a dataset [29, 30]. We consider five types of queries: "average personal income", "average total family income", "frequency of US citizens", "frequency of white race", and "frequency of age more than 60". For the privacy parameters of each query $Q_i$, we sample $\epsilon_i$ and $\delta_i$ uniformly from fixed ranges. The sensitivities of these queries are 202, 404, 0.0002, 0.0002, and 0.0002, respectively; we compute the sensitivity of a query based on Section III-H. For the query "average total personal income", we assume the domain of total personal income is within $[0, 1{,}010{,}000]$ for all possible datasets, which covers the incomes in the dataset mentioned above. The sensitivity is then $1{,}010{,}000/5000 = 202$, and the mechanism protects the privacy of all data within this domain; thus, it can protect the privacy of the dataset in our experiment. Suppose the received query is "average total family income". In that case, we assume the domain of total family income is within $[0, 2{,}020{,}000]$ for all possible datasets, which covers the family incomes in the dataset we use. The sensitivity is then $2{,}020{,}000/5000 = 404$; hence, our generated noise with a sensitivity of 404 can protect the privacy of all data within this domain, and therefore the privacy of the dataset we use as well. The sensitivity for the queries "frequency of US citizens", "frequency of white race", and "frequency of age more than 60" is $1/5000 = 0.0002$.
V-B Experimental Results
The benchmark of our experiment is a naive scheme whose smart contract does not contain Algorithm 1. That is, every query is forwarded by the smart contract to Server to get a noisy response. Hence, no noise can be reused and no privacy cost is saved in the naive scheme.
First, we use an experiment to validate that our proposed Algorithm 1 is effective in saving privacy cost. Specifically, we design a performance comparison experiment by tracking the privacy cost under our Algorithm 1 and under the naive scheme, deploying two smart contracts implementing the two schemes respectively on the Ropsten testnet. Then, we send 150 requests, randomly selected among the five query types, from Client of the web application, and record the privacy cost of each query. As shown in Fig. 5, compared with the naive scheme, the proposed algorithm saves significant privacy cost. When the number of queries is 150, the differential-privacy cost of Algorithm 1 is about 52% less than that of the naive algorithm. We also observe that the privacy cost in the proposed scheme increases slowly as the number of queries increases, and even tends to converge to a specific value. The reason is that, in Algorithm 1, for each query type, we can always partially or fully reuse previous noisy answers once the query type has been asked before. Therefore, in our scheme, many queries are answered without incurring additional privacy cost when their noisy responses fully reuse previous noisy answers.
Second, to show that the proposed Algorithm 1 retains accuracy, we design another experiment to compare the sum of relative errors. We use the same smart contracts as those in the last experiment and accumulate the relative error incurred by each query. Fig. 6 shows that the sum of relative errors of Algorithm 1 is comparable with that of the naive scheme. Since relative errors are similar between the two schemes, our results demonstrate that the proposed Algorithm 1 keeps the accuracy.
In summary, Fig. 5 and Fig. 6 together demonstrate that our Algorithm 1 can save privacy cost significantly without sacrificing the accuracy of query responses.
Third, we evaluate the latency of our system, which is affected by the blockchain, MetaMask, the network condition, and the server. To reduce network transmission time, we set up a local testnet at http://localhost:3000/ using the Ganache-cli client with blockTime set to 15s [31]. Fig. 7 shows that the latency increases as the number of queries increases. The capacity of Ethereum's throughput is 20–60 Transactions Per Second (TPS) [32, 33, 34, 35, 36, 37]. When the number of concurrent queries approaches this capacity, the latency increases significantly. In addition to Ethereum's throughput, both MetaMask and the capacity of the deployed device affect the latency. We test the case where the query results have already been saved into the system, so the smart contract does not need to send requests to the server and query results can be obtained from previous ones. The worst case is that the smart contract has to send a request for every query, which takes a longer time to obtain the result because of the third-party service Provable.
Fourth, we evaluate the relationship between the query utility and the privacy budget. As defined in [38], a mechanism is $(\alpha_u, \beta)$-useful if $\mathbb{P}[|\widetilde{r} - r| \le \alpha_u] \ge 1 - \beta$, where $r$ is the true query result and $\widetilde{r}$ is the perturbed one. Thus, a small $\alpha_u$ means that the difference between the perturbed result and the actual result is small, which also reflects that the mechanism has a high utility. The noise added to a query is $\mathcal{N}(0, \sigma^2)$, where $\sigma = \frac{\Delta_Q\sqrt{2\ln(1.25/\delta)}}{\epsilon}$. Appendix -E derives the $\alpha_u$ corresponding to a given $\beta$. Fig. 8 and Fig. 9 illustrate how the utility and the noise change as the privacy budget increases. Fig. 8 shows that the value of $\alpha_u$ decreases when the privacy budget increases, meaning that the utility increases. In addition, the amount of noise added reflects the query utility as well: when less noise is added to the query response, the response gains more utility. Fig. 9 shows how the noise changes with the privacy budget: as the privacy budget increases, the noise decreases, which means that the query utility increases. The amount of noise depends on the privacy budget and the sensitivity value. Queries such as "frequency of US citizens", "frequency of white race", and "frequency of age more than 60" have the same sensitivity value of $0.0002$, so the noise added to their responses is the same when the privacy budgets they use are equal.
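The computations behind Fig. 8 and Fig. 9 can be sketched as follows, assuming the Gaussian tail bound $\mathbb{P}[|\mathcal{N}(0, \sigma^2)| > \alpha_u] \le 2\exp(-\alpha_u^2/(2\sigma^2))$ to convert $\beta$ into $\alpha_u$; the parameter values are illustrative:

```python
import math

def noise_std(epsilon, delta, sensitivity):
    """Gaussian noise level from Lemma 1."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def utility_alpha(epsilon, delta, sensitivity, beta):
    """Smallest alpha_u such that the mechanism is (alpha_u, beta)-useful,
    via the tail bound 2 exp(-alpha_u^2 / (2 sigma^2)) <= beta."""
    sigma = noise_std(epsilon, delta, sensitivity)
    return sigma * math.sqrt(2 * math.log(2 / beta))

# As the privacy budget epsilon grows, both sigma and alpha_u shrink
# (i.e., utility rises), matching the trends in Fig. 8 and Fig. 9.
for eps in (0.1, 0.5, 1.0):
    print(eps, noise_std(eps, 1e-5, 0.0002), utility_alpha(eps, 1e-5, 0.0002, 0.05))
```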
Fifth, we evaluate the efficiency of computing resources used by the DP scheme with or without the blockchain. The scheme without the blockchain's involvement is the one where data analysts call the server directly to obtain noisy answers. We simulate this case using the "Apache JMeter" API testing software to send queries with differential privacy parameters directly to the server. We consider the CPU usage as the metric for evaluating the efficiency of computing resources [39, 40].
Experimental Settings: We use a MacBook Pro with a 2.3 GHz Quad-Core Intel Core i5 and 8 GB 2133 MHz LPDDR3 to run the applications. In the experiment, we first deploy a local private blockchain testnet using the "Go-Ethereum" platform with three nodes mining. We then host a separate server locally for testing the CPU usage of processing API requests sent from "Apache JMeter". We randomly send 50 queries for testing. At the same time, we use the "Activity Monitor" software to track and obtain the CPU usage [41].
Experimental Results: Fig. 10 compares the CPU usage of the schemes with and without the blockchain. We observe that the CPU usage in our blockchain-based scheme is higher than that of the scheme where the server handles queries directly, but it is still acceptable. Node.js and Express.js web application frameworks, which we use to build the web server application, are CPU-intensive. One approach to saving computing resources when using the blockchain is to stop mining when there is no incoming task and start mining only when necessary. Therefore, our proposed scheme is efficient and practical with an acceptable computation cost.
VI Related Work
In this section, we first compare our paper and a closely related study [19], and then discuss other related work.
VI-A Comparison with Yang et al. [19]
Yang et al. [19] utilize blockchain and differential privacy technologies to achieve security and privacy protection during data sharing. We summarize the differences between our work and [19] as follows.
- Although Algorithm 1 of [19] claims to satisfy $\epsilon$-differential privacy, it does not, since the noisy output's domain (i.e., the set of all possible output values) depends on the input. The explanation is as follows. In [19], for two neighboring datasets $D$ and $D'$, there exists a subset $\mathcal{Y}$ of outputs such that $\mathbb{P}[Y(D) \in \mathcal{Y}] > 0$ but $\mathbb{P}[Y(D') \in \mathcal{Y}] = 0$. This means $\mathbb{P}[Y(D) \in \mathcal{Y}] > e^{\epsilon}\,\mathbb{P}[Y(D') \in \mathcal{Y}]$, which violates $\epsilon$-differential privacy for any finite $\epsilon$.
- In [19], when a query is asked for the first time, the Laplace mechanism of [7] for $\epsilon$-differential privacy is used to add Laplace noise to the true query result. Afterwards, [19] adds new Laplace noise to the previous noisy output, which makes the new noisy response no longer follow a Laplace distribution, since the sum of independent Laplace random variables does not follow a Laplace distribution. Hence, the analysis in [19] is not valid.
We consider $(\epsilon, \delta)$-differential privacy using Gaussian noise. The advantage of Gaussian noise over Laplace noise lies in the easier privacy analysis for the composition of different privacy-preserving algorithms, since the sum of independent Gaussian random variables still follows a Gaussian distribution, while the sum of independent Laplace random variables does not obey a Laplace distribution.
VI-B Other Related Work
Differential privacy, a strong mathematical model to guarantee database privacy, has attracted much attention in recent years. Blockchain is a fast-growing technology that provides security and privacy in a decentralized manner [42, 18, 43, 44, 45, 46]. Feng et al. [47] summarize prior studies about privacy protection in blockchain systems, including methodologies for identity and transaction privacy preservation. In the following, we introduce more recent studies utilizing blockchain or privacy techniques to provide privacy or security protection for identities, data, and transactions.
Leveraging Blockchains for Identity Privacy/Security Protection. A few studies have focused on leveraging the blockchain to guarantee privacy/security in access control management or identity protection. For example, Zyskind et al. [48] and Xia et al. [49] both use blockchain in access control management. Zyskind et al. [48] create a decentralized personal data management system to address users' concerns about privacy when using third-party mobile platforms. Xia et al. [49] propose a permissioned blockchain-based data sharing framework to allow only verified users to access cloud data. Lu et al. [50] develop ZebraLancer, a private and anonymous decentralized crowdsourcing system, which overcomes data leakage and identity breach in traditional decentralized crowdsourcing. The above studies focus on identity privacy because the blockchain is anonymous, whereas they do not consider privacy protection for the database.
Leveraging Blockchains for Data Privacy/Security Protection. In addition to identity privacy preservation, Hu et al. [51] replace the central server with a smart contract and construct a decentralized privacy-preserving search scheme for computing over encrypted data, ensuring data privacy and preventing misbehavior by a malicious centralized server. Luongo et al. [52] use secure multi-party computation to design a privacy primitive named Keep, which allows contracts to manage and use private data without exposing the data to the public blockchain, thereby protecting smart contracts on public blockchains. Alternatively, we use the differential privacy standard to guarantee privacy. Moreover, blockchains are popularly used for security protection of data sharing in IoT scenarios [43, 44, 49].
Leveraging Blockchains for Transaction Privacy/Security Protection. Moreover, some previous studies use blockchain to guarantee security and privacy in transactions. For example, Henry et al. [17] propose that the blockchain should use mechanisms that piggyback on the overlay network, which is ready for announcing transactions, to de-link users' network-level information instead of using an external service such as Tor to protect users' privacy. Gervais et al. [32] propose a quantitative framework to analyze the security of Proof-of-Work blockchains, where the framework's inputs include security, consensus, and network parameters. Herrera-Joancomartí and Pérez-Solà [53] focus on privacy in Bitcoin transactions. Sani et al. [54] propose Xyreum, a new high-performance and scalable blockchain, to secure transactions in the Industrial Internet of Things.
Leveraging differential privacy for protecting privacy of IoT data. IoT devices periodically collect users' usage status, which may contain sensitive information such as energy consumption or location information. To avoid privacy leakage, some studies use differential privacy mechanisms to protect the privacy of such data [1, 2, 3, 4, 5]. For example, Tudor et al. [1] propose Bes, a streaming-based framework to disclose IoT data. Hassan et al. [2] treat each IoT node as a blockchain node to guarantee the IoT nodes' security, and they leverage differential privacy mechanisms to protect the privacy of each node's data. To prevent adversaries from intercepting the Internet traffic from/to the smart home gateway or profiling residents' behaviors through digital traces, Liu et al. [4] leverage differential privacy to develop a framework to defend against such attacks. However, none of these studies discusses how to reuse the differential privacy budget.
Differentially Private Algorithms. Some differential privacy algorithms have been proposed to provide privacy protection. Xiao et al. [55] propose an algorithm that correlates the Laplace noise added to different queries to improve the overall accuracy. Given a series of counting queries, the mechanism proposed by Li and Miklau [56] selects a subset of queries to answer privately and uses their noisy answers to derive answers for the remaining queries. For a set of non-overlapping counting queries, Kellaris and Papadopoulos [57] pre-process the counts by elaborate grouping and smoothing via averaging to reduce the sensitivity and thus the amount of injected noise. Given a workload of queries, Yaroslavtsev et al. [58] introduce a solution to balance accuracy and efficiency by answering some queries more accurately than others.
VII Discussion and Future Work
In this section, we discuss the meaning of the differential privacy parameters, the privacy of the smart contract, and the privacy of queries.
VII-A Differential Privacy Parameters
The value of the differential privacy parameter $\epsilon$ represents the level of protection provided by differential privacy mechanisms. McSherry et al. [59, 60] quantify the strength of differential privacy as follows. When $\epsilon = 0$, the differential privacy mechanism provides perfect privacy; the protection is considered strong for small $\epsilon$, while for large $\epsilon$ the privacy protection is weak. The privacy parameter $\delta$ represents the small probability that the $\epsilon$-privacy guarantee fails, so it should be very small. We sample $\delta$ uniformly from a small interval for each query.
VII-B Privacy for Smart Contract
A smart contract is publicly available once it is deployed to the public blockchain. In our experiment, attackers can obtain the algorithm implemented in the smart contract. However, even if they obtain the algorithm, they still cannot recover the accurate responses from the noisy results. There are several approaches that can be used to protect the privacy of the smart contract, as follows:
First, Kosba et al. [61] propose Hawk, a framework to build privacy-preserving smart contracts. Hawk enables programmers to write private smart contracts without considering cryptography; the framework automatically generates a cryptographic protocol during compilation.
Second, we can partially address this problem by deploying Ethereum on a private blockchain, possibly combined with the Proof-of-Authority [62] consensus mechanism. When Ethereum is deployed on a private blockchain, the private blockchain can enforce access control, so attackers need to break the access control before accessing the smart contract. Therefore, when using a private blockchain, we consider access control to protect the smart contract's privacy.
Since protecting the smart contract's privacy is complex, we leave it as future work.
VII-C Privacy for Queries
Differential privacy (DP) mechanisms assume that data analysts are untrusted while the curator who holds the database is trusted. The trusted curator stores the database and responds to statistical queries made by an untrusted analyst, so DP does not protect the privacy of data analysts' queries. Moreover, DP supports statistical queries, which may not include much sensitive information. When the smart contract is used, some data analysts may worry about the privacy of their queries. There are several ways to protect the privacy of queries:
First, we may use a private blockchain and conduct experiments with the Proof-of-Authority consensus mechanism. Ethereum also supports deploying the smart contract to a private blockchain. In this case, the smart contract can be considered trusted, so the sensitive information in queries will not leak.
Second, we may combine other cryptographic techniques with differential privacy. For example, Agarwal et al. [63] design encrypted databases (EDBs) that support differentially private statistical queries. In their paper, both the curator and the analyst are untrusted. The curator can securely outsource the database to an untrusted server by leveraging EDBs to encrypt operations. Since privacy protection for queries is complicated and may involve more privacy and security techniques, we leave the privacy of queries as future work.
Third, the issue of query privacy can be avoided by fixing the types of queries. Since the privacy budget is quite limited, it is impossible to let data analysts ask too many questions. Thus, one solution is to control the types of queries. The system builder may take some time to select questions commonly asked by data analysts and then set up a dropdown list for data analysts to select questions from. In this case, our system will not leak the queries' privacy because the queries are standard.
VII-D Difference between Our Proposed Scheme and Normal DP Schemes
We pre-define queries to efficiently calculate the sensitivity values and save users' time. The sensitivity values for different queries would have to be pre-defined even if we did not use a blockchain. When a query comes in many times with different differential privacy parameters, our scheme plays an essential role in saving the differential privacy budget. For example, many companies may send the same query to a dataset because of similar data analysis tasks. In this case, part of the privacy budget can be saved. Since the privacy budget is a scarce resource for a given dataset, it is worthwhile to use our scheme.
However, when a query is seen for the first time, our scheme can only treat it as a new query. Since the differential privacy budget is quite limited and sensitivities have to be calculated beforehand, a dataset will not support too many different statistical queries. If our system were implemented in the real world, similar to Fig. 3, a list of supported queries may be provided to control the variation of queries instead of letting data analysts freely type in different queries.
VIII Conclusion
In this paper, we use a blockchain-based approach for tracking and saving differential-privacy cost. In our design, we propose an algorithm that reuses noise fully or partially for different instances of the same query type to minimize the accumulated privacy cost, and we establish the optimality of its noise reuse via a rigorous mathematical proof. Moreover, we design and implement a blockchain-based system and conduct real-world experiments to confirm the effectiveness of the proposed approach.
References
- [1] V. Tudor, V. Gulisano, M. Almgren, and M. Papatriantafilou, “Bes: Differentially private event aggregation for large-scale IoT-based systems,” Future Generation Computer Systems, vol. 108, pp. 1241–1257, 2020.
- [2] M. U. Hassan, M. H. Rehmani, and J. Chen, “Privacy preservation in blockchain based iot systems: Integration issues, prospects, challenges, and future research directions,” Future Generation Computer Systems, vol. 97, pp. 512–529, 2019.
- [3] J. Xiong, J. Ren, L. Chen, Z. Yao, M. Lin, D. Wu, and B. Niu, “Enhancing privacy and availability for data clustering in intelligent electrical service of IoT,” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1530–1540, 2018.
- [4] J. Liu, C. Zhang, and Y. Fang, “Epic: A differential privacy framework to defend smart homes against internet traffic analysis,” IEEE Internet of Things Journal, vol. 5, no. 2, pp. 1206–1217, 2018.
- [5] K. Gai, Y. Wu, L. Zhu, Z. Zhang, and M. Qiu, “Differential privacy-based blockchain for industrial Internet-of-Things,” IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4156–4165, 2019.
- [6] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor, “Our data, ourselves: Privacy via distributed noise generation,” in International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), 2006, pp. 486–503.
- [7] C. Dwork, F. McSherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in Theory of Cryptography Conference (TCC), 2006, pp. 265–284.
- [8] C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Theoretical Computer Science, vol. 9, no. 3-4, pp. 211–407, 2014.
- [9] C. Dwork and G. N. Rothblum, “Concentrated differential privacy,” arXiv preprint arXiv:1603.01887, 2016.
- [10] M. Bun and T. Steinke, “Concentrated differential privacy: Simplifications, extensions, and lower bounds,” in Theory of Cryptography Conference (TCC), 2016, pp. 635–658.
- [11] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth, “Generalization in adaptive data analysis and holdout reuse,” in Conference on Neural Information Processing Systems (NIPS), 2015, pp. 2341–2349.
- [12] F. McSherry and K. Talwar, “Mechanism design via differential privacy,” in IEEE Symposium on Foundations of Computer Science (FOCS), 2007, pp. 94–103.
- [13] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in ACM Conference on Computer and Communications Security (CCS), 2016, pp. 308–318.
- [14] J. Tang, A. Korolova, X. Bai, X. Wang, and X. Wang, “Privacy loss in Apple’s implementation of differential privacy on macOS 10.12,” arXiv preprint arXiv:1709.02753, 2017.
- [15] Ú. Erlingsson, V. Pihur, and A. Korolova, “RAPPOR: Randomized aggregatable privacy-preserving ordinal response,” in ACM Conference on Computer and Communications Security (CCS), 2014, pp. 1054–1067.
- [16] B. Ding, J. Kulkarni, and S. Yekhanin, “Collecting telemetry data privately,” in Conference on Neural Information Processing Systems (NIPS), 2017, pp. 3571–3580.
- [17] R. Henry, A. Herzberg, and A. Kate, “Blockchain access privacy: Challenges and directions,” IEEE Security & Privacy, vol. 16, no. 4, pp. 38–45, 2018.
- [18] J. Kang, Z. Xiong, D. Niyato, S. Xie, and J. Zhang, “Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory,” IEEE Internet of Things Journal, vol. 6, no. 6, pp. 10 700–10 714, Dec 2019.
- [19] M. Yang, A. Margheri, R. Hu, and V. Sassone, “Differentially private data sharing in a cloud federation with blockchain,” IEEE Cloud Computing, vol. 5, no. 6, pp. 69–79, 2018.
- [20] C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
- [21] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System,” https://bitcoin.org/bitcoin.pdf, 2008.
- [22] Investopedia, “Blockchain,” https://www.investopedia.com/terms/b/blockchain.asp.
- [23] G. Wood, “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum Project Yellow Paper, 2014.
- [24] N. Szabo, “Smart Contracts,” http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html, 1994.
- [25] V. Buterin, “A next-generation smart contract and decentralized application platform,” white paper, 2014.
- [26] L. M. Han, Y. Zhao, and J. Zhao, “Blockchain-based differential privacy cost management system,” arXiv preprint arXiv:2006.04693, 2020.
- [27] P. Kairouz, S. Oh, and P. Viswanath, “The composition theorem for differential privacy,” IEEE Transactions on Information Theory, vol. 63, no. 6, pp. 4037–4049, 2017.
- [28] “Ropsten’s official GitHub page,” https://github.com/ethereum/ropsten, accessed on 9 January 2019.
- [29] D. Sánchez, J. Domingo-Ferrer, and S. Martínez, “Improving the utility of differential privacy via univariate microaggregation,” in International Conference on Privacy in Statistical Databases. Springer, 2014, pp. 130–142.
- [30] N. Wang, X. Xiao, Y. Yang, J. Zhao, S. C. Hui, H. Shin, J. Shin, and G. Yu, “Collecting and analyzing multidimensional data with local differential privacy,” in 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019, pp. 638–649.
- [31] “Ganache,” Accessed on 2020. [Online]. Available: https://www.trufflesuite.com/ganache
- [32] A. Gervais, G. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun, “On the security and performance of proof of work blockchains,” in ACM SIGSAC Conference on Computer and Communication Security (CCS), 2016.
- [33] B. Cao, Y. Li, L. Zhang, L. Zhang, S. Mumtaz, Z. Zhou, and M. Peng, “When Internet of Things meets blockchain: Challenges in distributed consensus,” IEEE Network, vol. 33, no. 6, pp. 133–139, 2019.
- [34] D. Vujičić, D. Jagodić, and S. Ranđić, “Blockchain technology, bitcoin, and ethereum: A brief overview,” in 2018 17th International Symposium INFOTEH-JAHORINA (INFOTEH). IEEE, 2018, pp. 1–6.
- [35] J. Stark, “Making sense of ethereum’s layer 2 scaling solutions: state channels, plasma, and truebit,” 2018.
- [36] H. Tang, Y. Shi, and P. Dong, “Public blockchain evaluation using entropy and TOPSIS,” Expert Systems with Applications, vol. 117, pp. 204–210, 2019.
- [37] M. H. Miraz and D. C. Donald, “LApps: technological, legal and market potentials of blockchain lightning network applications,” in Proceedings of the 2019 3rd International Conference on Information System and Data Mining, 2019, pp. 185–189.
- [38] A. Blum, K. Ligett, and A. Roth, “A learning theory approach to noninteractive database privacy,” Journal of the ACM (JACM), vol. 60, no. 2, pp. 1–25, 2013.
- [39] P. Zheng, Z. Zheng, X. Luo, X. Chen, and X. Liu, “A detailed and real-time performance monitoring framework for blockchain systems,” in 2018 IEEE/ACM 40th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP). IEEE, 2018, pp. 134–143.
- [40] S. Morishima and H. Matsutani, “Accelerating blockchain search of full nodes using gpus,” in 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP). IEEE, 2018, pp. 244–248.
- [41] S. King and S. Nadal, “PPCoin: Peer-to-peer crypto-currency with proof-of-stake,” self-published paper, 19 August 2012.
- [42] X. Li, H. Li, H. Yan, Z. Cheng, W. Sun, and H. Zhu, “Mitigating query-flooding parameter duplication attack on regression models with high-dimensional gaussian mechanism,” 2020.
- [43] J. Kang, Z. Xiong, D. Niyato, D. Ye, D. I. Kim, and J. Zhao, “Toward secure blockchain-enabled internet of vehicles: Optimizing consensus management using reputation and contract theory,” IEEE Transactions on Vehicular Technology, vol. 68, no. 3, pp. 2906–2920, March 2019.
- [44] J. Kang, R. Yu, X. Huang, M. Wu, S. Maharjan, S. Xie, and Y. Zhang, “Blockchain for secure and efficient data sharing in vehicular edge computing and networks,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4660–4670, June 2019.
- [45] Y. C. Tsai, R. Tso, Z.-Y. Liu, and K. Chen, “An improved non-interactive zero-knowledge range proof for decentralized applications,” in 2019 IEEE International Conference on Decentralized Applications and Infrastructures (DAPPCON). IEEE, 2019, pp. 129–134.
- [46] A. Fernández Anta, “Atomic appends: Selling cars and coordinating armies with multiple blockchains,” 2019.
- [47] Q. Feng, D. He, S. Zeadally, M. K. Khan, and N. Kumar, “A survey on privacy protection in blockchain system,” Journal of Network and Computer Applications, vol. 126, pp. 45–58, 2019.
- [48] G. Zyskind, O. Nathan, and A. Pentland, “Decentralizing privacy: Using blockchain to protect personal data,” in IEEE Security and Privacy Workshops (SPW), 2015, pp. 180–184.
- [49] Q. Xia, E. B. Sifah, A. Smahi, S. Amofa, and X. Zhang, “BBDS: Blockchain-based data sharing for electronic medical records in cloud environments,” Information, vol. 8, no. 2, p. 44, 2017.
- [50] Y. Lu, Q. Tang, and G. Wang, “Zebralancer: Private and anonymous crowdsourcing system atop open blockchain,” in 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2018, pp. 853–865.
- [51] S. Hu, C. Cai, Q. Wang, C. Wang, X. Luo, and K. Ren, “Searching an encrypted cloud meets blockchain: A decentralized, reliable and fair realization,” in IEEE INFOCOM 2018-IEEE Conference on Computer Communications. IEEE, 2018, pp. 792–800.
- [52] M. Luongo and C. Pon, “The Keep network: A privacy layer for public blockchains,” https://keep.network/whitepaper, Tech. Rep., 2017.
- [53] J. Herrera-Joancomartí and C. Pérez-Solà, “Privacy in bitcoin transactions: New challenges from blockchain scalability solutions,” in Modeling Decisions for Artificial Intelligence, 2016, pp. 26–44.
- [54] A. S. Sani, D. Yuan, W. Bao, P. L. Yeoh, Z. Y. Dong, B. Vucetic, and E. Bertino, “Xyreum: A high-performance and scalable blockchain for iiot security and privacy,” in 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2019, pp. 1920–1930.
- [55] X. Xiao, G. Bender, M. Hay, and J. Gehrke, “iReduct: Differential privacy with reduced relative errors,” in ACM International Conference on Management of Data (SIGMOD), 2011, pp. 229–240.
- [56] C. Li and G. Miklau, “An adaptive mechanism for accurate query answering under differential privacy,” Proceedings of the VLDB Endowment, vol. 5, no. 6, pp. 514–525, 2012.
- [57] G. Kellaris and S. Papadopoulos, “Practical differential privacy via grouping and smoothing,” Proceedings of the VLDB Endowment, vol. 6, no. 5, pp. 301–312, 2013.
- [58] G. Yaroslavtsev, G. Cormode, C. M. Procopiuc, and D. Srivastava, “Accurate and efficient private release of datacubes and contingency tables,” in International Conference on Data Engineering (ICDE), 2013, pp. 745–756.
- [59] F. McSherry and R. Mahajan, “Differentially-private network trace analysis,” ACM SIGCOMM Computer Communication Review, vol. 40, no. 4, pp. 123–134, 2010.
- [60] P. Aaby, J. M. De Acuna, R. Macfarlane, and W. J. Buchanan, “Privacy parameter variation using RAPPOR on a malware dataset,” in 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). IEEE, 2018, pp. 938–945.
- [61] A. Kosba, A. Miller, E. Shi, Z. Wen, and C. Papamanthou, “Hawk: The blockchain model of cryptography and privacy-preserving smart contracts,” in 2016 IEEE symposium on security and privacy (SP). IEEE, 2016, pp. 839–858.
- [62] “Proof of Authority,” Accessed on 2020. [Online]. Available: https://github.com/paritytech/parity/wiki/Proof-of-Authority-Chains.
- [63] A. Agarwal, M. Herlihy, S. Kamara, and T. Moataz, “Encrypted databases for differential privacy,” Proceedings on Privacy Enhancing Technologies, vol. 2019, no. 3, pp. 170–190, 2019.
- [64] B. Balle and Y.-X. Wang, “Improving the Gaussian mechanism for differential privacy: Analytical calibration and optimal denoising,” in International Conference on Machine Learning (ICML), 2018.
- [65] F. Pukelsheim, “The three sigma rule,” The American Statistician, vol. 48, no. 2, pp. 88–91, 1994.
Appendix A: Proof of Theorem 1
Proof.
(i) As noted in the statement of Theorem 1, we suppose that before answering query $Q_j$ and after answering queries $Q_1,\ldots,Q_{j-1}$, the privacy loss $\mathcal{L}_{j-1}$ is given by $\mathcal{N}(\mu_{j-1},\,2\mu_{j-1})$ for some $\mu_{j-1}>0$; later we will show the existence of such $\mu_{j-1}$. Then, when each noisy response $\tilde{y}_i$ follows the probability distribution of the random variable $\tilde{Y}_i$ for $i\in\{1,\ldots,j-1\}$, we have the following for the privacy loss $\mathcal{L}_{j-1}$:

$$\mathcal{L}_{j-1}\sim\mathcal{N}(\mu_{j-1},\,2\mu_{j-1}),\tag{9}$$

where we use “$\sim$” to mean “obeying the distribution”.

Now we need to analyze the privacy loss after answering the queries $Q_1,\ldots,Q_j$. For neighboring datasets $D$ and $D'$, we look at the privacy loss $\mathcal{L}_j$ defined as follows:

$$\mathcal{L}_{j}=\ln\frac{\Pr[(\tilde{Y}_1,\ldots,\tilde{Y}_j)=(\tilde{y}_1,\ldots,\tilde{y}_j)\mid D]}{\Pr[(\tilde{Y}_1,\ldots,\tilde{Y}_j)=(\tilde{y}_1,\ldots,\tilde{y}_j)\mid D']}.\tag{10}$$

Hence, we can use (9) to analyze (10). From (3), since $\tilde{y}_j$ is generated by reusing $\tilde{y}_k$ and generating additional noise (if necessary), where $k$ is an integer in $\{1,\ldots,j-1\}$ as noted in the statement of Theorem 1, the chain rule of probability gives

$$\mathcal{L}_{j}=\underbrace{\ln\frac{\Pr[(\tilde{Y}_1,\ldots,\tilde{Y}_{j-1})=(\tilde{y}_1,\ldots,\tilde{y}_{j-1})\mid D]}{\Pr[(\tilde{Y}_1,\ldots,\tilde{Y}_{j-1})=(\tilde{y}_1,\ldots,\tilde{y}_{j-1})\mid D']}}_{\text{first term}}+\underbrace{\ln\frac{\Pr[\tilde{Y}_j=\tilde{y}_j\mid\tilde{y}_1,\ldots,\tilde{y}_{j-1},\,D]}{\Pr[\tilde{Y}_j=\tilde{y}_j\mid\tilde{y}_1,\ldots,\tilde{y}_{j-1},\,D']}}_{\text{second term}}.\tag{11}$$

We now discuss the first term and the second term on the right-hand side of (11). To begin with, from (9), the first term follows the Gaussian distribution $\mathcal{N}(\mu_{j-1},\,2\mu_{j-1})$. Next, we analyze the second term.
When the noisy responses $\tilde{Y}_k$ and $\tilde{Y}_j$ take the values $\tilde{y}_k$ and $\tilde{y}_j$ respectively, the corresponding noise components under dataset $D$ take the values defined in (12) and (13), respectively. For $D'$ being a neighboring dataset of $D$, we further define the analogous quantities in (14) and (15), so that the decompositions (16) and (17) hold.
Note that the additional noise terms defined in (13) and (15) are the same, since the generated noise does not depend on whether the underlying dataset is $D$ or $D'$. From the above analysis, we obtain $\ln\Pr[\tilde{Y}_j=\tilde{y}_j\mid\tilde{y}_1,\ldots,\tilde{y}_{j-1},\,D]$ via (20) and (21), where step (b) follows since $\tilde{y}_j$ decomposes into the reused component plus a zero-mean Gaussian random variable whose variance is that of the additional noise.

Similarly, for the neighboring dataset $D'$, we obtain $\ln\Pr[\tilde{Y}_j=\tilde{y}_j\mid\tilde{y}_1,\ldots,\tilde{y}_{j-1},\,D']$ via (25) and (26), where step (b) follows by the same decomposition under $D'$.

Then, subtracting the two expressions gives the second term on the right-hand side of (11):

(27)
The above (27) presents the second term on the right-hand side of (11). At first glance, it may seem that the first term and the second term are dependent, since they both involve the previously released noisy responses. However, we have shown from (27) above that the second term depends only on the additional Gaussian noise used to generate $\tilde{y}_j$ according to (3) and (13) (the terms in (27) other than this noise are all given); i.e., the second term is actually independent of the first term. From (9), the first term follows the Gaussian distribution $\mathcal{N}(\mu_{j-1},\,2\mu_{j-1})$. Next, we show that (27), which presents the second term, also follows a Gaussian distribution.
Since the additional noise follows a zero-mean Gaussian distribution, the random part of (27) clearly follows a zero-mean Gaussian distribution, with variance given by

(28)

Since the additional noise terms under $D$ and $D'$ are the same, we obtain from Eq. (14) and Eq. (15) an identity that we use to write Eq. (28) as

(29)
Summarizing the above, the privacy loss $\mathcal{L}_j$ is

$$\mathcal{L}_{j}\sim\mathcal{N}(\mu_{j},\,2\mu_{j})\ \text{for some}\ \mu_{j}>0.\tag{30}$$
As noted in the statement of Theorem 1, we suppose that before answering query $Q_j$ and after answering $Q_1,\ldots,Q_{j-1}$, the privacy loss is given by $\mathcal{N}(\mu_{j-1},\,2\mu_{j-1})$ for some $\mu_{j-1}>0$. With the above result (30), we can now show that such $\mu_{j-1}$ indeed exists, by mathematical induction. For the base case, i.e., when only one query is answered, the result follows from Lemma 3 of [64]. The induction step is given by the above result (30). Hence, we have shown the existence of $\mu_{j-1}$. With this result and (30), we have completed proving Result (i) of Theorem 1.
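As a numerical sanity check of the $\mathcal{N}(\mu,2\mu)$ form (an illustration under assumed parameters, not part of the proof): for a one-dimensional query of sensitivity $\Delta$ answered with $\mathcal{N}(0,\sigma^2)$ noise, Lemma 3 of [64] gives $\mu=\Delta^2/(2\sigma^2)$. The following Python sketch, with hypothetical $\Delta$ and $\sigma$, estimates the mean and variance of the privacy loss empirically.

```python
import numpy as np

# Sketch under assumed parameters: the privacy loss of the Gaussian mechanism
# follows N(mu, 2*mu) with mu = delta^2 / (2 * sigma^2) (Lemma 3 of [64]).
rng = np.random.default_rng(0)
delta, sigma = 1.0, 2.0  # hypothetical sensitivity and noise scale
mu = delta**2 / (2 * sigma**2)

# Noisy outputs under dataset D (true answer 0); the privacy loss is the
# log-density ratio against the neighboring dataset D' (true answer delta).
y = rng.normal(0.0, sigma, size=1_000_000)
loss = (-(y**2) + (y - delta)**2) / (2 * sigma**2)

print(f"theory: mean={mu:.4f}, var={2 * mu:.4f}")
print(f"sample: mean={loss.mean():.4f}, var={loss.var():.4f}")
```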
(ii) The optimal reuse fraction is obtained by minimizing the variance of the additional noise and hence minimizing $\mu_j$. Analyzing the monotonicity of this expression, we derive the optimal value as in Eq. (4). The first-order derivative of $\mu_j$ with respect to the reuse fraction is given in

(31)

- Case 1: if the first condition of Eq. (4) holds, the derivative (31) is negative before the point given in the first branch of Eq. (4) and positive after it. Hence, the optimal reuse fraction minimizing $\mu_j$ is attained at that point.
- Case 2: if the second condition of Eq. (4) holds, the derivative (31) changes sign at the point given in the second branch of Eq. (4). Hence, the optimal reuse fraction minimizing $\mu_j$ is attained at that point.

Thus, we obtain the optimal values of the reuse fraction as in Eq. (4). ∎
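The optimization in Result (ii) can also be explored numerically. The sketch below uses an illustrative reuse model of our own (reuse a fraction $\alpha$ of the old noisy answer and top up with fresh Gaussian noise so that the new answer meets a required noise level); the model and all numbers are assumptions for intuition, not the paper's Eq. (4).

```python
import numpy as np

# Illustrative model (an assumption, not the paper's Eq. (4)):
#   y_new = alpha * y_old + (1 - alpha) * Q(D) + N(0, s_new^2),
# with s_new^2 = s_req^2 - alpha^2 * s_old^2 >= 0 so that y_new has the
# required std s_req. Given y_old, the fresh release exposes Q(D) through
# noise of effective std s_new / (1 - alpha), so its incremental cost is
#   mu_inc(alpha) = (1 - alpha)^2 * delta^2 / (2 * s_new^2).
delta, s_old, s_req = 1.0, 2.0, 1.5
alpha_max = min(1.0, s_req / s_old)
alphas = np.linspace(0.0, alpha_max, 10_001)[:-1]  # keep s_new^2 > 0
s_new_sq = s_req**2 - alphas**2 * s_old**2
mu_inc = (1 - alphas)**2 * delta**2 / (2 * s_new_sq)

print(f"grid-search optimal reuse fraction: {alphas[np.argmin(mu_inc)]:.3f}")
```

Under this toy model the minimizer is interior (here $\alpha\approx(s_{\mathrm{req}}/s_{\mathrm{old}})^2$), while for $s_{\mathrm{req}}\ge s_{\mathrm{old}}$ full reuse ($\alpha=1$) incurs zero incremental cost, mirroring the full-reuse cases of Theorem 2.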
Appendix B: Proof of Lemma 2
Proof.
Consider a query $Q$ with $\ell_2$-sensitivity $\Delta$. Let $M_{\sigma}$ be the mechanism that adds Gaussian noise with standard deviation $\sigma$ to the true answer $Q(D)$. From Corollary 1, the privacy loss of the randomized mechanism $M_{\sigma}$ with respect to neighboring datasets $D$ and $D'$ is given by the Gaussian form stated there, which depends on $\sigma$ and $\Delta$ only through the ratio $\sigma/\Delta$. By considering the $\ell_2$-sensitivity of the rescaled query $Q/\Delta$ (i.e., $1$), the privacy losses of $M_{\sigma}$ applied to $Q$ and of $M_{\sigma/\Delta}$ applied to $Q/\Delta$ are the same. In addition, from Theorem 5 of [64], letting $M_{\sigma}$ (resp., $M_{\sigma/\Delta}$) satisfy $(\epsilon,\delta)$-differential privacy can be converted into a condition on $\sigma/\Delta$ (resp., the unit-sensitivity noise amount). Then letting $M_{\sigma}$ satisfy $(\epsilon,\delta)$-differential privacy is the same as letting $M_{\sigma/\Delta}$ satisfy $(\epsilon,\delta)$-differential privacy. From Lemma 1, $M_{\sigma/\Delta}$ achieves $(\epsilon,\delta)$-differential privacy with the noise amount prescribed there; i.e., the condition stated in Lemma 2 suffices. Summarizing the above, we complete proving Lemma 2. ∎
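The scaling invariance invoked above can be checked numerically via the exact characterization of the Gaussian mechanism in [64]. In the sketch below (function names are ours), `gaussian_delta` implements the condition of Theorem 8 of [64]; scaling the sensitivity and the noise by the same factor leaves the guarantee unchanged.

```python
from math import erf, exp, sqrt

def Phi(t: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def gaussian_delta(sigma: float, sens: float, eps: float) -> float:
    """Smallest delta such that adding N(0, sigma^2) noise to a query of
    l2-sensitivity `sens` is (eps, delta)-DP (Theorem 8 of [64])."""
    r = sigma / sens  # the guarantee depends only on this ratio
    return Phi(1 / (2 * r) - eps * r) - exp(eps) * Phi(-1 / (2 * r) - eps * r)

# Same ratio sigma / sens, hence the same (eps, delta) guarantee.
print(gaussian_delta(sigma=2.0, sens=1.0, eps=0.5))
print(gaussian_delta(sigma=6.0, sens=3.0, eps=0.5))
```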
Appendix C: Proof of Theorem 2
Proof.
We use Theorem 1 to show Results ①, ②, and ③ of Theorem 2.

Proof of ①: In Case 2A) and Case 2C), the algorithm can fully reuse the previous noisy response. Hence, the privacy loss will still be the previous one, according to Eq. (5).

Proof of ②: In Case 1), the algorithm cannot reuse previous noisy answers, and the new noise follows a fresh zero-mean Gaussian distribution. Thus, Result ② follows from Eq. (5).

Proof of ③: In Case 2B), the algorithm can reuse previous noisy answers partially, so we can prove Result ③ using Eq. (5).

Then, Lemma 2 further implies Results ④, ⑤, and ⑥ of Theorem 2.

Proof of ④: The algorithm can fully reuse the old noisy result in Cases 2A) and 2C). Thus, the privacy level does not change.

Proof of ⑤: From Lemma 2, we obtain the two relations between the noise amounts and the privacy parameters stated there. The above two equations yield the claimed equality; hence, the Gaussian privacy level stated in Result ⑤ follows.

Proof of ⑥: From Lemma 2, we similarly obtain two relations, and the above two equations yield the claimed equality. Then, using the corresponding expression from Lemma 1, we further obtain Result ⑥.
∎
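The case analysis above suggests a simple bookkeeping rule for the parameter $\mu$ of the $\mathcal{N}(\mu,2\mu)$ privacy loss. The following sketch is a hypothetical illustration: the case labels follow Theorem 2, but the class, method, and numeric values are our assumptions (the exact costs come from Eq. (5) and Algorithm 1).

```python
from dataclasses import dataclass

@dataclass
class PrivacyLedger:
    """Hypothetical tracker for mu in the N(mu, 2*mu) privacy loss."""
    mu: float = 0.0

    def answer(self, case: str, mu_new: float = 0.0) -> None:
        if case in ("2A", "2C"):   # full reuse: no new privacy cost (Results ①, ④)
            return
        if case in ("1", "2B"):    # fresh or partial top-up noise: add its cost
            self.mu += mu_new      # per Results ② and ③ (value from Eq. (5))
            return
        raise ValueError(f"unknown case: {case}")

ledger = PrivacyLedger()
ledger.answer("1", mu_new=0.125)   # new query, fresh noise
ledger.answer("2A")                # repeated query, fully reused
ledger.answer("2B", mu_new=0.040)  # repeated query, partial reuse
print(f"accumulated mu = {ledger.mu:.3f}")
```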
Appendix D: Proof of Theorem 3
Proof.
First, from Theorem 2, after Algorithm 1 is used to answer all queries, with each query answered under its specified level of differential privacy, the total privacy loss with respect to neighboring datasets $D$ and $D'$ is given by $\mathcal{N}(\mu,\,2\mu)$ for some $\mu>0$.

Next, we use Theorem 2 to further show that the expression of $\mu$ is given by Eq. (6). From Theorem 2, among all queries, only the queries belonging to Cases 1) and 2B) contribute to $\mu$. Below we discuss the two contributions respectively.

For the set of queries in Case 1), we know from Result ② of Theorem 2 that their total contribution to $\mu$ is given by

(32)
Below we use Result ③ of Theorem 2 to compute the contribution of the queries in Case 2B) to $\mu$. For the set of query types in Case 2B), we discuss each query type separately.

From Result ③ of Theorem 2, the contribution to $\mu$ by answering the first request of a given type under its specified privacy level is the corresponding term of (33); similarly for the second request. Similar analyses are repeated for the additional requests of the same type in Case 2B). In particular, for each such request, the contribution to $\mu$ is given by

(33)
Summing all the terms (33) over the requests of a query type, we obtain that for each query type in Case 2B), the contribution to $\mu$ by answering all of its requests is given by

(34)

Since these requests are all of the same query type, the corresponding sensitivities in (34) are all the same. Hence, we write (34) as

(35)

Summing all (35) over the query types, the contribution to $\mu$ by answering all queries in Case 2B) is given by

(36)

Combining (32) and (36) then yields the expression of $\mu$ in Eq. (6), which completes the proof. ∎
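The assembly of $\mu$ in this proof is a plain summation and can be mirrored directly: each Case 1) query contributes one term, as in (32), and each query type in Case 2B) contributes a per-type sum, as in (35)-(36). All numbers in the sketch below are hypothetical.

```python
# Hypothetical per-query contributions to mu (values are illustrative only).
case1_terms = [0.125, 0.080]                       # one term per Case 1) query, cf. (32)
case2b_terms = {"type_a": [0.040, 0.010],          # per-type request terms, cf. (33)-(35)
                "type_b": [0.020]}

mu_total = sum(case1_terms) + sum(sum(t) for t in case2b_terms.values())
print(f"mu_total = {mu_total:.3f}")  # to be checked against the privacy budget
```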
Appendix E: Utility of the Gaussian Mechanism
Proof.
The noisy response for a one-dimensional query $Q$ is $\tilde{y}=Q(D)+\mathcal{N}(0,\sigma^2)$. Letting the probability of $|\tilde{y}-Q(D)|<\lambda$ be $P$, we have

$$P=\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\lambda}^{\lambda}e^{-x^2/(2\sigma^2)}\,dx=\frac{1}{2}\left[\operatorname{erf}\!\left(\frac{\lambda}{\sqrt{2}\,\sigma}\right)-\operatorname{erf}\!\left(\frac{-\lambda}{\sqrt{2}\,\sigma}\right)\right]=\operatorname{erf}\!\left(\frac{\lambda}{\sqrt{2}\,\sigma}\right),\tag{37}$$

where $\operatorname{erf}$ denotes the error function and the last step of Eq. (37) uses the fact that $\operatorname{erf}$ is an odd function.

According to the two-sigma rule of the Gaussian distribution [65], which can also be obtained from the above equation, approximately $95\%$ of the values lie within two standard deviations of the mean. Thus, if we set $\lambda=2\sigma$, then $P=\operatorname{erf}(\sqrt{2})\approx 0.9545$. ∎
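Eq. (37) is straightforward to evaluate; the short sketch below (with arbitrary values of $\sigma$) confirms that setting $\lambda=2\sigma$ gives $P=\operatorname{erf}(\sqrt{2})\approx 0.9545$ regardless of $\sigma$.

```python
from math import erf, sqrt

def prob_within(lam: float, sigma: float) -> float:
    """P(|noisy answer - true answer| < lam) for N(0, sigma^2) noise, Eq. (37)."""
    return erf(lam / (sqrt(2.0) * sigma))

for sigma in (0.5, 1.3, 4.0):
    print(f"sigma={sigma}: P(|error| < 2*sigma) = {prob_within(2 * sigma, sigma):.4f}")
```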