Privacy-Preserving Image Classification in the Local Setting
Abstract
Image data is produced in great volume by individuals and commercial vendors in daily life, and it has been used across various domains, like advertising, medical and traffic analysis. Recently, image data has also proven greatly important to social utility, like emergency response. However, the privacy concern becomes the biggest obstacle that prevents further exploration of image data, since an image can reveal sensitive information, like personal identity and location. The recently developed Local Differential Privacy (LDP) brings us a promising solution, which allows the data owners to randomly perturb their input to provide plausible deniability of the data before releasing it. In this paper, we consider a two-party image classification problem, in which data owners hold the images and an untrustworthy data user would like to fit a machine learning model with these images as input. To protect the image privacy, we propose to locally perturb the image representation before revealing it to the data user. Subsequently, we analyze how the perturbation satisfies ε-LDP and how it affects the data utility of count-based and distance-based machine learning algorithms, and propose a supervised image feature extractor, DCAConv, which produces an image representation with a scalable domain size. Our experiments show that DCAConv can maintain high data utility while preserving privacy on multiple image benchmark datasets.
Introduction
Image data is produced and shared in great volume in our daily life, and its usage has been demonstrated through various applications, like preference analysis and advertising(?), medical(?) and traffic(?) analysis. Lately, image data has become greatly important to social utility(?) as well, like emergency response. For example, after the Boston Marathon bombing, the authorities spent a great deal of effort gathering onsite photos from people in the area and pored through thousands of photos to search for the suspects. However, the privacy concern is the biggest obstacle that prevents people from sharing this information, since an image may reveal where they are and what they are doing. Thus, exploring privacy-preserving machine learning techniques for image data becomes of paramount importance.
In this paper, we study an image classification problem involving two parties: the image owners, who hold a set of images, and the untrustworthy data user, who would like to fit a machine learning model over these images. As an example, in the Marathon bombing case, the authorities serve as the data user, who asks for the onsite images to search for the suspects, and the people who provided the images serve as the data owners, who wish to support the investigation while preserving their privacy. Over the last decade, Differential Privacy (Dwork 2006) has been one of the most popular schemes for privacy protection; it prevents inference about individuals from their participation in a computation. It defines a mechanism whose computation over the data is robust to the change of any individual sample, achieved by introducing uncertainty into the algorithm. Recently, Local Differential Privacy (LDP) (Duchi, Jordan, and Wainwright 2013; Kairouz, Oh, and Viswanath 2014) extends the technique to a local setting, which provides stronger privacy protection by allowing data owners to perturb their input before releasing it.
Differentially private machine learning in the local setting has rarely been studied. Recently, Nguyên et al. (2016) and Pihur et al. (2018) investigated privacy-preserving distributed machine learning; in both works, the data owners interactively work with the data user to learn the model. Unlike these previous works, a “non-interactive” manner draws our interest, in which the data owners perturb their data only once and then share it with the data user. Compared to the interactive setting, the non-interactive way is more scalable and needs less communication cost. Randomized response (Warner 1965) was proposed to provide plausible deniability for individuals responding to sensitive surveys. It has been proved that this mechanism satisfies ε-LDP and is universally optimal for the mutual information optimization problem (Kairouz, Oh, and Viswanath 2014), which provides a solid foundation for its usage in machine learning. In this paper, we are interested in adopting randomized response to locally perturb the input and fit count-based and distance-based machine learning models with such input. Specifically, we study the utility-privacy trade-off with respect to the Naive Bayes and KNN classifiers and show how the model utility is affected by the privacy parameters. Furthermore, for image classification, we propose DCAConv, a supervised image feature extractor that improves the utility under ε-LDP.
Overall, the contributions of our work are threefold:
• We propose to use LDP in the privacy-preserving image classification problem. To the best of our knowledge, most LDP-based works focus on frequency estimation over categorical data, and we are the first to investigate a privacy-preserving mechanism that fits image data in the classification problem.
• We analyze the utility-privacy trade-off for count-based and distance-based machine learning models, i.e., the Naive Bayes and KNN classifiers. As a result, we show how the model utility is affected by the privacy parameters.
• We develop DCAConv, a supervised image feature extractor, which represents an image with a scalable domain size. DCAConv is evaluated on image classification over multiple benchmark datasets, including texture and facial datasets. The results confirm that DCAConv maintains high data utility while preserving privacy.
The rest of the paper is organized as follows. Preliminaries are given in Section II. The problem definition and the proposed solution are introduced in Section III. We describe DCAConv, the supervised image feature extractor, in Section IV. The experiments and evaluation are in Section V. Related work is presented in Section VI. Section VII concludes the paper.
Preliminary
Local Differential Privacy
Differential privacy (DP) ensures that an adversary cannot reliably infer any individual sample from the computation over a dataset, even with unbounded computational power and access to every other sample. Given two databases D and D′, we say that D and D′ are neighbors if they differ in at most one row. The formal definition of an (ε, δ)-differentially private mechanism over D is given below:
Definition 1 ((ε, δ)-differential privacy (Dwork 2006; Dwork et al. 2014)): A randomized mechanism M is (ε, δ)-differentially private if for every two neighboring databases D, D′ and for any S ⊆ Range(M),

$$\Pr[\mathcal{M}(D) \in S] \le e^{\epsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta \qquad (1)$$

where Pr[·] denotes the probability of an event and Range(M) denotes the set of all possible outputs of the algorithm M. The smaller ε and δ are, the closer Pr[M(D) ∈ S] and Pr[M(D′) ∈ S] are, and the stronger the privacy protection. When δ = 0, the mechanism is ε-differentially private, which is a stronger privacy guarantee than (ε, δ)-differential privacy with δ > 0.
Similar to DP, a stronger setting of local privacy is investigated by Duchi et al. (2013), namely Local Differential Privacy. It considers a scenario where a data curator needs to collect data from data owners and infer statistical information from these data. The data owners are assumed to trust no one and perturb their data before sharing it with the data curator. The formal definition is given below:
Definition 2 (ε-Local Differential Privacy (Duchi, Jordan, and Wainwright 2013; Kairouz, Oh, and Viswanath 2014)): A randomized mechanism M satisfies ε-Local Differential Privacy (ε-LDP) if for any two inputs v and v′ and for any output y ∈ Range(M):

$$\Pr[\mathcal{M}(v) = y] \le e^{\epsilon}\,\Pr[\mathcal{M}(v') = y] \qquad (2)$$
Compared with DP, LDP provides a stronger privacy model but entails more noise.
Randomized Response
Randomized Response (Warner 1965) is a decades-old technique designed to collect statistical information for sensitive survey questions; the original technique is used for frequency estimation of a particular input. In social research, the interviewee is asked to provide an answer to a sensitive question, such as marijuana usage. To protect privacy, the interviewee spins a spinner unobserved by the interviewer. Rather than answering the sensitive question directly, the interviewee only says yes or no according to whether the spinner points to the true answer. Suppose the interviewer wants to estimate the proportion of marijuana users; the spinner points to the sensitive statement with probability p and to its negation with probability 1 − p. An unbiased maximum-likelihood estimate of the true population proportion is then given by π̂ = (p − 1 + X/n)/(2p − 1), where n is the number of participating interviewees and X is the number reporting yes. It can be shown(?) that this mechanism satisfies ε-LDP.
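As a quick sanity check, the following minimal Python sketch (our own illustration; the simulation setup and variable names are not from the paper) simulates Warner's mechanism and recovers the true proportion with the estimator above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, true_pi = 10_000, 0.75, 0.30       # respondents, spinner probability, true proportion

truth = rng.random(n) < true_pi          # each respondent's sensitive bit
points_true = rng.random(n) < p          # spinner selects the sensitive statement
# "yes" means the statement selected by the spinner is true for the respondent
yes = np.where(points_true, truth, ~truth)

X = yes.sum()
pi_hat = (p - 1 + X / n) / (2 * p - 1)   # Warner's unbiased estimator
print(f"true pi = {true_pi}, estimate = {pi_hat:.3f}")
```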
Randomized response can also be generalized to multiple choices, where the domain of the choices is D with |D| = d. Let RR denote the randomized method; the perturbation is defined below:

$$\Pr[RR(v) = y] = \begin{cases} p = \dfrac{e^{\epsilon}}{e^{\epsilon} + d - 1}, & y = v \\[4pt] q = \dfrac{1}{e^{\epsilon} + d - 1}, & y \ne v \end{cases} \qquad (3)$$

This perturbation satisfies ε-LDP.
Proof. (Wang et al. 2017) For any inputs v and v′ and output y:

$$\frac{\Pr[RR(v) = y]}{\Pr[RR(v') = y]} \le \frac{p}{q} = \frac{e^{\epsilon}/(e^{\epsilon}+d-1)}{1/(e^{\epsilon}+d-1)} = e^{\epsilon} \qquad (4)$$

∎
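A minimal Python sketch of this generalized mechanism (our own illustration; the function and variable names are not from the paper). Note that the ratio p/q = e^ε caps how much the output distribution can reveal about the input:

```python
import numpy as np

def randomized_response(v, d, epsilon, rng):
    """Perturb one categorical value v in {0, ..., d-1} under epsilon-LDP.

    Reports the true value with probability p = e^eps / (e^eps + d - 1)
    and each other value with probability q = 1 / (e^eps + d - 1),
    matching Eq. (3).
    """
    p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
    if rng.random() < p:
        return v
    # choose uniformly among the d - 1 remaining values
    others = [u for u in range(d) if u != v]
    return rng.choice(others)

rng = np.random.default_rng(0)
reports = [randomized_response(5, d=16, epsilon=1.0, rng=rng) for _ in range(5)]
```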
In terms of maintaining the original statistical information, even though Kairouz, Oh, and Viswanath (2014) prove that randomized response is universally optimal for mutual information and f-divergence optimization, there is no clear guide on how to employ this mechanism in a particular machine learning algorithm.
Discriminant Component Analysis
Principal Component Analysis (PCA) is a well-known tool for unsupervised learning that finds the subspace preserving the most variance of the original data; however, it is not effective for supervised learning since the labeling information is ignored in the computation. Discriminant Component Analysis (DCA) (Kung 2017), essentially a supervised PCA, has recently been proposed as a complement for supervised learning: the signal-subspace components of DCA are associated with the discriminant power regarding classification effectiveness, while the noise-subspace components of DCA are tightly coupled with recoverability. Since the rank of the signal subspace is limited by the number of classes, DCA can support classification using a small number of components.
Consider an L-class dataset consisting of N samples X = {x₁, …, x_N}, where x_i ∈ ℝ^M. Each sample is associated with a class label y_i ∈ {1, …, L}. Let μ denote the centroid of the total mass, μ_l the centroid of the samples in class l, and N_l the number of samples in class l. The signal matrix is represented by the between-class scatter matrix:

$$S_B = \sum_{l=1}^{L} N_l\, (\mu_l - \mu)(\mu_l - \mu)^T \qquad (5)$$
The noise matrix is characterized by the following within-class scatter matrix:

$$S_W = \sum_{l=1}^{L} \sum_{i: y_i = l} (x_i - \mu_l)(x_i - \mu_l)^T \qquad (6)$$
The total scatter matrix can be written as follows:

$$S = S_B + S_W = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T \qquad (7)$$
In DCA, two ridge parameters ρ and ρ′ are incorporated into the noise matrix and the signal matrix as follows:

$$\bar{S}_W = S_W + \rho I \qquad (8)$$

$$\bar{S}_B = S_B + \rho' I \qquad (9)$$
where I is the identity matrix. Therefore, the regulated scatter matrix is denoted as:

$$\bar{S} = \bar{S}_B + \bar{S}_W \qquad (10)$$
DCA performs spectral decomposition of the pre-whitened scatter matrix $\bar{S}_W^{-1}\bar{S}$:

$$\bar{S}_W^{-1}\bar{S} = U \Lambda U^{-1} \qquad (11)$$
where Λ holds the monotonically decreasing eigenvalues and U holds the associated eigenvectors. To form a projection matrix, the m eigenvectors with the largest eigenvalues are chosen to form W = [u₁, …, u_m], in which each column is an eigenvector. The raw data is then transformed by multiplying the projection matrix:

$$\tilde{x}_i = W^T x_i \qquad (12)$$
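To make the procedure concrete, the following NumPy sketch mirrors Eqs. (5)-(12); the function name, the default ridge values, and the use of a plain eigensolver are our own choices, not the paper's:

```python
import numpy as np

def dca_projection(X, y, m, rho=1e-3, rho_prime=1e-3):
    """DCA projection, a sketch of Eqs. (5)-(12).

    X: (N, M) data matrix; y: (N,) integer class labels; m: number of
    discriminant components.  Returns W of shape (M, m) so that X @ W
    is the transformed data.
    """
    N, M = X.shape
    mu = X.mean(axis=0)
    S_B = np.zeros((M, M))
    S_W = np.zeros((M, M))
    for c in np.unique(y):
        X_c = X[y == c]
        diff = (X_c.mean(axis=0) - mu)[:, None]
        S_B += len(X_c) * (diff @ diff.T)      # between-class scatter, Eq. (5)
        centered = X_c - X_c.mean(axis=0)
        S_W += centered.T @ centered           # within-class scatter, Eq. (6)
    S_W_bar = S_W + rho * np.eye(M)            # regularized noise matrix, Eq. (8)
    S_B_bar = S_B + rho_prime * np.eye(M)      # regularized signal matrix, Eq. (9)
    S_bar = S_B_bar + S_W_bar                  # regulated scatter matrix, Eq. (10)
    # pre-whitened spectral decomposition, Eq. (11)
    evals, evecs = np.linalg.eig(np.linalg.solve(S_W_bar, S_bar))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:m]]            # top-m discriminant components
```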
Problem Definition and Solution Analysis
Problem Definition
[Figure 1: The two-party setting, in which each data owner locally perturbs its image representation before sharing it with the untrustworthy data user.]
As Fig. 1 shows, in this paper we consider the following problem. Given n Data Owners, each Data Owner DO_i holds a set of images represented as k-dimensional feature vectors. For ease of analysis, we assume that DO_i possesses only one image, denoted x_i. Each image is associated with a label y_i; for instance, if the images held by the data owners are handwritten digits, y_i indicates the digit depicted in x_i. The untrustworthy Data User would like to fit an image classifier with these images as input while satisfying ε-LDP for each feature of x_i. It should be noted that the images held by each data owner are perturbed independently, so there is no difference between perturbing a set of images or a single image at DO_i. With respect to image classification, the effectiveness of both Naive Bayes and KNN classifiers has been demonstrated in previous work (McCann and Lowe 2012; Boiman, Shechtman, and Irani 2008), and these are the classifiers studied in this work.
For the threat model, the adversary intends to learn an individual image x_i; the adversary could be the untrustworthy data user, a participating data owner, or an outside attacker. We assume adversaries may have arbitrary background knowledge and may collude with one another. The goal is to maintain the data utility in terms of image classification while protecting the data privacy.
Frequency and Mean Estimation
In this paper, we mainly investigate using Randomized Response to satisfy ε-LDP for x_i, whose features are perturbed independently. Given an x_i, we assume its class label y_i is shared in the clear, and we let x̂_i denote the vector after perturbation. Firstly, we show how to build the Naive Bayes classifier by leveraging the aggregation of the perturbed input.
In our problem, for each training sample x_i, each feature is independently perturbed by randomized response at DO_i and sent to the Data User. The frequency of each observed value serves as the building block of the classifier training; more specifically, the frequency of value v in the j-th feature among the samples belonging to class c should be counted, and its true proportion is denoted π_v. However, due to the noise injected by randomized response, the observations do not reflect the true proportion of value v. Thus, given the number of observations of v in the j-th feature, denoted c_v, the count and proportion of v are estimated as (Wang et al. 2017):

$$\hat{c}_v = \frac{c_v - nq}{p - q} \qquad (13)$$

$$\hat{\pi}_v = \frac{\hat{c}_v}{n} = \frac{c_v/n - q}{p - q} \qquad (14)$$
where p = e^ε/(e^ε + d − 1) and q = 1/(e^ε + d − 1), referring to Eq. (3), and n is the total number of observations in the j-th feature. It can further be shown that π̂_v is an unbiased estimator. Assuming π_v is the true proportion with which value v occurs in the j-th feature, E[π̂_v] would be:

$$E[\hat{\pi}_v] = \frac{E[c_v]/n - q}{p - q} \qquad (15)$$

$$= \frac{\pi_v\, p + (1 - \pi_v)\, q - q}{p - q} \qquad (16)$$

$$= \pi_v \qquad (17)$$
And Var[π̂_v] is given as below:

$$\mathrm{Var}[\hat{\pi}_v] = \frac{\mathrm{Var}[c_v]}{n^2 (p - q)^2} \qquad (18)$$

$$= \frac{\pi_v\, p(1-p) + (1 - \pi_v)\, q(1-q)}{n\, (p - q)^2} \qquad (19)$$
where π_v is in [0, 1]. Furthermore, m̂_j is the estimated mean of feature j given the frequency estimate of each observed value:

$$\hat{m}_j = \sum_{v \in D} v\, \hat{\pi}_v \qquad (20)$$
Correspondingly, E[m̂_j] and Var[m̂_j] are computed as follows:

$$E[\hat{m}_j] = E\Big[\sum_{v \in D} v\, \hat{\pi}_v\Big] \qquad (21)$$

$$= \sum_{v \in D} v\, E[\hat{\pi}_v] \qquad (22)$$

$$= \sum_{v \in D} v\, \pi_v \qquad (23)$$

$$= m_j \qquad (24)$$

$$\mathrm{Var}[\hat{m}_j] = \mathrm{Var}\Big[\sum_{v \in D} v\, \hat{\pi}_v\Big] \qquad (25)$$

$$= \sum_{v \in D} v^2\, \mathrm{Var}[\hat{\pi}_v] + \sum_{v \ne v'} v\, v'\, \mathrm{Cov}[\hat{\pi}_v, \hat{\pi}_{v'}] \qquad (26)$$

$$\le \frac{\sum_{v \in D} v^2 \big[\pi_v\, p(1-p) + (1 - \pi_v)\, q(1-q)\big]}{n\, (p - q)^2} \qquad (27)$$

where the inequality holds since the covariance terms are non-positive.
It can be seen that m̂_j is an unbiased estimator, so the Mean-Squared-Error (MSE) of m̂_j equals Var[m̂_j]. In practice, the unbiased estimator for the sum of counts, Σ_{v∈D} v ĉ_v / n, is adopted, which results in the same order of MSE.
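For concreteness, a minimal Python sketch of the frequency estimator in Eqs. (13)-(14); the function and variable names are our own:

```python
import numpy as np

def estimate_frequencies(reports, d, epsilon):
    """Unbiased frequency estimates from randomized-response reports.

    Implements Eqs. (13)-(14): pi_hat[v] = (c_v / n - q) / (p - q),
    where c_v is the observed count of value v among n reports.
    """
    n = len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + d - 1)
    q = 1.0 / (np.exp(epsilon) + d - 1)
    counts = np.bincount(np.asarray(reports), minlength=d)
    return (counts / n - q) / (p - q)

# For Naive Bayes, this is applied per class and per feature to
# recover the conditional value frequencies from the noisy reports.
```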
A different mean estimator, m̃_j, is given by replacing the denominator with the corrected sum of frequencies, where ĉ′_v denotes the estimated count ĉ_v of Eq. (13) truncated to be non-negative:

$$\tilde{m}_j = \frac{\sum_{v \in D} v\, \hat{c}'_v}{\sum_{v \in D} \hat{c}'_v} \qquad (28)$$
Correspondingly, writing A = Σ_{v∈D} v ĉ′_v and B = Σ_{v∈D} ĉ′_v and using multivariate Taylor expansions of the ratio A/B around (E[A], E[B]), E[m̃_j] and Var[m̃_j] are approximated as follows:

$$E[\tilde{m}_j] \approx \frac{E[A]}{E[B]} = m_j \qquad (29)$$

$$\mathrm{Var}[\tilde{m}_j] \approx \frac{\mathrm{Var}[A]}{E[B]^2} - \frac{2\, E[A]\, \mathrm{Cov}[A, B]}{E[B]^3} + \frac{E[A]^2\, \mathrm{Var}[B]}{E[B]^4} \qquad (30)$$

Expanding the variance and covariance of A and B in terms of the per-value count variances and covariances, as in Eqs. (18)-(19), shows that Var[m̃_j] is of the same order as Var[m̂_j].
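As an illustration of how the two estimators behave, here is a small sketch built on the estimate_frequencies helper above; the non-negativity clipping mirrors the corrected counts assumed in Eq. (28), and all names are ours:

```python
import numpy as np

def estimate_mean(reports, d, epsilon, values=None):
    """Plug-in (Eq. (20)) and ratio (Eq. (28)) mean estimators.

    `values` maps each domain index to its numeric value; by default
    the domain is {0, ..., d-1}.  Assumes estimate_frequencies from
    the sketch above is in scope.
    """
    pi_hat = estimate_frequencies(reports, d, epsilon)
    vals = np.arange(d) if values is None else np.asarray(values)
    mean_plugin = float(vals @ pi_hat)                   # Eq. (20)
    pi_corr = np.clip(pi_hat, 0.0, None)                 # corrected (non-negative) frequencies
    mean_ratio = float(vals @ pi_corr / pi_corr.sum())   # Eq. (28)
    return mean_plugin, mean_ratio
```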
Nearest Centroid-Based Classifier
The Nearest Centroid Classifier (NCC) is a classification model that assigns to a testing sample the label of the class of training data whose centroid is closest to the observation under a certain distance metric. Given training data {(x_i, y_i)}, the per-class centroid is computed as follows:

$$\mu_c = \frac{1}{N_c} \sum_{i: y_i = c} x_i \qquad (42)$$
where N_c is the number of training samples belonging to class c. For a testing sample x, the predicted class ŷ is given by

$$\hat{y} = \arg\min_{c}\; dist(x, \mu_c) \qquad (43)$$
where dist(x, μ_c) is the ℓp-norm distance between x and μ_c. To build the NCC from the perturbed data, the estimated centroid is adopted:

$$\hat{\mu}_c = (\hat{m}_1^c, \ldots, \hat{m}_k^c) \qquad (44)$$

where $\hat{m}_j^c$ is the estimated mean of feature j over the class-c samples, as in Eq. (20).
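A sketch of building the nearest centroid classifier from perturbed training data, reusing the estimate_mean helper above; the function names, and the assumption that labels are shared in the clear, follow our problem setting:

```python
import numpy as np

def fit_centroids(reports, labels, d, epsilon):
    """Estimate per-class centroids (Eq. (44)) from perturbed reports.

    reports: (n, k) integer matrix of randomized-response outputs;
    labels:  (n,) class labels, assumed shared unperturbed.
    """
    centroids = {}
    for c in np.unique(labels):
        R = reports[labels == c]
        centroids[c] = np.array(
            [estimate_mean(R[:, j], d, epsilon)[0] for j in range(R.shape[1])]
        )
    return centroids

def predict_ncc(x, centroids):
    """Predict the class whose estimated centroid is closest (Eq. (43))."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```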
DCAConv - Supervised Image Feature Extraction
[Figure 2: The DCAConv architecture: stacked DCA convolution layers, followed by binary quantization, mapping, and max pooling.]
As analyzed in the previous section, for both the Naive Bayes and KNN classifiers, the domain size of the input determines the utility gained under the randomized response perturbation. For Naive Bayes, the utility can be measured by the variance of the estimated frequency; for the KNN classifier, the utility is measured by the probability of preserving the true proximity between the testing data points and the perturbed training points. In terms of randomized response, it is easy to see that the utility gain is maximized when the domain size d = 2. However, the binary format limits the transmitted information, especially for image data; on the other hand, increasing d exacts a higher price with respect to privacy. Thus the domain size is the key factor in the trade-off between utility and privacy.
For image data, the naive way to protect privacy is to apply randomized response to the pixel features; nevertheless, the utility is severely damaged due to the large domain size. Thus the challenge is to find a proper representation of the image data: on one hand, the representation should preserve as much information as possible; on the other hand, the domain size should balance utility and privacy. The success of CNNs (Krizhevsky, Sutskever, and Hinton 2012) on image classification has drawn much attention in the last decade; a CNN transforms the image into multi-level representations by extracting abstract semantics. However, it is hard to apply randomized response to a CNN directly, since a CNN usually requires continuous values in its data flow. Furthermore, it also relies on either a Softmax or Sigmoid layer to determine the final output distribution, where the model is trained to optimize a specific loss function. Inspired by PCANet (Chan et al. 2015), we are interested in designing a mechanism that transforms the image into a representation with a scalable domain size; meanwhile, the domain size should be as small as possible to limit the impact of randomized response on utility. The outcome is DCAConv, as Fig. 2 shows, which consists of the convolution layers, the binary quantization layer, and the pooling layer. The convolution filters are computed using DCA, with the generated discriminant components serving as the filters. The quantization layer converts the output of the convolution operation to binary bits. Finally, images are represented by the mapping of the binary bits and passed through the pooling layer. The details are presented below:
DCA Convolution Layer
Suppose there are N images {I₁, …, I_N}, each of the same size m × n; the patch size is k₁ × k₂, and the number of filters in the i-th convolution layer is L_i. Let x_{i,j} denote the j-th vectorized patch of I_i, where j = 1, …, m̃ñ with m̃ = (m − k₁ + 2P)/S + 1 and ñ = (n − k₂ + 2P)/S + 1, S is the size of the stride, and P is the size of the zero padding. Additionally, assuming there are C classes, each image is associated with a class label y_i ∈ {1, …, C}, and class c contains N_c images. For the first convolution layer, the overall patch mean and the within-class patch mean are calculated as below:

$$\bar{x} = \frac{1}{N \tilde{m} \tilde{n}} \sum_{i,j} x_{i,j} \qquad (45)$$

$$\bar{x}_c = \frac{1}{N_c \tilde{m} \tilde{n}} \sum_{y_i = c} \sum_{j} x_{i,j} \qquad (46)$$
Then the between-class scatter matrix S_B, the within-class scatter matrix S_W, and the regularized scatter matrix S̄ are calculated over the patches as below:

$$S_B = \sum_{c=1}^{C} N_c \tilde{m}\tilde{n}\, (\bar{x}_c - \bar{x})(\bar{x}_c - \bar{x})^T \qquad (47)$$

$$S_W = \sum_{c=1}^{C} \sum_{y_i = c} \sum_{j} (x_{i,j} - \bar{x}_c)(x_{i,j} - \bar{x}_c)^T \qquad (48)$$

$$\bar{S} = (S_B + \rho' I) + (S_W + \rho I) \qquad (49)$$
Once S̄_W and S̄ are obtained, DCA is performed over the patches to compute a set of discriminant components as the orthonormal filters, where:

$$\bar{S}_W^{-1}\bar{S} = U \Lambda U^{-1} \qquad (50)$$

where U holds the associated eigenvectors. The leading eigenvectors capture the main discriminant power of the mean-removed training patches. The first L₁ eigenvectors are then taken as the orthonormal filters, where the l-th filter of the first convolution layer is denoted W_l¹.
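A sketch of this first-layer filter computation, reusing the dca_projection routine above; the convention that each patch inherits its image's class label, and all names, are our assumptions:

```python
import numpy as np

def dca_conv_filters(images, labels, num_filters, patch_size=5, stride=1):
    """Learn first-layer DCA convolution filters from image patches (sketch).

    Extracts all patch_size x patch_size patches, removes the overall
    patch mean (Eq. (45)), runs dca_projection on the vectorized patches
    (each labeled with its source image's class), and reshapes the
    leading eigenvectors into convolution filters.
    """
    patches, patch_labels = [], []
    for img, y in zip(images, labels):
        H, W = img.shape
        for i in range(0, H - patch_size + 1, stride):
            for j in range(0, W - patch_size + 1, stride):
                patches.append(img[i:i + patch_size, j:j + patch_size].ravel())
                patch_labels.append(y)
    P = np.asarray(patches, dtype=float)
    P -= P.mean(axis=0)                       # overall patch mean removal
    Wmat = dca_projection(P, np.asarray(patch_labels), m=num_filters)
    return [Wmat[:, f].reshape(patch_size, patch_size) for f in range(num_filters)]
```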
Similar to a CNN, multiple convolution layers can be stacked after the first layer, as Fig. 2 shows. Taking a second convolution layer as an example, it repeats the same procedure as the first one. Given the l-th filter of the first layer, for image I_i:

$$I_i^l = I_i * W_l^1 \qquad (51)$$

where * denotes 2D convolution. Zero padding can be used so that I_i^l has the same size as I_i. After that, the patch mean of I_i^l is removed, DCA is performed over the concatenated patches, and the filters of the second convolution layer are computed. Once DCA is finished, the first L₂ eigenvectors are taken as the filters, with the l-th filter of the second layer denoted W_l². For each I_i^l, there are L₂ filters to convolve, as Fig. 2 shows.
Binary Quantization
After the second convolution layer, there are L₁L₂ outputs in total for each image, and they are converted to a binary representation using the Heaviside step function H(·):

$$T_i^{l,s} = H\big(I_i^l * W_s^2\big), \quad s = 1, \ldots, L_2 \qquad (52)$$

where H(x) = 1 if x > 0 and 0 otherwise.
Mapping
Once the output of the second convolution layer is binarized, each I_i^l yields L₂ binary maps {T_i^{l,s}}. Since each T_i^{l,s} has the same size as I_i with zero padding, at each pixel the vector of L₂ binary bits is viewed as an L₂-digit binary number. The output is thus “mapped” back into a single integer:

$$M_i^l = \sum_{s=1}^{L_2} 2^{L_2 - s}\, T_i^{l,s} \qquad (53)$$
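The quantization and mapping steps, Eqs. (52)-(53), can be sketched as follows (our own illustration; the most-significant-bit-first ordering matches the equation above, and the names are ours):

```python
import numpy as np

def binarize_and_map(feature_maps):
    """Binary quantization (Eq. (52)) and decimal mapping (Eq. (53)).

    feature_maps: array of shape (L2, H, W) holding the L2 second-layer
    outputs for one first-layer channel of one image.  Each pixel's L2
    binary bits are read as one integer in [0, 2**L2 - 1].
    """
    bits = (feature_maps > 0).astype(np.int64)   # Heaviside step function
    L2 = bits.shape[0]
    weights = 2 ** np.arange(L2 - 1, -1, -1)     # most significant bit first
    return np.tensordot(weights, bits, axes=1)   # (H, W) integer map
```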
Max Pooling
The output of the binary quantization layer is L₁ “images” whose “pixels” are L₂-bit integers. To further reduce the size of the representation, a max pooling layer is applied to each M_i^l after the binary quantization layer.
Perturbation
The randomized response is applied to the output of the pooling layer, whose domain size is d = 2^{L₂}. As we mentioned earlier, in DCA the rank of the signal subspace is limited by the number of classes; more specifically, L_i ≤ C, where L_i is the number of filters in the i-th convolution layer and C is the number of classes. By employing DCA in the design, the domain of the final output is scalable; moreover, DCA also guarantees high utility with a small number of components.
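Finally, a sketch of the perturbation step, reusing the randomized_response helper from the Preliminary section; the function names are ours:

```python
import numpy as np

def perturb_representation(int_map, L2, epsilon, rng):
    """Apply randomized response to every pooled DCAConv output value.

    The domain size is d = 2**L2, so choosing fewer second-layer filters
    directly shrinks the domain that randomized response must cover.
    """
    d = 2 ** L2
    flat = int_map.ravel()
    return np.array(
        [randomized_response(int(v), d, epsilon, rng) for v in flat]
    ).reshape(int_map.shape)
```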
Experiment
We evaluate DCAConv on three popular benchmark image datasets: MNIST, Fashion-MNIST (https://github.com/zalandoresearch/fashion-mnist), and YaleB (vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html).
The MNIST dataset contains grayscale images of handwritten digits from 0 to 9, with 60,000 training samples and 10,000 testing samples.
The Fashion-MNIST dataset contains grayscale article images, each associated with a label from 10 classes. The dataset is intended to replace the overused MNIST dataset, and it shares the same image size and the same training/testing split structure.
The YaleB face dataset contains grayscale photos of 27 subjects, each with 594 images on average. We select samples based on the lighting condition and head position, which results in around 250 samples per subject. Each face is extracted and scaled to 70 × 70 pixels.
To study the utility-privacy trade-off in terms of the domain size d and the privacy parameter ε, we apply randomized response to multiple image representations, such as raw pixels and the Histogram of Oriented Gradients (HOG); the perturbed input is then used to fit the Naive Bayes and KNN classifiers. For DCAConv, the domain size is altered by choosing different numbers of filters in the last convolution layer, and the classification results demonstrate the effectiveness of DCAConv over the other image feature representations. All experiments are repeated 10 times, and the mean and standard deviation of the classification accuracy are shown in the figures.
[Figure 4: Classification accuracy of the Naive Bayes and KNN classifiers versus the per-feature ε for the DCAConv, raw pixel, and BOVW representations on MNIST, Fashion-MNIST, and YaleB.]
DCAConv vs Pixel vs HOG
We first measure the classification accuracy with three image representations: raw pixel, DCAConv, and HOG, where a Bag of Visual Words (BOVW) model is built on top of the HOG descriptor. More specifically, the HOG descriptor provides a fixed-length image representation, and each feature is contrast-normalized to improve accuracy. Once the descriptors are generated, K-Means clustering is performed over all descriptors, and the cluster centers serve as the visual dictionary's vocabulary. Finally, a frequency histogram over the visual words is constructed and serves as the BOVW representation. Unlike DCAConv, the domain size of the BOVW is fixed once the number of clusters is determined, and no significant difference is observed after tuning the number of clusters. In the experiment, the domain size of the BOVW is determined by the maximum frequency observed among all features, e.g., MNIST has a domain size of 40, Fashion-MNIST of 39, and YaleB of 44; otherwise, the probability of answering honestly would be uneven across features. For DCAConv, there are two convolution layers, and various numbers of filters are tested. To make a fair comparison, the raw pixel values are downscaled to the same domain size as the DCAConv output; for example, the raw pixel values are downscaled to [0, 15] using a MinMaxScaler (http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) to match DCAConv with 4 filters. Despite the information loss caused by the downscaling, we do not observe significant degradation in classification accuracy. Fig. 4 displays the accuracy of the Naive Bayes and KNN classifiers; the horizontal axis specifies the per-feature ε, and the vertical axis shows the classification accuracy.
From the figure, it can be seen that both classifiers perform only slightly better than random guessing when ε is small, and the accuracy increases with ε. As ε grows, both classifiers provide reasonably good accuracy for DCAConv and raw pixel; however, the BOVW still suffers from low accuracy, as its domain size is much larger than that of the other two representations. Tab. 1 provides the Naive Bayes and KNN accuracy of DCAConv, where the last column gives the ground truth with no randomized response applied. As analyzed in Section III, the probability of preserving the true proximity increases with ε, and the results confirm that the accuracy approaches the ground truth as ε grows: for all 3 datasets, the KNN accuracy is within about 2 percentage points of the ground truth at ε = 4. The comparison between the Naive Bayes and KNN classifiers shows that Naive Bayes achieves higher accuracy for small ε, which may be because the noise is largely eliminated by the frequency estimator, while KNN suffers from a low probability of preserving the true proximity of the data points at the same ε. However, as ε increases, the perturbation lessens and KNN starts to dominate in classification accuracy.
Finally, for the texture datasets, the downscaled pixel feature provides accuracy comparable with DCAConv. However, for more complicated data, like the facial data, the pixel representation fails to provide good discriminant capacity in the KNN algorithm; under the same ε, BOVW performs worse than the other two representations due to its large domain size.
Table 1: Classification accuracy (%) of DCAConv under varying per-feature ε. The last column (GT) gives the ground truth with no randomized response applied.

| Classifier | Dataset | ε=0.1 | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 | 3.0 | 3.5 | 4.0 | GT |
| Naive Bayes | MNIST | 77.52 | 86.35 | 87.15 | 87.19 | 87.07 | 86.97 | 86.92 | 86.92 | 86.90 | 86.90 |
| | Fashion-MNIST | 58.96 | 68.27 | 68.71 | 68.88 | 68.89 | 68.94 | 68.94 | 68.90 | 68.90 | 68.80 |
| | YaleB | 10.52 | 44.27 | 68.29 | 75.32 | 78.89 | 79.43 | 80.63 | 80.70 | 81.32 | 81.71 |
| KNN | MNIST | 24.72 | 68.92 | 81.96 | 86.05 | 88.00 | 89.37 | 89.95 | 90.27 | 90.46 | 90.50 |
| | Fashion-MNIST | 20.56 | 48.58 | 57.35 | 63.40 | 68.21 | 71.54 | 74.14 | 75.64 | 76.73 | 78.70 |
| | YaleB | 6.01 | 49.56 | 90.76 | 95.18 | 95.88 | 95.91 | 95.88 | 95.77 | 95.77 | 95.92 |
Table 2: Classification accuracy (%) of DCAConv with binary output (domain size d = 2) under varying per-feature ε. The last column (GT) gives the ground truth with no randomized response applied.

| Classifier | Dataset | ε=0.1 | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 | 3.0 | 3.5 | 4.0 | GT |
| Naive Bayes | MNIST | 76.38 | 77.30 | 77.21 | 77.10 | 77.04 | 76.96 | 76.91 | 76.89 | 76.83 | 76.60 |
| | Fashion-MNIST | 58.96 | 68.27 | 68.71 | 68.88 | 68.89 | 68.94 | 68.94 | 68.90 | 68.90 | 68.80 |
| | YaleB | 24.01 | 44.76 | 43.94 | 43.38 | 43.16 | 42.91 | 42.86 | 42.74 | 42.74 | 42.74 |
| KNN | MNIST | 65.96 | 85.25 | 88.27 | 89.21 | 89.69 | 89.95 | 90.04 | 90.19 | 90.24 | 90.20 |
| | Fashion-MNIST | 69.66 | 70.90 | 70.62 | 70.52 | 70.43 | 70.39 | 70.39 | 70.37 | 70.32 | 70.30 |
| | YaleB | 79.33 | 95.50 | 95.35 | 95.46 | 95.58 | 95.55 | 95.61 | 95.66 | 95.75 | 95.80 |
DCAConv vs BNN
[Figure 6: Classification accuracy versus ε for the binary DCAConv and BNN representations on MNIST, Fashion-MNIST, and YaleB.]
We evaluate DCAConv in an extreme case where the output is set to binary, and compare the utility of the representation with a variant of CNN. As mentioned earlier, the reason it is hard to apply randomized response to a CNN is that its data flow is continuous. However, there are studies that replace the weights and activations with discrete values to improve power efficiency. The Binarized Neural Network (BNN) (Courbariaux et al. 2016) is one such work, which modifies the convolution layer and activation function to constrain the data flow to +1 or −1. For the implementation of BNN, we take the suggested values for parameters such as the number of hidden units in the fully connected layer, but use the same convolution layer setting as DCAConv, e.g., the same number of filters. Once the training of BNN is finished, we take the activated binarized output before the fully connected layer and perturb it using randomized response. The classification results are shown in Fig. 6, where the vertical axis specifies the classification accuracy. With the binary representation, the probability of preserving the true proximity quickly dominates as ε grows, and the performance on the perturbed data almost reaches the ground truth. Similarly, Tab. 2 provides the Naive Bayes and KNN accuracy of DCAConv with binary output, where the domain size d = 2; it can be seen that the classification accuracy of KNN almost matches the ground truth on all 3 datasets once ε is around 1 or larger. However, unlike KNN, there is no clear threshold of ε at which the Naive Bayes classifier provides a utility guarantee.
Related Work
Image privacy has been studied widely. Privacy-aware image classification is addressed in (Zerr et al. 2012) and (Spyromitros-Xioufis et al. 2016), which are interested in determining whether an image has privacy-related content or not. By exploiting the “public/private” tags provided by end users, they train a privacy classifier to help end users make decisions in the context of image sharing. A privacy-preserving image feature extraction system, SecSIFT, is proposed in (Qin et al. 2014), which allows the user to delegate SIFT feature extraction to a cloud computing platform. In the design, the image owners encrypt the image and upload the encrypted data to the Cloud, and the computation is distributed to multiple independent Cloud entities that do not collude with each other. Once the feature extraction is finished, the encrypted feature descriptors are returned to the image owners. The authors argue that neither the plaintext data nor the locations of the image's feature points are leaked to the Cloud. To the best of our knowledge, the work most similar to this paper is (Erkin et al. 2009), a privacy-enhanced face recognition system. It considers a two-party problem: the server owns a database of face images, and the client holds a facial image. The proposed protocol allows the client to efficiently hide both the biometrics and the result from the server that performs the matching operation, using secure multiparty computation techniques. The difference from our work is that it is a centralized setting where the data has already been collected at the server. Moreover, the technique (e.g., without adding noise for differential privacy) provides inadequate privacy protection.
LDP was first proposed by Duchi, Jordan, and Wainwright (2013), where minimax bounds for learning problems are derived under the local privacy definition. Kairouz, Oh, and Viswanath (2014) study the trade-off between utility and LDP in terms of a family of extremal mechanisms. Although two simple extremal mechanisms, the binary and randomized response mechanisms, are proposed with theoretical utility guarantees, it remains hard to adopt these methods in real-world machine learning problems. Wang et al. (2017) propose a framework that generalizes several LDP protocols in the literature and analyze each one regarding frequency estimation. As for real-world applications, Google's RAPPOR (Erlingsson, Pihur, and Korolova 2014) is the first to practically adopt LDP to collect statistics from end users. Apple and Microsoft have also built their own tools that use LDP to collect usage, typing history, and telemetry data (Ding, Kulkarni, and Yekhanin 2017).
Recently, an iterative method that satisfies ε-LDP was proposed for empirical risk minimization (Nguyên et al. 2016), where each data owner locally computes and perturbs the stochastic gradient descent (SGD) update of the parameters, then shares the noisy version with the aggregator. The authors demonstrate the computation with three models: linear regression, logistic regression, and SVM. Another client-server machine learning framework is proposed to learn Generalized Linear models (Pihur et al. 2018), where the server delivers models to the distributed clients and asks for parameter updates. More specifically, the server maintains several versions of the machine learning model, randomly selects one version for a client to update, and uses the update to replace another version. To enforce ε-LDP, the client uses the Laplacian mechanism to perturb the parameters before returning the model to the server. Unlike both works, we consider a “non-interactive” scenario where the data owners perturb their input only once and then share the noisy input with the data user. Compared with the interactive fashion, the non-interactive way requires less communication cost and offers greater scalability. Overall, most current studies of LDP focus either on the theoretical properties of the perturbation methods or on simple statistics collection, like mean and frequency estimation; few study machine learning in the local setting.
Conclusion
In this paper, we study a privacy-preserving image classification problem in which distributed data owners hold a set of images and an untrustworthy data user would like to fit a classifier over the images held by the owners. To protect privacy, we propose to use randomized response to perturb the image representation locally, which satisfies ε-Local Differential Privacy. We address two questions. First, we analyze the utility-privacy trade-off in terms of the domain size d and the parameter ε for the Naive Bayes and KNN classifiers: for the Naive Bayes classifier, the utility is bounded by the frequency estimation of the individual features; for KNN, we show that the probability of retaining the true proximity between a testing sample and the perturbed training samples increases with ε. Second, to balance the trade-off between utility and privacy, we propose a supervised image feature extractor, namely DCAConv, which produces a representation of the image with a scalable domain size. Experiments over three benchmark image datasets evaluate the classification accuracy of the perturbed image representations with respect to the Naive Bayes and KNN algorithms, which confirms our analysis regarding the domain size and ε and shows the effectiveness of DCAConv under various privacy constraints.
References
- [Boiman, Shechtman, and Irani 2008] Boiman, O.; Shechtman, E.; and Irani, M. 2008. In defense of nearest-neighbor based image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [Chan et al. 2015] Chan, T.-H.; Jia, K.; Gao, S.; Lu, J.; Zeng, Z.; and Ma, Y. 2015. PCANet: A simple deep learning baseline for image classification? IEEE Transactions on Image Processing.
- [Chan, Liang, and Vasconcelos 2008] Chan, A. B.; Liang, Z.-S. J.; and Vasconcelos, N. 2008. Privacy preserving crowd monitoring: Counting people without people models or tracking. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR.
- [Courbariaux et al. 2016] Courbariaux, M.; Hubara, I.; Soudry, D.; El-Yaniv, R.; and Bengio, Y. 2016. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv:1602.02830.
- [Ding, Kulkarni, and Yekhanin 2017] Ding, B.; Kulkarni, J.; and Yekhanin, S. 2017. Collecting telemetry data privately. In Advances in Neural Information Processing Systems.
- [Duchi, Jordan, and Wainwright 2013] Duchi, J. C.; Jordan, M. I.; and Wainwright, M. J. 2013. Local privacy and statistical minimax rates. In IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS).
- [Dwork et al. 2014] Dwork, C.; Talwar, K.; Thakurta, A.; and Zhang, L. 2014. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In 46th Annual ACM Symposium on Theory of Computing.
- [Dwork 2006] Dwork, C. 2006. Differential privacy. In 33rd International Colloquium on Automata, Languages and Programming.
- [Erkin et al. 2009] Erkin, Z.; Franz, M.; Guajardo, J.; Katzenbeisser, S.; Lagendijk, I.; and Toft, T. 2009. Privacy-preserving face recognition. In International Symposium on Privacy Enhancing Technologies Symposium.
- [Erlingsson, Pihur, and Korolova 2014] Erlingsson, Ú.; Pihur, V.; and Korolova, A. 2014. Rappor: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC conference on computer and communications security.
- [Kairouz, Oh, and Viswanath 2014] Kairouz, P.; Oh, S.; and Viswanath, P. 2014. Extremal mechanisms for local differential privacy. In Advances in neural information processing systems.
- [Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems.
- [Kung 2017] Kung, S.-Y. 2017. Discriminant component analysis for privacy protection and visualization of big data. Multimedia Tools and Applications.
- [Leon et al. 2012] Leon, P.; Ur, B.; Shay, R.; Wang, Y.; Balebako, R.; and Cranor, L. 2012. Why johnny can’t opt out: a usability evaluation of tools to limit online behavioral advertising. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
- [Lepinski et al. 2015] Lepinski, M.; Levin, D.; McCarthy, D.; Watro, R.; Lack, M.; Hallenbeck, D.; and Slater, D. 2015. Privacy-enhanced android for smart cities applications. In EAI International Conference on Smart Urban Mobility Services.
- [McCann and Lowe 2012] McCann, S., and Lowe, D. G. 2012. Local naive bayes nearest neighbor for image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [Nguyên et al. 2016] Nguyên, T. T.; Xiao, X.; Yang, Y.; Hui, S. C.; Shin, H.; and Shin, J. 2016. Collecting and analyzing data from smart device users with local differential privacy. arXiv preprint arXiv:1606.05053.
- [Pihur et al. 2018] Pihur, V.; Korolova, A.; Liu, F.; Sankuratripati, S.; Yung, M.; Huang, D.; and Zeng, R. 2018. Differentially-private” draw and discard” machine learning. arXiv preprint arXiv:1807.04369.
- [Qin et al. 2014] Qin, Z.; Yan, J.; Ren, K.; Chen, C. W.; and Wang, C. 2014. Towards efficient privacy-preserving image feature extraction in cloud computing. In Proceedings of the 22nd ACM international conference on Multimedia.
- [Spyromitros-Xioufis et al. 2016] Spyromitros-Xioufis, E.; Papadopoulos, S.; Popescu, A.; and Kompatsiaris, Y. 2016. Personalized privacy-aware image classification. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval.
- [Wang et al. 2016] Wang, S.; Huang, L.; Wang, P.; Deng, H.; Xu, H.; and Yang, W. 2016. Private weighted histogram aggregation in crowdsourcing. In International Conference on Wireless Algorithms, Systems, and Applications.
- [Wang et al. 2017] Wang, T.; Blocki, J.; Li, N.; and Jha, S. 2017. Locally differentially private protocols for frequency estimation. In Proceedings of the 26th USENIX Security Symposium.
- [Warner 1965] Warner, S. L. 1965. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association.
- [Xia et al. 2016] Xia, Z.; Wang, X.; Zhang, L.; Qin, Z.; Sun, X.; and Ren, K. 2016. A privacy-preserving and copy-deterrence content-based image retrieval scheme in cloud computing. IEEE Transactions on Information Forensics and Security.
- [Zerr et al. 2012] Zerr, S.; Siersdorfer, S.; Hare, J.; and Demidova, E. 2012. Privacy-aware image classification and search. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval.