Optimizing Secure Decision Tree Inference Outsourcing
Abstract
Outsourcing decision tree inference services to the cloud is highly beneficial, yet raises critical privacy concerns about the proprietary decision tree of the model provider and the private input data of the client. In this paper, we design, implement, and evaluate a new system that allows highly efficient outsourcing of decision tree inference. Our system significantly improves upon the state-of-the-art in the overall online end-to-end secure inference service latency at the cloud as well as in the local-side performance of the model provider. We first present a new scheme which securely shifts most of the processing of the model provider to the cloud, resulting in a substantial reduction in the model provider's performance complexities. We further devise a scheme which substantially optimizes the performance of encrypted decision tree inference at the cloud, particularly the communication round complexities. The synergy of these techniques allows our new system to achieve significantly better overall online end-to-end secure inference latency at the cloud side over realistic WAN environments, as well as to bring the model provider substantial savings in communication and computation.
Index Terms:
Privacy preservation, decision trees, cloud, inference service, secure outsourcing

1 Introduction
Machine learning inference services greatly benefit various application domains (e.g., healthcare [1, 2, 3], finance [4, 5], and intrusion detection [6, 7]), and their rapid development has been largely facilitated by cloud computing [8, 9, 10] in recent years. In this emerging machine learning based service paradigm, a model provider can deploy a trained model in the cloud, which can then provide inference services to clients. Outsourcing such services to the cloud promises well-understood benefits for both the model provider (provider for short) and the client, such as scalability, ubiquitous access, and economical cost.
Among others, decision trees are one of the most popular machine learning models due to their ease of use and effectiveness, and have been shown to benefit real applications like medical diagnosis [1, 11] and credit-risk assessment [4]. Briefly, a decision tree is comprised of internal nodes, which are called decision nodes, and leaf nodes. Each decision node compares a threshold with a certain feature in the feature vector, which is the input to decision tree evaluation, and decides the branch to be taken next. Each leaf node carries a prediction value indicating the inference result. Decision tree inference over an input feature vector is equivalent to tree traversal starting at the root node and terminating when a leaf node is reached.
While outsourcing the decision tree inference service to the cloud is quite beneficial, it also raises critical privacy concerns about the decision tree model and the input data. On the provider side, it is widely known that training a high-quality model requires a significant amount of investment in datasets (possibly sensitive), resources, and specialized skills. It is thus important that the decision tree is not exposed in the service, so that the intellectual property as well as the profitability and competitive advantage of the provider are respected. On the client side, the input feature vector may contain sensitive information, e.g., data in medical or financial applications. Overcoming the privacy hurdles is thus of paramount importance to help the provider and client gain confidence in outsourced decision tree inference services. Toward this challenge, a recent research endeavor has been presented by Zheng et al. [12], which represents the state-of-the-art. Their design is based on the lightweight additive secret sharing technique and works under a compatible architecture where two cloud servers from independent cloud providers are employed to jointly conduct the decision tree inference in the ciphertext domain. As an initial endeavor, however, their design is not fully satisfactory and is yet to be optimized in performance, as we detail below.
Firstly, the performance complexity of the provider depends on the size of the decision tree as well as on the feature vector. Specifically, the provider needs to construct and encrypt a binary matrix of size scaling with the product of the number of decision nodes $m$ and the dimension $n$ of the feature vector, so as to support secure feature selection (more details in Section 4.2). Such multiplicative complexity leads to practically unfavorable overhead, which is further aggravated when the provider needs to outsource multiple decision trees, either for different application domains or for random forest (an ensemble of decision trees) evaluation.
Secondly, at the cloud side, the phase of secure decision node evaluation has communication rounds linear in the number of bits $\ell$ used for value representation. This is unfavorable in the real-world scenario where the two cloud servers are situated in different geographic regions and communicate over a WAN, which is a more reasonable setting than local networks given that the two cloud servers are assumed to be from different trust domains [13].
In light of the above observations, in this paper we present a new highly efficient design for secure decision tree inference outsourcing which significantly improves upon the state-of-the-art. Our design follows the same architecture as [12] and also makes use of additive secret sharing, yet with significant optimizations that achieve largely boosted performance compared to the state-of-the-art work.
Firstly, we design a new scheme which makes the provider's performance complexity independent of the feature vector, and thus free of the above multiplicative complexity, through a new formulation of the secure feature selection problem. We make the observation that secure feature selection can indeed be treated as an oblivious array-entry read problem, where the encrypted feature vector is treated as an encrypted array, and the encrypted index value is used to obliviously select an entry from the array. During the procedure, it is required that no information about the feature vector, the index value, or the selected feature be revealed. With this observation, we propose a new secure feature selection design where the provider only needs to construct and encrypt an index vector of size $m$, rather than a matrix of size $m \times n$ as proposed in [12], where $m$ is the number of decision nodes and $n$ is the dimension of the feature vector.
Secondly, we note that the linear communication round complexity of the prior work [12] in the secure decision node evaluation phase is due to the secure realization of a ripple carry adder for secret-shared comparison, which faces a delay problem caused by the sequential procedure of carry computation. Our observation from the field of digital circuit design is that the carry delay problem present in the ripple carry adder can be solved via the advanced carry look-ahead adder [14]. With this observation, we craft a new design for secure decision node evaluation, through digging deep into the logic and computation of the carry look-ahead adder and appropriately organizing the computation in a secure and efficient manner. Our new design achieves a logarithmic communication round complexity for secure decision node evaluation, gaining superior suitability for practical deployment in WAN environments. As a concrete example, we are able to significantly reduce the rounds of secure decision node evaluation at the cloud servers from linear in the bit length $\ell$ for value representation to logarithmic (7 rounds for the practical setting $\ell = 64$, as detailed in Section 4.4), greatly reducing the network latency due to interaction rounds. We also provide concrete complexity analysis, showing that such a significant gain does not sacrifice computational efficiency in terms of the number of secret-shared multiplications.
The synergy of the above optimization techniques leads to a new highly efficient cryptographic inference protocol which achieves a significant reduction in the overall online inference latency at the cloud, as well as a significant boost in the provider's performance, as compared to the state-of-the-art. We provide formal security analysis of our design under the standard simulation-based paradigm. We implement our system and deploy it on the Amazon cloud for performance evaluation over various decision trees with realistic sizes. Compared with the state-of-the-art prior work [12], the overall online end-to-end inference latency at the cloud servers over a realistic WAN environment is substantially better. In the meantime, our system offers the provider significant savings in both communication and computation.
The rest of this paper is organized as follows. Section 2 introduces preliminaries. Section 3 describes the system model and threat model. Section 4 gives the details of our design. Section 5 provides the security analysis. Section 6 presents the experiments. Section 7 discusses the related work. Section 8 concludes this paper.
2 Preliminaries
2.1 Decision Tree Inference
Fig. 1 illustrates a decision tree. As shown, each internal node (called a decision node) $j$ is associated with a threshold $y_j$, while each leaf node $k$ is associated with a prediction value $v_k$ indicating the possible inference result. Hence, given a decision tree with $m$ decision nodes and $s$ leaf nodes, a threshold vector $\mathbf{y} = (y_1, \ldots, y_m)$ and a prediction value vector $\mathbf{v} = (v_1, \ldots, v_s)$ are derived. The input for decision tree inference is an $n$-dimensional feature vector, denoted by $\mathbf{x} = (x_1, \ldots, x_n)$. There is an associated input selection mapping $\sigma: \{1, \ldots, m\} \rightarrow \{1, \ldots, n\}$. Decision tree inference with $\mathbf{x}$ as input works as follows. Firstly, the mapping $\sigma$ is used to select a feature $x_{\sigma(j)}$ from $\mathbf{x}$ for each decision node $j$. Secondly, starting from the root node, the Boolean function $f_j(\mathbf{x}) = \mathbb{1}\{x_{\sigma(j)} < y_j\}$ is evaluated at each decision node $j$ encountered. The evaluation result $b_j$ decides whether to next take the left ($b_j = 0$) or right ($b_j = 1$) branch. Such evaluation terminates when a leaf node is reached. The depth $d$ is the length of the longest path between the root node and a leaf node. Table I provides a summary of the key notations. Without loss of generality, and as the tree structure should be hidden, we will consider complete binary decision trees in our security design, which is also consistent with previous works [15, 16, 17, 18, 12]. It is noted that dummy nodes can simply be added to make non-complete decision trees complete [15].
TABLE I: Key notations.

| Notation | Description |
|---|---|
| $\mathbf{x}$ | Feature vector |
| $\mathbf{y}$ | Threshold vector |
| $x_i$ | The $i$-th feature in the feature vector |
| $y_j$ | The threshold at decision node $j$ |
| $d$ | Depth of a decision tree |
| $m$ | Number of decision nodes |
| $n$ | Dimension of feature vector |
| $s$ | Number of leaf nodes |
| $\ell$ | Number of bits for value representation |
| $b_j$ | Evaluation result at decision node $j$ |
| $v_k$ | Prediction value of leaf node $k$ |
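To make the traversal semantics concrete, the following plaintext sketch (all tree parameters and feature values are illustrative assumptions, not taken from the paper) evaluates a complete depth-2 tree in the notation above, taking the left branch on $b_j = 0$ and the right branch on $b_j = 1$.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Complete binary tree of depth d = 2: decision nodes 0..2 (node 0 is
    // the root, nodes 1 and 2 are its left and right children), leaves 0..3.
    std::vector<std::size_t> sigma = {2, 0, 1};        // feature selection mapping
    std::vector<int64_t>     y     = {50, 10, 77};     // thresholds y_j
    std::vector<int64_t>     v     = {11, 22, 33, 44}; // leaf predictions v_k
    std::vector<int64_t>     x     = {9, 80, 42};      // feature vector (n = 3)

    // Traversal: b_j = 1 iff x_{sigma(j)} < y_j; left on 0, right on 1.
    std::size_t node = 0, leaf = 0;
    for (int level = 0; level < 2; ++level) {
        int b = (x[sigma[node]] < y[node]) ? 1 : 0;
        leaf = 2 * leaf + b;          // accumulate path bits; leaves 0..3
        node = 2 * node + 1 + b;      // heap layout: left child 2j+1, right 2j+2
    }
    std::cout << "prediction: " << v[leaf] << "\n";    // path right, left -> 33
    return 0;
}
```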
2.2 Additive Secret Sharing
Given a value $x \in \mathbb{Z}_{2^\ell}$, its 2-out-of-2 additive secret sharing is a pair $(\langle x \rangle_0, \langle x \rangle_1)$, where $\langle x \rangle_0 = r$ is a random value in $\mathbb{Z}_{2^\ell}$ and $\langle x \rangle_1 = x - r$, with the subtraction done in $\mathbb{Z}_{2^\ell}$ (i.e., the result is modulo $2^\ell$). Given either $\langle x \rangle_0$ or $\langle x \rangle_1$ alone, the value $x$ is perfectly hidden. Suppose that two values $x$ and $y$ are secret-shared among two parties $P_0$ and $P_1$, i.e., $P_i$ holds $\langle x \rangle_i$ and $\langle y \rangle_i$. The secret sharing $\langle x + y \rangle$ (resp. $\langle x - y \rangle$) of $x + y$ (resp. $x - y$) can be computed locally, where each party $P_i$ ($i \in \{0, 1\}$) directly computes $\langle x \rangle_i + \langle y \rangle_i$ (resp. $\langle x \rangle_i - \langle y \rangle_i$). Multiplication by a public constant $c$ can also be done locally, i.e., $\langle c \cdot x \rangle_i = c \cdot \langle x \rangle_i$. Multiplication over two secret sharings $\langle x \rangle$ and $\langle y \rangle$ can be supported by using a Beaver multiplication triple [19, 20]. That is, given the secret sharing of a multiplication triple $(\langle a \rangle, \langle b \rangle, \langle c \rangle)$ where $c = a \cdot b$, $\langle x \cdot y \rangle$ can be obtained with one round of interaction between the two parties. In particular, each party first computes $\langle e \rangle_i = \langle x \rangle_i - \langle a \rangle_i$ and $\langle f \rangle_i = \langle y \rangle_i - \langle b \rangle_i$. Then, $P_i$ broadcasts $\langle e \rangle_i$ and $\langle f \rangle_i$, and recovers $e$ and $f$. Given this, each party computes $\langle x \cdot y \rangle_i = i \cdot e \cdot f + f \cdot \langle a \rangle_i + e \cdot \langle b \rangle_i + \langle c \rangle_i$.
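The following self-contained sketch illustrates these operations over $\mathbb{Z}_{2^{64}}$, where unsigned 64-bit arithmetic wraps modulo $2^{64}$ automatically. The Beaver triple is generated by the code itself purely for testing; it stands in for the semi-honest third party assumed later in the paper.

```cpp
#include <cstdint>
#include <iostream>
#include <random>

using u64 = uint64_t;
static std::mt19937_64 rng(42);

struct Shares { u64 s0, s1; };                        // party 0 / party 1 shares

Shares share(u64 x)            { u64 r = rng(); return {r, x - r}; }
u64    open(Shares a)          { return a.s0 + a.s1; }
Shares add(Shares a, Shares b) { return {a.s0 + b.s0, a.s1 + b.s1}; }

// Beaver multiplication: given shares of x, y and a triple (a, b, c = a*b),
// both parties open e = x - a and f = y - b, then combine locally.
Shares mul(Shares x, Shares y) {
    Shares a = share(rng()), b = share(rng());
    Shares c = share(open(a) * open(b));              // trusted-dealer triple
    u64 e = open({x.s0 - a.s0, x.s1 - a.s1});         // both parties learn e, f
    u64 f = open({y.s0 - b.s0, y.s1 - b.s1});
    return {e * f + f * a.s0 + e * b.s0 + c.s0,       // party 0 adds e*f once
            f * a.s1 + e * b.s1 + c.s1};
}

int main() {
    Shares x = share(123), y = share(456);
    std::cout << open(add(x, y)) << " " << open(mul(x, y)) << "\n"; // 579 56088
}
```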
3 Problem Statement
3.1 System Model
Fig. 2 shows our system architecture, comprised of the provider, the client, and two cloud servers hosted by independent and geographically separated cloud providers. This architecture follows the state-of-the-art prior work [12]. The provider (e.g., a medical institution) owns a decision tree model and provides inference services to the client with the power of cloud computing, i.e., by outsourcing the inference service to the cloud. Due to concerns about the proprietary decision tree, the provider would only provide an encrypted version. The client (e.g., a patient) holds a feature vector which may encode private information such as weight, height, heart rate, and blood pressure, and wants to use the intelligent inference service to obtain a prediction about, e.g., her health. As the feature vector is privacy-sensitive, the client is only willing to provide a ciphertext.
The power of the cloud is considered to be supplied by the two cloud servers $S_0$ and $S_1$, which jointly provide the secure decision tree inference service. Such a two-server model not only appeared in the prior work [12] on secure outsourced decision tree inference, but has also been recently used to facilitate security designs in different applications [21, 22, 23, 13, 24], with tailored use according to problem specifics. The prominent advantage of such a two-server model is that it allows the provider and the client to go offline after supplying the encrypted inputs, and the secure inference computation can be fully run at the cloud. Besides, it is compatible with the working paradigm of additive secret sharing, which is applied for the encryption of the decision tree and the feature vector. Each cloud server receives shares of the decision tree and feature vector. They jointly do the processing and produce the secret-shared inference result, which can be retrieved by the client on demand to reconstruct the inference result.
It is noted that, as the two cloud servers are assumed to be from different trust domains, a practical consideration for realistic deployment is that the two cloud servers be situated in different geographic regions and communicate over a WAN, which is a more reasonable setting compared to local networks. In this case, the latency due to interactions between the cloud servers should be taken into account as an important factor in the secure system design.
Input: Secret sharings $\langle \mathbf{x} \rangle$ and $\langle z_j \rangle$ (arithmetic on indices is mod $n$). Output: Secret sharing $\langle x_{z_j} \rangle$. 1: Each $S_i$ creates an array $A_i$ where the $k$-th element is $A_i[k] = \mathbf{x}_i[(k - r_i) \bmod n] + s_i$. Here, $r_i \in \mathbb{Z}_n$ and $s_i \in \mathbb{Z}_{2^\ell}$ are random values chosen by $S_i$; $\mathbf{x}_0$ and $\mathbf{x}_1$ denote the two shares of the feature vector. 2: $S_1$ chooses a random value $\rho \in \mathbb{Z}_n$ and sends $\langle z_j \rangle_1 + \rho$ to $S_0$. 3: $S_0$ computes $\langle z_j \rangle_1 + \rho + \langle z_j \rangle_0 + r_0$ and sends it to $S_1$. 4: $S_1$ removes $\rho$ and produces $t_1 = z_j + r_0 \bmod n$. 5: $S_1$, with $t_1$ as input, acts as the receiver to run an OT protocol with $S_0$ to obtain $A_0[t_1] = \mathbf{x}_0[z_j] + s_0$. 6: $S_0$, in a symmetric manner following Steps 2-5, obtains $A_1[t_0]$, where $t_0 = z_j + r_1 \bmod n$. 7: $S_0$ chooses a random value $u \in \mathbb{Z}_{2^\ell}$ and sends $w = A_1[t_0] - s_0 - u$ to $S_1$. Also, $S_0$ sets $u$ as its share for the expected feature $x_{z_j}$. 8: $S_1$ computes $A_0[t_1] - s_1 + w$ and sets the result as its share for $x_{z_j}$.
3.2 Threat Model
Following prior work on secure outsourced decision tree inference as well as most existing works on privacy-preserving machine learning [12, 23, 22], we consider a semi-honest adversary setting in our system. A semi-honest adversary honestly follows our protocol, yet attempts to infer private information beyond its access rights. In our system, it is considered that each entity (cloud server, client, provider) might be corrupted by such an adversary. For the cloud servers, we follow previous works under the two-server model ([22, 12, 13, 23, 24, 21]) and assume they are non-colluding; namely, the two cloud servers are not corrupted by the adversary at the same time.
Consistent with [12], we consider that the values in the client's feature vector $\mathbf{x}$ as well as the inference result (i.e., the prediction value $v^*$ corresponding to $\mathbf{x}$) should be kept private for the client. For the provider, there is a need to keep private the proprietary parameters/information of the decision tree model, including each decision node's threshold $y_j$, the mapping $\sigma$ for feature selection, and the prediction value $v_k$ of each leaf node (except the inference result revealed to the client per inference). It is also required that the client learn no additional private information about the decision tree other than the prediction value corresponding to her feature vector. Following prior work [15, 12], we assume some generic meta-parameters to be public, including the depth $d$, the dimension $n$, and the number of bits $\ell$ for value representation. We deem dealing with adversarial machine learning attacks out of scope.
4 Our Proposed Design
4.1 Overview
Our system is aimed at secure outsourcing of decision tree inference with high efficiency. Treating local efficiency as the first priority in our design philosophy, we first aim to shift as much processing as possible to the cloud, reducing the local performance complexities (particularly with respect to the provider in our system). On top of such consideration, we further aim to achieve high efficiency at the cloud through optimizing the processing. Our design mainly relies on the delicate use of the lightweight additive secret sharing technique, rather than on resource-intensive garbled circuits and homomorphic encryption.
At a high level, our design is comprised of four phases: secure input preparation, secure feature selection, secure decision node evaluation, and secure inference generation. The secure input preparation phase requires the provider (resp. the client) to encrypt the decision tree (resp. feature vector) and send the ciphertexts to the cloud servers. The secure feature selection phase securely selects for each decision node a certain feature from the feature vector, in such a way that the cloud servers are oblivious to the mapping between decision nodes and features. The secure decision node evaluation phase securely evaluates the Boolean function at each decision node and outputs the ciphertext of the evaluation result. The secure inference generation phase leverages the ciphertexts of the evaluation results at the decision nodes to generate the ciphertext of the ultimate decision tree inference result, which can then be retrieved by the client for recovery. Note that, following the previous work [12], we assume that the data-independent multiplication triples are pre-generated and made available to the two cloud servers for use in our design, which can be efficiently achieved via a semi-honest third party [25, 12]. Our focus is on the latency-sensitive online inference procedure.
4.2 Secure Input Preparation
The client encrypts her feature vector $\mathbf{x}$ via additive secret sharing applied in an element-wise manner. In particular, the client generates two secret shares: $\langle \mathbf{x} \rangle_0$ and $\langle \mathbf{x} \rangle_1$. For the provider, he encrypts the decision tree as follows. Firstly, the vector $\mathbf{y}$ of thresholds at decision nodes and the vector $\mathbf{v}$ of prediction values at leaf nodes are encrypted through additive secret sharing, with the secret shares $\langle \mathbf{y} \rangle_0$, $\langle \mathbf{y} \rangle_1$, $\langle \mathbf{v} \rangle_0$, and $\langle \mathbf{v} \rangle_1$ produced. Then, we need to consider how to properly encrypt the mapping $\sigma$ which is used for feature selection.
We note that the prior work [12] constructs a binary matrix of size $m \times n$ in such a manner that the $j$-th row is a binary vector with $n$ elements which are all $0$ except for the one at position $\sigma(j)$ being set to $1$. In this way, feature selection is then realized via matrix-vector multiplication between the binary matrix and the feature vector, which can be securely supported under additive secret sharing. Unfortunately, such an approach imposes on the provider a multiplicative performance complexity which depends on both the number of decision nodes and the dimension of the feature vector.
Differently, in order to minimize the costs of the provider, our new insight is to instead construct an index vector $\mathbf{z}$ which is comprised of the selection index values for the decision nodes, so that the complexity only depends on the number of decision nodes, i.e., $O(m)$. The selection index value for a decision node $j$ is represented as $z_j = \sigma(j)$. The secure usage of this index vector will be described shortly in the phase of secure feature selection. The provider also encrypts this index vector via additive secret sharing, producing $\langle \mathbf{z} \rangle_0$ and $\langle \mathbf{z} \rangle_1$. After the above processing, the provider sends the shares $\langle \mathbf{y} \rangle_i$, $\langle \mathbf{v} \rangle_i$, and $\langle \mathbf{z} \rangle_i$ to each cloud server $S_i$ ($i \in \{0, 1\}$).
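As a minimal illustration of the contrast between the two encodings (with an assumed toy mapping), the following sketch builds both and counts the entries each side would need to secret-share and send.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const std::size_t m = 3, n = 5;                // decision nodes, features
    std::vector<std::size_t> sigma = {2, 0, 4};    // sigma(j) per decision node

    // ZDWWN-style encoding: row j is all zeros except position sigma(j).
    std::vector<std::vector<uint64_t>> M(m, std::vector<uint64_t>(n, 0));
    for (std::size_t j = 0; j < m; ++j) M[j][sigma[j]] = 1;

    // Our encoding: z_j = sigma(j); each entry is later additively shared.
    std::vector<uint64_t> z(sigma.begin(), sigma.end());

    std::cout << "matrix entries: " << m * n       // 15 values to encrypt
              << ", index entries: " << z.size()   // 3 values to encrypt
              << "\n";
}
```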
4.3 Secure Feature Selection
Upon receiving the shares of the client's feature vector and the provider's decision tree, the cloud servers first perform secure feature selection, which produces the secret sharing of the appropriate feature for each decision node based on the encrypted index vector $\mathbf{z}$. Note that hereafter all arithmetic operations are conducted by default in the ring $\mathbb{Z}_{2^\ell}$, unless otherwise stated.
We now describe how secure feature selection is achieved in our design. For each decision node $j$, the processing of secure feature selection requires as input the shares of the client's feature vector $\mathbf{x}$ and the shares of the corresponding index value $z_j$ in the index vector $\mathbf{z}$. The output is the secret sharing of the selected feature $x_{z_j}$ for the decision node $j$.
To accomplish this functionality for our secure outsourced decision tree services, we make an observation that this indeed can be treated as oblivious array-entry read, where the encrypted feature vector could be treated as an encrypted array, and the encrypted index value is used to obliviously select an entry from the array. We leverage this observation and identify that an approach from a very recent work [26] is suited for our scenario. Using this approach as a basis, we devise the scheme for secure feature selection, which is given in Fig. 3.
The intuition is as follows. For ease of notation, we let $\mathbf{x}_0$ (resp. $\mathbf{x}_1$) denote the share of the feature vector held by the cloud server $S_0$ (resp. $S_1$). Each cloud server $S_i$ ($i \in \{0, 1\}$) first creates a new array $A_i$ which is derived from $\mathbf{x}_i$ by shifting its indices and masking its entries under fixed random values (per feature selection). Then, given the secret sharing of a target index value $z_j$, $S_1$ engages in an interaction with $S_0$ so as to receive the entry located at the shifted index in $A_0$ which corresponds to $\mathbf{x}_0[z_j]$. Since the random value $r_0$ in the shifted index is only known to $S_0$ and the corresponding entry is masked by a random value $s_0$, $S_1$ is oblivious to the original index as well as to the share held by $S_0$. Similarly, $S_0$ obtains the entry located at the shifted index in $A_1$ which corresponds to $\mathbf{x}_1[z_j]$, while learning no information about the plain index and the share of the corresponding entry value held by $S_1$. Next, the two cloud servers engage in an interaction which essentially performs secret re-sharing so as to obtain the secret sharing of the selected feature $x_{z_j}$.
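Below is a runnable sketch of this oblivious array-entry read under our reconstruction of Fig. 3. The 1-out-of-$n$ OT is emulated by a direct array lookup purely to check the index and mask bookkeeping, and all variable names (r0, s0, rho, u, w, ...) are our own labels, not the paper's.

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

using u64 = uint64_t;
static std::mt19937_64 rng(7);

int main() {
    const std::size_t n = 5;
    std::vector<u64> x = {10, 20, 30, 40, 50};     // client's feature vector
    // Element-wise additive shares of x, and index shares i0 + i1 = sigma (mod n).
    std::vector<u64> x0(n), x1(n);
    for (std::size_t k = 0; k < n; ++k) { x0[k] = rng(); x1[k] = x[k] - x0[k]; }
    std::size_t sigma = 3, i0 = 1, i1 = (sigma - i0 + n) % n;

    // Each server shifts indices by r_i and masks entries with s_i.
    std::size_t r0 = rng() % n, r1 = rng() % n;
    u64 s0 = rng(), s1 = rng();
    std::vector<u64> A0(n), A1(n);
    for (std::size_t k = 0; k < n; ++k) {
        A0[k] = x0[(k + n - r0) % n] + s0;         // A0[(sigma+r0)%n] = x0[sigma]+s0
        A1[k] = x1[(k + n - r1) % n] + s1;
    }

    // S1 learns the shifted index t1 = i0 + i1 + r0 (mod n) via blinding with
    // rho, then fetches A0[t1] by OT; S0 proceeds symmetrically for t0.
    std::size_t rho = rng() % n;
    std::size_t t1 = ((i1 + rho) + (i0 + r0) + n - rho) % n;  // = (sigma + r0) % n
    std::size_t t0 = ((i0 + rho) + (i1 + r1) + n - rho) % n;  // = (sigma + r1) % n
    u64 recv1 = A0[t1];                            // emulated OT: x0[sigma] + s0
    u64 recv0 = A1[t0];                            // emulated OT: x1[sigma] + s1

    // Re-sharing: S0 keeps a fresh random share u and sends the correction w.
    u64 u = rng();
    u64 w = recv0 - s0 - u;                        // S0 -> S1
    u64 share0 = u, share1 = recv1 - s1 + w;       // shares of x[sigma]
    std::cout << share0 + share1 << " == " << x[sigma] << "\n";  // both 40
}
```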
4.4 Secure Decision Node Evaluation
With the secret sharings of the threshold $y_j$ and the selected feature $x_{\sigma(j)}$ at each decision node $j$, the cloud servers now perform secure decision node evaluation. As this basically requires secure comparison of secret-shared values, the prior work [12] transforms the problem of secure decision node evaluation into a simplified bit extraction problem in the secret sharing domain. The key idea is to securely extract the most significant bit (MSB) of the subtraction result $\delta = x_{\sigma(j)} - y_j$ as the evaluation result $b_j$ at decision node $j$.
Input: Secret sharings $\langle x_{\sigma(j)} \rangle$ and $\langle y_j \rangle$.
Output: Secret sharing $\langle b_j \rangle^B$ of the evaluation result.
1:
Each $S_i$ locally computes $\langle \delta \rangle_i = \langle x_{\sigma(j)} \rangle_i - \langle y_j \rangle_i$.
// Secure MSB extraction (with $\ell = 64$ assumed; $\langle \cdot \rangle^B$ denotes sharing over $\mathbb{Z}_2$)
2:
Let $u$ (resp. $v$) represent the share $\langle \delta \rangle_0$ (resp. $\langle \delta \rangle_1$), with the bit string being $u_{64} \cdots u_1$ (resp. $v_{64} \cdots v_1$).
Let $\langle u_k \rangle^B$ be defined as $(u_k, 0)$ and $\langle v_k \rangle^B$ as $(0, v_k)$, where $k \in [1, 64]$.
Also, let the carry generate and propagate signals be defined as $g_k = u_k \cdot v_k$ and $p_k = u_k \oplus v_k$.
// Setup round for secure carry computation (SCC):
3:
Compute $\langle g_k \rangle^B = \langle u_k \rangle^B \cdot \langle v_k \rangle^B$, for $k \in [1, 63]$ (parallel secret-shared multiplications).
4:
Compute $\langle p_k \rangle^B = \langle u_k \rangle^B \oplus \langle v_k \rangle^B$, for $k \in [1, 63]$ (local); set $(\langle G^{(0)}_k \rangle^B, \langle P^{(0)}_k \rangle^B) = (\langle g_k \rangle^B, \langle p_k \rangle^B)$.
// SCC round 1 (with $\ell = 64$ as the example):
5:
Compute $(\langle G^{(1)}_{32} \rangle^B, \langle P^{(1)}_{32} \rangle^B) = (\langle G^{(0)}_{63} \rangle^B, \langle P^{(0)}_{63} \rangle^B)$ (the unpaired topmost tuple passes through).
6:
For $k \in [1, 31]$:
(a)
Compute $(\langle G^{(1)}_k \rangle^B, \langle P^{(1)}_k \rangle^B) = (\langle G^{(0)}_{2k} \rangle^B, \langle P^{(0)}_{2k} \rangle^B) \circ^B (\langle G^{(0)}_{2k-1} \rangle^B, \langle P^{(0)}_{2k-1} \rangle^B)$.
// SCC round 2:
7:
For $k \in [1, 16]$:
(a)
Compute $(\langle G^{(2)}_k \rangle^B, \langle P^{(2)}_k \rangle^B) = (\langle G^{(1)}_{2k} \rangle^B, \langle P^{(1)}_{2k} \rangle^B) \circ^B (\langle G^{(1)}_{2k-1} \rangle^B, \langle P^{(1)}_{2k-1} \rangle^B)$.
// SCC round 3:
8:
For $k \in [1, 8]$:
(a)
Compute $(\langle G^{(3)}_k \rangle^B, \langle P^{(3)}_k \rangle^B) = (\langle G^{(2)}_{2k} \rangle^B, \langle P^{(2)}_{2k} \rangle^B) \circ^B (\langle G^{(2)}_{2k-1} \rangle^B, \langle P^{(2)}_{2k-1} \rangle^B)$.
// SCC round 4:
9:
For $k \in [1, 4]$:
(a)
Compute $(\langle G^{(4)}_k \rangle^B, \langle P^{(4)}_k \rangle^B) = (\langle G^{(3)}_{2k} \rangle^B, \langle P^{(3)}_{2k} \rangle^B) \circ^B (\langle G^{(3)}_{2k-1} \rangle^B, \langle P^{(3)}_{2k-1} \rangle^B)$.
// SCC round 5:
10:
For $k \in [1, 2]$:
(a)
Compute $(\langle G^{(5)}_k \rangle^B, \langle P^{(5)}_k \rangle^B) = (\langle G^{(4)}_{2k} \rangle^B, \langle P^{(4)}_{2k} \rangle^B) \circ^B (\langle G^{(4)}_{2k-1} \rangle^B, \langle P^{(4)}_{2k-1} \rangle^B)$.
// SCC round 6:
11:
Compute $(\langle G^{(6)}_1 \rangle^B, \langle P^{(6)}_1 \rangle^B) = (\langle G^{(5)}_2 \rangle^B, \langle P^{(5)}_2 \rangle^B) \circ^B (\langle G^{(5)}_1 \rangle^B, \langle P^{(5)}_1 \rangle^B)$; set $\langle c_{63} \rangle^B = \langle G^{(6)}_1 \rangle^B$.
12:
Compute $\langle b_j \rangle^B = \langle u_{64} \rangle^B \oplus \langle v_{64} \rangle^B \oplus \langle c_{63} \rangle^B$ (local).
Despite the effectiveness, their solution is limited in that it poses $O(\ell)$ round complexity. This would lead to high performance overhead in the realistic scenario where the two cloud servers are situated in different geographic regions and connected over a WAN, which is a more reasonable setting than local networks given that the two cloud servers are assumed to be non-colluding [13]. The basic idea in [12] is to implement an $\ell$-bit full adder logic in the secret sharing domain. Specifically, the two shares of the difference value $\delta$ at the two cloud servers are represented in bitwise form. Then, the $\ell$-bit full adder logic is applied to add, in the secret sharing domain, the two binary inputs in a bitwise manner, where carry bits are calculated and propagated, finally producing the MSB of the difference value $\delta$. We note that the work [12] uses a classical and standard adder logic called the ripple carry adder, as shown in Fig. 4.
In the ripple carry adder, for each full adder, the two bits to be added are available instantly. However, each full adder has to wait for the carry input to arrive from its predecessor. This means that the carry input of the full adder producing the MSB is available only after the carry has rippled through all previous full adders. Note that computing the carry output of each full adder in the secret sharing domain requires interaction between the two cloud servers, thus leading to $O(\ell)$ round complexity.
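A plaintext illustration of this sequential dependence is given below: each carry needs its predecessor, so the secret-shared counterpart of each loop iteration costs one round of interaction (63 iterations for 64-bit values).

```cpp
#include <cstdint>
#include <iostream>
#include <random>

int main() {
    std::mt19937_64 rng(3);
    uint64_t u = rng(), v = rng();   // the two shares of delta = u + v (mod 2^64)
    int c = 0;                       // no initial carry-in: c_0 = 0
    for (int k = 0; k < 63; ++k) {   // bits 1..63 in the paper's 1-based indexing
        int uk = (int)((u >> k) & 1), vk = (int)((v >> k) & 1);
        c = (uk & vk) | ((uk ^ vk) & c);   // c_k = g_k | (p_k & c_{k-1})
    }
    int msb = (int)((u >> 63) & 1) ^ (int)((v >> 63) & 1) ^ c;
    std::cout << msb << " == " << (((u + v) >> 63) & 1) << "\n";
}
```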
Our design follows [12] in adopting the same strategy of secure MSB extraction for secure decision node evaluation, yet aims to reduce the round complexity. Through the design introduced below, we manage to reduce the round complexity from linear to logarithmic. Our observation is that the use of the more advanced carry look-ahead adder can solve the carry delay problem present in the ripple carry adder [14]. At a high level, a carry look-ahead adder is able to calculate a carry in advance based only on the input bits. It works as follows. Firstly, two terms are defined for the carry look-ahead adder: the carry generate signal $g_k = u_k \cdot v_k$ and the carry propagate signal $p_k = u_k \oplus v_k$, where $u_k$ and $v_k$ denote the $k$-th bits of the two inputs. Note that these two terms depend only on the input bits and can be computed instantly given the input bits. Then, the original carry calculation $c_k = g_k \oplus (p_k \cdot c_{k-1})$, as in the ripple carry adder, can be re-formulated by expanding the recursion. Such re-formulation allows a carry to be computed without waiting for the carry to ripple through all previous stages, as demonstrated by the following example (a 4-bit carry look-ahead adder):
1. $c_1 = g_1 \oplus (p_1 \cdot c_0)$;
2. $c_2 = g_2 \oplus (p_2 \cdot g_1) \oplus (p_2 \cdot p_1 \cdot c_0)$;
3. $c_3 = g_3 \oplus (p_3 \cdot g_2) \oplus (p_3 \cdot p_2 \cdot g_1) \oplus (p_3 \cdot p_2 \cdot p_1 \cdot c_0)$;
4. $c_4 = g_4 \oplus (p_4 \cdot g_3) \oplus (p_4 \cdot p_3 \cdot g_2) \oplus (p_4 \cdot p_3 \cdot p_2 \cdot g_1) \oplus (p_4 \cdot p_3 \cdot p_2 \cdot p_1 \cdot c_0)$.
It can be seen that each carry can be computed without waiting for the calculation of all previous carries.
Input: Secret sharing $\langle b_j \rangle^B$ for each decision node $j$ and $\langle v_k \rangle$ for each leaf node $k$. Output: Inference result $v^*$ for the client. 1: For each $j \in [1, m]$, // Conversion from $\langle b_j \rangle^B$ over $\mathbb{Z}_2$ to $\langle b_j \rangle$ over $\mathbb{Z}_{2^\ell}$. (a) Let $\beta_0$ be defined as $\langle b_j \rangle^B_0$ and $\beta_1$ as $\langle b_j \rangle^B_1$, so that $b_j = \beta_0 \oplus \beta_1$. (b) Let $\langle \beta_0 \rangle$ over $\mathbb{Z}_{2^\ell}$ be defined as $(\beta_0, 0)$ and $\langle \beta_1 \rangle$ as $(0, \beta_1)$. (c) $S_0$ and $S_1$ compute $\langle b_j \rangle = \langle \beta_0 \rangle + \langle \beta_1 \rangle - 2 \cdot \langle \beta_0 \rangle \cdot \langle \beta_1 \rangle$ in $\mathbb{Z}_{2^\ell}$. 2: For each decision node $j$, (a) $S_0$ sets $\langle e_j^0 \rangle_0 = 1 - \langle b_j \rangle_0$ and $S_1$ sets $\langle e_j^0 \rangle_1 = -\langle b_j \rangle_1$. This produces the secret-shared value $\langle e_j^0 \rangle$ ($= \langle 1 - b_j \rangle$) of the left outgoing edge. (b) Each $S_i$ sets $\langle e_j^1 \rangle_i = \langle b_j \rangle_i$ as its share of the secret-shared value of the right outgoing edge. 3: For each leaf node $k$, $S_0$ and $S_1$ compute a secret-shared polynomial term $\langle t_k \rangle$ by multiplying the secret-shared values of all edges on its path. 4: $S_0$ and $S_1$ compute $\langle v^* \rangle = \sum_{k=1}^{s} \langle t_k \rangle \cdot \langle v_k \rangle$, i.e., the secret sharing of the inference result $v^*$. 5: The client can retrieve $\langle v^* \rangle_0$ and $\langle v^* \rangle_1$ and reconstruct $v^* = \langle v^* \rangle_0 + \langle v^* \rangle_1$ as the inference result for the feature vector $\mathbf{x}$.
With the application of the carry look-ahead adder for MSB computation in secure decision node evaluation, we only need to focus on the calculation of the carry $c_{\ell-1}$ into the most significant bit position (e.g., $c_{63}$ for $\ell = 64$). That is, after computing $c_{\ell-1}$, the MSB can be derived via $\mathrm{MSB}(\delta) = u_{\ell} \oplus v_{\ell} \oplus c_{\ell-1}$. Note that there is no initial carry-in, i.e., $c_0 = 0$. We now need to consider how to properly organize the computation of the carry $c_{\ell-1}$ so that we could effectively achieve $O(\log \ell)$ round complexity. Our observation is that such a computation can be supported by forming a binary tree over the carry generate terms, the carry propagate terms, and a binary operator $\circ$ (illustrated in Fig. 5(a)) defined as: $(g, p) \circ (g', p') = (g \oplus (p \cdot g'), p \cdot p')$, where $(g, p)$ is the more significant pair and $(g', p')$ the less significant one. For demonstration of this idea, we show in Fig. 5(b) an example of how the carry bit essential for MSB computation can be computed in a recursive manner based on a tree structure. As shown, a pair of the carry generate and propagate terms is put as a leaf node of the tree. Then, the processing is done upwards, attributing to each internal node the value corresponding to the application of the operator $\circ$ between its two children. Such processing leads to $O(\log \ell)$ rounds in computing the essential carry $c_{\ell-1}$.
For simplicity of illustration, we show here the details of computing a carry over four generate/propagate pairs to concretely demonstrate the computation. In the first round, the following terms are computed: $(G_{4:3}, P_{4:3}) = (g_4, p_4) \circ (g_3, p_3)$ and $(G_{2:1}, P_{2:1}) = (g_2, p_2) \circ (g_1, p_1)$. In the second round, the following term is computed: $(G_{4:1}, P_{4:1}) = (G_{4:3}, P_{4:3}) \circ (G_{2:1}, P_{2:1})$. Based on the definition of the binary operator $\circ$, we first have $G_{4:3} = g_4 \oplus (p_4 \cdot g_3)$ and $P_{4:3} = p_4 \cdot p_3$ (and similarly for $G_{2:1}$ and $P_{2:1}$). Then, we have $G_{4:1} = G_{4:3} \oplus (P_{4:3} \cdot G_{2:1}) = g_4 \oplus (p_4 \cdot g_3) \oplus (p_4 \cdot p_3 \cdot g_2) \oplus (p_4 \cdot p_3 \cdot p_2 \cdot g_1)$, which corresponds to the carry $c_4$ (recall that $c_0 = 0$).
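The following plaintext sketch checks this logic end-to-end for $\ell = 64$: it folds the 63 $(g_k, p_k)$ pairs with the operator $\circ$ in a binary tree (six log-depth rounds, with any unpaired topmost tuple passing through) and verifies that $u_{64} \oplus v_{64} \oplus c_{63}$ equals the MSB of $u + v \bmod 2^{64}$.

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

using u64 = uint64_t;

int main() {
    std::mt19937_64 rng(1);
    u64 u = rng(), v = rng();
    u64 delta = u + v;                              // mod 2^64

    // Leaf pairs: g_k = u_k & v_k, p_k = u_k ^ v_k; slot k holds the pair
    // for bit position k+1 of the paper's 1-based indexing.
    std::vector<std::pair<int,int>> gp(63);
    for (int k = 0; k < 63; ++k)
        gp[k] = { (int)((u >> k) & 1) & (int)((v >> k) & 1),
                  (int)((u >> k) & 1) ^ (int)((v >> k) & 1) };

    // Fold with (g, p) o (g', p') = (g ^ (p & g'), p & p'), the left operand
    // being the more significant pair; one tree level per round.
    while (gp.size() > 1) {
        std::vector<std::pair<int,int>> next;
        for (std::size_t k = 0; k + 1 < gp.size(); k += 2) {
            auto hi = gp[k + 1], lo = gp[k];        // hi covers the higher bits
            next.push_back({ hi.first ^ (hi.second & lo.first),
                             hi.second & lo.second });
        }
        if (gp.size() % 2) next.push_back(gp.back()); // unpaired tuple passes up
        std::swap(gp, next);
    }
    int c63 = gp[0].first;                          // carry into bit 64 (c_0 = 0)
    int msb = (int)((u >> 63) & 1) ^ (int)((v >> 63) & 1) ^ c63;
    std::cout << msb << " == " << ((delta >> 63) & 1) << "\n";
}
```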
With all the above insights in mind, we now elaborate on how to support secure decision node evaluation by taking advantage of the carry look-ahead adder in the ciphertext domain. From the definition of the operator $\circ$, it is noted that only addition (XOR) and multiplication (AND) are required (in $\mathbb{Z}_2$). So it is easy to see that this operator can be securely realized in the ciphertext domain via secret-shared addition and multiplication. We denote the secure realization of the operator as $\circ^B$, where $\langle \cdot \rangle^B$ denotes secret sharing in $\mathbb{Z}_2$. Note that each call of the operator $\circ^B$ needs two secret-shared multiplications, which can be executed in parallel within one round.
The details of secure decision node evaluation are provided in Fig. 6. It is noted that, for simplicity and without loss of generality, we demonstrate the procedure assuming $\ell = 64$, which is the practical parameter setting to be used in our experiments and is also consistent with the state-of-the-art work [12]. Also, to clearly show the computation that can be done in parallel in each round of secure carry computation, we intentionally avoid the use of nested loops. From the procedure shown in Fig. 6, we can see that for the practical setting $\ell = 64$, only 7 rounds (one setup round plus six SCC rounds) are required through our design for obtaining the secret sharing of the comparison result, in comparison with the linear number of rounds (more than 60) in the state-of-the-art work [12].
We also point out that the improvement in communication rounds does not sacrifice computation efficiency in terms of the number of multiplications (in $\mathbb{Z}_2$). Through analysis, our new design requires a number of secret-shared multiplications linear in $\ell$, on the same order as the prior work [12]. We remark that although in principle the carry look-ahead adder has higher circuit complexity than the ripple carry adder when the computation of all carries is required, our design only needs the computation of the essential carry $c_{\ell-1}$. This accounts for why our new design does not enlarge the computation cost in secure decision node evaluation compared with [12].
4.5 Secure Inference Generation
With the secret-shared evaluation results available at each decision node, we now describe how to leverage them to enable the two cloud servers to generate the encrypted inference result. We note that there are two approaches to using the decision node evaluation results [12]: a path cost-based approach and a polynomial-based approach. From the perspective of client cost, the main difference between these approaches is that the path cost-based approach imposes on the client communication complexity scaling exponentially with the tree depth, while the polynomial-based approach only incurs constant and minimal communication cost (just two shares) for the client.
Given that high local efficiency is our first priority, our system makes use of the polynomial-based approach. This approach works as follows. Starting from the root node, the left outgoing edge of each decision node $j$ is assigned the value $1 - b_j$, while the right outgoing edge is assigned the value $b_j$. Then, a term $t_k$ is computed for each path by multiplying the edge values of that path. As mentioned before, only the term for the path leading to the leaf node carrying the inference result will have the value $1$, and all other terms are $0$. Then, we can proceed by multiplying each term with the prediction value of the corresponding leaf node and computing the sum, i.e., $v^* = \sum_{k=1}^{s} t_k \cdot v_k$, which yields the expected inference result $v^*$.
We use the decision tree in Fig. 1 as an example to concretely demonstrate the computation. Firstly, there are four terms, given that the depth is $2$ and there are thus four paths/leaf nodes. These terms are computed as follows: $t_1 = (1 - b_1)(1 - b_2)$, $t_2 = (1 - b_1) \cdot b_2$, $t_3 = b_1 \cdot (1 - b_3)$, and $t_4 = b_1 \cdot b_3$, where $b_1$ is the result at the root node and $b_2$ (resp. $b_3$) the result at its left (resp. right) child. Suppose that the feature vector is evaluated along the path to the leaf node carrying $v_2$. We have $b_1 = 0$ and $b_2 = 1$, and so we have $t_1 = 0$, $t_2 = 1$, $t_3 = 0$, and $t_4 = 0$. It can be seen that all the terms except the term $t_2$ corresponding to $v_2$ have zero value. So we have $\sum_k t_k \cdot v_k = v_2$, obtaining the expected inference result. The secure realization of this approach in our system for secure inference generation basically follows that of [12]. For completeness, we give the details of secure inference generation in Fig. 7, which realizes the polynomial mechanism introduced above in the secret sharing domain.
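For concreteness, the following plaintext sketch reproduces this worked example (the $b$-values and leaf predictions are illustrative), computing the four path terms and the weighted sum.

```cpp
#include <iostream>

int main() {
    // b1 at the root, b2 at its left child, b3 at its right child.
    int b1 = 0, b2 = 1, b3 = 0;          // evaluation along the path of v2
    int v[4] = {11, 22, 33, 44};         // leaf predictions v1..v4

    int t1 = (1 - b1) * (1 - b2);        // left, left
    int t2 = (1 - b1) * b2;              // left, right -> equals 1 here
    int t3 = b1 * (1 - b3);              // right, left
    int t4 = b1 * b3;                    // right, right

    int result = t1 * v[0] + t2 * v[1] + t3 * v[2] + t4 * v[3];
    std::cout << "inference result: " << result << "\n";  // 22 = v2
}
```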
5 Security Analysis
We define and prove the security of our protocol following the standard simulation-based paradigm. We start with defining the ideal functionality $\mathcal{F}$, which captures the desired security properties for outsourced decision tree inference with regard to the threat model mentioned above. We then give the formal security definition under $\mathcal{F}$ and show that our protocol securely realizes $\mathcal{F}$. In what follows, we define the ideal functionality $\mathcal{F}$ for the secure outsourced decision tree inference service targeted in this paper.
Definition 1.
The ideal functionality $\mathcal{F}$ of the outsourced decision tree inference service is formulated as follows.

- Input. The input to $\mathcal{F}$ consists of the decision tree $\mathcal{T}$ from the provider and the feature vector $\mathbf{x}$ from the client. The two cloud servers $S_0$ and $S_1$ provide no input to $\mathcal{F}$.
- Computation. Upon receiving the above input, $\mathcal{F}$ performs decision tree inference and produces the inference result, denoted as $v^*$.
- Output. $\mathcal{F}$ outputs the inference result $v^*$ to the client, and outputs nothing to the provider and the cloud servers.
Definition 2.
A protocol $\Pi$ securely realizes $\mathcal{F}$ in the semi-honest adversary setting with static corruption if the following guarantees are satisfied:

- Corrupted provider. A corrupted and semi-honest provider learns nothing about the values in the client's feature vector $\mathbf{x}$. Formally, a probabilistic polynomial time (PPT) simulator $\mathsf{Sim}_{\mathcal{P}}$ should exist so that $\mathsf{Sim}_{\mathcal{P}}(\mathcal{T}) \overset{c}{\approx} \mathsf{View}^{\Pi}_{\mathcal{P}}$, where $\mathcal{P}$ denotes the provider and $\mathsf{View}^{\Pi}_{\mathcal{P}}$ refers to the view of $\mathcal{P}$ in the real-world execution of the protocol $\Pi$.
- Corrupted cloud server. A corrupted and semi-honest cloud server $S_i$ ($i \in \{0, 1\}$) learns no information about the client's feature vector $\mathbf{x}$ and the provider's decision tree $\mathcal{T}$. Formally, a PPT simulator $\mathsf{Sim}_{S_i}$ should exist such that $\mathsf{Sim}_{S_i} \overset{c}{\approx} \mathsf{View}^{\Pi}_{S_i}$, where $\mathsf{View}^{\Pi}_{S_i}$ denotes the view of the cloud server $S_i$ in the real-world execution of the protocol $\Pi$. Note that the two cloud servers have no input and output according to $\mathcal{F}$. Since they are non-colluding, $S_0$ and $S_1$ cannot be corrupted by the adversary at the same time.
- Corrupted client. A corrupted and semi-honest client learns no information about the provider's decision tree $\mathcal{T}$ other than the generic meta-parameters as stated before. Formally, a PPT simulator $\mathsf{Sim}_{\mathcal{C}}$ should exist such that $\mathsf{Sim}_{\mathcal{C}}(\mathbf{x}, v^*) \overset{c}{\approx} \mathsf{View}^{\Pi}_{\mathcal{C}}$, where $\mathcal{C}$ denotes the client, and $\mathsf{View}^{\Pi}_{\mathcal{C}}$ refers to the view of $\mathcal{C}$ in the real-world execution of the protocol $\Pi$.
Theorem 1.
Our protocol $\Pi$ is a secure realization of the ideal functionality $\mathcal{F}$ according to Definition 2.
Proof.
As per the security definition, we show the existence of a simulator for different corrupted parties (the provider, the client, and either of the cloud servers).
- Simulator for the corrupted provider: In the protocol $\Pi$, the provider only needs to supply the secret shares of the decision tree $\mathcal{T}$ and receives no messages. So the simulator $\mathsf{Sim}_{\mathcal{P}}$ for the corrupted provider can be constructed in a dummy way by just outputting the input of the provider. The output of $\mathsf{Sim}_{\mathcal{P}}$ is identically distributed to the view of the corrupted provider.
- Simulator for a corrupted cloud server: As the two cloud servers have a symmetric role in our protocol $\Pi$, it suffices to show a simulator for $S_1$. It is noted that the input/output of $S_1$ in our protocol are just secret shares of some data. The security of additive secret sharing ensures that these secret shares are purely random and can be perfectly simulated by using random values. The interactions between the two cloud servers in different phases are in fact due to the calls of the oblivious array-entry read procedure (only used in the secure feature selection phase) and the secret-shared multiplication procedure based on Beaver triples. Let $\mathsf{Sim}^{oar}$ and $\mathsf{Sim}^{mult}$ denote the corresponding simulators which can simulate views indistinguishable from the real views of $S_1$ in the oblivious array-entry read procedure and the secret-shared multiplication procedure, respectively. It is noted that the existence of these two simulators has been proven in prior work. With the existence of these simulators, $\mathsf{Sim}_{S_1}$ first runs $\mathsf{Sim}^{oar}$ with random strings as input in the secure feature selection phase. Then, $\mathsf{Sim}_{S_1}$ sets the simulated output as the input to the subsequent phases. On each call of the secret-shared multiplication procedure, $\mathsf{Sim}_{S_1}$ runs $\mathsf{Sim}^{mult}$ in order. Finally, $\mathsf{Sim}_{S_1}$ combines and outputs in order the views simulated by $\mathsf{Sim}^{oar}$ and by $\mathsf{Sim}^{mult}$ on every secure multiplication as its output. This yields the final simulator for the cloud server $S_1$.
- Simulator for the corrupted client: In the protocol $\Pi$, the client supplies secret shares of the feature vector $\mathbf{x}$ and only receives the two shares $\langle v^* \rangle_0$ and $\langle v^* \rangle_1$ of the inference result, from which the plaintext inference result $v^*$ is reconstructed as the output of the client. The simulator $\mathsf{Sim}_{\mathcal{C}}$ thus only needs to simulate the messages (two shares) received by the client given her output $v^*$. It can set a random value as one of the shares, say $\langle v^* \rangle_0$, and $v^* - \langle v^* \rangle_0$ for the other share $\langle v^* \rangle_1$. This is in fact just a direct application of additive secret sharing, the security of which ensures that the simulated shares are identically distributed to the shares received by the client in the real execution. The combination of the two simulated shares also produces $v^*$, which is the same as the output in the real protocol execution and thus guarantees correctness. So the output of $\mathsf{Sim}_{\mathcal{C}}$ is identically distributed to the view of the corrupted client. The proof of Theorem 1 is completed.
∎
6 Experiments
6.1 Setup
Our protocol is implemented in C++. For the oblivious transfer primitive, we rely on the libOTe library [27], which provides an implementation of the protocol in [28]. Cloud-side experiments are conducted over two AWS t3.xlarge instances equipped with an Intel Xeon Platinum 8175M CPU (2.50 GHz) and 16 GB RAM: one in Europe (London) and one in US East (N. Virginia). These two instances are situated in different regions, so that the cloud servers communicate over a realistic WAN, simulating the real-world scenario where they reside in different trust domains. The provider and the client are evaluated on an AWS t2.xlarge instance possessing an Intel Xeon E5-2676 v3 processor (2.40 GHz) and 16 GB RAM. We test with synthetic decision trees of realistic configurations, following prior works [15, 16, 12]. The tree depth $d$ varies from 3 to 17, and the dimension $n$ of the feature vector varies from 9 to 57. We make comparison with the state-of-the-art prior work by Zheng et al. [12] (the ZDWWN protocol).
6.2 Local-side Performance Evaluation
We first evaluate the performance on the local side, i.e., at the provider and the client. Fig. 8 and Fig. 9 show the communication and computation costs of the provider for varying decision trees, along with a comparison with the ZDWWN protocol. As the provider in our design just constructs an index vector of size $m$ rather than a matrix of size $m \times n$ as in the ZDWWN protocol, he enjoys significant cost savings. Across the decision trees tested, both the communication cost and the running time of the provider are substantially lower in our system than in the ZDWWN protocol, in both communication and computation. In Table II, we give the communication and computation costs of the client. It is noted that the client has minimal costs, which only scale with the dimension $n$ of the feature vector. Recall that the client in our system has the same cost as in the ZDWWN protocol, given that the polynomial-based mechanism is used in the phase of secure inference generation, which allows the client to receive only the two shares of the inference result.
TABLE II: The client's computation and communication costs.

| $d$ | $n$ | Computation (ms) | Communication (KB) |
|---|---|---|---|
| 3 | 13 | 0.0057 | 0.22 |
| 4 | 15 | 0.0060 | 0.25 |
| 8 | 9 | 0.0054 | 0.16 |
| 13 | 13 | 0.0056 | 0.22 |
| 17 | 57 | 0.0098 | 0.91 |
6.3 Cloud-side Performance Evaluation
We now examine the performance on the cloud side. Firstly, we show in Fig. 10 the amount of data transferred between the cloud servers in our system and make a comparison with the ZDWWN protocol. Our design has relatively higher communication cost than the ZDWWN protocol, which is mainly due to the use of OT in the secure feature selection phase and the secret-shared multiplications in the secure inference generation phase. We emphasize that such overhead in these two phases is the trade-off for the substantial efficiency improvement on the provider side, which is the first priority in our design philosophy.
Significantly reduced overall online cloud service latency. On the other hand, it is noted that the overall end-to-end online inference latency in our system is still much lower than that of the ZDWWN protocol, as shown in Fig. 11. This means that, compared with the ZDWWN protocol, our system provides a much better service experience for the client and fits much better into the practical realm due to its capability of giving faster responses. Such efficiency is attributed to the significant optimization of the secure decision node evaluation phase, where the round complexity is reduced from linear (as in the ZDWWN protocol) to logarithmic. A breakdown of the overall cloud-side online inference latency is given in Table III.
7 Related Work
There has been some work on secure decision tree inference [29, 15, 16, 18, 17, 30]. Most prior works [29, 15, 16, 17, 18, 30] focus on the non-outsourcing setting, where a customized protocol is designed to run between the provider and the client. For example, in [16], to achieve secure feature selection, the client sends to the provider the ciphertext of the feature vector under homomorphic encryption, and the provider then directly selects the ciphertext of each feature for each decision node based on his plaintext selection mapping. Whether these protocols can be effectively adapted to the outsourcing setting remains largely unclear, since the outsourced service requires operations to be conducted over the encrypted decision tree and feature vector from the very beginning, and also raises more design considerations for security and functionality. Moreover, many protocols make use of heavy cryptographic tools (e.g., fully/partially homomorphic encryption, garbled circuits, and ORAM) in the latency-sensitive online interactions. Although the protocol in [18] uses secret sharing, it requires the decision tree as well as the feature vector to be provided in binary representation from the very beginning, with all the secure processing inefficiently conducted at the bit level. So their protocol is also not directly adaptable for efficient secure outsourcing.
Very recently, the work [12] presented the first design tailored for secure outsourcing of decision tree inference, which runs under the two-server model and only makes use of additive secret sharing to securely realize the various components for the online execution of the service. As an initial attempt, however, its performance is yet to be optimized. Our new highly efficient design presents significant optimizations which largely improve the overall online end-to-end latency of the secure inference service provided by the cloud, as well as the provider's performance.
TABLE III: Breakdown of the cloud-side online inference latency.

| $d$ | $n$ | Feature Selection (ZDWWN) | Feature Selection (Ours) | Node Evaluation (ZDWWN) | Node Evaluation (Ours) | Inference Generation (ZDWWN) | Inference Generation (Ours) |
|---|---|---|---|---|---|---|---|
| 3 | 13 | 0.151 | 0.527 | 9.38 | 0.529 | 0.154 | 0.154 |
| 4 | 15 | 0.156 | 0.529 | 9.454 | 0.53 | 0.306 | 0.306 |
| 8 | 9 | 0.167 | 0.533 | 9.456 | 0.532 | 0.383 | 0.383 |
| 13 | 13 | 0.393 | 0.91 | 10.03 | 1.735 | 0.69 | 0.69 |
| 17 | 57 | 19.376 | 21.006 | 11.669 | 4.281 | 1.785 | 1.785 |
Our work is also related to the line of work (e.g., [29, 31, 32, 22, 33], to just list a few) on securely evaluating other machine learning models, such as hyperplane decision [29], Naïve Bayes [29], and neural networks [31, 32, 22, 33]. The common blueprint therein is to build specialized protocols tailored for the specific computation required by different models through different cryptographic techniques. For example, the work of Liu et al. [31] supports secure neural network evaluation using secret sharing and garbled circuits; the work of Juvekar et al. [32] relies on highly customized use of homomorphic encryption and garbled circuits to support low latency in secure neural network evaluation. Most of these works operate under the non-outsourcing setting and aim to protect privacy for the model and the client input. There are some works [22, 34] also operating under the two-server model as in this work, with tailored support for secure evaluation of models such as linear regression, logistic regression, and neural networks. Some recent efforts have also been presented on secure machine learning under the three-server [35, 36]/four-server [37] model (for models other than decision trees), where three/four servers have to engage in the online interactions.
8 Conclusion
In this paper, we design, implement, and evaluate a new system that allows highly efficient secure outsourcing of decision tree inference. Through the synergy of several delicate optimizations, which securely shift most of the provider's workload to the cloud and reduce the communication round complexity between the cloud servers, our system significantly improves upon the state-of-the-art prior work. Extensive experiments demonstrate that, compared with the state-of-the-art, our new system achieves substantially better online end-to-end inference latency between the cloud servers over a realistic WAN, and allows the provider to enjoy significant savings in communication and computation costs.
References
- [1] A. T. Azar and S. M. El-Metwally, “Decision tree classifiers for automated medical diagnosis,” Neural Computing and Applications, vol. 23, no. 7-8, pp. 2387–2403, 2013.
- [2] M. W. Libbrecht and W. S. Noble, “Machine learning applications in genetics and genomics,” Nature Reviews Genetics, vol. 16, no. 6, pp. 321–332, 2015.
- [3] F. Wang, H. Zhu, R. Lu, Y. Zheng, and H. Li, “Achieve efficient and privacy-preserving disease risk assessment over multi-outsourced vertical datasets,” IEEE Trans. Dependable Secure Comput., pp. 1–1, 2020.
- [4] B. W. Yap, S. Ong, and N. H. M. Husain, “Using data mining to improve assessment of credit worthiness via credit scoring models,” Expert Syst. Appl., vol. 38, no. 10, pp. 13 274–13 283, 2011.
- [5] D. Delen, C. Kuzey, and A. Uyar, “Measuring firm performance using financial ratios: A decision tree approach,” Expert Syst. Appl., vol. 40, no. 10, pp. 3970–3983, 2013.
- [6] S. S. S. Sindhu, S. Geetha, and A. Kannan, “Decision tree based light weight intrusion detection using a wrapper approach,” Expert Syst. Appl., vol. 39, no. 1, pp. 129–141, 2012.
- [7] N. B. Amor, S. Benferhat, and Z. Elouedi, “Naive Bayes vs decision trees in intrusion detection systems,” in Proc. of ACM SAC, 2004.
- [8] Microsoft Azure, “Deploy models with Azure Machine Learning,” https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where, 2019, [Online].
- [9] Google Cloud, “Deploying your model,” https://cloud.google.com/vision/automl/object-detection/docs/deploy, 2019, [Online].
- [10] AWS, “Deploy a Model on Amazon SageMaker Hosting Services,” https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html, 2019, [Online].
- [11] J. Liang, Z. Qin, S. Xiao, L. Ou, and X. Lin, “Efficient and secure decision tree classification for cloud-assisted online diagnosis services,” IEEE Trans. Dependable and Secure Computing, 2019, doi: 10.1109/TDSC.2019.2922958.
- [12] Y. Zheng, H. Duan, C. Wang, R. Wang, and S. Nepal, “Securely and efficiently outsourcing decision tree inference,” IEEE Trans. Dependable Secure Comput., pp. 1–1, 2020.
- [13] W. Chen and R. A. Popa, “Metal: A metadata-hiding file sharing system,” in Proc. of NDSS, 2020.
- [14] D. Harris, “A taxonomy of parallel prefix networks,” in Proc. of Asilomar Conference on Signals, Systems & Computers, 2003.
- [15] D. J. Wu, T. Feng, M. Naehrig, and K. E. Lauter, “Privately evaluating decision trees and random forests,” PoPETs, vol. 2016, no. 4, pp. 335–355, 2016.
- [16] R. K. H. Tai, J. P. K. Ma, Y. Zhao, and S. S. M. Chow, “Privacy-preserving decision trees evaluation via linear functions,” in Proc. of ESORICS, 2017.
- [17] A. Tueno, F. Kerschbaum, and S. Katzenbeisser, “Private evaluation of decision trees using sublinear cost,” PoPETs, vol. 2019, no. 1, pp. 266–286, 2019.
- [18] M. D. Cock, R. Dowsley, C. Horst, R. Katti, A. Nascimento, W. Poon, and S. Truex, “Efficient and private scoring of decision trees, support vector machines and logistic regression models based on pre-computation,” IEEE Trans. Dependable Secure Comput., DOI: 10.1109/TDSC.2017.2679189, 2017.
- [19] D. Beaver, “Efficient multiparty protocols using circuit randomization,” in Proc. of CRYPTO, 1991.
- [20] H. Corrigan-Gibbs and D. Boneh, “Prio: Private, robust, and scalable computation of aggregate statistics,” in Poc. of USENIX NSDI, 2017, pp. 259–282.
- [21] Q. Wang, J. Wang, S. Hu, Q. Zou, and K. Ren, “Sechog: Privacy-preserving outsourcing computation of histogram of oriented gradients in the cloud,” in Proc. of ACM AsiaCCS, 2016.
- [22] P. Mohassel and Y. Zhang, “Secureml: A system for scalable privacy-preserving machine learning,” in Proc. of IEEE S&P, 2017.
- [23] N. Agrawal, A. S. Shamsabadi, M. J. Kusner, and A. Gascón, “QUOTIENT: two-party secure neural network training and prediction,” in Proc. of ACM CCS, 2019.
- [24] Y. Zheng, H. Duan, and C. Wang, “Learning the truth privately and confidently: Encrypted confidence-aware truth discovery in mobile crowdsensing,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 10, pp. 2475–2489, 2018.
- [25] M. S. Riazi, C. Weinert, O. Tkachenko, E. M. Songhori, T. Schneider, and F. Koushanfar, “Chameleon: A hybrid secure computation framework for machine learning applications,” in Proc. of AsiaCCS, 2018.
- [26] M. Curran, X. Liang, H. Gupta, O. Pandey, and S. R. Das, “Procsa: Protecting privacy in crowdsourced spectrum allocation,” in Proc. of ESORICS, 2019.
- [27] P. Rindal, “libOTe: an efficient, portable, and easy to use Oblivious Transfer Library,” https://github.com/osu-crypto/libOTe.
- [28] V. Kolesnikov, R. Kumaresan, M. Rosulek, and N. Trieu, “Efficient batched oblivious PRF with applications to private set intersection,” in Proc. of ACM CCS, 2016.
- [29] R. Bost, R. A. Popa, S. Tu, and S. Goldwasser, “Machine learning classification over encrypted data,” in Proc. of NDSS, 2015.
- [30] A. Tueno, Y. Boev, and F. Kerschbaum, “Non-interactive private decision tree evaluation,” in Proc. of DBSec, A. Singhal and J. Vaidya, Eds., 2020.
- [31] J. Liu, M. Juuti, Y. Lu, and N. Asokan, “Oblivious neural network predictions via minionn transformations,” in Proc. of ACM CCS, 2017.
- [32] C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan, “GAZELLE: A low latency framework for secure neural network inference,” in Proc. of USENIX Security Symposium, 2018.
- [33] P. Mishra, R. Lehmkuhl, A. Srinivasan, W. Zheng, and R. A. Popa, “Delphi: A cryptographic inference service for neural networks,” in Proc. of USENIX Security Symposium, 2020.
- [34] V. Nikolaenko, U. Weinsberg, S. Ioannidis, M. Joye, D. Boneh, and N. Taft, “Privacy-preserving ridge regression on hundreds of millions of records,” in Proc. of IEEE SP, 2013.
- [35] P. Mohassel and P. Rindal, “ABY3: A mixed protocol framework for machine learning,” in Proc. of ACM CCS, 2018.
- [36] S. Wagh, D. Gupta, and N. Chandran, “Securenn: 3-party secure computation for neural network training,” PoPETs, vol. 2019, no. 3, pp. 26–49, 2019.
- [37] H. Chaudhari, R. Rachuri, and A. Suresh, “Trident: Efficient 4pc framework for privacy preserving machine learning,” in Proc. of NDSS, 2020.