Adaptive Control of Differentially Private Linear Quadratic Systems
Abstract
In this paper we study the problem of regret minimization in reinforcement learning (RL) under differential privacy constraints. This work is motivated by the wide range of RL applications for providing personalized services, where privacy concerns are becoming paramount. In contrast to previous works, we take the first step towards non-tabular RL settings while providing a rigorous privacy guarantee. In particular, we consider the adaptive control of differentially private linear quadratic (LQ) systems. We develop the first private RL algorithm that attains a sub-linear regret while guaranteeing privacy protection. More importantly, the additional cost due to privacy scales only mildly with the given privacy parameters $(\epsilon, \delta)$. Along the way, we also provide a general procedure for adaptive control of LQ systems under changing regularizers, which not only generalizes previous non-private controls, but also serves as the basis for general private controls.
I Introduction
Reinforcement learning (RL) is a control-theoretic problem in which an agent adaptively learns to make sequential decisions in an unknown environment through trial and error. RL has shown significant success in delivering a wide variety of personalized services, including online news and advertisement recommendation [1], medical treatment design [2], natural language processing [3], and social robots [4]. In these applications, an RL agent improves its personalization algorithm by interacting with users to maximize the reward. In particular, in each round, the RL agent offers an action based on the user’s state, and then receives feedback from the user (i.e., state information, state transitions, rewards, etc.). This feedback is used by the agent to learn the unknown environment and improve its action-selection strategy.
However, in most practical scenarios, the feedback from the users often encodes their sensitive information. For example, in a personalized healthcare setting, the states of a patient include personal information such as age, gender, height, weight, state of the treatment, etc. Similarly, the states of a virtual keyboard user (e.g., a Google GBoard user) are the words and sentences she typed, which inevitably contain private information about the user. Another intriguing example is a social robot for second-language education of children, where the states include facial expressions and the rewards encode whether a child has passed a quiz. Users may not want any of this information to be inferred by others, which directly results in an increasing concern about privacy protection in personalized services. To be more specific, although a user might be willing to share her own information with the agent to obtain a better-tailored service, she would not want third parties to be able to infer her private information from the output of the learning algorithm. For example, in the healthcare application, we would like to ensure that an adversary with arbitrary side knowledge cannot infer a particular patient’s state from the treatments prescribed to her.
Differential privacy (DP) [5] has become a standard mechanism for designing interactive learning algorithms with a rigorous privacy guarantee for individual data. Most of the previous works on differentially private learning under partial feedback focus on the simpler bandit setting (i.e., with no state transitions) [6, 7, 8, 9, 10]. For the general RL problem, there are only a few works that consider differential privacy [11, 12, 13]. More importantly, only tabular environments with discrete states and actions are considered in these works. However, in the real-world applications mentioned above, the number of states and actions is often very large and can even be infinite. Over the years, efficient and provably optimal algorithms for reward maximization or, equivalently, regret minimization have been developed for various non-tabular environments (see, e.g., [14, 15, 16, 17, 18]). This directly motivates the following question: Is it possible to obtain the optimal reward while providing individual privacy guarantees in the non-tabular RL scenario?
In this paper, we take the first step to answer the aforementioned question by considering a particular non-tabular RL problem – adaptive control of linear quadratic (LQ) systems, in which the state transition is a linear function and the immediate reward (cost) is a quadratic function of the current state and action. In particular, our main contributions can be summarized as follows.
• First, we provide a general framework for adaptive control of LQ systems under changing regularizers using the optimism in the face of uncertainty (OFU) principle, which covers both extreme cases – non-private and fully private LQ control.
• We then develop the first private RL algorithm for regret minimization in LQ systems by adapting the binary counting mechanism to ensure differential privacy.
• In particular, we show that our algorithm satisfies joint differential privacy (JDP), which, informally, implies that sensitive information about a given user is protected even if an adversary has access to the actions prescribed to all other users.
• Finally, we prove that our algorithm achieves a sub-linear regret guarantee, in which the regret due to privacy scales with the privacy levels, implying that a high amount of privacy (small privacy parameters) comes at a high cost and vice versa.
II Preliminaries
II-A Stochastic Linear Quadratic Control
We consider the discrete-time episodic linear quadratic (LQ) control problem with $H$ time steps in every episode. Let $x_h \in \mathbb{R}^n$ be the state of the system, $u_h \in \mathbb{R}^d$ be the control and $c_h$ be the cost at time step $h$. An LQ problem is characterized by linear dynamics and a quadratic cost function
$x_{h+1} = A_* x_h + B_* u_h + w_h, \qquad c_h = x_h^\top Q x_h + u_h^\top R u_h,$   (1)
where $A_* \in \mathbb{R}^{n \times n}$, $B_* \in \mathbb{R}^{n \times d}$ are unknown matrices, and $Q \in \mathbb{R}^{n \times n}$, $R \in \mathbb{R}^{d \times d}$ are known positive definite (p.d.) matrices. The starting state $x_1$ is fixed (and can possibly be chosen by an adversary), and the system noise $w_h$ is zero-mean. We summarize the unknown parameters in $\Theta_* = [A_*, B_*]^\top \in \mathbb{R}^{(n+d) \times n}$.
The goal of the agent is to design a closed-loop control policy $\pi$ mapping states to controls that minimizes the expected cost
$V^{\pi}_h(x) = \mathbb{E}\Big[\sum_{h'=h}^{H} c_{h'} \,\Big|\, x_h = x\Big]$   (2)
for all $h \in \{1, \ldots, H\}$ and $x \in \mathbb{R}^n$. Here the expectation is over the random trajectory induced by the policy $\pi$ starting from state $x$ at time step $h$. From the standard theory for LQ control (e.g., [19]), the optimal policy has the form $u_h = K_h(\Theta_*)\, x_h$,
where the gain matrices are given by
$K_h(\Theta_*) = -\big(R + B_*^\top P_{h+1} B_*\big)^{-1} B_*^\top P_{h+1} A_*.$   (3)
Here the symmetric positive semidefinite matrices $P_h$ are defined recursively by the Riccati iteration
$P_h = Q + A_*^\top P_{h+1} A_* - A_*^\top P_{h+1} B_* \big(R + B_*^\top P_{h+1} B_*\big)^{-1} B_*^\top P_{h+1} A_*,$   (4)
with the terminal condition $P_{H+1} = 0$. The optimal cost is given by
$J^{*}(\Theta_*) = x_1^\top P_1 x_1 + \sum_{h=1}^{H-1} \mathbb{E}\big[w_h^\top P_{h+1} w_h\big].$   (5)
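To make the recursion concrete, the following minimal sketch (our own illustration in Python, not code from the paper) iterates (4) backwards to obtain the gain matrices (3), and evaluates the optimal expected cost (5) under an assumed known noise covariance; the covariance is an assumption made here only so that the expectation can be computed in closed form.

```python
import numpy as np

def riccati_gains(A, B, Q, R, H):
    """Backward Riccati iteration for a finite-horizon LQ problem.

    Returns cost-to-go matrices P[1..H+1] (with P[H+1] = 0) and gains K[1..H]
    such that the optimal control at step h is u_h = K[h] @ x_h, as in (3)-(4).
    """
    n = A.shape[0]
    P = [None] * (H + 2)              # 1-indexed; P[H+1] is the terminal condition
    K = [None] * (H + 1)
    P[H + 1] = np.zeros((n, n))
    for h in range(H, 0, -1):
        S = R + B.T @ P[h + 1] @ B
        K[h] = -np.linalg.solve(S, B.T @ P[h + 1] @ A)   # gain matrix (3)
        P[h] = Q + A.T @ P[h + 1] @ (A + B @ K[h])       # Riccati iteration (4)
    return P, K

def optimal_cost(P, x1, noise_cov, H):
    """Expected optimal cost (5) from a fixed starting state x1, assuming
    zero-mean noise with covariance noise_cov."""
    J = float(x1 @ P[1] @ x1)
    for h in range(1, H):
        J += float(np.trace(P[h + 1] @ noise_cov))
    return J

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, H = 3, 2, 10
    A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, d))
    Q, R = np.eye(n), np.eye(d)
    P, K = riccati_gains(A, B, Q, R, H)
    print("optimal expected cost:", optimal_cost(P, np.ones(n), 0.01 * np.eye(n), H))
```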
We let the agent play $K$ episodes and measure the performance by the cumulative regret. (In the following, we add the subscript $k$ to denote the variables of the $k$-th episode – state $x_{k,h}$, control $u_{k,h}$, noise $w_{k,h}$ and cost $c_{k,h}$.) In particular, if the true system dynamics are $\Theta_*$, the cumulative regret over the first $K$ episodes is given by
$\mathcal{R}(K) = \sum_{k=1}^{K} \big(V_1^{\pi_k}(x_1) - J^{*}(\Theta_*)\big),$   (6)
where $J^{*}(\Theta_*)$ is the (expected) cost of an optimal policy for an episode (computed via (5)), and $V_1^{\pi_k}(x_1)$ is the (expected) cost of the policy $\pi_k$ chosen at the start of episode $k$ (computed via (2)). We seek to attain a regret that is sub-linear in $K$, which ensures that the agent finds the optimal policy as $K \to \infty$. We end this section by presenting our assumptions on the LQ system (1), which are common in the LQ control literature [17].
Assumption 1 (Boundedness).
(a) The true system dynamics $\Theta_*$ is a member of a known bounded set $\mathcal{S}$. (b) There exist known constants bounding the norms of the system matrices $A_*$ and $B_*$ and the eigenvalues of the cost matrices $Q$ and $R$. (c) The states and controls remain uniformly bounded for all episodes and time steps. (d) The noise $w_{k,h}$, at any episode $k$ and step $h$, is (i) independent of all other randomness, (ii) zero-mean, and (iii) bounded (sub-Gaussian). (e) There exists a constant uniformly bounding the cost-to-go matrices $P_h(\Theta)$ over $\Theta \in \mathcal{S}$.
II-B Differential Privacy
We now formally define the notion of differential privacy in the context of episodic LQ control. We write $U_1, \ldots, U_K \in \mathcal{U}$ to denote a sequence of $K$ unique users participating in the private RL protocol with an RL agent $\mathcal{M}$, where $\mathcal{U}$ is the set of all users. Each user is identified by the state responses she gives to the controls chosen by the agent. We write $\mathcal{M}(U_1, \ldots, U_K)$ to denote the privatized controls chosen by the agent when interacting with the users $U_1, \ldots, U_K$. Informally, we will be interested in randomized algorithms such that knowledge of the output and of all but the $k$-th user does not reveal ‘much’ about $U_k$. We formalize this in the following definition, which is adapted from [20].
Definition 1 (Differential Privacy (DP)).
For any $\epsilon \ge 0$ and $\delta \in [0,1]$, an algorithm $\mathcal{M}$ is $(\epsilon, \delta)$-differentially private if, for all user sequences $U, U' \in \mathcal{U}^K$ differing on a single user and all subsets of controls $\mathcal{C}$, $\Pr\big[\mathcal{M}(U) \in \mathcal{C}\big] \le e^{\epsilon}\, \Pr\big[\mathcal{M}(U') \in \mathcal{C}\big] + \delta$.
We now relax this definition, motivated by the fact that the controls recommended to a given user are only observed by her. We consider joint differential privacy [21], which requires that, simultaneously for all $k$, the joint distribution of the controls sent to users other than $U_k$ does not change substantially upon changing the state responses of user $U_k$. To this end, we let $\mathcal{M}_{-k}(U)$ denote all the controls chosen by the agent excluding those recommended to $U_k$.
Definition 2 (Joint Differential Privacy (JDP)).
For any $\epsilon \ge 0$ and $\delta \in [0,1]$, an algorithm $\mathcal{M}$ is $(\epsilon, \delta)$-jointly differentially private if, for all $k$, all user sequences $U, U' \in \mathcal{U}^K$ differing on the $k$-th user and all subsets $\mathcal{C}$ of controls given to all but the $k$-th user, $\Pr\big[\mathcal{M}_{-k}(U) \in \mathcal{C}\big] \le e^{\epsilon}\, \Pr\big[\mathcal{M}_{-k}(U') \in \mathcal{C}\big] + \delta$.
This relaxation is necessary in our setting since knowledge of the controls recommended to user $U_k$ can reveal a lot of information about her state responses. It weakens the constraint of DP only in that the controls given specifically to $U_k$ may be sensitive to her state responses. However, it is still a very strong definition since it protects $U_k$ from any arbitrary collusion of other users against her, so long as she does not herself make the controls reported to her public.
In this work, we look for algorithms that are $(\epsilon, \delta)$-JDP, but we will build our algorithm upon standard DP mechanisms. Furthermore, to establish privacy, we will use a different relaxation called concentrated differential privacy (CDP) [22]. Roughly, a mechanism is CDP if its privacy loss has Gaussian tails. To this end, we let $\mathcal{M}$ be a mechanism taking as input a data stream and releasing an output from some range $\mathcal{Y}$.
Definition 3 (Concentrated Differential Privacy (CDP)).
For any $\rho \ge 0$, an algorithm $\mathcal{M}$ is $\rho$-zero-concentrated differentially private ($\rho$-zCDP) if, for all data streams $x, x'$ differing on a single entry and all $\alpha \in (1, \infty)$, $D_{\alpha}\big(\mathcal{M}(x)\,\|\,\mathcal{M}(x')\big) \le \rho\,\alpha$,
where $D_{\alpha}$ is the $\alpha$-Rényi divergence between the distributions of $\mathcal{M}(x)$ and $\mathcal{M}(x')$. (For two probability distributions $P$ and $Q$ with densities $p$ and $q$ on $\mathcal{Y}$, the $\alpha$-Rényi divergence is $D_{\alpha}(P\,\|\,Q) = \frac{1}{\alpha - 1}\log \int_{\mathcal{Y}} p(y)^{\alpha} q(y)^{1-\alpha}\, dy$.)
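As a concrete example of a CDP mechanism (an illustration of the definition, not part of the paper’s algorithm), the Gaussian mechanism below releases a bounded statistic with additive Gaussian noise: with $L_2$-sensitivity $\Delta$ and noise scale $\sigma$ it satisfies $(\Delta^2/2\sigma^2)$-zCDP, and $\rho$-zCDP can be converted to $(\rho + 2\sqrt{\rho\log(1/\delta)}, \delta)$-DP [22]. This is the building block behind the private counters of Section III-B.

```python
import numpy as np

def gaussian_mechanism(statistic, l2_sensitivity, rho, rng=None):
    """Release a numpy array under rho-zCDP via the Gaussian mechanism.

    A Gaussian mechanism with L2-sensitivity Delta and noise scale sigma is
    (Delta^2 / (2 sigma^2))-zCDP, so we set sigma = Delta / sqrt(2 * rho).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = l2_sensitivity / np.sqrt(2.0 * rho)
    return statistic + rng.normal(scale=sigma, size=statistic.shape)

def zcdp_to_dp_epsilon(rho, delta):
    """Standard conversion: rho-zCDP implies (rho + 2*sqrt(rho*log(1/delta)), delta)-DP."""
    return rho + 2.0 * np.sqrt(rho * np.log(1.0 / delta))

if __name__ == "__main__":
    stat = np.array([3.0, 1.0, 4.0])                       # some bounded statistic
    noisy = gaussian_mechanism(stat, l2_sensitivity=1.0, rho=0.1)
    print("noisy release:", noisy)
    print("implied epsilon at delta=1e-6:", zcdp_to_dp_epsilon(0.1, 1e-6))
```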
III OFU-Based Control
Our proposed private RL algorithm implements the optimism in the face of uncertainty (OFU) principle in LQ systems. As in [14], the key to implementing OFU-based control is a high-probability confidence set for the unknown parameter matrix $\Theta_*$.
III-A Adaptive Control with Changing Regularizers
We start with adaptive LQ control under changing regularizers. This not only allows us to generalize previous results for non-private control, but, more importantly, serves as the basis for the analysis of private control in the next section. We first define the following compact notation. For the state-control pair at step $h$ of episode $k$, i.e., $x_{k,h}$ and $u_{k,h}$, we write $z_{k,h} = [x_{k,h}^\top, u_{k,h}^\top]^\top$. For any episode $k$, we define the Gram matrix $V_k = \sum_{j<k}\sum_{h=1}^{H} z_{j,h} z_{j,h}^\top$ and the cross term $U_k = \sum_{j<k}\sum_{h=1}^{H} z_{j,h} x_{j,h+1}^\top$. For a matrix $A$ and a p.d. matrix $V$, we also define the norm $\|A\|_{V} = \sqrt{\operatorname{tr}(A^\top V A)}$. Now, at every episode $k$, we consider the ridge regression estimate of $\Theta_*$ w.r.t. a regularizing p.d. matrix $H_k$: $\hat{\Theta}_k = (V_k + H_k)^{-1} U_k$.
In contrast to the standard online LQ control [14], here the sequence of Gram matrices $V_k$ is perturbed by a sequence of regularizers $\{H_k\}$. In particular, when $H_k = \lambda I$, we recover the standard estimate of [14]. In addition, we also allow $U_k$ to be perturbed by a matrix $L_k$ at every episode $k$. Setting $\bar{V}_k = V_k + H_k$ and $\bar{U}_k = U_k + L_k$, we now define the estimate under changing regularizers $H_k$ and $L_k$ as
$\hat{\Theta}_k = \bar{V}_k^{-1} \bar{U}_k = (V_k + H_k)^{-1}(U_k + L_k).$   (7)
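For illustration, a minimal sketch of the estimate (7) is given below; the function and parameter names (regularized_estimate, H_reg, L_reg) are ours and the data are synthetic.

```python
import numpy as np

def regularized_estimate(Z, X_next, H_reg, L_reg):
    """Estimate Theta in the model x_{h+1} ~ Theta^T z_h under changing
    regularizers, as in (7): Theta_hat = (V + H_reg)^{-1} (U + L_reg),
    with Gram matrix V = sum_z z z^T and cross term U = sum_z z x_next^T.

    Z:      (N, n+d) array of stacked state-control vectors z
    X_next: (N, n)   array of the corresponding next states
    H_reg:  (n+d, n+d) p.d. regularizer added to V
    L_reg:  (n+d, n)   perturbation added to U
    """
    V = Z.T @ Z
    U = Z.T @ X_next
    return np.linalg.solve(V + H_reg, U + L_reg)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, N = 3, 2, 500
    Theta_true = rng.standard_normal((n + d, n))
    Z = rng.standard_normal((N, n + d))
    X_next = Z @ Theta_true + 0.1 * rng.standard_normal((N, n))
    est = regularized_estimate(Z, X_next, H_reg=np.eye(n + d), L_reg=np.zeros((n + d, n)))
    print("estimation error:", np.linalg.norm(est - Theta_true))
```

Setting H_reg to a multiple of the identity and L_reg to zero recovers the standard ridge estimate, matching the remark above.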
We make the following assumption on the sequences of regularizers $\{H_k\}$ and $\{L_k\}$.
Assumption 2 (Regularity).
For any $\delta \in (0, 1]$, there exist constants depending on $\delta$ such that, with probability at least $1-\delta$, for all episodes $k$, the eigenvalues of $H_k$ and the norm of $L_k$ are bounded in terms of these constants.
Lemma 1 (Concentration under changing regularizers).
Informally, Lemma 1 states that, under Assumptions 1 and 2, with probability at least $1-\delta$, the estimate (7) stays within a radius $\beta_k(\delta)$ of the true parameter $\Theta_*$ in the $\|\cdot\|_{\bar{V}_k}$ norm, simultaneously for all episodes $k$, where the radius depends on the constants of Assumptions 1 and 2. Lemma 1 helps us to introduce the following high-probability confidence set
$\mathcal{C}_k(\delta) = \big\{\Theta : \|\Theta - \hat{\Theta}_k\|_{\bar{V}_k} \le \beta_k(\delta)\big\}.$   (8)
We then search for an optimistic estimate $\tilde{\Theta}_k$ within this confidence region $\mathcal{C}_k(\delta)$, such that
$J^{*}(\tilde{\Theta}_k) \le \inf_{\Theta \in \mathcal{C}_k(\delta) \cap \mathcal{S}} J^{*}(\Theta),$   (9)
where $J^{*}(\Theta)$ is the optimal cost when the system dynamics are $\Theta$ (computed from (5)). With the estimate $\tilde{\Theta}_k$, the agent then chooses the policy $\pi_k$ that is optimal for $\tilde{\Theta}_k$ and selects the controls recommended by this policy,
$u_{k,h} = K_h(\tilde{\Theta}_k)\, x_{k,h}, \qquad h = 1, \ldots, H,$   (10)
where the gain matrices $K_h(\tilde{\Theta}_k)$ can be computed from (3). We call this procedure OFU-based control under changing regularizers; a sketch of the optimistic selection step is given below, and Theorem 1 bounds its regret.
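The sketch below illustrates the optimistic selection step (9) in our notation. It approximates the search over the confidence set by sampling candidates on the ellipsoid boundary and keeping the one with the smallest optimal cost; the paper’s procedure optimizes over the entire set, and all function names here are ours.

```python
import numpy as np

def lq_optimal_cost(A, B, Q, R, H, x1):
    """Noise-free part of the optimal cost from x1 via the backward Riccati iteration."""
    P = np.zeros_like(Q)
    for _ in range(H):
        S = R + B.T @ P @ B
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
    return float(x1 @ P @ x1)

def ofu_choice(Theta_hat, V_bar, beta, Q, R, H, x1, n_samples=200, seed=0):
    """Approximately optimistic parameter choice as in (9): sample candidates on the
    boundary of {Theta : ||Theta - Theta_hat||_{V_bar} <= beta}, where
    ||A||_V = sqrt(trace(A^T V A)), and return the one with the smallest optimal cost."""
    rng = np.random.default_rng(seed)
    n = Theta_hat.shape[1]

    def cost(Theta):
        A, B = Theta[:n, :].T, Theta[n:, :].T      # Theta^T = [A  B]
        return lq_optimal_cost(A, B, Q, R, H, x1)

    best, best_cost = Theta_hat, cost(Theta_hat)
    for _ in range(n_samples):
        G = rng.standard_normal(Theta_hat.shape)
        G *= beta / np.sqrt(np.trace(G.T @ V_bar @ G))  # place candidate on the ellipsoid boundary
        c = cost(Theta_hat + G)
        if c < best_cost:
            best, best_cost = Theta_hat + G, c
    return best

if __name__ == "__main__":
    n, d, H = 2, 1, 5
    Theta_hat = np.vstack([0.8 * np.eye(n), np.ones((d, n))])   # estimate of Theta = [A, B]^T
    Theta_opt = ofu_choice(Theta_hat, V_bar=10.0 * np.eye(n + d), beta=1.0,
                           Q=np.eye(n), R=np.eye(d), H=H, x1=np.ones(n))
    print("optimistic candidate:\n", Theta_opt)
```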
Theorem 1 (Regret under changing regularizers).
Informally, under Assumptions 1 and 2, with high probability, the cumulative regret (6) of this procedure is sub-linear in the number of episodes $K$, with an explicit bound depending on the confidence radius $\beta_K(\delta)$ and the constants of Assumptions 1 and 2.
Proof sketch. Inspired by [17], we first decompose the regret under the ‘good’ event that $\Theta_* \in \mathcal{C}_k(\delta)$ for every episode $k$ and that the states and controls remain bounded, which, by Assumption 1 and Lemma 1, holds with high probability. Under this event, the cumulative regret (6) can be decomposed into three terms: the first two are martingale difference terms driven by the noise, and the third captures the error of the optimistic model $\tilde{\Theta}_k$ relative to the true model $\Theta_*$. Here the cost-to-go matrices $P_h(\cdot)$ are given by (4), and the martingale structure is with respect to the filtration containing all the randomness present before each time step.
Now, we bound each term separately. The first two terms are bounded martingale difference sequences; therefore, by the Azuma–Hoeffding inequality, both are, with high probability, of order the square root of the total number of time steps $KH$ (up to logarithmic and problem-dependent factors). We use Lemma 1 and the OFU principle (9) to bound the third term in terms of the confidence radius $\beta_K(\delta)$ and an elliptical-potential argument on the matrices $\bar{V}_k$. To put everything together, we first use the boundedness of states and controls from Assumption 1, plug the resulting bounds into the confidence radius of Lemma 1 and into the third term above, and obtain the final result. ∎
We end the section with a proof sketch of Lemma 1.
Proof sketch (Lemma 1). Under Assumptions 1 and 2, with some basic algebra, we first decompose the estimation error $\hat{\Theta}_k - \Theta_*$, measured in the $\|\cdot\|_{\bar{V}_k}$ norm, into two terms: one driven by the regularizers $H_k$ and $L_k$, and one driven by the system noise. By Assumption 2, the first term is bounded, with probability at least $1-\delta$, in terms of the constants of that assumption. To bound the second term, by the boundedness of the noise in Assumption 1, we first note that each row of the noise matrix is a sub-Gaussian random vector. We then generalize the self-normalized concentration inequality for vector-valued martingales [23, Theorem 1] to the setting of matrix-valued martingales, which bounds the second term with probability at least $1-\delta$. Combining the bounds on the two terms using a union bound argument yields the final result. ∎
III-B Private Control
In this section, we introduce our private algorithm (Alg. 1). At every episode $k$, we keep track of the history via private versions of the matrices $V_k$ and $U_k$. To do so, we first initialize two private counter mechanisms, one for $V_k$ and one for $U_k$, which take as parameters the privacy levels $\epsilon$ and $\delta$, the number of episodes $K$, the horizon $H$ and a problem-specific bound (see Assumption 1). The first counter takes as input an event stream of matrices $\sum_{h=1}^{H} z_{k,h} z_{k,h}^\top$ (resp., the second counter takes $\sum_{h=1}^{H} z_{k,h} x_{k,h+1}^\top$), and at the start of each episode $k$ releases a private version of the matrix $V_k$ (resp., $U_k$), which itself is a matrix of the same dimension. Let $\tilde{V}_k$ and $\tilde{U}_k$ denote the privatized versions of $V_k$ and $U_k$, respectively. For some constant $\Lambda > 0$ (to be determined later), we define $\bar{V}_k = \tilde{V}_k + \Lambda I$ and $\bar{U}_k = \tilde{U}_k$, i.e., the changing regularizers are $H_k = \tilde{V}_k - V_k + \Lambda I$ and $L_k = \tilde{U}_k - U_k$. We now instantiate the general procedure under changing regularizers (Section III-A) with these private statistics. First, we compute the point estimate from (7) and build the confidence set from (8). Then, we choose the most optimistic policy w.r.t. the entire set from (9) and (10). Finally, we execute the policy for the entire episode and update the counters with the observed trajectory.
We now describe the private counters, which adapt the binary counting mechanism of [24]. First, we write $\Sigma[i, j]$ to denote a partial sum ($p$-sum) involving the state-control pairs of episodes $i$ through $j$. Next, we consider a binary interval tree, where each leaf node represents an episode (i.e., the tree has $k-1$ leaf nodes at the start of episode $k$), and each interior node represents the range of episodes covered by its children. At the start of episode $k$, we first release a noisy $p$-sum corresponding to each newly completed node in the tree. Each noisy $p$-sum is obtained by perturbing both the $(i,j)$-th and $(j,i)$-th entries, $i \le j$, of the corresponding $p$-sum with the same i.i.d. Gaussian noise $\mathcal{N}(0, \sigma_1^2)$ (this ensures symmetry of the released matrices even after adding noise). Then $\tilde{V}_k$ is computed by summing up the noisy $p$-sums released by the set of nodes that uniquely cover the range $[1, k-1]$. Observe that, at the end of each episode, the mechanism only needs to store the noisy $p$-sums required for computing private statistics at future episodes, and can safely discard those that are no longer needed. For the private counter releasing $\tilde{U}_k$, we maintain noisy $p$-sums of the cross terms with i.i.d. noise $\mathcal{N}(0, \sigma_2^2)$ and compute the private statistics using a similar procedure. The noise levels $\sigma_1$ and $\sigma_2$ depend on the problem intrinsics ($K$, $H$, and the bound from Assumption 1) and the privacy parameters ($\epsilon$, $\delta$). These, in turn, govern the constants appearing in the confidence set and the regularizer $\Lambda$. The details will be specified in the next section as needed.
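The following sketch (a simplified illustration of the counters described above, not the paper’s Algorithm 1) maintains noisy $p$-sums over a binary tree for a matrix-valued stream. The class name and interface are ours; the `symmetric` option mirrors the upper-triangular perturbation used for the Gram-matrix counter.

```python
import numpy as np

class BinaryCounter:
    """Tree-based aggregation (binary counting mechanism) for matrix-valued streams.

    After t episodes have been added, total() returns a noisy running sum in which
    each episode's data is covered by at most ceil(log2 t) + 1 noisy p-sums.
    """

    def __init__(self, shape, sigma, symmetric=False, seed=0):
        self.shape, self.sigma, self.symmetric = shape, sigma, symmetric
        self.rng = np.random.default_rng(seed)
        self.t = 0
        self.alpha = {}        # exact p-sums, indexed by tree level
        self.alpha_noisy = {}  # their noisy releases (only those still needed)

    def _noise(self):
        N = self.rng.normal(scale=self.sigma, size=self.shape)
        if self.symmetric:                       # perturb (i,j) and (j,i) identically
            N = np.triu(N) + np.triu(N, 1).T
        return N

    def add(self, x):
        """Insert one episode's statistic (a numpy array of shape `shape`)."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1          # lowest set bit of t
        # merge the lower-level p-sums into level i, then release it with fresh noise
        merged = x + sum((self.alpha.pop(j, 0) for j in range(i)), np.zeros(self.shape))
        for j in range(i):
            self.alpha_noisy.pop(j, None)                # no longer needed
        self.alpha[i] = merged
        self.alpha_noisy[i] = merged + self._noise()

    def total(self):
        """Noisy running sum over episodes 1..t: add the noisy p-sums of the dyadic cover."""
        out = np.zeros(self.shape)
        for a in self.alpha_noisy.values():
            out += a
        return out

if __name__ == "__main__":
    dim = 4
    counter_V = BinaryCounter((dim, dim), sigma=1.0, symmetric=True)
    rng = np.random.default_rng(3)
    exact = np.zeros((dim, dim))
    for _ in range(10):                           # 10 episodes
        Z = rng.standard_normal((5, dim))         # 5 steps per episode
        increment = Z.T @ Z                       # this episode's Gram increment
        exact += increment
        counter_V.add(increment)
    print("noise magnitude:", np.linalg.norm(counter_V.total() - exact))
```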
IV Privacy and Regret Guarantees
In this section, we show that our algorithm is $(\epsilon, \delta)$-JDP and enjoys a sub-linear regret guarantee.
IV-A Privacy Guarantee
Theorem 2 (Privacy).
Under Assumption 1, for any $\epsilon > 0$ and $\delta \in (0, 1]$, our algorithm is $(\epsilon, \delta)$-JDP.
Proof sketch. We first show that both counters are differentially private. We begin with the counter releasing $\tilde{V}_k$. To this end, we need to determine a global upper bound $\Delta$ on the $L_2$-sensitivity of all the $p$-sums. Informally, $\Delta$ encodes the maximum change in the Frobenius norm of a $p$-sum if the trajectory of a single episode is changed. By Assumption 1, the state-control pairs are uniformly bounded, and hence so is $\Delta$. Since the noisy $p$-sums are obtained via the Gaussian mechanism, each released $p$-sum is $\frac{\Delta^2}{2\sigma_1^2}$-CDP [22, Proposition 1.6]. We now see that every episode appears in at most $\lceil \log_2 K \rceil + 1$ $p$-sums. Therefore, by the composition property, the whole counter is $\rho$-CDP with $\rho = (\lceil \log_2 K \rceil + 1)\frac{\Delta^2}{2\sigma_1^2}$, and thus, in turn, $(\epsilon', \delta')$-DP for any $\delta' > 0$ and a corresponding $\epsilon'$ [22, Lemma 3.5]. Now, setting $\sigma_1$ appropriately, we can ensure that this counter is differentially private with half of the total privacy budget $(\epsilon, \delta)$. A similar analysis yields the same guarantee for the counter releasing $\tilde{U}_k$ if we set $\sigma_2$ accordingly, with the sensitivity now measured on the cross-term $p$-sums.
To prove Theorem 2, we now use the billboard lemma [25, Lemma 9] which, informally, states that an algorithm is JDP under continual observation if the output sent to each user is a function of the user’s private data and a common quantity computed under standard differential privacy. Note that at each episode $k$, the algorithm computes the private statistics $\tilde{V}_k$ and $\tilde{U}_k$, shared across all users, using the two counters. These statistics are then used to compute the policy $\pi_k$. By the composition and post-processing properties of DP, the sequence of policies $\pi_1, \ldots, \pi_K$ is computed by an $(\epsilon, \delta)$-DP mechanism. Now, the controls during episode $k$ are generated using the policy $\pi_k$ and the user’s own state responses, as $u_{k,h} = K_h(\tilde{\Theta}_k)\, x_{k,h}$. Then, by the billboard lemma, the composition of the controls sent to all the users is $(\epsilon, \delta)$-JDP. ∎
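The noise calibration outlined in this proof sketch can be summarized by the short sketch below (our own illustration; the paper’s exact constants may differ). It picks the per-entry noise scale of one counter so that its zCDP accounting meets a target $(\epsilon, \delta)$.

```python
import numpy as np

def counter_noise_scale(sensitivity, K, epsilon, delta):
    """Noise scale for one tree-based counter so that its releases are
    (epsilon, delta)-DP via zCDP accounting (a sketch; constants may differ
    from the paper's).

    Each episode touches at most m = ceil(log2 K) + 1 p-sums; each noisy p-sum
    is (sensitivity^2 / (2 sigma^2))-zCDP, so the counter is rho-zCDP with
    rho = m * sensitivity^2 / (2 sigma^2).  We pick the largest rho whose
    standard zCDP-to-DP conversion stays below epsilon, then solve for sigma.
    """
    m = int(np.ceil(np.log2(max(K, 2)))) + 1
    # solve rho + 2*sqrt(rho*log(1/delta)) = epsilon for rho (quadratic in sqrt(rho))
    c = np.log(1.0 / delta)
    rho = (np.sqrt(c + epsilon) - np.sqrt(c)) ** 2
    return sensitivity * np.sqrt(m / (2.0 * rho))

if __name__ == "__main__":
    print("per-entry noise scale:",
          counter_noise_scale(sensitivity=1.0, K=1000, epsilon=1.0, delta=1e-6))
```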
IV-B Regret Guarantee
Theorem 3 (Private regret).
Under Assumption 1, for any privacy parameters $\epsilon > 0$ and $\delta \in (0, 1]$, and for any $\delta' \in (0, 1]$, with probability at least $1 - \delta'$, our algorithm enjoys a regret bound consisting of a term matching the non-private regret of Theorem 1 plus an additive term accounting for privacy.
Theorems 2 and 3 together imply that our algorithm achieves a sub-linear regret under an $(\epsilon, \delta)$-JDP guarantee. Furthermore, comparing Theorem 3 with Theorem 1, we see that the first term in the regret bound corresponds to the non-private regret, and the second term is the cost of privacy. More importantly, the cost due to privacy scales gracefully with the privacy levels: a higher level of privacy (smaller $\epsilon$) incurs a larger additional regret, and vice versa.
Proof sketch (Theorem 3). First note that each private statistic is computed by summing at most $\lceil \log_2 K \rceil + 1$ noisy $p$-sums. Hence the total noise in each $\tilde{V}_k$ is a symmetric matrix whose $(i,j)$-th entries, $i \le j$, are i.i.d. Gaussian with variance $(\lceil \log_2 K \rceil + 1)\sigma_1^2$. Therefore, by an adaptation of [26, Corollary 4.4.8], its spectral norm is bounded with high probability in terms of $\sigma_1$, $\log K$ and the dimension $n + d$. Similarly, the total noise in each $\tilde{U}_k$ is a matrix with i.i.d. Gaussian entries, so its squared Frobenius norm is a $\chi^2$-statistic with $n(n+d)$ degrees of freedom; therefore, by [27, Lemma 1], it is also bounded with high probability. By construction, the regularizer $H_k$ equals the total noise in $\tilde{V}_k$ plus $\Lambda I$. Setting $\Lambda$ larger than the high-probability bound on the spectral norm of this noise, we ensure that $H_k$ is p.d., and hence $\bar{V}_k \succeq V_k$. Then, by a union bound argument, Assumption 2 holds with constants determined by these noise bounds and $\Lambda$. Substituting the noise levels from Section IV-A, the regret bound now follows from Theorem 1. ∎
V Conclusion
We have developed the first differentially private algorithm for episodic LQ control. Through the notion of JDP, we show that it can protect private user information from being inferred from the published control policies, without losing much in terms of regret performance. We leave private control of non-linear systems [16] as future work.
References
- [1] L. Li, W. Chu, J. Langford, and R. E. Schapire, “A contextual-bandit approach to personalized news article recommendation,” in Proceedings of the 19th International Conference on World Wide Web, 2010, pp. 661–670.
- [2] Y. Zhao, M. R. Kosorok, and D. Zeng, “Reinforcement learning design for cancer clinical trials,” Statistics in medicine, vol. 28, no. 26, pp. 3294–3315, 2009.
- [3] A. R. Sharma and P. Kaushik, “Literature survey of statistical, deep and reinforcement learning in natural language processing,” in 2017 International Conference on Computing, Communication and Automation (ICCCA). IEEE, 2017, pp. 350–354.
- [4] G. Gordon, S. Spaulding, J. K. Westlund, J. J. Lee, L. Plummer, M. Martinez, M. Das, and C. Breazeal, “Affective personalization of a social robot tutor for children’s second language skills,” in Proceedings of the AAAI conference on artificial intelligence, vol. 30, no. 1, 2016.
- [5] C. Dwork, “Differential privacy: A survey of results,” in International conference on theory and applications of models of computation. Springer, 2008, pp. 1–19.
- [6] A. Tossou and C. Dimitrakakis, “Algorithms for differentially private multi-armed bandits,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
- [7] ——, “Achieving privacy in the adversarial multi-armed bandit,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
- [8] D. Basu, C. Dimitrakakis, and A. Tossou, “Differential privacy for multi-armed bandits: What is it and what is its cost?” arXiv preprint arXiv:1905.12298, 2019.
- [9] N. Mishra and A. Thakurta, “(nearly) optimal differentially private stochastic multi-arm bandits,” in Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 2015, pp. 592–601.
- [10] X. Zhou and J. Tan, “Local differential privacy for Bayesian optimization,” arXiv preprint arXiv:2010.06709, 2020.
- [11] B. Balle, M. Gomrokchi, and D. Precup, “Differentially private policy evaluation,” in International Conference on Machine Learning. PMLR, 2016, pp. 2130–2138.
- [12] G. Vietri, B. Balle, A. Krishnamurthy, and S. Wu, “Private reinforcement learning with PAC and regret guarantees,” in International Conference on Machine Learning. PMLR, 2020, pp. 9754–9764.
- [13] E. Garcelon, V. Perchet, C. Pike-Burke, and M. Pirotta, “Local differentially private regret minimization in reinforcement learning,” arXiv preprint arXiv:2010.07778, 2020.
- [14] Y. Abbasi-Yadkori and C. Szepesvári, “Regret bounds for the adaptive control of linear quadratic systems,” in Proceedings of the 24th Annual Conference on Learning Theory, 2011, pp. 1–26.
- [15] I. Osband and B. Van Roy, “Model-based reinforcement learning and the eluder dimension,” in Proceedings of the 27th International Conference on Neural Information Processing Systems – Volume 1, 2014, pp. 1466–1474.
- [16] S. R. Chowdhury and A. Gopalan, “Online learning in kernelized Markov decision processes,” in The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 3197–3205.
- [17] T. Wang and L. F. Yang, “Episodic linear quadratic regulators with low-rank transitions,” arXiv preprint arXiv:2011.01568, 2020.
- [18] C. Jin, Z. Yang, Z. Wang, and M. I. Jordan, “Provably efficient reinforcement learning with linear function approximation,” in Conference on Learning Theory, 2020, pp. 2137–2143.
- [19] D. Bertsekas, Dynamic Programming and Optimal Control. Belmont, MA: Athena Scientific, 3rd ed., 2004.
- [20] C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
- [21] M. Kearns, M. Pai, A. Roth, and J. Ullman, “Mechanism design in large games: Incentives and privacy,” in Proceedings of the 5th conference on Innovations in theoretical computer science, 2014, pp. 403–410.
- [22] M. Bun and T. Steinke, “Concentrated differential privacy: Simplifications, extensions, and lower bounds,” in Theory of Cryptography Conference. Springer, 2016, pp. 635–658.
- [23] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári, “Improved algorithms for linear stochastic bandits,” in Advances in Neural Information Processing Systems, 2011, pp. 2312–2320.
- [24] T.-H. H. Chan, E. Shi, and D. Song, “Private and continual release of statistics,” ACM Transactions on Information and System Security (TISSEC), vol. 14, no. 3, pp. 1–24, 2011.
- [25] J. Hsu, Z. Huang, A. Roth, T. Roughgarden, and Z. S. Wu, “Private matchings and allocations,” SIAM Journal on Computing, vol. 45, no. 6, pp. 1953–1984, 2016.
- [26] R. Vershynin, High-dimensional probability: An introduction with applications in data science. Cambridge university press, 2018, vol. 47.
- [27] B. Laurent and P. Massart, “Adaptive estimation of a quadratic functional by model selection,” Annals of Statistics, vol. 28, no. 5, pp. 1302–1338, 2000.