Attack Impact Evaluation by Exact Convexification
through State Space Augmentation
Abstract
We address the attack impact evaluation problem for control system security. We formulate the problem as a Markov decision process with a temporally joint chance constraint that forces the adversary to avoid being detected throughout the considered time period. Owing to the joint constraint, the optimal control policy depends not only on the current state but also on the entire history, which leads to an explosion of the search space and makes the problem generally intractable. It is shown that knowing whether an alarm has been triggered, in addition to the current state, is sufficient for specifying the optimal decision at each time step. Augmenting the state space with this information induces an equivalent convex optimization problem, which is tractable using standard solvers.
I INTRODUCTION
Due to increased connectivity, the security of control systems has become an urgent matter. Indeed, a large amount of malicious software targeting industrial control systems has been reported [1], and some of it has succeeded in causing serious consequences for critical infrastructure [2, 3, 4, 5]. Security risk assessment is a necessary process for effective security countermeasures. Risk assessment is conducted by specifying possible scenarios, quantifying their likelihoods, and evaluating their impacts [6, 7].
Attack impact evaluation for control systems is a challenging task because the impact depends on the malicious input sequence even when the supposed intrusion route is fixed. Typically, the impact is quantified as the solution of an optimal control problem with a constraint that forces the adversary to avoid being detected throughout the considered time period [8]. Based on this framework, various formulations have been proposed [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. A common problem is that the class of systems that can be handled by the existing works is limited owing to the difficulty of solving the constrained optimal control problem. In particular, the class of attack detectors is limited. Typical works consider the χ² detector or provide a bound for all possible detectors using the Kullback-Leibler divergence (KLD) between the observed output and the nominal signal. However, other stateful detectors, such as cumulative sum (CUSUM) detectors and exponentially weighted moving average (EWMA) detectors, are known to be effective in practice from the perspectives of both detection performance and computational efficiency [20, 21].
The objective of this study is to provide an attack impact evaluation framework that can handle a general class of systems and detectors for practical risk assessment. In our formulation, the control system with an attack detector is modeled as a Markov decision process (MDP) in which an alarm region is embedded in the state space. The stealth condition is given as a temporally joint chance constraint, which restricts the probability of intrusion into the alarm region throughout the whole time period. However, because the chance constraint is joint over time, the optimal policy depends not only on the current state but also on the entire history. As a consequence, the dimension of the search space increases exponentially with the time horizon length, and the stochastic optimal control problem becomes generally intractable.
This paper proposes an equivalent problem formulation that reduces the size of the search space and makes the problem tractable. It is shown that there exists a sufficient statistic for optimal decision making at each time step, and the size of the search space can be reduced by augmenting the state space with this information. Specifically, an extra binary state representing whether an alarm has been triggered, together with the current state, is sufficient for the optimal decision. We refer to adding this information to the state space as state space augmentation. The optimal value can be achieved by Markov policies in the augmented MDP. This finding induces an equivalent optimal control problem whose search space is relatively small. The induced problem is a standard constrained optimal control problem, which can be reduced to an equivalent linear program and hence can be solved by standard solvers. Moreover, it is observed that the problem formulation leads to an optimal policy in which the adversary no longer cares about future alarms once an alarm is triggered. In this case, the resulting state trajectory tends to stay in the alarm region, which is unreasonable in practice. To model more sophisticated attack strategies, we propose an extended problem formulation that takes into account the number of alarms throughout the entire time period. It is shown that state space augmentation can also be applied to the extended problem.
Related Work
The attack impact evaluation problem has been widely considered in control system security [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. The formulations themselves are basically similar, and technically the problem is reduced to a constrained optimal control problem. However, different approaches must be taken depending on the class of the system and the detector, the objective function that quantifies the impact caused by an attack signal, and the class of possible attack strategies. An early work [9] considers the χ² detector, which is the simplest stateless detector. The work [10] provides a rather general formulation in which an essential bound is provided using the KLD. The CUSUM detector is considered in [11]. In [15], the author considers a detector using the norm of the output signal, assuming that the system is deterministic. The other works consider one of these detectors. To the best of our knowledge, this study is the first framework that can handle general detectors for attack impact evaluation.
Our proposed method is based on state space augmentation, which adds the information of the alarm history for decision making at each step. A similar idea has been proposed for risk-averse MDPs [22, 23, 24]. In [22], the authors consider a non-standard MDP where the objective function is given not by an expectation but by the conditional value-at-risk (CVaR), which is also referred to as the average value-at-risk. They show that value iteration can be applied by considering the augmented state space even for CVaR. The idea is generalized in [23], where not only CVaR minimization but also chance-constrained MDPs are considered. The work [24] proposes risk-aware reinforcement learning based on this idea. Moreover, linear temporal logic specification techniques provide a rather general framework [25]. While these works are relatively abstract, this study provides a concrete application in the context of security, where the augmented state has a clear interpretation, namely, the number of alarms up to that time step.
Organization and Notation
The paper is organized as follows. Sec. II provides the system model, clarifies the threat model, and formulates the impact evaluation problem. In Sec. III, the difficulty of the formulated problem is explained, and a solution based on state space augmentation is proposed. Sec. IV considers an extension of the problem where the number of alarms throughout the entire time period is taken into account. It is shown that the approach proposed in Sec. III is still valid in the extended formulation. In Sec. V, the theoretical results are verified through numerical simulation. Finally, Sec. VI concludes and summarizes the paper.
Let $\mathbb{N}$ and $\mathbb{R}$ be the sets of natural numbers and real numbers, respectively. The $n$-ary Cartesian power of the set $\mathcal{X}$ is denoted by $\mathcal{X}^n$. The tuple $(x_0,\ldots,x_k)$ is denoted by $x_{0:k}$.
II ATTACK IMPACT EVALUATION PROBLEM
II-A System Model
This study considers a control system with an attack detector. Its stochastic model is described by the finite-horizon MDP [26] with an alarm region given by
$\mathcal{M} := (\mathcal{X}, \mathcal{A}, P, \mathcal{T}, \{r_t\}_{t\in\mathcal{T}}, \mathcal{X}_{\rm a}),$   (1)
where $\mathcal{X}$ is the state space, $\mathcal{A}$ is the action space, $P$ is the state transition function from $\mathcal{X}\times\mathcal{A}$ to $\mathcal{X}$, $\mathcal{T} := \{0, 1, \ldots, T\}$ is the time index set, $r_t: \mathcal{X}\times\mathcal{A}\to\mathbb{R}$ for $t\in\mathcal{T}\setminus\{T\}$ and $r_T: \mathcal{X}\to\mathbb{R}$ are the reward functions, and $\mathcal{X}_{\rm a}\subset\mathcal{X}$ is the alarm region. An alarm implemented in the control system is triggered when the state reaches the alarm region. For simplicity, we assume the state space and the action space to be finite.
[Fig. 1: Block diagram of a control system equipped with a dynamic attack detector.]
Example: We give a specific example that can be described in the form (1). Consider the block diagram in Fig. 1, which includes a nonlinear stochastic system equipped with a dynamic attack detector, which is also referred to as a stateful attack detector [27]. Let the dynamics of the control system and the detector be given by
(2) |
respectively, where and are states, is an actuation signal caused by the attack, is noise, is a binary signal that describes whether an alarm is triggered or not at the time step. Suppose that the noise is an independent and identically distributed random variable over a probability space . Let and be the state space and the action space in the MDP form (1), respectively. Determine the state transition function by
where and are Borel sets in and and
Further, let the set of states at which an alarm is triggered be the alarm region. (Note that the state transition function is guaranteed to be a stochastic kernel if the maps are Borel measurable [28, Proposition 7.26].) Then, we can obtain an MDP in the form (1) by discretizing the spaces into finite sets using standard methods (for example, see [29]). More specific detector examples include the χ² attack detector and the CUSUM attack detector [21]. The χ² attack detector with the observed output, the nominal output, and the threshold is represented by
The CUSUM attack detector with the bias and the threshold is represented by
with the observed signal and the nominal output. (More precisely, the corresponding MDP should be time-varying owing to the nominal output. However, by adding the time information to the state, our discussion can readily be extended as long as finite-horizon problems are considered.)
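For illustration, the following is a minimal sketch of the two detector recursions in a generic form; the signal and parameter names (`y`, `y_nom`, the bias, and the threshold) are placeholders rather than the paper's notation. The detector statistic (for CUSUM) and the binary alarm signal are exactly the quantities that are embedded into the MDP state and the alarm region after discretization.

```python
def chi2_step(y, y_nom, threshold):
    """One step of a chi-squared-type (stateless) detector:
    an alarm is raised when the squared residual exceeds the threshold."""
    residual = y - y_nom
    alarm = residual ** 2 > threshold
    return float(alarm)


def cusum_step(z, y, y_nom, bias, threshold):
    """One step of a CUSUM (stateful) detector: the statistic accumulates
    the residual magnitude minus a bias, is clipped at zero, and an alarm
    is raised when it exceeds the threshold."""
    z_next = max(0.0, z + abs(y - y_nom) - bias)
    alarm = z_next > threshold
    return z_next, float(alarm)
```

In the MDP form (1), the pair of the plant state and the detector statistic constitutes the state, and the alarm region collects the states for which the alarm signal equals one.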
II-B Threat Model
In this study, we consider the following threat model:
• The adversary has succeeded in intruding into the system and can execute any action in a probabilistic manner at every time step.
• The adversary has perfect model knowledge of $\mathcal{M}$.
• The adversary possesses infinite memory and computational resources.
• The adversary can observe the state at every time step.
• The attack begins at $t = 0$ and ends at $t = T$.
The threat model implies that the adversary can implement an arbitrary history-dependent randomized policy $\pi = (\pi_0, \ldots, \pi_{T-1}) \in \Pi$, where $\pi_t: \mathcal{H}_t \to \mathcal{P}(\mathcal{A})$ is the policy at the time step $t$, $\Pi$ is the set of all history-dependent randomized policies, $\mathcal{H}_t := (\mathcal{X}\times\mathcal{A})^t\times\mathcal{X}$ is the set of histories at the time step $t$, and $\mathcal{P}(\mathcal{A})$ is the set of probability measures with respect to the Borel algebra on $\mathcal{A}$. Let the canonical measurable space of the MDP $\mathcal{M}$ be denoted by $(\Omega, \mathcal{F})$, where $\Omega := (\mathcal{X}\times\mathcal{A})^T\times\mathcal{X}$ and $\mathcal{F}$ is its product $\sigma$-algebra. Define the random variables $X_t$ and $A_t$ as the projection of each outcome into the state and action at the time step $t$, respectively. Throughout this paper, the initial state $X_0 = x_0$ is fixed for simplicity. We denote the probability measure induced by $\mathcal{M}$ given the policy $\pi$ by $\mathbb{P}^{\pi}$ and the expectation operator with respect to $\mathbb{P}^{\pi}$ by $\mathbb{E}^{\pi}$.
The objective of the adversary is to maximize a cumulative attack impact while avoiding being detected. Let the impact be quantified as the expectation of the sum of the reward functions with respect to $\mathbb{P}^{\pi}$. Set the stealth condition to
$\mathbb{P}^{\pi}(D) \le \Delta,$   (3)
where
$D := \bigcup_{t\in\mathcal{T}} \{X_t \in \mathcal{X}_{\rm a}\}$
with a constant $\Delta \in [0, 1]$. The criterion (3) requires the probability of an alarm being triggered to be less than or equal to $\Delta$ throughout the attack sequence.
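To make the temporal coupling of (3) concrete, the following sketch estimates the joint alarm probability of a given policy by Monte Carlo simulation; the function and argument names (`step`, `policy`, `alarm_region`) are illustrative assumptions, not part of the formulation. The point is that the constraint involves a single union event over the whole trajectory rather than a per-step bound.

```python
import numpy as np

def estimate_joint_alarm_prob(step, policy, x0, horizon, alarm_region,
                              n_runs=10000, rng=None):
    """Monte Carlo estimate of P( exists t in {0,...,T} : X_t in alarm_region )
    under a (possibly history-dependent) policy.
    step(x, a, rng) samples the next state; policy(t, history) returns an action."""
    if rng is None:
        rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_runs):
        x = x0
        history = [x]
        alarmed = x in alarm_region
        for t in range(horizon):
            a = policy(t, history)
            x = step(x, a, rng)
            history += [a, x]
            alarmed = alarmed or (x in alarm_region)
        hits += alarmed
    return hits / n_runs
```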
II-C Problem Formulation
Based on the setting above, the attack impact evaluation problem is formulated as a stochastic optimal control problem with a temporally joint chance constraint.
Problem 1
The attack impact evaluation problem is given by
$\displaystyle \max_{\pi\in\Pi}\ \mathbb{E}^{\pi}\Big[\sum_{t=0}^{T-1} r_t(X_t, A_t) + r_T(X_T)\Big] \quad {\rm s.t.}\ \ (3).$   (4)
Technically, the goal of this paper is to solve Problem 1. In the subsequent section, we explain its difficulty and propose a solution method.
III EXACT CONVEXIFICATION BY STATE SPACE AUGMENTATION
III-A Idea: State Space Augmentation
It is well known that for standard MDPs without constraints, the optimal value can be achieved by Markov policies, namely, policies that depend only on the current state. This property is justified by the following fact: for any history-dependent policy, there exists a Markov policy whose state-action marginal distributions at each time step are equal to those of the history-dependent policy [30, Theorem 18.1]. In Problem 1, however, there exists a temporally joint chance constraint, which cannot be decomposed with respect to time. Hence Markov policies cannot achieve the optimal value of (4) in general; an example is provided in Appendix A. Thus the optimal policy must be sought among history-dependent policies, and the dimension of the search space $\Pi$, which is determined by the number of possible histories, increases exponentially with the time horizon length. This explosion of the search space makes the problem intractable.
The key idea to overcome the difficulty is to add the alarm information by augmenting the state space. We define the augmented state space and the induced augmented MDP.
Definition 1 (Augmented State Space and MDP)
The augmented state space is defined by $\hat{\mathcal{X}} := \mathcal{X}\times\{0,1\}$, and the augmented state $\hat{X}_t = (X_t, \Theta_t)$ represents the current state together with the alarm information. The binary flag $\Theta_t = 1$ indicates that an alarm has been triggered up to the time step $t$, whereas $\Theta_t = 0$ indicates otherwise. The transition function $\hat{P}$ of the augmented MDP $\hat{\mathcal{M}}$ is induced from $P$ by letting the flag switch to $1$ whenever the state enters the alarm region $\mathcal{X}_{\rm a}$ and remain at $1$ afterwards. For the augmented MDP $\hat{\mathcal{M}}$, we denote the set of the history-dependent randomized policies by $\hat{\Pi}$, the canonical measurable space by $(\hat{\Omega}, \hat{\mathcal{F}})$, the random variables of the state by $\hat{X}_t$, and the probability measure and the expectation induced by the initial condition $\hat{x}_0 := (x_0, 0)$ and the policy $\hat{\pi}$ by $\hat{\mathbb{P}}^{\hat{\pi}}$ and $\hat{\mathbb{E}}^{\hat{\pi}}$, respectively.
By using the augmented information, we can rewrite the temporally joint chance constraint in (4) as an isolated chance constraint on the state at the final step. It is intuitively true that $D$, the event that an alarm is triggered at some time step, is equivalent to $\{\Theta_T = 1\}$, the event that the flag of the augmented state takes the value $1$ at the final time step. This reformulation induces an equivalent problem
$\displaystyle \max_{\hat{\pi}\in\hat{\Pi}}\ \hat{\mathbb{E}}^{\hat{\pi}}\Big[\sum_{t=0}^{T-1} r_t(X_t, A_t) + r_T(X_T)\Big] \quad {\rm s.t.}\ \ \hat{\mathbb{P}}^{\hat{\pi}}(\Theta_T = 1) \le \Delta.$   (7)
The most important feature of this formulation is that the chance constraint depends on the marginal distribution only at the final time step. Hence, the optimal value of (7) can be achieved by Markov policies for the augmented state space $\hat{\mathcal{X}}$. Thus, the problem (7) can be reduced to
$\displaystyle \max_{\hat{\pi}\in\hat{\Pi}^{\rm M}}\ \hat{\mathbb{E}}^{\hat{\pi}}\Big[\sum_{t=0}^{T-1} r_t(X_t, A_t) + r_T(X_T)\Big] \quad {\rm s.t.}\ \ \hat{\mathbb{P}}^{\hat{\pi}}(\Theta_T = 1) \le \Delta,$   (8)
where the search space $\hat{\Pi}$ of (7) is replaced with $\hat{\Pi}^{\rm M}$, the set of Markov policies for the augmented MDP $\hat{\mathcal{M}}$.
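The augmented MDP can be constructed mechanically from the original one. The following sketch builds the augmented transition tensor for a finite MDP; the array layout and the convention that the flag is raised as soon as the successor state lies in the alarm region are our own assumptions for illustration.

```python
import numpy as np

def augment_mdp(P, alarm):
    """Augmented transition tensor for the binary-flag state space.
    P:     (nx, na, nx) array with P[x, a, x'] the transition probability.
    alarm: boolean array of length nx marking the alarm region.
    The augmented state (x, flag) is indexed as x + flag * nx; the flag
    switches to 1 when the successor state is in the alarm region and
    stays at 1 afterwards."""
    nx, na, _ = P.shape
    P_aug = np.zeros((2 * nx, na, 2 * nx))
    for flag in (0, 1):
        for x in range(nx):
            for a in range(na):
                for x_next in range(nx):
                    flag_next = 1 if (flag == 1 or alarm[x_next]) else 0
                    P_aug[x + flag * nx, a, x_next + flag_next * nx] += P[x, a, x_next]
    return P_aug
```

The chance constraint in (8) then becomes a linear function of the distribution of the augmented state at the final time step, namely the total mass on the indices whose flag equals one.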
We formally justify the reformulation. We first show the following lemma.
Lemma 1
For any $\hat{\pi}\in\hat{\Pi}$, there exists $\pi\in\Pi$ such that
$\mathbb{P}^{\pi}\big((X_t, A_t) = (x, a)\big) = \hat{\mathbb{P}}^{\hat{\pi}}\big((X_t, A_t) = (x, a)\big), \quad \forall (x, a)\in\mathcal{X}\times\mathcal{A},\ \forall t,$   (9)
where the left-hand side is the joint distribution of $(X_t, A_t)$ under $\pi$ with respect to $\mathbb{P}^{\pi}$ and the right-hand side is that under $\hat{\pi}$ with respect to $\hat{\mathbb{P}}^{\hat{\pi}}$.
Proof.
Let be a history associated with the augmented MDP . Denote the first time instance at which by , i.e., and when for any . We say to be consistent when satisfies
Note that the probability of any history that is not consistent is zero from the induced state transition function for any policy. A history in the original MDP uniquely induces a consistent history in , which we denote by . For a fixed policy for , give a policy for by
(10) |
We confirm that the policy (10) satisfies the condition (9) by induction. Because the initial state is fixed to be , the condition is satisfied for . Assume that (9) holds for some . Denoting the probability mass function with a policy by , we have
for any and . Thus, it suffices to show that
(11) |
For a policy , we have
Since uniquely induces such that the corresponding history becomes consistent and satisfies , we have
and (11) holds. ∎
Lemma 1 implies that the stochastic behaviors of the original MDP and the augmented MDP are identical with appropriate policies.
The following theorem is the main result of this paper.
Theorem 1
The optimal values of the problems (4), (7), and (8) coincide.
Proof.
We first show that the optimal values of (4) and (7) coincide. Since the policy set of the augmented MDP includes that of the original MDP, the optimal value of (7) is clearly no less than that of (4). Conversely, fix a feasible policy $\hat{\pi}$ for (7) and take the corresponding policy $\pi$ for the original MDP according to (10). From Lemma 1, the marginal distributions of the state and the action under the two policies coincide, and hence so do the objective values. Moreover, from (10), we have $\mathbb{P}^{\pi}(D) = \hat{\mathbb{P}}^{\hat{\pi}}(\Theta_T = 1) \le \Delta$, and thus $\pi$ is feasible in (4). Therefore the optimal value of (4) is no less than that of (7), which leads to the equality. Finally, the equality between the optimal values of (7) and (8) is a direct conclusion of [30, Theorem 18.1]. ∎
III-B Solution to Problem 1
It is known that a standard class of constrained MDPs, to which the problem (8) belongs, can be solved using linear programming [31]. The procedure is briefly explained in the following. Denote $\hat{\mathbb{P}}^{\hat{\pi}}(\hat{X}_t = \hat{x}, A_t = a)$ by $\rho_t(\hat{x}, a)$ for $t < T$ and $\hat{\mathbb{P}}^{\hat{\pi}}(\hat{X}_T = \hat{x})$ by $\rho_T(\hat{x})$, where the policy $\hat{\pi}$ is omitted from the notation. The objective function can be rewritten as
$\displaystyle \sum_{t=0}^{T-1}\sum_{\hat{x}\in\hat{\mathcal{X}}}\sum_{a\in\mathcal{A}} r_t(\hat{x}, a)\,\rho_t(\hat{x}, a) + \sum_{\hat{x}\in\hat{\mathcal{X}}} r_T(\hat{x})\,\rho_T(\hat{x}),$
where the rewards are extended to the augmented state space through the first component of the state.
The left-hand side of the constraint can be rewritten as
$\hat{\mathbb{P}}^{\hat{\pi}}(\Theta_T = 1) = \sum_{\hat{x}\in\hat{\mathcal{X}}} \mathbf{1}_{\rm a}(\hat{x})\,\rho_T(\hat{x}),$
where $\mathbf{1}_{\rm a}(\hat{x})$ for $\hat{x} = (x, \theta)$ takes $1$ if $\theta = 1$ and $0$ otherwise. Moreover, the functions $\{\rho_t\}_{t\in\mathcal{T}}$ satisfy
$\left\{\begin{array}{ll} \sum_{a\in\mathcal{A}} \rho_0(\hat{x}, a) = \mathbf{1}_{\hat{x}_0}(\hat{x}), & \forall \hat{x}\in\hat{\mathcal{X}},\\ \sum_{a\in\mathcal{A}} \rho_{t+1}(\hat{x}', a) = \sum_{\hat{x}\in\hat{\mathcal{X}}}\sum_{a\in\mathcal{A}} \hat{P}(\hat{x}'\,|\,\hat{x}, a)\,\rho_t(\hat{x}, a), & \forall \hat{x}'\in\hat{\mathcal{X}},\ t = 0,\ldots,T-2,\\ \rho_T(\hat{x}') = \sum_{\hat{x}\in\hat{\mathcal{X}}}\sum_{a\in\mathcal{A}} \hat{P}(\hat{x}'\,|\,\hat{x}, a)\,\rho_{T-1}(\hat{x}, a), & \forall \hat{x}'\in\hat{\mathcal{X}},\\ \rho_t(\hat{x}, a) \ge 0, & \forall \hat{x},\ a,\ t, \end{array}\right.$   (12)
where $\mathbf{1}_{\hat{x}_0}(\hat{x})$ takes $1$ if $\hat{x} = \hat{x}_0$ and $0$ otherwise. Conversely, for any functions $\{\rho_t\}_{t\in\mathcal{T}}$ that satisfy (12), there exists a policy $\hat{\pi}\in\hat{\Pi}^{\rm M}$ such that $\hat{\mathbb{P}}^{\hat{\pi}}(\hat{X}_t = \hat{x}, A_t = a) = \rho_t(\hat{x}, a)$. Such a policy can specifically be constructed by
$\hat{\pi}_t(a\,|\,\hat{x}) = \dfrac{\rho_t(\hat{x}, a)}{\sum_{a'\in\mathcal{A}} \rho_t(\hat{x}, a')}$   (13)
when the denominator is nonzero; otherwise $\hat{\pi}_t(\cdot\,|\,\hat{x})$ can be arbitrary. Therefore $\{\rho_t\}_{t\in\mathcal{T}}$ can be used as the decision variables of the problem (8) instead of the policy. It should be emphasized that the objective function and all the constraint functions are linear with respect to $\{\rho_t\}_{t\in\mathcal{T}}$.
The linear programming formulation is given as follows:
$\begin{array}{ll}\displaystyle\max_{\{\rho_t\}_{t\in\mathcal{T}}} & \displaystyle\sum_{t=0}^{T-1}\sum_{\hat{x}\in\hat{\mathcal{X}}}\sum_{a\in\mathcal{A}} r_t(\hat{x}, a)\,\rho_t(\hat{x}, a) + \sum_{\hat{x}\in\hat{\mathcal{X}}} r_T(\hat{x})\,\rho_T(\hat{x})\\ {\rm s.t.} & (12),\quad \displaystyle\sum_{\hat{x}\in\hat{\mathcal{X}}} \mathbf{1}_{\rm a}(\hat{x})\,\rho_T(\hat{x}) \le \Delta,\end{array}$   (14)
which can readily be solved by standard solvers.
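As an illustration of the occupation-measure reformulation, the following is a minimal sketch using cvxpy; the function name, the array layout, and the treatment of the terminal reward are our own assumptions. It maximizes the linear objective over the variables $\rho_t$ subject to the flow constraints (12) and the stealth constraint, and recovers a Markov policy via (13).

```python
import cvxpy as cp
import numpy as np

def solve_constrained_mdp(P, r, r_T, x0, alarm_flag, delta, T):
    """Occupation-measure LP in the spirit of (14).
    P:          (nx, na, nx) transition tensor of the augmented MDP.
    r:          (T, nx, na) stage rewards extended to the augmented states.
    r_T:        (nx,) terminal reward.
    x0:         index of the fixed initial augmented state.
    alarm_flag: boolean (nx,) array, True for augmented states whose flag is 1.
    delta:      stealth level.
    rho[t][x, a] plays the role of the state-action distribution at step t."""
    nx, na, _ = P.shape
    rho = [cp.Variable((nx, na), nonneg=True) for _ in range(T)]
    constraints = []

    # Initial condition: unit mass on the initial augmented state.
    init = np.zeros(nx)
    init[x0] = 1.0
    constraints.append(cp.sum(rho[0], axis=1) == init)

    # Flow conservation: the state marginal at t+1 is the push-forward of rho[t].
    def pushforward(rho_t):
        return sum(P[:, a, :].T @ rho_t[:, a] for a in range(na))

    for t in range(T - 1):
        constraints.append(cp.sum(rho[t + 1], axis=1) == pushforward(rho[t]))
    rho_final = pushforward(rho[T - 1])  # distribution of the final augmented state

    # Stealth constraint: probability mass on "alarm has occurred" states.
    constraints.append(alarm_flag.astype(float) @ rho_final <= delta)

    # Linear objective: expected cumulative reward.
    objective = sum(cp.sum(cp.multiply(r[t], rho[t])) for t in range(T)) + r_T @ rho_final

    problem = cp.Problem(cp.Maximize(objective), constraints)
    problem.solve()

    # Recover a Markov policy from the occupation measures.
    policy = []
    for t in range(T):
        rt = np.clip(rho[t].value, 0.0, None)
        denom = rt.sum(axis=1, keepdims=True)
        policy.append(np.where(denom > 0, rt / np.maximum(denom, 1e-12), 1.0 / na))
    return problem.value, policy
```

Combined with the augmentation sketch of Sec. III-A, `alarm_flag` simply marks the second half of the augmented state indices, i.e., those whose flag equals one.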
IV EXTENSION: MULTI-ALARM AVOIDANCE STRATEGY
The optimal policy of Problem 1 implies that the adversary cares about being detected as long as no alarm has been triggered, but no longer cares once an alarm is triggered. In practice, however, a single alarm may not result in counteractions by the defender owing to the existence of false alarms. Hence, it is more natural for the adversary to avoid repeated alarms. In this sense, Problem 1 leads to an unreasonable strategy, which is illustrated by Fig. 2 in the simulation section.
The cause of such unnatural strategies is that, in the problem formulation, the chance constraint is imposed on the binary value indicating whether an alarm has been triggered or not. In this section, to obtain a more reasonable strategy and evaluate the attack impact more precisely, we extend the formulation of Problem 1 by taking multiple alarms into account.
First, define the event that alarms are triggered at least $k$ times by
$D_k := \{N_T \ge k\},$
where
$N_T := \sum_{t\in\mathcal{T}} \mathbf{1}\{X_t \in \mathcal{X}_{\rm a}\}$
is the total number of alarms over the time period.
Using the notation, the extended version of the attack impact evaluation problem for multi-alarm avoidance strategies is formulated as follows:
Problem 2
The attack impact evaluation problem for multi-alarm avoidance strategies is given by
$\displaystyle \max_{\pi\in\Pi}\ \mathbb{E}^{\pi}\Big[\sum_{t=0}^{T-1} r_t(X_t, A_t) + r_T(X_T)\Big] \quad {\rm s.t.}\ \ \mathbb{P}^{\pi}(D_k) \le \Delta_k\ \ \text{for each } k,$   (15)
where $\Delta_k \in [0, 1]$ for each $k$ are constants for the stealth constraints.
The same idea of state space augmentation can be applied to Problem 2 as well by adding information of the number of alarms instead of the binary information. The augmented state space and the induced augmented MDP for Problem 2 are defined as follows.
Definition 2
The augmented state space is defined by $\tilde{\mathcal{X}} := \mathcal{X}\times\{0, 1, \ldots, T+1\}$, and the augmented state $\tilde{X}_t = (X_t, C_t)$ carries the counter $C_t$, the number of alarms triggered up to the time step $t$. The transition function $\tilde{P}$ of the augmented MDP $\tilde{\mathcal{M}}$ is induced from $P$ by incrementing the counter whenever the state enters the alarm region $\mathcal{X}_{\rm a}$. The set of Markov policies $\tilde{\Pi}^{\rm M}$, the probability measure $\tilde{\mathbb{P}}^{\tilde{\pi}}$, and the expectation $\tilde{\mathbb{E}}^{\tilde{\pi}}$ are defined in the same manner as in Definition 1.
The augmented MDP naturally leads to an equivalent problem
$\displaystyle \max_{\tilde{\pi}\in\tilde{\Pi}^{\rm M}}\ \tilde{\mathbb{E}}^{\tilde{\pi}}\Big[\sum_{t=0}^{T-1} r_t(X_t, A_t) + r_T(X_T)\Big] \quad {\rm s.t.}\ \ \tilde{\mathbb{P}}^{\tilde{\pi}}(C_T \ge k) \le \Delta_k\ \ \text{for each } k,$   (16)
where the search space $\tilde{\Pi}^{\rm M}$ is the set of Markov policies for the augmented MDP. The following theorem is the counterpart of Theorem 1.
Theorem 2
The optimal values of the problems (15) and (16) coincide.
Proof.
The claim can be proven in a manner similar to the proof of Theorem 1. ∎
In contrast to the exponential growth of the original search space, the dimension of the set of Markov policies for the augmented MDP grows only polynomially with the horizon length. The problem (16) is also a standard constrained MDP and can be reformulated as a linear program.
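For illustration, the binary-flag augmentation sketched in Sec. III-A extends to the multi-alarm case by replacing the flag with a saturated counter; the array layout and the saturation at a maximum count are our own assumptions.

```python
import numpy as np

def augment_mdp_count(P, alarm, max_count):
    """Count-augmented transition tensor for the multi-alarm formulation.
    The augmented state (x, c) is indexed as x + c * nx, where c is the number
    of alarms so far, saturated at max_count. As in the binary case, the counter
    is incremented when the successor state lies in the alarm region."""
    nx, na, _ = P.shape
    nc = max_count + 1
    P_aug = np.zeros((nc * nx, na, nc * nx))
    for c in range(nc):
        for x in range(nx):
            for a in range(na):
                for x_next in range(nx):
                    c_next = min(c + int(alarm[x_next]), max_count)
                    P_aug[x + c * nx, a, x_next + c_next * nx] += P[x, a, x_next]
    return P_aug
```

Each constraint $\tilde{\mathbb{P}}^{\tilde{\pi}}(C_T \ge k) \le \Delta_k$ is again a linear function of the final-state distribution, so the occupation-measure LP of Sec. III-B carries over with one additional linear constraint per $k$.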
Remark: The constraint in the extended problem (15) can be regarded as a restriction on the probability distribution of the number of alarms. In other words, the formulation utilizes a risk measure on a probability distribution. Several risk measures have been proposed, such as CVaR, which is one of the most commonly used coherent risk measures [32]. Those risk measures compress the risk of a random variable into a scalar value. Because our formulation uses the full information of the distribution, the constraint can be regarded as a fine-grained version of constraints based on standard risk measures.
V NUMERICAL EXAMPLE
We verify the effectiveness of the proposed method through a numerical example. Consider the state space . The state represents the reference value, and the adversary's objective is to push the state away from the reference as much as possible. The action space is given by . The time horizon is given by . The state transition function is given by
for and
for . The alarm region is given by . The reward function is given by for any and .
Consider the formulation of Problem 1. Let the constant on the stealth condition be given by . Fig. 2 depicts the simulation results with the optimal policy obtained by solving the equivalent formulation (14). Fig. 2a depicts the probability mass function of the total number of alarms during the process. It can be observed that the resulting probability distribution satisfies the stealth constraint in (4), and hence Problem 1 is solved through the proposed method. On the other hand, as pointed out in Sec. IV, the probability mass is concentrated on cases with more than ten alarms. This result indicates that the formulation in Problem 1 leads to a policy under which the number of alarms becomes large once an alarm is triggered. Figs. 2b and 2c depict the sample paths over 100 trials and the empirical means conditioned on whether an alarm is triggered at least once during the process, respectively. Both subfigures suggest the same tendency: the adversary no longer cares about being detected once an alarm is triggered.
[Fig. 2: Simulation results for Problem 1. (a) Probability mass function of the total number of alarms. (b) Sample paths over 100 trials. (c) Empirical means conditioned on whether an alarm is triggered at least once.]
Next, consider the formulation of Problem 2. Let the constants on the stealth condition be given by . Fig. 3 depicts the simulation results. The subfigures correspond to those in Fig. 2. It can be observed from Fig. 3a that the obtained probability mass function looks more natural than that in Fig. 2a, i.e., the probability decreases as the number of alarms increases. It can also be observed from Figs. 3b and 3c that the resulting state trajectories are also more natural, i.e., they stay close to the boundary of the alarm region even if an alarm is triggered during the process. The evaluated attack impacts through Problems 1 and 2 are compared in Table I.
[Fig. 3: Simulation results for Problem 2. The subfigures correspond to those in Fig. 2.]
VI CONCLUSION
This study has addressed the attack impact evaluation problem for control systems. The problem is formulated as an MDP with a temporally joint chance constraint. The difficulty of the optimal control problem lies in the explosion of the search space owing to the dependency of the optimal policy on the entire history. In this paper, we have shown that the information of whether an alarm has been triggered is sufficient for optimal decision making and that the size of the search space can be reduced by state space augmentation. Moreover, the problem has been extended by taking the number of alarms into account in order to obtain reasonable strategies and to evaluate the attack impact more precisely.
Further research directions include extension to infinite and continuous state spaces, discretization of which results in a finite but huge state space when the dimension is high. Moreover, although the size of the search space is reduced compared with the original one, the optimal policy still depends on the time instance. To reduce its size further and obtain an optimal stationary policy, extension to the infinite horizon problem is of interest. Finally, in the extended formulation, we have used the full information of the probability distribution. Clarifying relationships to standard risk measures, such as CVaR, is an open problem.
Appendix A Example: History-Dependency of Optimal Policy and Sufficiency of Alarm Information
Consider the MDP illustrated by Fig. 4. In the example, the adversary can take an action only at . When the action is taken, the state reaches with probability one and the adversary obtains reward . On the other hand, when the action is taken, the state reaches or with equal probabilities. The amount of the reward is in the case of , while no reward is given in the case of . The alarm region is given as . The action can be interpreted as a risky action in the sense that it leads to large reward in expectation but may trigger an alarm.
[Fig. 4: MDP of the example in Appendix A.]
The history-dependent policies are parameterized by and with parameters . The joint chance constraint is written by Thus the feasible region of is . The objective function is written by Because this is monotonically increasing with respect to and , the optimal values are which leads to
On the other hand, the Markov policies are parameterized by with . The joint chance constraint is and the objective function is . Thus the optimal value is which leads to Denoting the value of the objective function with a policy by we have which implies that Markov policies cannot achieve the optimal value for the example.
The optimal history-dependent policy means that the adversary reduces the risk when no alarm has been triggered, while she takes the risky action when there has been an alarm. In other words, the decision making at the state depends on the alarm history. This observation leads to the hypothesis that this binary information, in addition to the current state, is sufficient for optimal decision making, and hence to the idea of state space augmentation.
References
- [1] K. E. Hemsley and D. R. E. Fisher, “History of industrial control system cyber incidents,” U.S. Department of Energy Office of Scientific and Technical Information, Tech. Rep. INL/CON-18-44411-Rev002, 2018.
- [2] N. Falliere, L. O. Murchu, and E. Chien, “W32. Stuxnet Dossier,” Symantec, Tech. Rep., 2011.
- [3] Cybersecurity & Infrastructure Security Agency, “Stuxnet malware mitigation,” Tech. Rep. ICSA-10-238-01B, 2014, [Online]. Available: https://www.us-cert.gov/ics/advisories/ICSA-10-238-01B.
- [4] ——, “HatMan - safety system targeted malware,” Tech. Rep. MAR-17-352-01, 2017, [Online]. Available: https://www.us-cert.gov/ics/MAR-17-352-01-HatMan-Safety-System-Targeted-Malware-Update-B.
- [5] ——, “Cyber-attack against Ukrainian critical infrastructure,” Tech. Rep. IR-ALERT-H-16-056-01, 2018, [Online]. Available: https://www.us-cert.gov/ics/alerts/IR-ALERT-H-16-056-01.
- [6] S. Kaplan and B. J. Garrick, “On the quantitative definition of risk,” Risk Analysis, vol. 1, no. 1, pp. 11–27, 1981.
- [7] S. Sridhar, A. Hahn, and M. Govindarasu, “Cyber–physical system security for the electric power grid,” Proc. IEEE, vol. 100, no. 1, pp. 210–224, 2012.
- [8] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson, “A secure control framework for resource-limited adversaries,” Automatica, vol. 51, pp. 135–148, 2015.
- [9] Y. Mo and B. Sinopoli, “On the performance degradation of cyber-physical systems under stealthy integrity attacks,” IEEE Trans. Autom. Control, vol. 61, no. 9, pp. 2618–2624, 2016.
- [10] C.-Z. Bai, F. Pasqualetti, and V. Gupta, “Data-injection attacks in stochastic control systems: Detectability and performance tradeoffs,” Automatica, vol. 82, pp. 251 – 260, 2017.
- [11] D. Umsonst, H. Sandberg, and A. A. Cárdenas, “Security analysis of control system anomaly detectors,” in Proc. 2017 American Control Conference (ACC), 2017, pp. 5500–5506.
- [12] N. H. Hirzallah and P. G. Voulgaris, “On the computation of worst attacks: A LP framework,” in 2018 Annual American Control Conference (ACC), 2018, pp. 4527–4532.
- [13] C. Murguia and J. Ruths, “On reachable sets of hidden CPS sensor attacks,” in 2018 Annual American Control Conference (ACC), 2018, pp. 178–184.
- [14] Y. Chen, S. Kar, and J. M. F. Moura, “Optimal attack strategies subject to detection constraints against cyber-physical systems,” IEEE Trans. Contr. Netw. Systems, vol. 5, no. 3, pp. 1157–1168, 2018.
- [15] A. M. H. Teixeira, “Optimal stealthy attacks on actuators for strictly proper systems,” in 2019 IEEE 58th Conference on Decision and Control (CDC), 2019, pp. 4385–4390.
- [16] J. Milošević, H. Sandberg, and K. H. Johansson, “Estimating the impact of cyber-attack strategies for stochastic networked control systems,” IEEE Trans. Control Netw. Syst., vol. 7, no. 2, pp. 747–757, 2019.
- [17] C. Fang, Y. Qi, J. Chen, R. Tan, and W. X. Zheng, “Stealthy actuator signal attacks in stochastic control systems: Performance and limitations,” IEEE Trans. Autom. Control, vol. 65, no. 9, pp. 3927–3934, 2019.
- [18] X.-L. Wang, G.-H. Yang, and D. Zhang, “Optimal stealth attack strategy design for linear cyber-physical systems,” IEEE Trans. on Cybern., 2020, (early access).
- [19] T. Sui, Y. Mo, D. Marelli, X. Sun, and M. Fu, “The vulnerability of cyber-physical system under stealthy attacks,” IEEE Trans. Autom. Control, vol. 66, no. 2, pp. 637–650, 2020.
- [20] A. A. Cárdenas, S. Amin, Z.-S. Lin, Y.-L. Huang, C.-Y. Huang, and S. Sastry, “Attacks against process control systems: Risk assessment, detection, and response,” in Proc. the 6th ACM ASIA Conference on Computer and Communications Security, 2011.
- [21] C. Murguia and J. Ruths, “On model-based detectors for linear time-invariant stochastic systems under sensor attacks,” IET Control Theory Applications, vol. 13, no. 8, pp. 1051–1061, 2019.
- [22] N. Bäuerle and J. Ott, “Markov decision processes with average-value-at-risk criteria,” Mathematical Methods of Operations Research, vol. 74, no. 3, pp. 361–379, 2011.
- [23] W. B. Haskell and R. Jain, “A convex analytic approach to risk-aware Markov decision processes,” SIAM Journal on Control and Optimization, vol. 53, no. 3, pp. 1569–1598, 2015.
- [24] Y. Chow, M. Ghavamzadeh, L. Janson, and M. Pavone, “Risk-constrained reinforcement learning with percentile risk criteria,” The Journal of Machine Learning Research, vol. 18, no. 1, pp. 6070–6120, 2017.
- [25] C. Baier and J.-P. Katoen, Principles of Model Checking. MIT Press, 2008.
- [26] M. L. Puterman, Markov Decision Process: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 1994.
- [27] J. Giraldo et al., “A survey of physics-based attack detection in cyber-physical systems,” ACM Comput. Surv., vol. 51, no. 4, 2018.
- [28] D. Bertsekas and S. Shreve, Stochastic Optimal Control: The Discrete-Time Case. Athena Scientific, 1996.
- [29] R. Munos and A. Moore, “Variable resolution discretization in optimal control,” Machine Learning, vol. 49, no. 2, pp. 291–323, 2002.
- [30] K. Hinderer, Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter. Springer, 1970.
- [31] E. Altman, Constrained Markov Decision Processes. Chapman and Hall/CRC, 1999.
- [32] P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath, “Coherent measures of risk,” Mathematical Finance, vol. 9, no. 3, pp. 203–228, 1999.