
Adaptive ECCM for Mitigating Smart Jammers

Abstract

This paper considers adaptive radar electronic counter-countermeasures (ECCM) to mitigate ECM by an adversarial jammer. Our ECCM approach models the jammer-radar interaction as a Principal Agent Problem (PAP), a popular economics framework for interaction between two entities with an information imbalance. In our setup, the radar does not know the jammer’s utility. Instead, the radar learns the jammer’s utility adaptively over time using inverse reinforcement learning. The radar’s adaptive ECCM objective is two-fold: (1) maximize its utility by solving the PAP, and (2) estimate the jammer’s utility by observing the jammer’s response. Our adaptive ECCM scheme uses ideas from revealed preference in micro-economics and the principal agent problem in contract theory. Our numerical results show that, over time, our adaptive ECCM scheme both identifies the jammer’s utility and mitigates the jammer’s impact.

Index Terms—  Adaptive electronic counter-countermeasures, Afriat’s theorem, Bayesian target tracker, principal agent problem, information asymmetry, electronic warfare

1 Introduction

Electronic Countermeasures (ECM) are widely used by jammers to degrade a radar’s measurement accuracy in a shared spectrum environment. To mitigate the impact of ECM, modern radars implement Electronic Counter-Countermeasures (ECCM). We consider a target-tracking cognitive radar that maximizes its signal-to-noise ratio (SNR) and is aware of an adversarial jammer trying to degrade the radar’s performance. At the start of the radar-jammer interaction, the radar has no information about the jammer’s strategy. The radar learns the jammer’s ECM strategy over time via repeated radar-jammer interactions while ensuring its measurement accuracy stays above a specified threshold; we term this adaptive ECCM. Standard ECM and ECCM strategies are summarized in [1] and [2].

This paper formulates the radar’s ECCM objective as a Principal Agent Problem (PAP), wherein the radar gradually learns the jammer’s objective using Inverse Reinforcement Learning (IRL). We assume the radar possesses IRL capability and can learn the jammer’s utility, while the jammer is a naive agent: it only maximizes its utility. Reconstructing an agent’s preferences from a finite time-series dataset is the central theme of revealed preference in micro-economics [3, 4]. In the radar context, the radar uses the celebrated result of Afriat’s theorem [3] to estimate the jammer’s utility over time. The imbalance of information between the radar and the jammer is termed information asymmetry and is widely studied in micro-economics [5, 6]. The Principal Agent Problem (PAP) is a well-known framework for mitigating information asymmetry. The PAP [7] has been studied extensively in micro-economics for contract design between two entities, with applications to labor contracts [8], insurance markets [9, 10], and differential privacy in machine learning [11].

Our choice of the PAP is motivated by its flexibility to accommodate additional constraints on the information available to the radar and the jammer. This flexibility has been exploited in prior ECCM literature [12]. However, our work generalizes [12] in that the radar now has to learn the jammer’s utility over time using IRL. Similar approaches for mitigating information asymmetry exist in the literature, for example, in consumer economics [13], where the seller maximizes its profit by solving a leader-follower problem. In [14], the seller maximizes its utility by modeling its interaction with the consumer as a Stackelberg game.

2 Radar-Jammer Interaction for Adaptive ECCM

In this section, we formulate the radar, jammer and target interaction shown in Fig. 1. The radar and jammer interact while the target evolves independently. The main idea is that the radar exploits this interaction to mitigate the effect of ECM by using ECCM. For simplicity we only consider a single target. We formulate the scenario on two timescales. Let $n\in\{0,1,2,\ldots\}$ and $k\in\{0,1,2,\ldots\}$ denote the time indices for the fast and slow timescale, respectively. On the fast timescale, the target evolves according to linear, time-invariant dynamics with additive Gaussian noise. On the slow timescale, the radar and the jammer participate in electronic warfare (EW) and update their ECCM and ECM strategies, respectively.

[Fig. 1 block diagram: the target dynamics evolve on the fast timescale; the radar’s probe $\alpha_k$ and the jammer’s response $\beta_k$ are exchanged on the slow timescale.]
Fig. 1: Two-timescale radar-jammer problem for tracking a target. The target dynamics/transients occur on the fast timescale; the radar-jammer PAP is solved on the slow timescale.

Target dynamics and Radar’s measurement

The target evolves on the fast timescale $n$. We use the standard linear Gaussian model [15] for the kinematics of the target and the initial condition $x_0$:

\begin{split}
x_{n+1} &= A\,x_{n} + w_{n}, \quad x_{0} \sim \mathcal{N}(\hat{x}_{0}, \Sigma_{x_{0}}) \\
y_{n} &= h(x_{n}, v_{n}), \quad v_{n} \sim \mathcal{N}\big(0, V(\alpha_{k}, \beta_{k})\big) \\
A &= \operatorname{diag}[\mathbf{T}, \mathbf{T}, \mathbf{T}], \quad \mathbf{T} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}
\end{split} (1)

Here, $T$ denotes the sampling time. The initial condition $\mathcal{N}(\hat{x}_{0},\Sigma_{x_{0}})$ denotes a Gaussian random vector with mean $\hat{x}_{0}$ and covariance $\Sigma_{x_{0}}$. The state of the target at time $n$ is $x_{n}\in\mathbb{R}^{6}$, comprising the $x,y,z$ position and velocity. The i.i.d. sequence of Gaussian random vectors $\{w_{n}\sim\mathcal{N}(0,Q)\}$ models the acceleration maneuvers of the target. Here, $h(\cdot)$ represents the radar’s sensing functionalities, and $V(\alpha_{k},\beta_{k})$ is the measurement noise covariance of the random vector $v_{n}$, where $k$ indexes the slow timescale on which the radar and the jammer update their strategies.

We make the standard assumption that the radar has a Bayesian tracker [16], which recursively computes the posterior distribution $\mathbb{P}(x_{n}|y_{1},\ldots,y_{n})$ of the target state at each time $n$. The radar can also perform inverse Bayesian filtering for target tracking [17, 18].
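For concreteness, the sketch below simulates the fast-timescale model (1) together with a Kalman filter form of the Bayesian tracker, assuming a linear measurement $h(x,v)=Hx+v$ that observes the three positions; the matrices $H$, $Q$, $V$ and the sampling time are illustrative placeholders rather than values prescribed by the paper.

```python
import numpy as np

# Minimal sketch of the fast-timescale model (1) and a Kalman filter tracker.
# Assumes a *linear* measurement h(x, v) = H x + v observing the x, y, z
# positions; H, Q, V and the sampling time below are illustrative placeholders.
T = 1.0                                           # sampling time
Tblk = np.array([[1.0, T], [0.0, 1.0]])
A = np.kron(np.eye(3), Tblk)                      # A = diag[T, T, T] in (1)
H = np.kron(np.eye(3), np.array([[1.0, 0.0]]))    # measure positions only
Q = 0.01 * np.eye(6)                              # maneuver (process) noise covariance
V = 0.5 * np.eye(3)                               # measurement noise covariance V(alpha_k, beta_k)

rng = np.random.default_rng(0)
x = rng.normal(size=6)                            # x_0 ~ N(x_hat_0, Sigma_{x_0})
x_hat, Sigma = np.zeros(6), np.eye(6)             # tracker's prior mean and covariance

for n in range(100):
    # Target evolution and radar measurement on the fast timescale
    x = A @ x + rng.multivariate_normal(np.zeros(6), Q)
    y = H @ x + rng.multivariate_normal(np.zeros(3), V)

    # Kalman filter recursion: posterior P(x_n | y_1, ..., y_n)
    x_pred = A @ x_hat
    S_pred = A @ Sigma @ A.T + Q
    K = S_pred @ H.T @ np.linalg.inv(H @ S_pred @ H.T + V)
    x_hat = x_pred + K @ (y - H @ x_pred)
    Sigma = (np.eye(6) - K @ H) @ S_pred
```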

3 Radar-Jammer Interaction. Principal Agent Problem (PAP) with IRL

The working assumption in this paper is that the radar does not know the jammer’s utility at the start of the radar-jammer interaction. The radar ‘learns’ the jammer’s utility over time via inverse reinforcement learning (IRL), as schematically illustrated in Fig. 2. The efficacy of the radar’s ECCM in mitigating the jammer improves as the radar’s estimate of the jammer’s utility becomes more accurate.

Information Symmetry. Radar knows Jammer’s Utility

For simplicity, let us first consider the case where the radar completely knows the jammer’s utility. This scenario is referred to as ‘information symmetry’ in contract theory. The radar optimizes its action at time $k$ under the assumption that the jammer is cognitive, i.e., the jammer is a utility maximizer. The radar’s transmitted signal at time $k$ is given by:

Protocol 1 (Principal Agent Problem (PAP)).
Radar’s Probe: \alpha_{k} = \operatorname{argmax}_{\alpha}\, u_{R}(\alpha,\beta), (2)
subject to \beta \in \operatorname{argmax}_{\bar{\beta}}\, u_{J}(\alpha,\bar{\beta}), \quad u_{J}(\alpha,\beta) = u_{\theta}(\beta) + \lambda\,\alpha^{\prime}\beta. (3)
Jammer’s Response: \beta_{k} = \operatorname{argmax}_{\beta}\, u_{J}(\alpha_{k},\beta). (4)

In (2), $u_{R}$ and $u_{J}$ denote the utility functions of the radar and the jammer, and $\alpha_{k}$ and $\beta_{k}$ denote the radar’s and jammer’s transmission signals at time $k$, respectively. Observe that both utilities depend on both the radar’s and jammer’s responses. In words, the constraint in the optimization problem (2) means that the jammer’s response to the radar’s signal $\alpha$ maximizes the jammer’s utility $u_{J}$; hence the feasible tuples $(\alpha,\beta)$ over which the radar maximizes its utility are restricted to a manifold. In (3), $u_{\theta}(\beta)$ is the jammer’s system utility, parametrized by the vector $\theta$, which is independent of the radar’s signal $\alpha$. In (2), it is implicit that the radar knows $\theta$. The radar’s utility $u_{R}$ can be a combination of tracking performance metrics, e.g., SNR, and the negative of the jammer’s utility (this term mitigates the jammer); we give a concrete example in Sec. 4. Since the radar knows the jammer’s utility (3), it can ensure the jammer’s utility is mitigated by a smart choice of $\alpha_{k}$ obtained by solving (2).

Finally, a few words on the structure of the jammer’s utility (3). The first term can be viewed as the jammer’s system performance and is independent of the radar’s action. The second term $\lambda\alpha^{\prime}\beta$ regulates the jammer’s signal power in terms of the radar’s transmission signal power. Intuitively, a large radar transmission power requires a large jammer transmission power for effective mitigation of the radar; see [18] for the relation between $\alpha^{\prime}\beta$ and the asymptotic covariance of the jammer’s Bayesian tracker.
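To make Protocol 1 concrete, the sketch below solves the bilevel problem (2)-(4) for the quadratic jammer utility adopted later in the numerical experiment (15)-(16). The closed-form inner best response $\beta_i = \lambda\alpha_i/(2\theta_i)$ (projected onto the jammer’s action set) and the crude random search over the radar’s action set are illustrative choices, not the solver prescribed by the paper.

```python
import numpy as np

# Sketch of Protocol 1 for the quadratic jammer utility (15) and radar
# utility (16) of Sec. 4. lam is lambda in (3); delta weighs the jammer
# mitigation term in (16). All solver choices here are illustrative.
lam, delta = 0.5, 1.0
rng = np.random.default_rng(1)

def jammer_best_response(alpha, theta):
    """Inner problem (3)-(4): argmax_beta -beta' diag(theta) beta + lam*alpha'beta.
    The unconstrained maximizer beta_i = lam*alpha_i / (2*theta_i) is projected
    onto the jammer's action set {beta >= 0, ||beta||_2 <= 1}."""
    beta = np.clip(lam * alpha / (2.0 * np.maximum(theta, 1e-6)), 0.0, None)
    return beta / max(1.0, np.linalg.norm(beta))

def u_R(alpha, beta, theta_est):
    """Radar utility (16): SNR-like term minus delta times the (estimated) jammer utility."""
    snr = (alpha @ (beta * alpha)) / (alpha @ (beta * alpha) + beta @ beta + 1e-12)
    u_J_hat = -beta @ (theta_est * beta) + lam * alpha @ beta
    return snr - delta * u_J_hat

def solve_pap(theta_est, n_samples=2000):
    """Outer problem (2)/(5): crude random search over the radar's action set,
    with the jammer anticipated via its best response."""
    best_alpha, best_val = None, -np.inf
    for _ in range(n_samples):
        alpha = rng.uniform(size=4)
        alpha /= max(1.0, np.linalg.norm(alpha))
        val = u_R(alpha, jammer_best_response(alpha, theta_est), theta_est)
        if val > best_val:
            best_alpha, best_val = alpha, val
    return best_alpha
```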

[Fig. 2 block diagram: the smart jammer maximizes $u_J$ and responds with $\beta_k$; the radar’s IRL block learns $\hat{u}_{J,k}$ via Thm. 1, and its PAP block maximizes $u_R$ to generate the probe $\alpha_k$.]
Fig. 2: The ECCM problem involves the radar maximizing its utility based on its noisy estimate of the jammer’s utility. The radar estimates this utility via IRL, which allows it to learn the jammer’s utility adaptively.

Information Asymmetry. Radar learns Jammer’s utility via repeated interactions

We now consider the information asymmetric version of Protocol 1, where the radar only has a noisy estimate of the jammer’s utility. The rest of the paper deals with the information asymmetric case. Compared to the information symmetric case, the radar’s aim now is two-fold: (1) maximize its utility $u_{R}$, and (2) estimate the jammer’s utility and thereby decrease the information asymmetry. Intuitively, a more accurate estimate of the jammer’s utility results in increased efficacy of the radar’s ECCM. The radar-jammer interaction (2) can now be expressed as:

Protocol 2 (Adaptive ECCM. Hybrid PAP with IRL).

Consider a radar tracking a target in the presence of an adversarial jammer. Assume the radar does not know the jammer’s utility. The radar takes the following actions for all $k>0$:
Step 1. Generate signal $\alpha_{k}$ by solving the following PAP:

\alpha_{k} = \operatorname{argmax}_{\alpha}\, u_{R}(\alpha,\beta), \quad \beta \in \operatorname{argmax}_{\bar{\beta}}\, \widehat{u}_{J,k-1}(\alpha,\bar{\beta}), (5)
\text{where } \widehat{u}_{J,k-1}(\alpha,\beta) = u_{\hat{\theta}_{k-1}}(\beta) + \lambda\,\alpha^{\prime}\beta. (6)

In (6), $\hat{\theta}_{k-1}$ is the radar’s estimate of the jammer’s utility parameter $\theta$ in (3) at time $k-1$.
Step 2. Receive the jammer’s signal $\beta_{k}$ and update the estimate of the jammer’s utility:

\hat{\theta}_{k} = \operatorname{\texttt{IRL-Update}}(\mathcal{D}_{k-1}\cup\{\alpha_{k},\beta_{k}\}), (7)
\mathcal{D}_{k-1} = \{\alpha_{k^{\prime}},\beta_{k^{\prime}}\}_{k^{\prime}=1}^{k-1}, (8)

where $\operatorname{\texttt{IRL-Update}}(\cdot)$ is described in (13), and $\mathcal{D}_{k}$ is the radar’s dataset at time $k$.

Protocol 2 is our adaptive ECCM scheme, illustrated in Fig. 2. In Protocol 2, Steps 1 and 2 correspond to the PAP and IRL blocks, respectively, in Fig. 2. In words, the radar first chooses the action $\alpha_{k}$ that maximizes its utility $u_{R}$ subject to its current estimate of the jammer’s utility. In response to the radar’s signal $\alpha_{k}$, the jammer chooses action $\beta_{k}$. The radar uses this new data point $(\alpha_{k},\beta_{k})$ to update its estimate of the jammer’s utility. The asymptotic goal of the radar is to identify the probe signal $\alpha^{\ast}$ that satisfies:

\alpha^{\ast} \in \operatorname{argmax}_{\alpha}\, u_{R}(\alpha,\beta), \text{ such that}
\beta \in \operatorname{argmax}_{\beta^{\prime}}\, u_{J}(\alpha,\beta^{\prime}). (9)

That is, as time $k\rightarrow\infty$, the radar knows the jammer’s utility exactly and is able to effectively mitigate the jammer’s ECM tactics.
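As a minimal sketch, the loop below carries out Steps 1 and 2 of Protocol 2, reusing the hypothetical helpers solve_pap and jammer_best_response from the Protocol 1 sketch above; irl_update stands for the max-margin estimator (13) sketched after Theorem 1 below.

```python
import numpy as np

# Protocol 2 loop (sketch). solve_pap and jammer_best_response are the
# hypothetical helpers from the Protocol 1 sketch; irl_update stands for the
# max-margin estimator (13). These are illustrative names, not a prescribed API.
theta_true = np.random.default_rng(3).uniform(size=4)   # jammer parameter, unknown to the radar
theta_hat = np.full(4, 0.5)                              # radar's initial guess theta_hat_0
dataset = []                                             # D_k = {(alpha_k', beta_k')}

for k in range(1, 51):
    alpha_k = solve_pap(theta_hat)                       # Step 1: radar probe (5)-(6)
    beta_k = jammer_best_response(alpha_k, theta_true)   # jammer responds using its true utility
    dataset.append((alpha_k, beta_k))
    est = irl_update(dataset)                            # Step 2: IRL update (7)
    if est is not None:                                  # keep the previous estimate if the test is degenerate
        theta_hat = est
```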

IRL for Learning the Jammer’s Utility

We now describe the IRL aspect (the $\operatorname{\texttt{IRL-Update}}(\cdot)$ module) of the adaptive ECCM scheme outlined in Protocol 2. Below, we present a key result from revealed preference theory, namely, Afriat’s theorem [3, 19, 4]. Afriat’s theorem is a remarkable result in micro-economics for non-parametric utility estimation from a finite dataset of probes and responses generated by a decision-maker.

In the radar context of this paper, Afriat’s theorem generates a set-valued estimate of the utility parameter $\theta$ (3) in the jammer’s utility $u_{J}$. Since the radar knows a priori that the jammer’s utility has the parametric form (3), Afriat’s theorem below yields tighter conditions for utility maximization.

Theorem 1 (Afriat’s theorem for Identifying Jammer Utility).

Consider a radar tracking a target in the presence of a jammer. Suppose the parameter $\theta$ in the jammer’s utility (3) belongs to a compact set $\Theta\subseteq\mathbb{R}^{n}$, $\theta$ is fixed for all $k$, and the radar knows the value of $\lambda$ in the jammer’s utility (3). Then, at time $k$:
(a) The radar checks for the existence of a feasible parameter $\theta$ that satisfies (4) by checking the feasibility of a set of inequalities:

\begin{split}
&\text{There exists a feasible } \theta^{\prime}\in\Theta \text{ s.t. } \operatorname{AFT}(\theta^{\prime},\mathcal{D}_{k})\leq\mathbf{0} \\
\Leftrightarrow\; &\exists\, u_{J} \text{ s.t. } \beta_{k^{\prime}}\in\operatorname{argmax}_{\beta} u_{J}(\alpha_{k^{\prime}},\beta), \quad \forall k^{\prime}=1,\ldots,k,
\end{split} (10)

where the dataset $\mathcal{D}_{k}$ is defined in (8) and the set of inequalities $\operatorname{AFT}(\theta^{\prime},\mathcal{D}_{k})\leq\mathbf{0}$ is given by:

u_{\theta^{\prime}}(\beta_{s}) - u_{\theta^{\prime}}(\beta_{t}) + \lambda\alpha_{t}^{\prime}(\beta_{s}-\beta_{t}) \leq 0 \quad \forall\, t,s \in \{1,\dots,k\}. (11)

(b) If $\operatorname{AFT}(\cdot,\mathcal{D}_{k})\leq\mathbf{0}$ has a feasible solution, the set-valued estimate of the jammer’s utility parameter $\theta$ is given by:

\Theta_{k} = \{\theta^{\prime} : \operatorname{AFT}(\theta^{\prime},\mathcal{D}_{k})\leq\mathbf{0}\}. (12)
Fig. 3: Adaptive ECCM mitigates the jammer’s utility and obtains a more accurate estimate of the jammer’s utility over time $k$. In sub-figure (a), the radar’s estimate of the jammer’s utility coefficients improves with $k$. As seen in (b), the jammer’s utility decreases with $k$ as the radar’s estimate of $\theta$ improves.

Theorem 1 is our key IRL result that the radar uses to estimate the parameter $\theta$ in the jammer’s utility (3). The inequalities (11) are called Afriat’s inequalities in the literature. The feasibility of the inequalities (11) is both necessary and sufficient for the dataset $\mathcal{D}_{k}$ to be generated by a utility maximizer. The feasible set $\Theta_{k}$ (12) comprises the set of feasible parameters that rationalize the radar’s dataset $\mathcal{D}_{k}$ (8).

In the radar context, the radar uses Afriat’s theorem to identify whether the jammer is cognitive. Theorem 1 assumes the radar knows the parameter $\lambda$ in (3). If the feasibility inequalities (11) have a solution, Theorem 1 provides a set-valued reconstruction of the jammer’s utility parameter $\theta$ via (12). However, the radar’s optimization (5) requires a point-valued estimate of the jammer’s utility, not a set-valued estimate. Hence, for the adaptive ECCM scheme to be well-posed, we assume the radar uses the max-margin estimate of the jammer’s utility parameter $\theta$:

\hat{\theta}_{k} = \operatorname{\texttt{IRL-Update}}(\mathcal{D}_{k}) = \operatorname{argmax}_{\theta^{\prime}\in\Theta_{k}} \mathcal{M}(\theta^{\prime}), (13)
\mathcal{M}(\theta^{\prime}) = \min_{s,t\in\{1,\ldots,k\}} u_{\theta^{\prime}}(\beta_{s}) - u_{\theta^{\prime}}(\beta_{t}) + \lambda\alpha_{t}^{\prime}(\beta_{s}-\beta_{t}), (14)

where $\Theta_{k}$ is computed via the feasibility test (12). The quantity $\mathcal{M}(\cdot)$ is the margin of the IRL feasibility test (10) given the dataset $\mathcal{D}_{k}$. Intuitively, the margin indicates how far a candidate parameter estimate $\theta^{\prime}$ is from failing the IRL test for utility maximization behavior; see [20] for an elaborate discussion on the margin of set-valued IRL estimators. Our choice of the max-margin estimate for the jammer’s utility is corroborated by similar approaches in max-margin classification [21] and in IRL for MDPs [22].
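As an illustration, the sketch below implements one concrete reading of the max-margin estimator (13)-(14) for the quadratic utility $u_\theta(\beta)=-\beta^{\prime}\operatorname{diag}(\theta)\beta$ used in Sec. 4: since the Afriat inequalities (11) are then linear in $\theta$, finding the parameter in $[0,1]^4$ that satisfies them with the largest uniform slack is a linear program. The use of scipy’s LP solver and the box constraint on $\theta$ are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def irl_update(dataset, lam=0.5):
    """One concrete reading of the max-margin estimator (13)-(14) for
    u_theta(beta) = -beta' diag(theta) beta: the Afriat inequalities (11) are
    linear in theta, so we find theta in [0,1]^d satisfying them with the
    largest uniform slack m via a linear program."""
    alphas = np.array([a for a, _ in dataset])
    betas = np.array([b for _, b in dataset])
    K, d = betas.shape
    if K < 2:
        return np.full(d, 0.5)                  # not enough data; keep the initial guess
    rows, rhs = [], []
    for t in range(K):
        for s in range(K):
            if s == t:
                continue
            # theta'(beta_t^2 - beta_s^2) + m <= -lam * alpha_t'(beta_s - beta_t)
            rows.append(np.append(betas[t] ** 2 - betas[s] ** 2, 1.0))
            rhs.append(-lam * alphas[t] @ (betas[s] - betas[t]))
    c = np.zeros(d + 1)
    c[-1] = -1.0                                # maximize the slack m
    bounds = [(0.0, 1.0)] * d + [(None, None)]
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    if not res.success or res.x[-1] < 0:
        return None                             # no theta in the box satisfies (11)
    return res.x[:d]                            # point estimate theta_hat_k
```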

4 Numerical Experiment. Adaptive ECCM

We now illustrate our adaptive ECCM scheme in Protocol 2 via a simulation-based numerical experiment. Our numerical experiment uses the following parameters:

Jammer action: \beta_{k}\in\mathbb{R}^{4}_{+},~\|\beta_{k}\|_{2}\leq 1
Jammer utility: u_{J}(\alpha,\beta) = -\beta^{\prime}\operatorname{diag}(\theta)\beta + \lambda\,\alpha^{\prime}\beta, (15)
Radar action: \alpha_{k}\in\mathbb{R}^{4}_{+},~\|\alpha_{k}\|_{2}\leq 1,
Radar utility: u_{R}(\alpha,\beta) = \frac{\alpha^{\prime}\operatorname{diag}(\beta)\alpha}{\alpha^{\prime}\operatorname{diag}(\beta)\alpha+\beta^{\prime}\beta} - \delta\,\hat{u}_{J}(\alpha,\beta), (16)
Jammer utility parameter: \theta=[\theta_{1}~\theta_{2}~\theta_{3}~\theta_{4}],~\theta_{i}\sim\operatorname{Unif}([0,1]),~i=1,2,3,4,~\lambda=0.5
Radar’s initial guess for \theta: \hat{\theta}_{0}=[0.5,\,0.5,\,0.5,\,0.5]

In (16), $\delta>0$ weighs the jammer-mitigation term and $\hat{u}_{J}$ is the radar’s estimate of the jammer’s utility.
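Under these parameters, a simple way to see the value of learning $\theta$ is to compare the jammer utility achieved when the radar probes using the true $\theta$ (the information-symmetric benchmark of Protocol 1) against probing with the initial guess $\hat{\theta}_0$. The sketch below does this with the hypothetical helpers from the earlier sketches; the random draw of $\theta$ is illustrative.

```python
import numpy as np

# Compare the jammer utility achieved when the radar probes with the true
# theta (information-symmetric benchmark, Protocol 1) versus the initial
# guess theta_hat_0 (k = 0 in Protocol 2). Uses the hypothetical helpers
# solve_pap and jammer_best_response from the earlier sketches.
rng = np.random.default_rng(4)
theta_true = rng.uniform(size=4)                # theta_i ~ Unif([0, 1])
theta_hat0 = np.full(4, 0.5)                    # radar's initial guess
lam = 0.5

def achieved_jammer_utility(theta_used_by_radar):
    alpha = solve_pap(theta_used_by_radar)                # radar's probe (2)/(5)
    beta = jammer_best_response(alpha, theta_true)        # jammer always uses its true utility
    return -beta @ (theta_true * beta) + lam * alpha @ beta

print("u_J, radar knows theta:     ", achieved_jammer_utility(theta_true))
print("u_J, radar uses theta_hat_0:", achieved_jammer_utility(theta_hat0))
```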

Figure 3 shows the performance of the radar’s adaptive ECCM scheme (Protocol 2) over $K=50$ interactions between the radar and the jammer: it plots the IRL estimation error $\|\theta-\hat{\theta}_{k}\|_{2}$ and the jammer utility $u_{J}(\alpha_{k}^{*},\beta_{k}^{*})$ over time. We observe that the jammer’s utility decreases with $k$, and the distance between the radar’s IRL estimate $\hat{\theta}_{k}$ and the true parameter $\theta$ decreases with time. This trend is expected since (1) the size of the set-valued IRL parameter estimate $\Theta_{k}$ (12) shrinks with the number of radar-jammer interactions, and (2) the radar trades off maximizing its own utility against minimizing the jammer’s utility (16).

5 Conclusion

This paper provides a principled approach to adaptive ECCM in which the radar needs to learn the jammer’s utility over time: how can a radar learn a jammer’s ECM tactics and simultaneously mitigate ECM via ECCM? Our key adaptive ECCM result is Protocol 2. The main idea is to choose a myopic action that optimizes radar performance based on the current estimate of the jammer’s utility, and then update the estimate of the jammer’s utility. To do so, we use the principal agent problem (PAP) from contract theory and revealed preference-based inverse reinforcement learning tools from micro-economics. In terms of future extensions, it is of interest to investigate the convergence properties of the adaptive ECCM scheme based on the hybrid PAP-IRL approach. We shall also consider the case where the radar observes a noisy response from the jammer, and propose an adaptive ECCM algorithm where the IRL block is replaced with an inverse detector.

References

  • [1] S. L. Johnston. Radar electronic counter-countermeasures. IEEE Transactions on Aerospace and Electronic Systems, (1):109–117, 1978.
  • [2] M. I. Skolnik. Radar handbook. McGraw-Hill Education, 2008.
  • [3] S. Afriat. The construction of utility functions from expenditure data. International Economic Review, 8(1):67–77, 1967.
  • [4] H. Varian. Revealed preference and its applications. The Economic Journal, 122(560):332–338, 2012.
  • [5] N. Dierkens. Information asymmetry and equity issues. Journal of Financial and Quantitative Analysis, 26(2):181–199, 1991.
  • [6] V. Y. Tsvetkov. Information asymmetry as a risk factor. European Researcher, (11-1):1937–1943, 2014.
  • [7] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic theory. Oxford University Press, 1995.
  • [8] G. J. Miller. Solutions to Principal-Agent Problems in Firms. Springer US, Boston, MA, 2005.
  • [9] M. Vera-Hernández. Structural estimation of a principal-agent model: Moral hazard in medical insurance. The RAND Journal of Economics, 34(4):670–693, 2003.
  • [10] R. W. Rauchhaus. Principal-agent problems in humanitarian intervention: Moral hazards, adverse selection, and the commitment dilemma. International Studies Quarterly, 53(4):871–884, 2009.
  • [11] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211–407, 2014.
  • [12] A. Gupta and V. Krishnamurthy. Principal–agent problem as a principled approach to electronic counter-countermeasures in radar. IEEE Transactions on Aerospace and Electronic Systems, 58(4):3223–3235, 2022.
  • [13] K. Amin, R. Cummings, L. Dworkin, M. Kearns, and A. Roth. Online learning and profit maximization from revealed preferences. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29-1, 2015.
  • [14] A. Aprem and S. J. Roberts. Optimal pricing in black box producer-consumer stackelberg games using revealed preference feedback. Neurocomputing, 437:31–41, 2021.
  • [15] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. John Wiley & Sons, Ltd, 1998.
  • [16] V. Krishnamurthy. Partially observed Markov decision processes. Cambridge University Press, 2016.
  • [17] K. Pattanayak, V. Krishnamurthy, and C. Berry. How can a cognitive radar mask its cognition? In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5897–5901. IEEE, 2022.
  • [18] V. Krishnamurthy, K. Pattanayak, S. Gogineni, B. Kang, and M. Rangaswamy. Adversarial radar inference: Inverse tracking, identifying cognition, and designing smart interference. IEEE Transactions on Aerospace and Electronic Systems, 57(4):2067–2081, 2021.
  • [19] W. Diewert. Afriat’s theorem and some extensions to choice under uncertainty. The Economic Journal, 122(560):305–331, 2012.
  • [20] K. Pattanayak, V. Krishnamurthy, and C. Berry. How can a radar mask its cognition? arXiv preprint arXiv:2210.11444, 2022.
  • [21] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pages 144–152, 1992.
  • [22] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In Proceedings of the 23rd international conference on Machine learning, pages 729–736, 2006.