
Bohan Li: [email protected] · Junyi Guo (corresponding author): [email protected] · Xiaoqing Liang: [email protected]

Optimal Monotone Mean-Variance Problem in a Catastrophe Insurance Model

Bohan Li (Center for Financial Engineering, Soochow University, Suzhou, Jiangsu 215006, P.R. China) · Junyi Guo (School of Mathematical Sciences, Nankai University, Tianjin 300071, P.R. China) · Xiaoqing Liang (School of Sciences, Hebei University of Technology, Tianjin 300401, P.R. China)
Abstract

This paper explores an optimal investment and reinsurance problem involving both ordinary and catastrophe insurance businesses. Catastrophic events are modeled by a compound Poisson process and also impact the ordinary insurance business: the claim intensity of the ordinary business is described by a Cox process with shot-noise intensity, whose jumps are proportional to the sizes of the catastrophes. This intensity increases when a catastrophe occurs and then decays over time. The insurer's objective is to maximize terminal wealth under the Monotone Mean-Variance (MMV) criterion. In contrast to the classical Mean-Variance (MV) criterion, the MMV criterion is monotone on its entire domain and thus aligns better with fundamental economic principles. We first reformulate the original MMV optimization problem as an auxiliary zero-sum game. By solving the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation, we obtain explicit forms of the value function and the optimal strategies. In addition, we provide the efficient frontier under the MMV criterion. Several numerical examples are presented to demonstrate the practical implications of the results.

Mathematics Subject Classification (2020) 49L20 · 91G80

Keywords. Optimal reinsurance $\cdot$ Monotone mean-variance criterion $\cdot$ Zero-sum game $\cdot$ Shot-noise process $\cdot$ Catastrophe insurance

1 Introduction

Large-scale disasters, especially natural catastrophes, can cause numerous injuries, fatalities, and substantial economic losses. For insurance companies, such catastrophic events significantly amplify the exposure to claims within a short time. In recent years, some insurance firms have introduced specialized catastrophe insurance tailored to specific natural disasters. The management of catastrophic risk has therefore assumed an increasingly pivotal role within the insurance industry. In practice, insurance companies often establish a bankruptcy-remote entity known as a Special Purpose Vehicle (SPV) and purchase reinsurance from the SPV through continuous premium payments. When a catastrophe occurs, the SPV partially compensates the insurance company for its losses. Simultaneously, the SPV packages and securitizes this catastrophic risk as catastrophe bonds and sells them to investors. These catastrophe bonds, whose risk depends on disaster events and typically displays low correlation with financial markets, have become attractive risk-diversification assets for investors.

Typically, optimal reinsurance problems model the insurer's surplus process by the Cramér-Lundberg risk model. However, catastrophe insurance claims differ significantly from those in traditional insurance models: catastrophes lead to an immediate surge in claims, and even after the event the repercussions persist and recede only gradually. Furthermore, the frequency of ordinary claims rises in tandem with the severity of the catastrophe (see [3] for more details). To capture this phenomenon, we employ a Cox process to model the occurrence of ordinary claims, whose intensity is a stochastic process following a shot-noise process representing the occurrence and receding of catastrophes. While this model has been applied in asset pricing (see [12], [25], [24], [18], [37], [39] and [19]), it has received limited attention in the context of catastrophe insurance optimization. [17] first considered the optimal investment and reinsurance problem for the shot-noise process; they used the diffusion approximation of the shot-noise process introduced by [13] to optimize the terminal wealth under the classical MV criterion. [2] minimized the risk of unit-linked life insurance by hedging in a financial market driven by a discontinuous shot-noise process. [4] investigated the optimal investment and reinsurance problem for a Cox process whose claim-arrival intensity is modeled by a diffusion process; the objective was to maximize the expected exponential utility of terminal wealth, and the value function and optimal strategy were provided via the solutions of backward partial differential equations. [6] considered the self-exciting and externally-exciting contagion claim model under the time-consistent MV criterion, with a model first introduced by [14]. Existing research thus primarily focuses on continuous-time models or diffusion-approximation models. The optimization problem for catastrophe models with pure-jump dynamics in both the claim process and the intensity process, especially under the emerging MMV criterion, remains an under-explored topic.

In this paper, we address this issue by considering an insurer managing both ordinary and catastrophe insurance claims under the MMV criterion within a Black-Scholes financial market framework. The insurer faces potential claims of two categories: ordinary claims and catastrophe claims. Ordinary aggregate claims are modeled by a Cox process whose intensity process takes the form of a shot-noise process. Simultaneously, catastrophe claims are modeled by a compound Poisson process sharing the same Poisson counting process as the intensity of ordinary claims; this implies that catastrophe claims occur concurrently with the impact of disasters. We assume that the sizes of catastrophe claims are proportional to the effect of the catastrophe, represented by the jump of the shot-noise process. Specifically, ordinary claims are expressed as:

\[
C_1(t)=\sum_{i=1}^{N_1(t)}U_i,
\]

where $N_1(t)$ is a Cox process with intensity process $\lambda(t)$, and $U_i$ denotes the size of the $i$-th claim, the claim sizes $\{U_i\}_{i\geq 1}$ being independent and identically distributed. The intensity process $\lambda(t)$ is defined as

\[
\lambda(t):=\lambda_0e^{-\delta t}+\sum_{i=1}^{N_2(t)}V_ie^{-\delta(t-\tau_i)}.
\]

Here, $N_2(t)$ is a Poisson process with constant intensity $\rho$, and $\{V_i\}_{i\geq 1}$ are also independent and identically distributed random variables. The shot-noise process $\lambda(t)$ describes the occurrence and receding pattern of catastrophes: $V_i$ represents the impact of the $i$-th catastrophe, and it decays exponentially at rate $\delta$. In practice, the intensity $\rho$ tends to be relatively low because catastrophes are infrequent. We introduce a constant scale factor $k$, so that $kV_i$ denotes the size of the $i$-th catastrophe claim, resulting in the aggregate catastrophe claims process

\[
C_2(t)=\sum_{i=1}^{N_2(t)}kV_i.
\]

Consequently, the total risk assumed by the insurer is

\[
C_1(t)+C_2(t).
\]
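The claim model above is straightforward to simulate. The sketch below samples the shot-noise intensity and both claim processes, generating the ordinary (Cox) claims by thinning a dominating Poisson process. All parameter values and the unit-mean exponential distributions for $U_i$ and $V_i$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions for this sketch)
T, rho, delta, k, lam0 = 10.0, 0.5, 1.0, 2.0, 1.0

def simulate(rng):
    # Catastrophe times tau_i: Poisson process with rate rho on [0, T]
    n2 = rng.poisson(rho * T)
    taus = np.sort(rng.uniform(0.0, T, n2))
    V = rng.exponential(1.0, n2)          # catastrophe impacts V_i
    C2 = k * V.sum()                      # aggregate catastrophe claims at T

    def lam(t):
        # shot-noise intensity lambda(t)
        active = taus <= t
        return lam0 * np.exp(-delta * t) + np.sum(V[active] * np.exp(-delta * (t - taus[active])))

    # Ordinary claims: Cox process simulated by thinning a dominating Poisson process
    lam_max = lam0 + V.sum()              # lambda(t) <= lam0 + sum of all V_i
    n_cand = rng.poisson(lam_max * T)
    cand = rng.uniform(0.0, T, n_cand)
    accept = rng.uniform(0.0, 1.0, n_cand) < np.array([lam(t) for t in cand]) / lam_max
    U = rng.exponential(1.0, accept.sum())  # ordinary claim sizes U_i
    C1 = U.sum()
    return C1, C2

C1, C2 = simulate(rng)  # one sample of (C1(T), C2(T))
```

The thinning step is valid because $\lambda(t)\leq\lambda_0+\sum_iV_i$ pathwise, so the dominating rate never underestimates the true intensity.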

Despite the successful application of the MV criterion in finance and economics, it has the notable drawback of lacking monotonicity: situations may arise where $X<Y$ but $\mathbb{E}(X)-\frac{\theta}{2}\mathbb{V}\mathrm{ar}(X)>\mathbb{E}(Y)-\frac{\theta}{2}\mathbb{V}\mathrm{ar}(Y)$, where $\theta$ represents the investor's risk-aversion parameter. To address this limitation, [31] introduced the MMV criterion, the nearest monotone counterpart to the MV criterion; notably, the MMV criterion coincides with the classical MV criterion on the latter's domain of monotonicity. Although the literature on the MMV criterion is still relatively small, some researchers have ventured into this area. For example, [35] explored this criterion in a continuous-time framework, assuming that the coefficients of the stock prices are random functions of a stochastic process; focusing on the discounted terminal wealth process under the MMV criterion, they obtained optimal portfolio strategies and value functions under specified coefficient conditions. In our earlier works, [28] and [29], we studied the MMV criterion for investment and reinsurance problems in two distinct scenarios: the diffusion-approximation model and the Cramér-Lundberg risk model. Notably, we observed that the optimal strategies in these two frameworks coincide with those derived for the classical MV problem. In a significant contribution, [34] and [36] ([36] shows that the optimal wealth processes of MMV problems are identical to those of MV problems under a certain condition applicable to continuous asset processes; see [16]) extended this understanding by demonstrating that, for a broad class of portfolio choice problems in which the risky assets are continuous semimartingales, the optimal portfolio and value function under the classical MV criterion and the MMV criterion are equivalent. This is not the case in the present paper, since the catastrophe claim process is not continuous. To the best of our knowledge, no research has explored optimization under the MV or MMV criteria in the context of discontinuous catastrophe models.
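The lack of monotonicity can be seen in a two-state toy example (the numbers are our own illustration): $Y$ dominates $X$ in every state, yet the MV functional ranks $X$ higher once the payoff spread is large relative to $1/\theta$.

```python
import numpy as np

def mv(outcomes, probs, theta):
    """Classical mean-variance functional E[X] - (theta/2) * Var(X)."""
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mean = probs @ outcomes
    var = probs @ (outcomes - mean) ** 2
    return mean - 0.5 * theta * var

theta = 1.0         # risk-aversion parameter (illustrative)
probs = [0.5, 0.5]  # two equally likely states
X = [0.0, 0.0]      # X pays nothing
Y = [0.0, 5.0]      # Y >= X in every state, with Y > X in state 2

# MV(X) = 0, while MV(Y) = 2.5 - 0.5 * 6.25 = -0.625 < MV(X)
```

So the MV functional prefers the dominated payoff $X$, violating monotonicity; the MMV criterion removes exactly this pathology.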

In light of these gaps, our paper focuses on an optimal reinsurance problem encompassing both ordinary and catastrophe insurance claims under the MMV criterion. The model allows the insurer to invest in a Black-Scholes financial market. To mitigate claim-related risks, the insurer may purchase reinsurance for both ordinary and catastrophe insurance from the SPV; for analytical simplicity, we confine the reinsurance format to proportional reinsurance. The optimal MMV problem takes the form of a max-min problem: the inner objective is minimized by selecting an alternative probability measure that is absolutely continuous (though not necessarily equivalent) with respect to the reference probability measure $P$, and the outer objective is then maximized by determining the insurer's optimal strategies. To facilitate this analysis, we adopt a procedure similar to [28], in which the alternative probability measure is replaced by its conditional expected Radon-Nikodym derivative with respect to the reference measure. Despite the inherent difficulty that the measure is only absolutely continuous (not equivalent), this approach enables us to represent the conditional expected Radon-Nikodym derivative as the solution to a stochastic differential equation (see [29]). This reformulation converts the original MMV problem into a two-player zero-sum game, incorporating the conditional expected Radon-Nikodym derivative into the state processes. We leverage dynamic programming techniques to address this auxiliary problem and derive the explicit solution to the HJBI equation after rigorous calculations. Additionally, we provide the efficient frontier for the MMV problem involving catastrophe insurance.

The paper is organized as follows. In Section 2, we formulate the insurance model, introduce the MMV criterion, and transform the MMV maximization problem into an auxiliary zero-sum game; employing dynamic programming, we derive the HJBI equation satisfied by the value function. In Section 3, we solve the HJBI equation and demonstrate that the candidate value function and strategies are indeed optimal; at the end of that section, we provide the efficient frontier under the MMV criterion. In Section 4, we investigate the diffusion-approximation model within the MMV framework. Section 5 offers numerical examples and sensitivity analyses to illustrate our findings. Finally, Section 6 concludes.

2 Model Formulation

2.1 Insurance Model

Let $P$ be the real-world probability measure and $R_0(t)$ the surplus of an insurer that operates both an ordinary insurance business and a catastrophe insurance business,

\[
dR_0(t)=c(t)\,dt-dC_1(t)-dC_2(t),
\]

where $c(t)$ is the premium rate at time $t$. The aggregate claims of the insurer consist of two parts: $C_1(t)$, the ordinary aggregate claims, and $C_2(t)$, the aggregate catastrophe claims. In the event of a catastrophe, there is an immediate surge in claims, captured by the catastrophe claim process $C_2(t)$.

In the traditional insurance literature, the aggregate claims process $C_1(t)$ is typically assumed to follow a compound Poisson process. However, considering the impact of catastrophes, the frequency of ordinary claims should be contingent on the occurrence of a catastrophe, and the fixed intensity of a compound Poisson process cannot capture this dynamic claim frequency. To address this limitation, we replace the constant intensity of $C_1(t)$ with a stochastic process $\lambda(t)$. Let $N_1(t)$ denote the counting process of $C_1(t)$; in this context, $N_1(t)$ is a doubly stochastic Poisson process, also known as a Cox process (for more details, we refer the reader to [7], [1], [33], [5], [21], and [22]).

Let $\lambda(t)$ denote the shot-noise process (see [8], [9], [27], and [12]):

\[
\lambda(t)=\lambda_0e^{-\delta t}+\sum_{i=1}^{N_2(t)}V_ie^{-\delta(t-\tau_i)}.
\]

We employ $\lambda(t)$ to characterize the impact of catastrophes on the intensity of $N_1(t)$, where $\lambda_0$ is the initial value, $V_i$ is the impact of the $i$-th catastrophe, $N_2(t)$ is the total number of catastrophes occurring before time $t$, $\tau_i$ is the time at which the $i$-th catastrophe occurs, and $\delta$ is the exponential decay factor of $\lambda$.

Let

\[
C_1(t)=\sum_{i=1}^{N_1(t)}U_i,\qquad C_2(t)=\sum_{i=1}^{N_2(t)}kV_i,
\]

in which each $U_i$ represents the magnitude of the $i$-th ordinary claim. Additionally, the premium rate $c(t)$ is a random process depending on the observations of $\lambda(s)$ on the interval $[0,t)$, because the insurer can monitor the historical record of past catastrophes and evaluate their lasting effects when pricing catastrophe insurance products. Accordingly, $c(t)$ is defined by

\[
\begin{aligned}
c(t):={}&(1+\kappa)\frac{d\,\mathbb{E}^P[C_1(t)\,|\,\lambda(s),0\leq s\leq t]}{dt}+(1+\iota)\frac{d\,\mathbb{E}^P[C_2(t)]}{dt}\\
={}&(1+\kappa)\lambda(t)\mathbb{E}^PU_i+(1+\iota)\rho k\,\mathbb{E}^PV_i,
\end{aligned}
\]

where $\rho$ is the intensity of $N_2(t)$, and $\kappa$ and $\iota$ denote the insurer's safety loadings for ordinary claims and catastrophe claims, respectively. (We use $\mathbb{R}_{>0}$ to denote the positive real numbers and $\mathbb{R}_{\geq 0}$ the non-negative real numbers.) In differential form, $\lambda(t)$ and $R_0(t)$ satisfy

\[
d\lambda(t)=-\delta\lambda(t)dt+\int_{\mathbb{R}_{>0}}zN_2(dt,dz),\quad\lambda(0)=\lambda_0,
\]

and

\[
dR_0(t)=(1+\kappa)\lambda(t)\mathbb{E}^PU_i\,dt+(1+\iota)\rho k\,\mathbb{E}^PV_i\,dt-\int_{\mathbb{R}_{>0}}zN_1(dt,dz)-\int_{\mathbb{R}_{>0}}kzN_2(dt,dz).
\]
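As a quick sanity check on the premium rule above, the snippet below evaluates $c(t)$ for illustrative (assumed) parameter values and confirms that it equals the expected instantaneous claims grossed up by the safety loadings.

```python
# Illustrative parameter values (assumptions, not the paper's numerical section)
kappa, iota = 0.2, 0.3   # safety loadings for ordinary / catastrophe claims
rho, k = 0.5, 2.0        # catastrophe intensity and scale factor
mu1, mu2 = 1.0, 1.5      # E[U_i] and E[V_i]
lam_t = 2.0              # current intensity lambda(t)

# c(t) = (1+kappa) * lambda(t) * E[U] + (1+iota) * rho * k * E[V]
c_t = (1 + kappa) * lam_t * mu1 + (1 + iota) * rho * k * mu2

# expected instantaneous claims without loadings:
expected_claims = lam_t * mu1 + rho * k * mu2  # 2.0 + 1.5 = 3.5
```

With these numbers $c(t)=1.2\cdot 2+1.3\cdot 1.5=4.35$, strictly above the expected claim outflow of $3.5$, as the positive loadings require.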

The insurer has the option to acquire proportional reinsurance to mitigate both ordinary and catastrophic risks. We denote the reinsurer's safety loadings for ordinary and catastrophe claims by $\kappa_r$ and $\iota_r$, respectively. In this scenario, the managed surplus process evolves as

\[
\begin{aligned}
dR(t)={}&[(1+\kappa_r)u(t)-(\kappa_r-\kappa)]\lambda(t)\mathbb{E}^PU_i\,dt-\int_{\mathbb{R}_{>0}}zu(t)N_1(dt,dz)\\
&+[(1+\iota_r)v(t)-(\iota_r-\iota)]\rho k\,\mathbb{E}^PV_i\,dt-k\int_{\mathbb{R}_{>0}}zv(t)N_2(dt,dz),
\end{aligned}
\]

where the two control variables $u(t)$ and $v(t)$ are the retention levels for ordinary claims and catastrophe claims, respectively.

We also allow the insurer to invest its wealth in a risky asset, the price of which at time $t$ is governed by a geometric Brownian motion,

\[
dS(t)=\mu_0S(t)dt+\sigma_0S(t)dW_0(t),
\]

where $\mu_0$ and $\sigma_0$ are constants, and $W_0(t)$ is a standard Brownian motion. The insurer may also allocate its wealth to a risk-free asset, the price of which at time $t$ follows

\[
dB(t)=rB(t)dt,
\]

where $r$ is a fixed interest rate. Let $\pi(t)$ denote the amount that the insurer invests in the risky asset; then the insurer's surplus process follows

\[
dX(t)=\pi(t)\frac{dS(t)}{S(t)}+(X(t)-\pi(t))\frac{dB(t)}{B(t)}+dR(t). \tag{1}
\]

We assume that the processes and random variables $N_1(t)$, $\{U_i\}_{i\geq 1}$, $N_2(t)$, $\{V_i\}_{i\geq 1}$ and $W_0(t)$ are mutually independent under the probability measure $P$. Let $\mathcal{F}_t$ be the completion of the $\sigma$-field generated by $N_1(dt,dz)$, $N_2(dt,dz)$, $W_0(t)$, $\{U_i\}$ and $\{V_i\}$ under $P$, and let $\mathbb{F}:=\{\mathcal{F}_t\,|\,t\in[0,T]\}$. By Proposition 3 of [26], the compensated random measures on the probability space $(\Omega,\mathbb{F},P)$ are given by

\[
\widetilde N_1(ds,dz)=N_1(ds,dz)-\lambda(s)F_1(dz)ds,
\]

and

\[
\widetilde N_2(ds,dz)=N_2(ds,dz)-\rho F_2(dz)ds,
\]

where $F_1$ and $F_2$ are the distributions of $U_i$ and $V_i$, respectively. For notational simplicity, we also denote

\[
\mu_1=\int_{\mathbb{R}_{>0}}zF_1(dz),\quad \sigma_1^2=\int_{\mathbb{R}_{>0}}z^2F_1(dz),\qquad
\mu_2=\int_{\mathbb{R}_{>0}}zF_2(dz),\quad \sigma_2^2=\int_{\mathbb{R}_{>0}}z^2F_2(dz). \tag{2}
\]

Thus, the differential equations for $\lambda(t)$ and $R(t)$ can be written as

\[
d\lambda(t)=(-\delta\lambda(t)+\rho\mu_2)dt+\int_{\mathbb{R}_{>0}}z\widetilde N_2(dt,dz),
\]

and

\[
\begin{aligned}
dR(t)={}&\big(\kappa_ru(t)-(\kappa_r-\kappa)\big)\mu_1\lambda(t)dt-\int_{\mathbb{R}_{>0}}zu(t)\widetilde N_1(dt,dz)\\
&+\big(\iota_rv(t)-(\iota_r-\iota)\big)k\mu_2\rho\,dt-k\int_{\mathbb{R}_{>0}}zv(t)\widetilde N_2(dt,dz).
\end{aligned} \tag{3}
\]

By substituting (3) into (1), the controlled surplus process of the insurer satisfies the stochastic differential equation

\[
\begin{aligned}
dX(t)={}&\big(rX(t)+\pi(t)(\mu_0-r)+(\kappa_ru(t)-\kappa_r+\kappa)\mu_1\lambda(t)+(\iota_rv(t)-\iota_r+\iota)k\mu_2\rho\big)dt\\
&+\pi(t)\sigma_0dW_0(t)-\int_{\mathbb{R}_{>0}}zu(t)\widetilde N_1(dt,dz)-k\int_{\mathbb{R}_{>0}}zv(t)\widetilde N_2(dt,dz),
\end{aligned}
\]

where $(\pi(t),u(t),v(t))$ are the control variables.
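A rough Euler-type simulation of the controlled surplus can be sketched as follows. Since the SDE is written against the compensated measures $\widetilde N_1,\widetilde N_2$, the compensator terms $u\mu_1\lambda\,dt$ and $kv\mu_2\rho\,dt$ are added back before the raw jumps are subtracted. The constant controls and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions for this sketch)
T, dt = 1.0, 1e-3
r, mu0, sig0 = 0.03, 0.08, 0.2
kappa, kappa_r, iota, iota_r = 0.1, 0.2, 0.2, 0.3
rho, delta, k = 0.5, 1.0, 2.0
mu1, mu2 = 1.0, 1.5              # means of exponential claim sizes U_i, V_i
pi_c, u_c, v_c = 0.5, 0.8, 0.6   # constant controls (pi, u, v)

x, lam = 1.0, 1.0                # initial surplus and intensity
for _ in range(int(T / dt)):
    # drift of the controlled surplus, as in the SDE above
    drift = (r * x + pi_c * (mu0 - r)
             + (kappa_r * u_c - kappa_r + kappa) * mu1 * lam
             + (iota_r * v_c - iota_r + iota) * k * mu2 * rho)
    jump1 = jump2 = v_shock = 0.0
    if rng.random() < lam * dt:   # ordinary claim arrives
        jump1 = u_c * rng.exponential(mu1)
    if rng.random() < rho * dt:   # catastrophe arrives
        v_shock = rng.exponential(mu2)
        jump2 = k * v_c * v_shock
    # add back compensators of the jump integrals, then subtract raw jumps
    x += (drift + u_c * mu1 * lam + k * v_c * mu2 * rho) * dt \
         + pi_c * sig0 * np.sqrt(dt) * rng.normal() - jump1 - jump2
    lam += -delta * lam * dt + v_shock   # shot-noise intensity update
```

The per-step Bernoulli arrivals are a first-order approximation of the point processes; a finer scheme would simulate exact inter-arrival times.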

2.2 Monotone Mean-Variance Criterion

The insurer aims to choose admissible strategies $(\pi(t),u(t),v(t))$ to maximize the terminal surplus under the MMV criterion introduced by [31], that is,

\[
V_\theta(X(T))=\min_{Q\in\Delta^2(P)}\left\{\mathbb{E}^Q[X(T)]+\frac{1}{2\theta}C(Q\,\|\,P)\right\}, \tag{4}
\]

where

\[
\Delta^2(P)=\left\{Q\ll P:Q(\Omega)=1,\ \mathbb{E}^P\left[\left(\frac{dQ}{dP}\right)^2\right]<\infty\right\},
\]

and $C(Q\,\|\,P)$ is a penalty function satisfying

\[
C(Q\,\|\,P)=\mathbb{E}^P\left[\left(\frac{dQ}{dP}\right)^2\right]-1,
\]

and $\theta$ is an index measuring the risk aversion of the insurer. If the alternative probability measure $Q$ is constrained to the subset of $\Delta^2(P)$

\[
\widetilde\Delta^2(P)=\left\{Q\sim P:Q(\Omega)=1,\ \mathbb{E}^P\left[\left(\frac{dQ}{dP}\right)^2\right]<\infty\right\},
\]

by applying the exponential martingale representation, we know that $\frac{dQ}{dP}\big|_{\mathcal{F}_t}$ is the solution of the SDE

\[
Y(t)=1+\int_0^tY(s)o(s)dW_0(s)+\int_0^t\int_{\mathbb{R}_{>0}}Y(s-)p(s,z)\widetilde N_1(ds,dz)+\int_0^t\int_{\mathbb{R}_{>0}}Y(s-)q(s,z)\widetilde N_2(ds,dz), \tag{5}
\]

where $(o(t),p(t,z),q(t,z))$ are suitable processes depending on the choice of $Q$ ($o$ is an $\mathbb{F}$-adapted process and, for any fixed $z\in\mathbb{R}_{>0}$, $p$ and $q$ are $\mathbb{F}$-predictable processes; all of them should satisfy integrability conditions such as Novikov's condition); see [32] and [20] for more details. In particular, we can regard $o(t)$, $p(t,z)$ and $q(t,z)$ as control processes in one-to-one correspondence with $Q\in\widetilde\Delta^2(P)$.
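The penalty $C(Q\,\|\,P)$ is straightforward to evaluate on a finite sample space. The toy numbers below are our own illustration; note that the density is allowed to vanish on a state, since membership in $\Delta^2(P)$ only requires $Q\ll P$, not $Q\sim P$.

```python
import numpy as np

p = np.array([0.25, 0.25, 0.25, 0.25])    # reference measure P on four states
density = np.array([2.0, 1.0, 1.0, 0.0])  # dQ/dP; the zero is admissible (Q << P only)

q = p * density                           # the measure Q itself
# Q(Omega) = E^P[dQ/dP] = 1, so Q is a probability measure
C = float(p @ density**2) - 1.0           # penalty C(Q||P) = E^P[(dQ/dP)^2] - 1
```

Here $C(Q\,\|\,P)=0.25\cdot(4+1+1+0)-1=0.5$; by the Cauchy-Schwarz inequality the penalty is always non-negative, vanishing exactly when $Q=P$.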

In this paper, we do not constrain the choice of $Q$ to $\widetilde\Delta^2(P)$; hence $\frac{dQ}{dP}\big|_{\mathcal{F}_t}$ is not necessarily positive almost surely for $t\in[0,T]$ and is not necessarily a standard exponential martingale. Fortunately, using the tools of discontinuous martingale analysis in [26], [30] and [23], it can be proved that the exponential martingale representation (5) still holds and $\frac{dQ}{dP}\big|_{\mathcal{F}_t}$ is a solution of (5) (see [29] for more details). In this case, $o(t)$, $p(t,z)$ and $q(t,z)$ can explode at the time $Y$ hits zero, but with

\[
\zeta_n:=\inf\left\{t\geq 0;\,Y(t)\leq\frac{1}{n}\right\},
\]

the processes $o(t)$, $p(t,z)$ and $q(t,z)$ should satisfy the following integrability condition for every integer $n\geq 1$:

\[
\begin{aligned}
E\bigg[\int_0^{T\wedge\zeta_n}o(t)^2dt&+\int_0^{T\wedge\zeta_n}\int_{\mathbb{R}_{>0}}(1-\sqrt{p(t,z)+1})^2\lambda(t)F_1(dz)dt\\
&+\int_0^{T\wedge\zeta_n}\int_{\mathbb{R}_{>0}}(1-\sqrt{q(t,z)+1})^2\rho F_2(dz)dt\bigg]<\infty.
\end{aligned} \tag{6}
\]

We give the following example to illustrate this more clearly.

Example 2.1.

Let the intensity of $N_2$ be $\rho=1$ and define

\[
Y(T)=\frac{N_2(\tau)}{\tau},\quad\text{for some }0<\tau<T,
\]

and an alternative measure

\[
Q(A):=\int_AY(T)\,dP,\quad\text{for all }A\in\mathcal{F}_T.
\]

Since $Y(T)$ is non-negative and square-integrable, and

\[
Q(\Omega)=\mathbb{E}Y(T)=\mathbb{E}\bigg[\frac{N_2(\tau)}{\tau}\bigg]=1,
\]

we have $Q\in\Delta^2(P)$. In this case,

\[
Y(t)=\mathbb{E}\big[Y(T)\big|\mathcal{F}_t\big]=\begin{cases}\dfrac{N_2(t)-t+\tau}{\tau},&0\leq t<\tau,\\[4pt]\dfrac{N_2(\tau)}{\tau},&\tau\leq t\leq T.\end{cases}
\]

Obviously, $Y(t)$ is not almost surely positive for $\tau\leq t\leq T$. Moreover, we have

\[
Y(t)=1+\int_0^{t\wedge\tau}\frac{1}{\tau}\,d\widetilde N_2(s)=1+\int_0^t\frac{1}{\tau}1_{\{s\leq\tau\}}\,d\widetilde N_2(s).
\]

Define

\[
\eta(t)=\frac{1}{\tau}Y(t)^{-1}1_{\{t<\tau\}}=\begin{cases}\dfrac{1}{N_2(t)-t+\tau},&0\leq t<\tau,\\[4pt]0,&\tau\leq t\leq T,\end{cases}
\]

then

\[
Y(t)=1+\int_0^t\frac{1}{\tau}1_{\{s\leq\tau\}}\,d\widetilde N_2(s)=1+\int_0^tY(s-)\eta(s-)\,d\widetilde N_2(s).
\]

We then have $q(t)=\eta(t-)$. Note that on the set $\{\omega:N_2(\tau)=0\}$, we have

\[
\begin{aligned}
\int_0^T(1-\sqrt{q(t)+1})^2dt&=\int_0^\tau\left(1-\sqrt{\frac{1}{N_2(t-)-t+\tau}+1}\right)^2dt\\
&=\int_0^\tau\left(1-\sqrt{\frac{1}{\tau-t}+1}\right)^2dt=\infty.
\end{aligned}
\]

However, for any $\zeta_n<\tau$, condition (6) is satisfied:

\[
E\bigg[\int_0^{T\wedge\zeta_n}(1-\sqrt{q(t)+1})^2dt\bigg]=E\bigg[\int_0^{\tau\wedge\zeta_n}\left(1-\sqrt{\frac{1}{N_2(t-)-t+\tau}+1}\right)^2dt\bigg]<\infty.
\]
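A short Monte Carlo check of this example: with $\rho=1$, $N_2(\tau)$ is Poisson with mean $\tau$, so $\mathbb{E}[Y(T)]=1$ while $Y(T)=0$ with probability $e^{-\tau}>0$, confirming that $Q\in\Delta^2(P)$ but $Q\not\sim P$. The value of $\tau$ below is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 2.0                      # some fixed 0 < tau < T (illustrative value)
n = 200_000                    # number of Monte Carlo samples

N2_tau = rng.poisson(tau, n)   # N2(tau) ~ Poisson(rho * tau) with rho = 1
Y_T = N2_tau / tau             # samples of Y(T) = N2(tau) / tau

mean_Y = Y_T.mean()            # close to 1: Q(Omega) = E[Y(T)] = 1
prob_zero = (Y_T == 0).mean()  # close to exp(-tau): dQ/dP vanishes with positive prob.
```

For $\tau=2$, the vanishing probability is $e^{-2}\approx 0.135$, so the alternative measure assigns zero mass to an event of positive $P$-probability.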

2.3 Hamilton-Jacobi-Bellman-Isaacs Equation

Based on the above discussion, we now summarize the dynamics of $X(t)$, $Y(t)$ and $\lambda(t)$ below:

\[
\begin{aligned}
dX(t)={}&\big(rX(t)+\pi(t)(\mu_0-r)+(\kappa_ru(t)-\kappa_r+\kappa)\mu_1\lambda(t)+(\iota_rv(t)-\iota_r+\iota)k\mu_2\rho\big)dt\\
&+\pi(t)\sigma_0dW_0(t)-\int_{\mathbb{R}_{>0}}zu(t)\widetilde N_1(dt,dz)-k\int_{\mathbb{R}_{>0}}zv(t)\widetilde N_2(dt,dz),&(7\mathrm{a})\\
dY(t)={}&Y(t)o(t)dW_0(t)+\int_{\mathbb{R}_{>0}}Y(t-)p(t,z)\widetilde N_1(dt,dz)+\int_{\mathbb{R}_{>0}}Y(t-)q(t,z)\widetilde N_2(dt,dz),&(7\mathrm{b})\\
d\lambda(t)={}&(-\delta\lambda(t)+\rho\mu_2)dt+\int_{\mathbb{R}_{>0}}z\widetilde N_2(dt,dz).&(7\mathrm{c})
\end{aligned}
\]

The original maximization problem is then transformed into an auxiliary two-player zero-sum game as follows.

Problem.

Let

\[
J^{a,b}(s,x,y,\lambda)=\mathbb{E}^P_{s,x,y,\lambda}\bigg[X^a(T)Y^b(T)+\frac{1}{2\theta}(Y^b(T))^2\bigg], \tag{8}
\]

where $\mathbb{E}^P_{s,x,y,\lambda}[\cdot]$ denotes $\mathbb{E}^P[\cdot\,|\,X^a(s)=x,Y^b(s)=y,\lambda(s)=\lambda]$. Player one wants to maximize $J^{a,b}(s,x,y,\lambda)$ with strategy $a=(\pi,u,v)$ over $\mathcal{A}[s,T]$ defined below, and player two wants to maximize $-J^{a,b}(s,x,y,\lambda)$ with strategy $b=(o,p,q)$ over $\mathcal{B}[s,T]$ defined below.

Notice that the starting time of Problem (4) is fixed at zero, while Problem (8) extends Problem (4) by letting the initial time be $s$ (see [28, 29] for more details). If optimal strategies $a^*$ and $b^*$ for Problem (8) exist, then

\[
V_\theta(X(T))=J^{a^*,b^*}(0,x,1,\lambda).
\]
Definition 2.1.

The strategy $\{a(t)\}_{0\leq t\leq T}=\{(\pi(t),u(t),v(t))\}_{0\leq t\leq T}$ employed by player one is admissible if the following conditions are met: $\pi:[s,T]\to\mathbb{R}$ is an $\mathbb{F}$-adapted process, and $u:[s,T]\to\mathbb{R}$ and $v:[s,T]\to\mathbb{R}$ are $\mathbb{F}$-predictable processes such that (7a) is well-defined and satisfies $\mathbb{E}^PX^a(t)<\infty$. We denote the set of all admissible strategies $a(t)$ by $\mathcal{A}[s,T]$.

The strategy $\{b(t,z)\}_{0\leq t\leq T}=\{(o(t),p(t,z),q(t,z))\}_{0\leq t\leq T}$ used by player two is admissible if the following conditions are met: $o:[s,T]\to\mathbb{R}$ is an $\mathbb{F}$-adapted process and, for any fixed $z\in\mathbb{R}_{>0}$, $p(\cdot,z):[s,T]\to\mathbb{R}$ and $q(\cdot,z):[s,T]\to\mathbb{R}$ are $\mathbb{F}$-predictable processes. These processes should satisfy the condition

\[
\Delta L(t)\equiv\int_{\mathbb{R}_{>0}}p(t,z)N_1(\{t\},dz)+\int_{\mathbb{R}_{>0}}q(t,z)N_2(\{t\},dz)\geq-1,\quad t\in[s,T].
\]

Additionally, the stochastic differential equation (7b) should have a unique solution, which is a nonnegative $\mathbb{F}$-adapted square-integrable $P$-martingale satisfying $\mathbb{E}^PY(t)=1$ for $t\in[s,T]$. We use $\mathcal{B}[s,T]$ to represent the set of all admissible strategies $b(t,z)$.

We present the following verification theorem, whose proof closely follows that of Theorem 2.2.2 in [38] and Theorem 3.2 in [32].

Theorem 2.1.

(Verification Theorem) Suppose that $W:[s,T]\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0}\to\mathbb{R}$, $(t,x,y,\lambda)\mapsto W(t,x,y,\lambda)$, is a $C^{1,2,2,2}$ function satisfying the conditions

\[
\begin{cases}
\mathcal{L}_t^{a^*,b^*}W(t,x,y,\lambda)=0,&\forall(t,x,y,\lambda)\in[s,T)\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},\\
\mathcal{L}_t^{a^*,b}W(t,x,y,\lambda)\geq 0,&\forall b\in\mathcal{B}[s,T],\ \forall(t,x,y,\lambda)\in[s,T)\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},\\
\mathcal{L}_t^{a,b^*}W(t,x,y,\lambda)\leq 0,&\forall a\in\mathcal{A}[s,T],\ \forall(t,x,y,\lambda)\in[s,T)\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},\\
W(T,x,y,\lambda)=xy+\frac{1}{2\theta}y^2,&\forall(x,y,\lambda)\in\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},
\end{cases} \tag{9}
\]

where $\mathcal{L}_t^{a,b}$ is the infinitesimal generator of $(X(t),Y(t),\lambda(t))$, given by

\[
\begin{aligned}
\mathcal{L}_t^{a,b}W(t,x,y,\lambda)={}&W_t+\big\{rx+\pi(\mu_0-r)+(\kappa_ru-\kappa_r+\kappa)\mu_1\lambda+(\iota_rv-\iota_r+\iota)k\mu_2\rho\big\}W_x\\
&+(-\delta\lambda+\rho\mu_2)W_\lambda+\frac{1}{2}\pi^2\sigma_0^2W_{xx}+\frac{1}{2}y^2o^2W_{yy}+\pi\sigma_0yoW_{xy}\\
&+\lambda\int_{\mathbb{R}_{>0}}\big\{W(t,x-uz,y+yp(z),\lambda)-W(t,x,y,\lambda)+uzW_x-yp(z)W_y\big\}F_1(dz)\\
&+\rho\int_{\mathbb{R}_{>0}}\big\{W(t,x-kvz,y+yq(z),\lambda+z)-W(t,x,y,\lambda)+kvzW_x-yq(z)W_y-zW_\lambda\big\}F_2(dz).
\end{aligned} \tag{10}
\]

Then,

\[
W(s,x,y,\lambda)=\sup_{a\in\mathcal{A}[s,T]}\inf_{b\in\mathcal{B}[s,T]}J^{a,b}(s,x,y,\lambda).
\]

The infinitesimal generator is derived using the theory of piecewise-deterministic processes in [15]. For more details on the infinitesimal generator of a Cox process driven by shot-noise intensity, see [10], [11], [12] and [25].

3 Value Function and Optimal Strategies

In this section, we shall first build a candidate solution to the HJBI equation (9) based on Proposition 3.1. Then We shall conduct an analysis of the optimal strategies. Subsequently, we shall demonstrate that the constructed candidate solution indeed represents the value function of Problem (4), and we will also provide its explicit form. At the end of this section, we shall present the efficient frontier for our problem. For the sake of clarity, we defer the proofs to Appendix A.

We state the main result on the optimal strategies and the value function as follows:

Theorem 3.1.

Assume the initial values X(0)=x0X(0)=x_{0}, Y(0)=1Y(0)=1 and λ(0)=λ0\lambda(0)=\lambda_{0}. The value function for Problem (4) is given by

W(t,x)=θ(er(Tt)x+α(t)λ(t)+β(t))24eη(t)λ(t)+ζ(t)+θ(erTx0+2θeη(0)λ0+ζ(0)+α(0)λ0+β(0))24eη(t)λ(t)+ζ(t),\displaystyle W(t,x)=-\frac{\theta(e^{r(T-t)}x+\alpha(t)\lambda(t)+\beta(t))^{2}}{4e^{\eta(t)\lambda(t)+\zeta(t)}}+\frac{\theta(e^{rT}x_{0}+\frac{2}{\theta}e^{\eta(0)\lambda_{0}+\zeta(0)}+\alpha(0)\lambda_{0}+\beta(0))^{2}}{4e^{\eta(t)\lambda(t)+\zeta(t)}},

and the optimal feedback strategies are given as follows

{π(t,x)=μ0rσ02(x+x0ert+1θeη(0)λ0+ζ(0)r(Tt)er(Tt)[α(t)λ(t)+β(t)α(0)λ0β(0)]),u(t,x)=κrμ1σ12(x+x0ert+1θeη(0)λ0+ζ(0)r(Tt)er(Tt)[α(t)λ(t)+β(t)α(0)λ0β(0)]),v(t,x)=1kϕ(t)(x+x0ert+1θeη(0)λ0+ζ(0)r(Tt)er(Tt)[α(t)λ(t)+β(t)α(0)λ0β(0)])+α(t)ker(Tt),\displaystyle\begin{cases}\pi^{*}(t,x)=&\frac{\mu_{0}-r}{\sigma_{0}^{2}}\bigg{(}-x+x_{0}e^{rt}+\frac{1}{\theta}e^{\eta(0)\lambda_{0}+\zeta(0)-r(T-t)}-e^{-r(T-t)}[\alpha(t)\lambda(t)+\beta(t)-\alpha(0)\lambda_{0}-\beta(0)]\bigg{)},\\ u^{*}(t,x)=&\frac{\kappa_{r}\mu_{1}}{\sigma_{1}^{2}}\bigg{(}-x+x_{0}e^{rt}+\frac{1}{\theta}e^{\eta(0)\lambda_{0}+\zeta(0)-r(T-t)}-e^{-r(T-t)}[\alpha(t)\lambda(t)+\beta(t)-\alpha(0)\lambda_{0}-\beta(0)]\bigg{)},\\ v^{*}(t,x)=&\frac{1}{k}\phi(t)\bigg{(}-x+x_{0}e^{rt}+\frac{1}{\theta}e^{\eta(0)\lambda_{0}+\zeta(0)-r(T-t)}-e^{-r(T-t)}[\alpha(t)\lambda(t)+\beta(t)-\alpha(0)\lambda_{0}-\beta(0)]\bigg{)}\\ &+\frac{\alpha(t)}{k}e^{-r(T-t)},\end{cases}

where

{η(t)=κr2μ12δσ12(1eδ(Tt)),ζ(t)=ρtT{ϕ(s)2>0z2eη(s)zF2(dz)>0eη(s)zF2(dz)}𝑑s+(ρ+(μ0r)2σ02)(Tt),α(t)=(κrκ)μ1δ+r(eδ(Tt)er(Tt)),β(t)=ρ(κrκ)(ιr+1)μ1μ2δ+r(1δ(1eδ(Tt))+1r(1er(Tt)))ρ(ιrι)kμ2r(1er(Tt)),\displaystyle\begin{cases}\eta(t)=&\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\delta\sigma_{1}^{2}}(1-e^{-\delta(T-t)}),\\ \zeta(t)=&\rho\int_{t}^{T}\biggl{\{}\phi(s)^{2}\int_{\mathbb{R}_{>0}}z^{2}e^{-\eta(s)z}F_{2}(dz)-\int_{\mathbb{R}_{>0}}e^{-\eta(s)z}F_{2}(dz)\biggr{\}}ds+(\rho+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}})(T-t),\\ \alpha(t)=&\frac{(\kappa_{r}-\kappa)\mu_{1}}{\delta+r}(e^{-\delta(T-t)}-e^{r(T-t)}),\\ \beta(t)=&\frac{\rho(\kappa_{r}-\kappa)(\iota_{r}+1)\mu_{1}\mu_{2}}{\delta+r}\bigg{(}\frac{1}{\delta}(1-e^{-\delta(T-t)})+\frac{1}{r}(1-e^{r(T-t)})\bigg{)}-\frac{\rho(\iota_{r}-\iota)k\mu_{2}}{r}(1-e^{r(T-t)}),\end{cases}

and ϕ(t)=(ιr+1)μ2>0zeη(t)zF2(dz)>0z2eη(t)zF2(dz)\phi(t)=\frac{(\iota_{r}+1)\mu_{2}-\int_{\mathbb{R}_{>0}}ze^{-\eta(t)z}F_{2}(dz)}{\int_{\mathbb{R}_{>0}}z^{2}e^{-\eta(t)z}F_{2}(dz)}.

To achieve the aforementioned result, we employ a three-step approach. First, we tackle Problem (8); the outcomes are presented in Proposition 3.1. Problem (8) involves three processes, namely X(t)X(t), Y(t)Y(t), and λ(t)\lambda(t). It is worth noting that the insurer can only observe the surplus process X(t)X(t) and the intensity process λ(t)\lambda(t); since Y(t)Y(t) is an auxiliary process introduced to facilitate problem-solving, it is unobservable. However, Proposition 3.1 expresses the value function and the optimal strategies as functions of the auxiliary process Y(t)Y(t). To remove this dependency, in the second step we establish a relationship between the processes X(t)X(t) and Y(t)Y(t) (see Lemma 3.1). Finally, in the third step, we represent the value function and the optimal strategies solely as functions of X(t)X(t) and λ(t)\lambda(t) (see Proposition 3.2).

Proposition 3.1.

For any (t,λ)[0,T]×>0(t,\lambda)\in[0,T]\times\mathbb{R}_{>0}, if there exist sufficiently smooth functions G(t)G(t), H(t,λ)(>0)H(t,\lambda)(>0), I(t,λ)I(t,\lambda) and K(t,λ)K(t,\lambda) satisfying, respectively, the following differential equations (here, for any appropriate function ψ(t,λ)\psi(t,\lambda), we define ψz:=ψ(t,λ+z)ψ(t,λ)\psi_{z}:=\psi(t,\lambda+z)-\psi(t,\lambda) as its finite difference in the spatial variable)

{0=Gt+rG(t),0=HtδλHλ+ρ>0HzF2(dz)+H(t,λ)(μ0r)2σ02+H(t,λ)λκr2μ12σ12+ρ(ιrμ2+2𝐇~(Hz))22𝐇~(z)2ρ𝐇~(Hz2z),0=ItδλIλ+ρ>0IzF2(dz)+((κr+κ)μ1λ+(ιr+ι)kμ2ρ)G(t)+ριrμ2𝐇~(Iz)𝐇~(z)ρ{2𝐇~(HzIzz)2𝐇~(Hz)𝐇~(Iz)𝐇~(z)},0=KtδλKλ+ρ>0KzF2(dz)ρ2{𝐇~(Iz2z)𝐇~(Iz)2𝐇~(z)},\displaystyle\begin{cases}0=&G_{t}+rG(t),\\ 0=&H_{t}-\delta\lambda H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)+\frac{H(t,\lambda)(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{H(t,\lambda)\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\\ &+\frac{\rho\bigg{(}\iota_{r}\mu_{2}+2{\bf\widetilde{H}}(H_{z})\bigg{)}^{2}}{2{\bf\widetilde{H}}(z)}-2\rho{\bf\widetilde{H}}(\frac{H_{z}^{2}}{z}),\\ 0=&I_{t}-\delta\lambda I_{\lambda}+\rho\int_{\mathbb{R}_{>0}}I_{z}F_{2}(dz)+\bigg{(}(-\kappa_{r}+\kappa)\mu_{1}\lambda+(-\iota_{r}+\iota)k\mu_{2}\rho\bigg{)}G(t)\\ &+\frac{\rho\iota_{r}\mu_{2}{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}-\rho\biggl{\{}2{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z})-2\frac{{\bf\widetilde{H}}(H_{z}){\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}},\\ 0=&K_{t}-\delta\lambda K_{\lambda}+\rho\int_{\mathbb{R}_{>0}}K_{z}F_{2}(dz)-\frac{\rho}{2}\biggl{\{}{\bf\widetilde{H}}(\frac{I_{z}^{2}}{z})-\frac{{\bf\widetilde{H}}(I_{z})^{2}}{{\bf\widetilde{H}}(z)}\biggr{\}},\end{cases} (11)

with boundary conditions G(T)=1G(T)=1, H(T,λ)=12θH(T,\lambda)=\frac{1}{2\theta}, I(T,λ)=K(T,λ)=0I(T,\lambda)=K(T,\lambda)=0, where

𝐇~(f):=>0f(z)z2H(t,λ+z)F2(dz),\displaystyle{\bf\widetilde{H}}(f):=\int_{\mathbb{R}_{>0}}\frac{f(z)z}{2H(t,\lambda+z)}F_{2}(dz),

for all integrable functions f(z)f(z), then the solution to the HJBI equation (9) is

W(t,x,y,λ)=G(t)xy+H(t,λ)y2+I(t,λ)y+K(t,λ).W(t,x,y,\lambda)=G(t)xy+H(t,\lambda)y^{2}+I(t,\lambda)y+K(t,\lambda). (12)

Moreover, the corresponding optimal feedback strategies are given by

π(t,x,y,λ)=2H(t,λ)(μ0r)yG(t)σ02,\displaystyle\pi^{*}(t,x,y,\lambda)=\frac{2H(t,\lambda)(\mu_{0}-r)y}{G(t)\sigma_{0}^{2}},
u(t,x,y,λ)=2H(t,λ)κrμ1yG(t)σ12,\displaystyle u^{*}(t,x,y,\lambda)=\frac{2H(t,\lambda)\kappa_{r}\mu_{1}y}{G(t)\sigma_{1}^{2}},
v(t,x,y,λ)=1kG(t){ιrμ2y𝐇~(z)+2y𝐇~(Hz)𝐇~(z)+𝐇~(Iz)𝐇~(z)},\displaystyle v^{*}(t,x,y,\lambda)=\frac{1}{kG(t)}\biggl{\{}\frac{\iota_{r}\mu_{2}y}{{\bf\widetilde{H}}(z)}+\frac{2y{\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}},
o(t,z,x,y,λ)=μ0rσ0,\displaystyle o^{*}(t,z,x,y,\lambda)=-\frac{\mu_{0}-r}{\sigma_{0}}, (13)
p(t,z,x,y,λ)=κrμ1zσ12,\displaystyle p^{*}(t,z,x,y,\lambda)=\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}},
q(t,z,x,y,λ)=12H(t,λ+z)y{(2(𝟏z𝐇~𝐇~(z))(Hz)ιrμ2z𝐇~(z))y+(𝟏z𝐇~𝐇~(z))(Iz)}.\displaystyle q^{*}(t,z,x,y,\lambda)=-\frac{1}{2H(t,\lambda+z)y}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}y+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}.

We observe that the optimal feedback strategies (13) depend only on the auxiliary process Y(t)Y(t) rather than on the surplus process X(t)X(t). However, Y(t)Y(t) is not observable in practice. To address this problem, we introduce a lemma that shows the relationship between (X(t),Y(t))(X(t),Y(t)) and (X(s),Y(s))(X(s),Y(s)) for any 0stT0\leq s\leq t\leq T. Then we can rewrite the optimal feedbacks (π,u,v)(\pi,u,v) as functions that depend only on the current value of the surplus process X(t)X(t) and the initial values (X(s),Y(s))(X(s),Y(s)) at time ss.

Lemma 3.1.

Let X(t)X^{*}(t) and Y(t)Y^{*}(t) be the state processes under the feedback strategies defined by (13), then for any 0stT0\leq s\leq t\leq T, we have

X(t)G(t)X(s)G(s)=2Y(t)H(t,λ(t))+2Y(s)H(s,λ(s))I(t,λ(t))+I(s,λ(s)).\displaystyle X^{*}(t)G(t)-X^{*}(s)G(s)=-2Y^{*}(t)H(t,\lambda(t))+2Y^{*}(s)H(s,\lambda(s))-I(t,\lambda(t))+I(s,\lambda(s)). (14)

Note that the process λ(t)\lambda(t) does not depend on the controls, so its value at any time t[0,T]t\in[0,T] can be calculated directly. For simplicity, in the following context, we denote W(t,x,y):=W(t,x,y,λ(t))W(t,x,y):=W(t,x,y,\lambda(t)) and W(t,x):=W(t,x,λ(t))W(t,x):=W(t,x,\lambda(t)). By substituting (14) into (12) and (13), we can rewrite the value function and the optimal strategies as follows:

Proposition 3.2.

For any 0stT0\leq s\leq t\leq T, let X(t)X^{*}(t) and Y(t)Y^{*}(t) be the state processes under the feedback strategies defined by (13) and let X(s)X^{*}(s) and Y(s)Y^{*}(s) be the initial values at time ss, then the value function is given by

W(t,X(t))=(G(t)X(t)+I(t,λ(t)))24H(t,λ(t))+(G(s)X(s)+2H(s,λ(s))Y(s)+I(s,λ(s)))24H(t,λ(t))+K(t,λ(t)),\displaystyle W(t,X^{*}(t))=-\frac{(G(t)X^{*}(t)+I(t,\lambda(t)))^{2}}{4H(t,\lambda(t))}+\frac{(G(s)X^{*}(s)+2H(s,\lambda(s))Y^{*}(s)+I(s,\lambda(s)))^{2}}{4H(t,\lambda(t))}+K(t,\lambda(t)), (15)

and the optimal strategies π\pi^{*}, uu^{*} and vv^{*} at time tt are given by:

{π(t)=μ0rσ02(X(t)+X(s)G(s)G(t)+2Y(s)H(s,λ(s))G(t)I(t,λ(t))G(t)+I(s,λ(s))G(t)),u(t)=κrμ1σ12(X(t)+X(s)G(s)G(t)+2Y(s)H(s,λ(s))G(t)I(t,λ(t))G(t)+I(s,λ(s))G(t)),v(t)=12kH(t,λ(t))(ιrμ2𝐇~(z)+2𝐇~(Hz)𝐇~(z))(X(t)+X(s)G(s)G(t)+2Y(s)H(s,λ(s))G(t)I(t,λ(t))G(t)+I(s,λ(s))G(t))+1kG(t)𝐇~(Iz)𝐇~(z).\displaystyle\begin{cases}\pi^{*}(t)=&\frac{\mu_{0}-r}{\sigma_{0}^{2}}\bigg{(}-X^{*}(t)+X^{*}(s)\frac{G(s)}{G(t)}+2Y^{*}(s)\frac{H(s,\lambda(s))}{G(t)}-\frac{I(t,\lambda(t))}{G(t)}+\frac{I(s,\lambda(s))}{G(t)}\bigg{)},\\ u^{*}(t)=&\frac{\kappa_{r}\mu_{1}}{\sigma_{1}^{2}}\bigg{(}-X^{*}(t)+X^{*}(s)\frac{G(s)}{G(t)}+2Y^{*}(s)\frac{H(s,\lambda(s))}{G(t)}-\frac{I(t,\lambda(t))}{G(t)}+\frac{I(s,\lambda(s))}{G(t)}\bigg{)},\\ v^{*}(t)=&\frac{1}{2kH(t,\lambda(t))}\bigg{(}\frac{\iota_{r}\mu_{2}}{{\bf\widetilde{H}}(z)}+\frac{2{\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}\bigg{)}\bigg{(}-X^{*}(t)+X^{*}(s)\frac{G(s)}{G(t)}+2Y^{*}(s)\frac{H(s,\lambda(s))}{G(t)}-\frac{I(t,\lambda(t))}{G(t)}+\frac{I(s,\lambda(s))}{G(t)}\bigg{)}\\ &+\frac{1}{kG(t)}\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}.\end{cases} (16)

It is important to note that (15) and (16) also rely on Y(s)Y^{*}(s). Keep in mind that ss is chosen as an initial time before time tt, so Y(s)Y^{*}(s) is, in fact, known. In conclusion, by setting the initial time of Problem (8) to be zero and the initial values of Problem (8) to be X(0)=x0X^{*}(0)=x_{0}, Y(0)=1Y^{*}(0)=1, and λ(0)=λ0\lambda(0)=\lambda_{0}, we ultimately obtain the solution to the original Problem (4) (see Theorem 3.1). Furthermore, the optimal feedbacks in (13) rely solely on the current value Y(t)=yY^{*}(t)=y at time tt, making them time-consistent. However, as illustrated in (16), the strategies depend not only on X(t)X^{*}(t) but also on the values X(s)X^{*}(s) and Y(s)Y^{*}(s) at some time point ss in the past. Consequently, the optimal strategies presented in the form of (16) are precommitted.

Now, we give the explicit expressions of G(t)G(t), H(t,λ)H(t,\lambda), I(t,λ)I(t,\lambda) and K(t,λ)K(t,\lambda) by the following lemma.

Lemma 3.2.

The solutions to (11) are given as follows

G(t)=er(Tt),H(t,λ)=12θeη(t)λ+ζ(t),I(t,λ)=α(t)λ+β(t),K(t,λ)=0,\displaystyle\begin{array}[]{l}G(t)=e^{r(T-t)},\quad H(t,\lambda)=\frac{1}{2\theta}e^{\eta(t)\lambda+\zeta(t)},\\ I(t,\lambda)=\alpha(t)\lambda+\beta(t),\quad K(t,\lambda)=0,\end{array} (19)

where

{η(t)=κr2μ12δσ12(1eδ(Tt)),ζ(t)=ρtT{ϕ(s)2>0z2eη(s)zF2(dz)>0eη(s)zF2(dz)}𝑑s+(ρ+(μ0r)2σ02)(Tt),α(t)=(κrκ)μ1δ+r(eδ(Tt)er(Tt)),β(t)=ρ(κrκ)(ιr+1)μ1μ2δ+r(1δ(1eδ(Tt))+1r(1er(Tt)))ρ(ιrι)kμ2r(1er(Tt)),\displaystyle\begin{cases}\eta(t)=&\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\delta\sigma_{1}^{2}}(1-e^{-\delta(T-t)}),\\ \zeta(t)=&\rho\int_{t}^{T}\biggl{\{}\phi(s)^{2}\int_{\mathbb{R}_{>0}}z^{2}e^{-\eta(s)z}F_{2}(dz)-\int_{\mathbb{R}_{>0}}e^{-\eta(s)z}F_{2}(dz)\biggr{\}}ds+(\rho+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}})(T-t),\\ \alpha(t)=&\frac{(\kappa_{r}-\kappa)\mu_{1}}{\delta+r}(e^{-\delta(T-t)}-e^{r(T-t)}),\\ \beta(t)=&\frac{\rho(\kappa_{r}-\kappa)(\iota_{r}+1)\mu_{1}\mu_{2}}{\delta+r}\bigg{(}\frac{1}{\delta}(1-e^{-\delta(T-t)})+\frac{1}{r}(1-e^{r(T-t)})\bigg{)}-\frac{\rho(\iota_{r}-\iota)k\mu_{2}}{r}(1-e^{r(T-t)}),\end{cases}

and ϕ(t)=(ιr+1)μ2>0zeη(t)zF2(dz)>0z2eη(t)zF2(dz)\phi(t)=\frac{(\iota_{r}+1)\mu_{2}-\int_{\mathbb{R}_{>0}}ze^{-\eta(t)z}F_{2}(dz)}{\int_{\mathbb{R}_{>0}}z^{2}e^{-\eta(t)z}F_{2}(dz)}.
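The coefficient functions in Lemma 3.2 become fully explicit once the catastrophe-size distribution F2F_2 is specified. The following numerical sketch is an illustration only: the parameter values are hypothetical, and F2F_2 is assumed to be exponential with mean μ2\mu_2, so that the integrals ∫e^{-az}F2(dz), ∫ze^{-az}F2(dz) and ∫z²e^{-az}F2(dz) have closed forms. The terminal conditions η(T)=ζ(T)=α(T)=β(T)=0 (equivalently G(T)=1, H(T,λ)=1/(2θ), I(T,λ)=0) serve as a sanity check.

```python
import numpy as np

# Hypothetical parameters (illustration only); F2 assumed Exp with mean mu2.
T, r, delta, rho, k = 1.0, 0.03, 2.0, 0.5, 1.0
mu0, sigma0 = 0.08, 0.2
mu1, sigma1, kappa, kappa_r = 1.0, 1.5, 0.2, 0.3
mu2, iota, iota_r = 0.8, 0.2, 0.3

# Laplace-transform integrals for F2 = Exp(mean mu2):
# int e^{-a z} F2(dz) = 1/(1+a*mu2),  int z e^{-a z} F2(dz) = mu2/(1+a*mu2)^2,
# int z^2 e^{-a z} F2(dz) = 2*mu2^2/(1+a*mu2)^3.
L0 = lambda a: 1.0 / (1.0 + a * mu2)
L1 = lambda a: mu2 / (1.0 + a * mu2) ** 2
L2 = lambda a: 2.0 * mu2**2 / (1.0 + a * mu2) ** 3

def eta(t):
    return kappa_r**2 * mu1**2 / (delta * sigma1**2) * (1.0 - np.exp(-delta * (T - t)))

def phi(t):
    return ((iota_r + 1.0) * mu2 - L1(eta(t))) / L2(eta(t))

def zeta(t, n=2001):
    # Trapezoidal quadrature of the time integral in zeta(t).
    s = np.linspace(t, T, n)
    g = phi(s) ** 2 * L2(eta(s)) - L0(eta(s))
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))
    return rho * integral + (rho + (mu0 - r) ** 2 / sigma0**2) * (T - t)

def alpha(t):
    return (kappa_r - kappa) * mu1 / (delta + r) * (np.exp(-delta * (T - t)) - np.exp(r * (T - t)))

def beta(t):
    a = (1.0 - np.exp(-delta * (T - t))) / delta + (1.0 - np.exp(r * (T - t))) / r
    return (rho * (kappa_r - kappa) * (iota_r + 1.0) * mu1 * mu2 / (delta + r) * a
            - rho * (iota_r - iota) * k * mu2 / r * (1.0 - np.exp(r * (T - t))))

# Terminal conditions implied by the boundary data of (11).
assert all(abs(f(T)) < 1e-9 for f in (eta, zeta, alpha, beta))
```

With these functions, H(t,λ)=e^{η(t)λ+ζ(t)}/(2θ) and I(t,λ)=α(t)λ+β(t) can be evaluated directly, which is all that is needed to compute the value function and the strategies of Theorem 3.1 numerically.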

Based on the above analysis and preparation, we are able to solve the HJBI equation (9). Hence, we can use the verification theorem to prove that the candidate function and the corresponding strategies are indeed the value function and the optimal strategies.

Theorem 3.2.

(Value function and optimal strategy) Assume the initial values X(s)=xsX(s)=x_{s} and Y(s)=ysY(s)=y_{s}. The value function for Problem (8) is W(t,x)C1,2([s,T]×)W(t,x)\in C^{1,2}([s,T]\times\mathbb{R}), given by

W(t,x;s,xs,ys)=θ(er(Tt)x+α(t)λ(t)+β(t))24eη(t)λ(t)+ζ(t)+θ(er(Ts)xs+2ysθeη(s)λ(s)+ζ(s)+α(s)λ(s)+β(s))24eη(t)λ(t)+ζ(t),\displaystyle W(t,x;s,x_{s},y_{s})=-\frac{\theta(e^{r(T-t)}x+\alpha(t)\lambda(t)+\beta(t))^{2}}{4e^{\eta(t)\lambda(t)+\zeta(t)}}+\frac{\theta(e^{r(T-s)}x_{s}+\frac{2y_{s}}{\theta}e^{\eta(s)\lambda(s)+\zeta(s)}+\alpha(s)\lambda(s)+\beta(s))^{2}}{4e^{\eta(t)\lambda(t)+\zeta(t)}}, (20)

and the equilibrium strategies are the precommitted feedback strategies

{π(t,x;s,xs,ys)=μ0rσ02(x+xser(ts)+ysθeη(s)λ(s)+ζ(s)er(Tt)(α(t)λ(t)+β(t))er(Tt)+(α(s)λ(s)+β(s))er(Tt)),u(t,x;s,xs,ys)=κrμ1σ12(x+xser(ts)+ysθeη(s)λ(s)+ζ(s)er(Tt)(α(t)λ(t)+β(t))er(Tt)+(α(s)λ(s)+β(s))er(Tt)),v(t,x;s,xs,ys)=1kϕ(t)(x+xser(ts)+ysθeη(s)λ(s)+ζ(s)er(Tt)(α(t)λ(t)+β(t))er(Tt)+(α(s)λ(s)+β(s))er(Tt))+1kα(t)er(Tt),\displaystyle\begin{cases}\pi^{*}(t,x;s,x_{s},y_{s})=&\frac{\mu_{0}-r}{\sigma_{0}^{2}}\bigg{(}-x+x_{s}e^{r(t-s)}+\frac{y_{s}}{\theta}e^{\eta(s)\lambda(s)+\zeta(s)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(s)\lambda(s)+\beta(s))e^{-r(T-t)}\bigg{)},\\ u^{*}(t,x;s,x_{s},y_{s})=&\frac{\kappa_{r}\mu_{1}}{\sigma_{1}^{2}}\bigg{(}-x+x_{s}e^{r(t-s)}+\frac{y_{s}}{\theta}e^{\eta(s)\lambda(s)+\zeta(s)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(s)\lambda(s)+\beta(s))e^{-r(T-t)}\bigg{)},\\ v^{*}(t,x;s,x_{s},y_{s})=&\frac{1}{k}\phi(t)\bigg{(}-x+x_{s}e^{r(t-s)}+\frac{y_{s}}{\theta}e^{\eta(s)\lambda(s)+\zeta(s)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(s)\lambda(s)+\beta(s))e^{-r(T-t)}\bigg{)}+\frac{1}{k}\alpha(t)e^{-r(T-t)},\end{cases} (21)

and

{o(t)=μ0rσ0,p(t,z)=κrμ1zσ12,q(t,z)=eη(t)z+ϕ(t)eη(t)zz1.\displaystyle\begin{cases}&o^{*}(t)=-\frac{\mu_{0}-r}{\sigma_{0}},\\ &p^{*}(t,z)=\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}},\\ &q^{*}(t,z)=e^{-\eta(t)z}+\phi(t)e^{-\eta(t)z}z-1.\end{cases} (22)

3.1 MMV Efficient Frontier

To simplify the notations, we introduce the following functions

ψ1(s,t):=\displaystyle\psi_{1}(s,t):= κr2μ12δσ12(eδ(ts)eδ(Ts)),\displaystyle\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\delta\sigma_{1}^{2}}(e^{-\delta(t-s)}-e^{-\delta(T-s)}),
ψ2(s,t):=\displaystyle\psi_{2}(s,t):= ζ(t)+ρst>0eη(u)z(eψ1(u,t)z1)(ϕ(u)z+1)F2(dz)𝑑u,\displaystyle\zeta(t)+\rho\int_{s}^{t}\int_{\mathbb{R}_{>0}}e^{-\eta(u)z}(e^{\psi_{1}(u,t)z}-1)(\phi(u)z+1)F_{2}(dz)du,
ψ3(s,t):=\displaystyle\psi_{3}(s,t):= ζ(t)+ρst>0eη(u)z(eψ1(u,t)z1)(ϕ(u)z+1)2F2(dz)𝑑u,\displaystyle\zeta(t)+\rho\int_{s}^{t}\int_{\mathbb{R}_{>0}}e^{-\eta(u)z}(e^{\psi_{1}(u,t)z}-1)(\phi(u)z+1)^{2}F_{2}(dz)du,

and

C1(s,t):=\displaystyle C_{1}(s,t):= es0es,t3(es,t2)2(es0es,t2)2,\displaystyle\frac{e^{0}_{s}e^{3}_{s,t}-(e^{2}_{s,t})^{2}}{(e^{0}_{s}-e^{2}_{s,t})^{2}},
C2(s,t):=\displaystyle C_{2}(s,t):= es0(es,t3es,t2)es0es,t3(es,t2)2α(t)ριrμ2δ(1eδ(ts))er(Tt),\displaystyle\frac{e^{0}_{s}(e^{3}_{s,t}-e^{2}_{s,t})}{e^{0}_{s}e^{3}_{s,t}-(e^{2}_{s,t})^{2}}\frac{\alpha(t)\rho\iota_{r}\mu_{2}}{\delta}(1-e^{-\delta(t-s)})e^{-r(T-t)},
C3(s,t):=\displaystyle C_{3}(s,t):= α(t)2ρσ222δ(1e2δ(ts))e2r(Tt)(es,t2)2es0es,t3(es,t2)2α(t)2ρ2ιr2μ22δ2(1eδ(ts))2e2r(Tt),\displaystyle\frac{\alpha(t)^{2}\rho\sigma_{2}^{2}}{2\delta}(1-e^{-2\delta(t-s)})e^{-2r(T-t)}-\frac{(e^{2}_{s,t})^{2}}{e^{0}_{s}e^{3}_{s,t}-(e^{2}_{s,t})^{2}}\frac{\alpha(t)^{2}\rho^{2}\iota_{r}^{2}\mu_{2}^{2}}{\delta^{2}}(1-e^{-\delta(t-s)})^{2}e^{-2r(T-t)},

where es0:=eη(s)λ+ζ(s)e^{0}_{s}:=e^{\eta(s)\lambda+\zeta(s)}, es,t2:=eψ1(s,t)λ+ψ2(s,t)e^{2}_{s,t}:=e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}, es,t3:=eψ1(s,t)λ+ψ3(s,t)e^{3}_{s,t}:=e^{\psi_{1}(s,t)\lambda+\psi_{3}(s,t)}. Now we state the efficient frontier for the MMV criterion problem, and show the results for some special cases.
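A useful consistency check on these definitions: at s=ts=t the time integrals in ψ2\psi_2 and ψ3\psi_3 vanish, so ψ2(t,t)=ψ3(t,t)=ζ(t)\psi_2(t,t)=\psi_3(t,t)=\zeta(t), and a short computation shows ψ1(t,t)=η(t)\psi_1(t,t)=\eta(t); hence e2t,t=e3t,t=e0te^{2}_{t,t}=e^{3}_{t,t}=e^{0}_{t}. A minimal numerical sketch of the ψ1\psi_1 identity (hypothetical parameter values):

```python
import numpy as np

# Hypothetical parameters (illustration only).
T, delta = 1.0, 2.0
mu1, sigma1, kappa_r = 1.0, 1.5, 0.3

def eta(t):
    # eta(t) from Lemma 3.2
    return kappa_r**2 * mu1**2 / (delta * sigma1**2) * (1.0 - np.exp(-delta * (T - t)))

def psi1(s, t):
    # psi_1(s, t) from the efficient-frontier notation
    return kappa_r**2 * mu1**2 / (delta * sigma1**2) * (np.exp(-delta * (t - s)) - np.exp(-delta * (T - s)))

# At s = t: psi1(t, t) = eta(t), so e^2_{t,t} = e^3_{t,t} = e^0_t.
for t in (0.0, 0.25, 0.7):
    assert np.isclose(psi1(t, t), eta(t))
```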

Theorem 3.3.

(Efficient Frontier) For initial values X(s)=xX^{*}(s)=x, Y(s)=yY^{*}(s)=y and λ(s)=λ\lambda(s)=\lambda, let 𝕍ars,x,y,λP[]\mathbb{V}ar_{s,x,y,\lambda}^{P}[\cdot] denote 𝕍arP[|Xa(s)=x,Yb(s)=y,λ(s)=λ]\mathbb{V}ar^{P}[\cdot|X^{a}(s)=x,Y^{b}(s)=y,\lambda(s)=\lambda]. Then the expected wealth process under QQ^{*} is

𝔼s,x,y,λQX(t)=\displaystyle\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}X^{*}(t)= xer(ts)(α(t)eδ(ts)α(s))er(Tt)λ\displaystyle xe^{r(t-s)}-(\alpha(t)e^{-\delta(t-s)}-\alpha(s))e^{-r(T-t)}\lambda
α(t)ρ(ιr+1)μ2δ(1eδ(ts))er(Tt)β(t)er(Tt)+β(s)er(Tt).\displaystyle-\alpha(t)\frac{\rho(\iota_{r}+1)\mu_{2}}{\delta}(1-e^{-\delta(t-s)})e^{-r(T-t)}-\beta(t)e^{-r(T-t)}+\beta(s)e^{-r(T-t)}.

Furthermore, the variance and expectation of the wealth process at time tt have the relationship:

𝕍ars,x,y,λPX(t)=C1(s,t)(𝔼s,x,y,λPX(t)𝔼s,x,y,λQX(t)C2(s,t))2+C3(s,t).\displaystyle\mathbb{V}ar_{s,x,y,\lambda}^{P}X^{*}(t)=C_{1}(s,t)\bigg{(}\mathbb{E}_{s,x,y,\lambda}^{P}X^{*}(t)-\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}X^{*}(t)-C_{2}(s,t)\bigg{)}^{2}+C_{3}(s,t). (23)
Corollary 3.1.

The efficient frontier for the terminal wealth process is given by

𝕍ars,x,y,λPX(T)=1eη(s)λ+ζ(s)1(𝔼s,x,y,λPX(T)𝔼s,x,y,λQX(T))2.\displaystyle\mathbb{V}ar_{s,x,y,\lambda}^{P}X^{*}(T)=\frac{1}{e^{\eta(s)\lambda+\zeta(s)}-1}\bigg{(}\mathbb{E}_{s,x,y,\lambda}^{P}X^{*}(T)-\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}X^{*}(T)\bigg{)}^{2}.
Proof.

Let t=Tt=T in (23). ∎

Corollary 3.2.

If there is no catastrophe in the model, i.e., λ(t)\lambda(t) is a constant λ\lambda and δ=ρ=0\delta=\rho=0, then

𝕍ars,x,y,λPX(t)=1e(λκr2μ12σ12+(μ0r)2σ02)(ts)1(𝔼s,x,y,λPX(t)𝔼s,x,y,λQX(t))2.\displaystyle\mathbb{V}ar_{s,x,y,\lambda}^{P}X^{*}(t)=\frac{1}{e^{\big{(}\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}\big{)}(t-s)}-1}\bigg{(}\mathbb{E}_{s,x,y,\lambda}^{P}X^{*}(t)-\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}X^{*}(t)\bigg{)}^{2}.
Proof.

Letting δ=ρ=0\delta=\rho=0 in (23), we have

η(s)=κr2μ12σ12(Ts),ψ1(s,t)=κr2μ12σ12(Tt),\displaystyle\eta(s)=\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}(T-s),\quad\psi_{1}(s,t)=\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}(T-t),
ζ(s)=(μ0r)2σ02(Ts),ψ2(s,t)=ψ3(s,t)=(μ0r)2σ02(Tt),\displaystyle\zeta(s)=\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}(T-s),\quad\psi_{2}(s,t)=\psi_{3}(s,t)=\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}(T-t),
es0=e(λκr2μ12σ12+(μ0r)2σ02)(Ts),es,t2=es,t3=e(λκr2μ12σ12+(μ0r)2σ02)(Tt).\displaystyle e^{0}_{s}=e^{\big{(}\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}\big{)}(T-s)},\quad e^{2}_{s,t}=e^{3}_{s,t}=e^{\big{(}\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}\big{)}(T-t)}.

Therefore,

C1(s,t)=1e(λκr2μ12σ12+(μ0r)2σ02)(ts)1,C2(s,t)=C3(s,t)=0.\displaystyle C_{1}(s,t)=\frac{1}{e^{\big{(}\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}\big{)}(t-s)}-1},\quad C_{2}(s,t)=C_{3}(s,t)=0.
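The reduction in Corollary 3.2 rests on the observation that δ=ρ=0\delta=\rho=0 forces es,t2=es,t3e^{2}_{s,t}=e^{3}_{s,t}, after which C1C_1 collapses to the stated expression and C2=C3=0C_2=C_3=0. A quick numerical check of that algebraic step, with an arbitrary positive exponent A standing in for λκr²μ1²/σ1² + (μ0−r)²/σ0²:

```python
import numpy as np

# A stands for lambda*kappa_r^2*mu1^2/sigma1^2 + (mu0-r)^2/sigma0^2 (any positive value).
A, s, t, T = 0.7, 0.2, 0.6, 1.0

e0 = np.exp(A * (T - s))        # e_s^0 when delta = rho = 0
e2 = e3 = np.exp(A * (T - t))   # e_{s,t}^2 = e_{s,t}^3 in this case

# C1 = (e0*e3 - e2^2)/(e0 - e2)^2 = e2/(e0 - e2) = 1/(e^{A(t-s)} - 1).
C1 = (e0 * e3 - e2**2) / (e0 - e2) ** 2
assert np.isclose(C1, 1.0 / (np.exp(A * (t - s)) - 1.0))
```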

4 Optimization Problem for the Diffusion Approximation Model

In this section, we consider the optimization problem under the MMV criterion for the diffusion approximation model. We follow a procedure parallel to that of Sections 2 and 3. In Subsection 4.1, we introduce the diffusion approximation model for the catastrophe insurance. In Subsection 4.2, we present the auxiliary problem and, by applying the dynamic programming principle, give the corresponding HJBI equation. In Subsection 4.3, the optimal strategies and the value function are obtained explicitly by solving the HJBI equation.

4.1 Diffusion Approximation of the Catastrophe Insurance

In [13], the diffusion approximation of a Cox process driven by a shot-noise intensity is obtained. By Theorem 2 of [13], the diffusion approximations of the aggregate claims process and the intensity process are given by

dC1(t)=μ1λ(t)dt+σ1ρμ2δdW1(t),\displaystyle dC_{1}(t)=\mu_{1}\lambda(t)dt+\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}dW_{1}(t),
dC2(t)=kρμ2dt+kσ2ρdW2(t),\displaystyle dC_{2}(t)=k\rho\mu_{2}dt+k\sigma_{2}\sqrt{\rho}dW_{2}(t),

and

dλ(t)=(δλ(t)+ρμ2)dt+σ2ρdW2(t).\displaystyle d\lambda(t)=(-\delta\lambda(t)+\rho\mu_{2})dt+\sigma_{2}\sqrt{\rho}dW_{2}(t).

where μ1\mu_{1}, σ12\sigma_{1}^{2}, μ2\mu_{2}, and σ22\sigma_{2}^{2} are defined by (2), and W1(t)W_{1}(t) and W2(t)W_{2}(t) are two independent standard Brownian motions under the probability measure PP.
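The approximated intensity is a mean-reverting diffusion of Ornstein–Uhlenbeck type, so it is straightforward to simulate with a plain Euler–Maruyama scheme. The sketch below (hypothetical parameter values) compares the Monte Carlo mean of λ(T) with the solution of the ODE for E[λ(t)] implied by the drift, E[λ(t)] = ρμ2/δ + (λ(0) − ρμ2/δ)e^{−δt}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for the approximated intensity
# d lambda(t) = (-delta*lambda(t) + rho*mu2) dt + sigma2*sqrt(rho) dW2(t).
delta, rho, mu2, sigma2 = 2.0, 0.5, 0.8, 1.0
lam0, T, n_steps, n_paths = 1.0, 1.0, 400, 20_000
dt = T / n_steps

# Euler-Maruyama over all paths at once.
lam = np.full(n_paths, lam0)
for _ in range(n_steps):
    dW2 = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    lam += (-delta * lam + rho * mu2) * dt + sigma2 * np.sqrt(rho) * dW2

# The drift gives E[lambda(t)] = rho*mu2/delta + (lam0 - rho*mu2/delta)*exp(-delta*t).
m_T = rho * mu2 / delta + (lam0 - rho * mu2 / delta) * np.exp(-delta * T)
err = abs(lam.mean() - m_T)   # small for a fine grid and many paths
```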

In this section, we consider the MMV optimization problem under this diffusion approximation. Similarly, the surplus process is given by

dR(t)=\displaystyle dR(t)= (κru(t)(κrκ))μ1λ(t)dtu(t)σ1ρμ2δdW1(t)\displaystyle\bigg{(}\kappa_{r}u(t)-(\kappa_{r}-\kappa)\bigg{)}\mu_{1}\lambda(t)dt-u(t)\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}dW_{1}(t)
+(ιrv(t)(ιrι))kμ2ρdtkv(t)σ2ρdW2(t).\displaystyle+\bigg{(}\iota_{r}v(t)-(\iota_{r}-\iota)\bigg{)}k\mu_{2}\rho dt-kv(t)\sigma_{2}\sqrt{\rho}dW_{2}(t).

Then the insurer’s surplus process after investment is given by

dX(t)=(rX(t)+π(t)(μ0r)+(κru(t)κr+κ)μ1λ(t)+(ιrv(t)ιr+ι)kμ2ρ)dt\displaystyle dX(t)=\bigg{(}rX(t)+\pi(t)(\mu_{0}-r)+(\kappa_{r}u(t)-\kappa_{r}+\kappa)\mu_{1}\lambda(t)+(\iota_{r}v(t)-\iota_{r}+\iota)k\mu_{2}\rho\bigg{)}dt
+π(t)σ0dW0(t)u(t)σ1ρμ2δdW1(t)kv(t)σ2ρdW2(t).\displaystyle+\pi(t)\sigma_{0}dW_{0}(t)-u(t)\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}dW_{1}(t)-kv(t)\sigma_{2}\sqrt{\rho}dW_{2}(t).

4.2 Hamilton-Jacobi-Bellman Equation

For the diffusion approximation model, the corresponding characteristic process Y(t)Y(t) is given by

Y(t)=y+stY(r)o(r)𝑑W0(r)+stY(r)p(r)𝑑W1(r)+stY(r)q(r)𝑑W2(r),t[0,T].Y(t)=y+\int_{s}^{t}Y(r)o(r)dW_{0}(r)+\int_{s}^{t}Y(r)p(r)dW_{1}(r)+\int_{s}^{t}\ Y(r)q(r)dW_{2}(r),\quad\forall t\in[0,T].

The insurer aims to find the optimal strategies a=(π,u,v)a^{*}=(\pi^{*},u^{*},v^{*}) and b=(o,p,q)b^{*}=(o^{*},p^{*},q^{*}) such that for any admissible (a,b)(a,b) the objective function Ja,b(s,x,y,λ)J^{a,b}(s,x,y,\lambda) is maximized by aa^{*} and is minimized by bb^{*}, where

Ja,b(s,x,y,λ)=𝔼s,x,y,λP[Xa(T)Yb(T)+12θ(Yb(T))2].J^{a,b}(s,x,y,\lambda)=\mathbb{E}_{s,x,y,\lambda}^{P}\bigg{[}X^{a}(T)Y^{b}(T)+\frac{1}{2\theta}(Y^{b}(T))^{2}\bigg{]}. (24)

Recall that the processes X(t)X(t), Y(t)Y(t) and λ(t)\lambda(t) satisfy the following SDE:

dX(t)=\displaystyle dX(t)= (rX(t)+π(t)(μ0r)+(κru(t)κr+κ)μ1λ(t)+(ιrv(t)ιr+ι)kμ2ρ)dt\displaystyle\bigg{(}rX(t)+\pi(t)(\mu_{0}-r)+(\kappa_{r}u(t)-\kappa_{r}+\kappa)\mu_{1}\lambda(t)+(\iota_{r}v(t)-\iota_{r}+\iota)k\mu_{2}\rho\bigg{)}dt
+π(t)σ0dW0(t)u(t)σ1ρμ2δdW1(t)kv(t)σ2ρdW2(t),\displaystyle+\pi(t)\sigma_{0}dW_{0}(t)-u(t)\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}dW_{1}(t)-kv(t)\sigma_{2}\sqrt{\rho}dW_{2}(t), (25a)
dY(t)=\displaystyle dY(t)= Y(t)o(t)dW0(t)+Y(t)p(t)dW1(t)+Y(t)q(t)dW2(t),\displaystyle Y(t)o(t)dW_{0}(t)+Y(t)p(t)dW_{1}(t)+Y(t)q(t)dW_{2}(t), (25b)
dλ(t)=\displaystyle d\lambda(t)= (δλ(t)+ρμ2)dt+σ2ρdW2(t).\displaystyle(-\delta\lambda(t)+\rho\mu_{2})dt+\sigma_{2}\sqrt{\rho}dW_{2}(t). (25c)
Definition 4.1.

(Admissible Strategies) The strategy {a(t)}0tT={(π(t),u(t),v(t))}0tT\{a(t)\}_{0\leq t\leq T}=\{(\pi(t),u(t),v(t))\}_{0\leq t\leq T} for player one is admissible in Problem (8) if π:[s,T]\pi:[s,T]\to\mathbb{R}, u:[s,T]u:[s,T]\to\mathbb{R}, and v:[s,T]v:[s,T]\to\mathbb{R} are 𝔽\mathbb{F}-adapted processes that ensure the well-definedness of (25a) and satisfy 𝔼PXu(t)<\mathbb{E}^{P}X^{u}(t)<\infty. We denote the set of all such admissible strategies as 𝒜[s,T]\mathcal{A}[s,T].

The strategy {b(t)}0tT={(o(t),p(t),q(t))}0tT\{b(t)\}_{0\leq t\leq T}=\{(o(t),p(t),q(t))\}_{0\leq t\leq T} for player two is admissible in Problem (8) if o:[s,T]o:[s,T]\to\mathbb{R}, p:[s,T]p:[s,T]\to\mathbb{R}, and q:[s,T]q:[s,T]\to\mathbb{R} are 𝔽\mathbb{F}-adapted processes such that the SDE (25b) admits a unique solution, which is a nonnegative 𝔽\mathbb{F}-adapted square integrable PP-martingale, and it satisfies 𝔼PY(t)=1\mathbb{E}^{P}Y(t)=1 for t[s,T]t\in[s,T]. We denote the set of all such admissible strategies as [s,T]\mathcal{B}[s,T].
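The requirement that YY be a positive square-integrable PP-martingale with 𝔼PY(t)=1 can be illustrated in the simplest case of constant controls b=(o,p,q): the SDE (25b) then has the explicit exponential-martingale solution Y(t)=exp(oW0(t)+pW1(t)+qW2(t) − ½(o²+p²+q²)t). A minimal Monte Carlo sketch with arbitrary (hypothetical) constant values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Constant hypothetical controls b = (o, p, q); then (25b) has the exact solution
# Y(t) = exp(o*W0(t) + p*W1(t) + q*W2(t) - 0.5*(o^2 + p^2 + q^2)*t),
# a positive square-integrable P-martingale with E^P[Y(t)] = 1.
o, p, q, t, n_paths = -0.25, 0.4, 0.1, 1.0, 200_000
W = rng.normal(0.0, np.sqrt(t), size=(3, n_paths))   # samples of (W0(t), W1(t), W2(t))
Y = np.exp(o * W[0] + p * W[1] + q * W[2] - 0.5 * (o**2 + p**2 + q**2) * t)

err = abs(Y.mean() - 1.0)   # Monte Carlo check of E^P[Y(t)] = 1
```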

We present the following verification theorem for the diffusion approximation case, the proof of which is similar to that of Theorem 2.1.

Theorem 4.1.

(Verification Theorem) Suppose that W:[s,T]××0×>0W:[s,T]\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0}\to\mathbb{R} is a C1,2,2,2C^{1,2,2,2} function satisfying the following conditions

{ta,bW(t,x,y,λ)=0,(t,x,y,λ)[s,T)××0×>0,ta,bW(t,x,y,λ)0,b[s,T],(t,x,y,λ)[s,T)××0×>0,ta,bW(t,x,y,λ)0,a𝒜[s,T],(t,x,y,λ)[s,T)××0×>0,W(T,x,y,λ)=xy+12θy2,\displaystyle\begin{cases}&\mathcal{L}_{t}^{a^{*},b^{*}}W(t,x,y,\lambda)=0,\qquad\forall(t,x,y,\lambda)\in[s,T)\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},\\ &\mathcal{L}_{t}^{a^{*},b}W(t,x,y,\lambda)\geq 0,\qquad\forall b\in\mathcal{B}[s,T],\qquad\forall(t,x,y,\lambda)\in[s,T)\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},\\ &\mathcal{L}_{t}^{a,b^{*}}W(t,x,y,\lambda)\leq 0,\qquad\forall a\in\mathcal{A}[s,T],\qquad\forall(t,x,y,\lambda)\in[s,T)\times\mathbb{R}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{>0},\\ &W(T,x,y,\lambda)=xy+\frac{1}{2\theta}y^{2},\end{cases} (26)

where ta,b\mathcal{L}_{t}^{a,b} is the infinitesimal generator of (X(t),Y(t),λ(t))(X(t),Y(t),\lambda(t)) given by

ta,bW(t,x,y,λ)=\displaystyle\mathcal{L}_{t}^{a,b}W(t,x,y,\lambda)= Wt+{rx+π(μ0r)+(κruκr+κ)μ1λ+(ιrvιr+ι)kμ2ρ}Wx\displaystyle W_{t}+\biggl{\{}rx+\pi(\mu_{0}-r)+(\kappa_{r}u-\kappa_{r}+\kappa)\mu_{1}\lambda+(\iota_{r}v-\iota_{r}+\iota)k\mu_{2}\rho\biggr{\}}W_{x}
+(δλ+ρμ2)Wλ+(12π2σ02+12u2σ12ρμ2δ+12k2v2ρσ22)Wxx\displaystyle+(-\delta\lambda+\rho\mu_{2})W_{\lambda}+(\frac{1}{2}\pi^{2}\sigma_{0}^{2}+\frac{1}{2}u^{2}\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}+\frac{1}{2}k^{2}v^{2}\rho\sigma_{2}^{2})W_{xx}
+12ρσ22Wλλ+(12y2o2+12y2p2+12y2q2)Wyy\displaystyle+\frac{1}{2}\rho\sigma_{2}^{2}W_{\lambda\lambda}+(\frac{1}{2}y^{2}o^{2}+\frac{1}{2}y^{2}p^{2}+\frac{1}{2}y^{2}q^{2})W_{yy} (27)
+(πσ0youσ1ρμ2δypkvσ2ρyq)Wxykvρσ22Wxλ+σ2ρyqWyλ.\displaystyle+(\pi\sigma_{0}yo-u\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}yp-kv\sigma_{2}\sqrt{\rho}yq)W_{xy}-kv\rho\sigma_{2}^{2}W_{x\lambda}+\sigma_{2}\sqrt{\rho}yqW_{y\lambda}.

Then,

W(s,x,y,λ)=supa𝒜[s,T]infb[s,T]Ja,b(s,x,y,λ).\displaystyle W(s,x,y,\lambda)=\sup_{a\in\mathcal{A}[s,T]}\inf_{b\in\mathcal{B}[s,T]}J^{a,b}(s,x,y,\lambda).

4.3 Solutions to the Diffusion Approximation Model

In this subsection, we take a procedure similar to that of Section 3. The explicit forms of the value function and the optimal strategies for the diffusion approximation model are obtained. All proofs in this section are deferred to Appendix B.

Theorem 4.2.

Assume the initial values X(0)=x0X(0)=x_{0}, Y(0)=1Y(0)=1 and λ(0)=λ0\lambda(0)=\lambda_{0}. The value function for Problem (4) is given by

W(t,x)=θ(er(Tt)x+α(t)λ(t)+β(t))24eξ(t)λ(t)2+η(t)λ(t)+ζ(t)+θ(erTx0+2θeξ(0)λ02+η(0)λ0+ζ(0)+α(0)λ0+β(0))24eξ(t)λ(t)2+η(t)λ(t)+ζ(t),\displaystyle W(t,x)=-\frac{\theta(e^{r(T-t)}x+\alpha(t)\lambda(t)+\beta(t))^{2}}{4e^{\xi(t)\lambda(t)^{2}+\eta(t)\lambda(t)+\zeta(t)}}+\frac{\theta(e^{rT}x_{0}+\frac{2}{\theta}e^{\xi(0)\lambda_{0}^{2}+\eta(0)\lambda_{0}+\zeta(0)}+\alpha(0)\lambda_{0}+\beta(0))^{2}}{4e^{\xi(t)\lambda(t)^{2}+\eta(t)\lambda(t)+\zeta(t)}}, (28)

and the optimal feedback strategies are given as follows

{π(t,x)=μ0rσ02(x+x0ert+1θeξ(0)λ02+η(0)λ0+ζ(0)er(Tt)(α(t)λ(t)+β(t))er(Tt)+(α(0)λ0+β(0))er(Tt)),u(t,x)=κrμ1σ12(x+x0ert+1θeξ(0)λ02+η(0)λ0+ζ(0)er(Tt)(α(t)λ(t)+β(t))er(Tt)+(α(0)λ0+β(0))er(Tt)),v(t,x)=1k(ιrμ2σ22+2ξ(t)λ(t)+η(t))(x+x0ert+1θeξ(0)λ02+η(0)λ0+ζ(0)er(Tt)(α(t)λ(t)+β(t))er(Tt)+(α(0)λ0+β(0))er(Tt))+α(t)ker(Tt),\displaystyle\begin{cases}\pi^{*}(t,x)=&\frac{\mu_{0}-r}{\sigma_{0}^{2}}\bigg{(}-x+x_{0}e^{rt}+\frac{1}{\theta}e^{\xi(0)\lambda_{0}^{2}+\eta(0)\lambda_{0}+\zeta(0)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(0)\lambda_{0}+\beta(0))e^{-r(T-t)}\bigg{)},\\ u^{*}(t,x)=&\frac{\kappa_{r}\mu_{1}}{\sigma_{1}^{2}}\bigg{(}-x+x_{0}e^{rt}+\frac{1}{\theta}e^{\xi(0)\lambda_{0}^{2}+\eta(0)\lambda_{0}+\zeta(0)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(0)\lambda_{0}+\beta(0))e^{-r(T-t)}\bigg{)},\\ v^{*}(t,x)=&\frac{1}{k}(\frac{\iota_{r}\mu_{2}}{\sigma_{2}^{2}}+2\xi(t)\lambda(t)+\eta(t))\bigg{(}-x+x_{0}e^{rt}+\frac{1}{\theta}e^{\xi(0)\lambda_{0}^{2}+\eta(0)\lambda_{0}+\zeta(0)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(0)\lambda_{0}+\beta(0))e^{-r(T-t)}\bigg{)}+\frac{\alpha(t)}{k}e^{-r(T-t)},\end{cases}

where

{Δ=4δ28κr2μ12σ22δμ2σ12,d1,2=2δ±Δ4ρσ22,ξ(t)=κr2μ12σ12Ttρμ2(Tt)+ρμ2δ1{Δ=0}+d1d2(eΔ(Tt)1)d1eΔ(Tt)d21{Δ>0},η(t)=(2ιr+1)δκr2μ12σ12(Tt)2δ(Tt)+11{Δ=0}+4ρμ2(2ιr+1)d1d23Δ(eΔ2(Tt)1)2(1+2eΔ2(Tt))d1eΔ(Tt)d21{Δ>0},ζ(t)=ρtT(ρμ2(2ιr+1)η(s)+12ρσ22η(s)2+ρσ22ξ(s))𝑑s+ρ(μ0r)2σ02(Tt)+ρ2ιr2μ22σ22(Tt),α(t)=(κrκ)μ1δ+r(eδ(Tt)er(Tt)),β(t)=ρ(κrκ)(ιr+1)μ1μ2δ+r(1δ(1eδ(Tt))+1r(1er(Tt)))ρ(ιrι)μ2r(1er(Tt)).\displaystyle\begin{cases}\Delta=&4\delta^{2}-\frac{8\kappa_{r}^{2}\mu_{1}^{2}\sigma_{2}^{2}\delta}{\mu_{2}\sigma_{1}^{2}},\\ d_{1,2}=&\frac{2\delta\pm\sqrt{\Delta}}{4\rho\sigma_{2}^{2}},\\ \xi(t)=&\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{T-t}{\rho\mu_{2}(T-t)+\frac{\rho\mu_{2}}{\delta}}1_{\{\Delta=0\}}+\frac{d_{1}d_{2}(e^{\sqrt{\Delta}(T-t)}-1)}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}1_{\{\Delta>0\}},\\ \eta(t)=&\frac{(2\iota_{r}+1)\delta\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{(T-t)^{2}}{\delta(T-t)+1}1_{\{\Delta=0\}}+\frac{4\rho\mu_{2}(2\iota_{r}+1)d_{1}d_{2}}{3\sqrt{\Delta}}\frac{\left(e^{\frac{\sqrt{\Delta}}{2}(T-t)}-1\right)^{2}\left(1+2e^{-\frac{\sqrt{\Delta}}{2}(T-t)}\right)}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}1_{\{\Delta>0\}},\\ \zeta(t)=&\rho\int_{t}^{T}\bigg{(}\rho\mu_{2}(2\iota_{r}+1)\eta(s)+\frac{1}{2}\rho\sigma_{2}^{2}\eta(s)^{2}+\rho\sigma_{2}^{2}\xi(s)\bigg{)}ds+\frac{\rho(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}(T-t)+\frac{\rho^{2}\iota_{r}^{2}\mu_{2}^{2}}{\sigma_{2}^{2}}(T-t),\\ \alpha(t)=&\frac{(\kappa_{r}-\kappa)\mu_{1}}{\delta+r}(e^{-\delta(T-t)}-e^{r(T-t)}),\\ \beta(t)=&\frac{\rho(\kappa_{r}-\kappa)(\iota_{r}+1)\mu_{1}\mu_{2}}{\delta+r}\bigg{(}\frac{1}{\delta}(1-e^{-\delta(T-t)})+\frac{1}{r}(1-e^{r(T-t)})\bigg{)}-\frac{\rho(\iota_{r}-\iota)\mu_{2}}{r}(1-e^{r(T-t)}).\end{cases}
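When Δ>0\Delta>0 the functions ξ\xi, η\eta and ζ\zeta above are explicit, and the terminal conditions ξ(T)=η(T)=ζ(T)=0 (so that H(T,λ)=1/(2θ)) give an immediate sanity check. A minimal numerical sketch, with hypothetical parameter values chosen so that Δ>0\Delta>0:

```python
import numpy as np

# Hypothetical parameters chosen so that Delta > 0 (illustration only).
T, r, rho, delta = 1.0, 0.03, 0.5, 2.0
mu0, sigma0 = 0.08, 0.2
mu1, sigma1, kappa_r = 1.0, 1.5, 0.3
mu2, sigma2, iota_r = 0.8, 1.0, 0.3

Delta = 4 * delta**2 - 8 * kappa_r**2 * mu1**2 * sigma2**2 * delta / (mu2 * sigma1**2)
assert Delta > 0
sq = np.sqrt(Delta)
d1 = (2 * delta + sq) / (4 * rho * sigma2**2)
d2 = (2 * delta - sq) / (4 * rho * sigma2**2)

def xi(t):
    e = np.exp(sq * (T - t))
    return d1 * d2 * (e - 1.0) / (d1 * e - d2)

def eta(t):
    e_half = np.exp(sq / 2 * (T - t))
    num = (e_half - 1.0) ** 2 * (1.0 + 2.0 / e_half)
    return 4 * rho * mu2 * (2 * iota_r + 1) * d1 * d2 / (3 * sq) * num / (d1 * e_half**2 - d2)

def zeta(t, n=2001):
    # Trapezoidal quadrature of the time integral in zeta(t).
    s = np.linspace(t, T, n)
    g = rho * mu2 * (2 * iota_r + 1) * eta(s) + 0.5 * rho * sigma2**2 * eta(s) ** 2 + rho * sigma2**2 * xi(s)
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))
    return (rho * integral + rho * (mu0 - r) ** 2 / sigma0**2 * (T - t)
            + rho**2 * iota_r**2 * mu2**2 / sigma2**2 * (T - t))

# Terminal conditions: xi(T) = eta(T) = zeta(T) = 0.
assert xi(T) == 0.0 and eta(T) == 0.0 and abs(zeta(T)) < 1e-12
```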

To prove Theorem 4.2, we use an approach similar to that of Section 3. We first find a candidate solution to the HJBI equation (26); then, by Lemma 4.1 below, we provide expressions of the value function and the optimal strategies that are independent of the process Y(t)Y^{*}(t) and depend only on X(t)X(t) and λ(t)\lambda(t). Finally, by setting s=0s=0, Y(0)=𝔼[dQdP|0]=𝔼[dQdP]=1Y(0)=\mathbb{E}[\frac{dQ}{dP}|\mathcal{F}_{0}]=\mathbb{E}[\frac{dQ}{dP}]=1 and λ(0)=λ0\lambda(0)=\lambda_{0}, we obtain the solution to the original problem.

Proposition 4.1.

Consider the following partial differential equations

{0=Gt+rG(t),0=Ht+(δλ+ρ(2ιr+1)μ2)Hλ+12ρσ22Hλλ+H(t,λ)((μ0r)2σ02+κr2μ12λ2δρμ2σ12+ριr2μ22σ22),0=It+((κr+κ)μ1λ+(ιr+ι)kρμ2)G(t)+(δλ+ρ(ιr+1)μ2)Iλ+12ρσ22Iλλ,\displaystyle\begin{cases}0=&G_{t}+rG(t),\\ 0=&H_{t}+(-\delta\lambda+\rho(2\iota_{r}+1)\mu_{2})H_{\lambda}+\frac{1}{2}\rho\sigma_{2}^{2}H_{\lambda\lambda}+H(t,\lambda)\bigg{(}\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\kappa_{r}^{2}\mu_{1}^{2}\lambda^{2}\delta}{\rho\mu_{2}\sigma_{1}^{2}}+\frac{\rho\iota_{r}^{2}\mu_{2}^{2}}{\sigma_{2}^{2}}\bigg{)},\\ 0=&I_{t}+\bigg{(}(-\kappa_{r}+\kappa)\mu_{1}\lambda+(-\iota_{r}+\iota)k\rho\mu_{2}\bigg{)}G(t)+(-\delta\lambda+\rho(\iota_{r}+1)\mu_{2})I_{\lambda}+\frac{1}{2}\rho\sigma_{2}^{2}I_{\lambda\lambda},\end{cases} (29)

with boundary conditions G(T)=1G(T)=1, H(T,λ)=12θH(T,\lambda)=\frac{1}{2\theta}, I(T,λ)=0I(T,\lambda)=0. If there exist sufficiently smooth solutions G(t)G(t), H(t,λ)H(t,\lambda) and I(t,λ)I(t,\lambda) to the above PDEs such that H(t,λ)>0H(t,\lambda)>0 holds for all t[0,T]t\in[0,T] and λ>0\lambda\in\mathbb{R}_{>0}, then the solution to the HJBI equation (26) is of the following form:

W(t,x,y,\lambda)=G(t)xy+H(t,\lambda)y^{2}+I(t,\lambda)y. (30)

Moreover, the corresponding optimal feedback strategies are given by

\displaystyle\pi^{*}(t,x,y,\lambda)=\frac{2H(t,\lambda)(\mu_{0}-r)y}{G(t)\sigma_{0}^{2}},
\displaystyle u^{*}(t,x,y,\lambda)=\frac{2H(t,\lambda)\kappa_{r}\mu_{1}\lambda y}{G(t)\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}},
\displaystyle v^{*}(t,x,y,\lambda)=\frac{1}{k}\bigg{(}\frac{2H(t,\lambda)\iota_{r}\mu_{2}y}{G(t)\sigma_{2}^{2}}+\frac{2H_{\lambda}y}{G(t)}+\frac{I_{\lambda}}{G(t)}\bigg{)},
\displaystyle o^{*}(t,x,y,\lambda)=-\frac{\mu_{0}-r}{\sigma_{0}}, (31)
\displaystyle p^{*}(t,x,y,\lambda)=\frac{2\kappa_{r}\mu_{1}\lambda}{\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}},
\displaystyle q^{*}(t,x,y,\lambda)=\frac{\iota_{r}\mu_{2}\sqrt{\rho}}{\sigma_{2}}.

The optimal strategies (31) depend on the auxiliary process $Y(t)$, which cannot be observed in practice. We now give an analogue of Lemma 3.1 that shows the relationship between $X(t)$ and $Y(t)$.

Lemma 4.1.

Let $X^{*}(t)$ and $Y^{*}(t)$ be the state processes under the feedback strategies defined by (31). Then

\displaystyle X^{*}(t)G(t)-X^{*}(s)G(s)=-2Y^{*}(t)H(t,\lambda(t))+2Y^{*}(s)H(s,\lambda(s))-I(t,\lambda(t))+I(s,\lambda(s)). (32)
Proof.

By the same method as in the proof of Lemma 3.1, it is straightforward to show that

\displaystyle d\biggl{(}X^{*}(t)G(t)\biggr{)}=-2d\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}-dI(t,\lambda(t)).

Since the process $\lambda(t)$ does not depend on the controls, substituting (32) into (30) and (31) yields the optimal feedback strategies as follows:

Theorem 4.3.

Let $X^{*}(t)$ and $Y^{*}(t)$ be the state processes under the feedback strategies defined by (31). Then, for any $0\leq s\leq t\leq T$, the value function is given by

\displaystyle W(t,X^{*}(t))=-\frac{(G(t)X^{*}(t)+I(t,\lambda(t)))^{2}}{4H(t,\lambda(t))}+\frac{(G(s)X^{*}(s)+2H(s,\lambda(s))Y^{*}(s)+I(s,\lambda(s)))^{2}}{4H(t,\lambda(t))}, (33)

and the optimal strategies $\pi^{*}$, $u^{*}$ and $v^{*}$ at time $t$ satisfy

\displaystyle\begin{cases}\pi^{*}(t)=&\frac{\mu_{0}-r}{\sigma_{0}^{2}}\bigg{(}-X^{*}(t)+X^{*}(s)\frac{G(s)}{G(t)}+2Y^{*}(s)\frac{H(s,\lambda(s))}{G(t)}-\frac{I(t,\lambda(t))}{G(t)}+\frac{I(s,\lambda(s))}{G(t)}\bigg{)},\\ u^{*}(t)=&\frac{\kappa_{r}\mu_{1}\lambda(t)}{\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}}\bigg{(}-X^{*}(t)+X^{*}(s)\frac{G(s)}{G(t)}+2Y^{*}(s)\frac{H(s,\lambda(s))}{G(t)}-\frac{I(t,\lambda(t))}{G(t)}+\frac{I(s,\lambda(s))}{G(t)}\bigg{)},\\ v^{*}(t)=&\frac{1}{k}\bigg{(}\frac{\iota_{r}\mu_{2}}{\sigma_{2}^{2}}+\frac{H_{\lambda}(t,\lambda(t))}{H(t,\lambda(t))}\bigg{)}\bigg{(}-X^{*}(t)+X^{*}(s)\frac{G(s)}{G(t)}+2Y^{*}(s)\frac{H(s,\lambda(s))}{G(t)}-\frac{I(t,\lambda(t))}{G(t)}+\frac{I(s,\lambda(s))}{G(t)}\bigg{)}\\ &+\frac{I_{\lambda}(t,\lambda(t))}{kG(t)}.\end{cases}

Next, we give the explicit forms of the solutions to (29).

Lemma 4.2.

If $\frac{\delta\mu_{2}}{\sigma_{2}^{2}}\geq\frac{2\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}$, then the solutions to (29) are as follows:

\displaystyle G(t)=e^{r(T-t)},
\displaystyle H(t,\lambda)=\frac{1}{2\theta}e^{\xi(t)\lambda^{2}+\eta(t)\lambda+\zeta(t)},
\displaystyle I(t,\lambda)=\alpha(t)\lambda+\beta(t), (34)

where

\displaystyle\begin{cases}\Delta=&4\delta^{2}-\frac{8\kappa_{r}^{2}\mu_{1}^{2}\sigma_{2}^{2}\delta}{\mu_{2}\sigma_{1}^{2}},\\ d_{1,2}=&\frac{2\delta\pm\sqrt{\Delta}}{4\rho\sigma_{2}^{2}},\\ \xi(t)=&\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{T-t}{\rho\mu_{2}(T-t)+\frac{\rho\mu_{2}}{\delta}}1_{\{\Delta=0\}}+\frac{d_{1}d_{2}(e^{\sqrt{\Delta}(T-t)}-1)}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}1_{\{\Delta>0\}},\\ \eta(t)=&\frac{(2\iota_{r}+1)\delta\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{(T-t)^{2}}{\delta(T-t)+1}1_{\{\Delta=0\}}+\frac{4\rho\mu_{2}(2\iota_{r}+1)d_{1}d_{2}}{3\sqrt{\Delta}}\frac{\left(e^{\frac{\sqrt{\Delta}}{2}(T-t)}-1\right)^{2}\left(1+2e^{-\frac{\sqrt{\Delta}}{2}(T-t)}\right)}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}1_{\{\Delta>0\}},\\ \zeta(t)=&\rho\int_{t}^{T}\bigg{(}\rho\mu_{2}(2\iota_{r}+1)\eta(s)+\frac{1}{2}\rho\sigma_{2}^{2}\eta(s)^{2}+\rho\sigma_{2}^{2}\xi(s)\bigg{)}ds+\frac{\rho(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}(T-t)+\frac{\rho^{2}\iota_{r}^{2}\mu_{2}^{2}}{\sigma_{2}^{2}}(T-t),\\ \alpha(t)=&\frac{(\kappa_{r}-\kappa)\mu_{1}}{\delta+r}(e^{-\delta(T-t)}-e^{r(T-t)}),\\ \beta(t)=&\frac{\rho(\kappa_{r}-\kappa)(\iota_{r}+1)\mu_{1}\mu_{2}}{\delta+r}\bigg{(}\frac{1}{\delta}(1-e^{-\delta(T-t)})+\frac{1}{r}(1-e^{r(T-t)})\bigg{)}-\frac{\rho(\iota_{r}-\iota)\mu_{2}}{r}(1-e^{r(T-t)}).\end{cases}
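As a quick numerical sanity check of the closed forms above, the following Python sketch evaluates $\Delta$, $d_{1,2}$, $\alpha(t)$ and $\beta(t)$ and verifies the terminal conditions $\alpha(T)=\beta(T)=0$ implied by the boundary condition $I(T,\lambda)=0$. The parameter values are illustrative assumptions, not taken from the paper.

```python
import math

# Illustrative parameter values (assumptions, not from the paper),
# chosen so that the condition of Lemma 4.2 holds.
T, r, delta, rho = 10.0, 0.01, 0.5, 0.2
mu1, mu2 = 5.0, 2.0          # mean jump sizes
sigma1, sigma2 = 3.0, 2.5    # diffusion coefficients
kappa, kappa_r = 0.1, 0.105  # safety loadings (insurer / reinsurer)
iota, iota_r = 0.1, 0.12

# Discriminant and roots from Lemma 4.2
Delta = 4 * delta**2 - 8 * kappa_r**2 * mu1**2 * sigma2**2 * delta / (mu2 * sigma1**2)
d1 = (2 * delta + math.sqrt(Delta)) / (4 * rho * sigma2**2)
d2 = (2 * delta - math.sqrt(Delta)) / (4 * rho * sigma2**2)

def alpha(t):
    # alpha(t) from the closed-form solution for I(t, lambda)
    return (kappa_r - kappa) * mu1 / (delta + r) * (
        math.exp(-delta * (T - t)) - math.exp(r * (T - t)))

def beta(t):
    # beta(t) from the closed-form solution for I(t, lambda)
    a = rho * (kappa_r - kappa) * (iota_r + 1) * mu1 * mu2 / (delta + r)
    b = (1 - math.exp(-delta * (T - t))) / delta + (1 - math.exp(r * (T - t))) / r
    return a * b - rho * (iota_r - iota) * mu2 / r * (1 - math.exp(r * (T - t)))

print(Delta, d1, d2, alpha(T), beta(T))  # alpha(T) and beta(T) should both be 0
```

With these values $\Delta>0$, so the $1_{\{\Delta>0\}}$ branch of $\xi$ and $\eta$ applies.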

From the above analysis and computations, we can now give the solution of the HJBI equation (26):

Theorem 4.4.

(solution of the HJBI equation) Assume the initial values $X(s)=x_{s}$ and $Y(s)=y_{s}$. The value function related to the diffusion approximation, $W(t,x)\in C^{1,2}([s,T]\times\mathbb{R})$, is given as follows:

\displaystyle W(t,x;s,x_{s},y_{s})=-\frac{\theta(e^{r(T-t)}x+\alpha(t)\lambda(t)+\beta(t))^{2}}{2e^{\xi(t)\lambda(t)^{2}+\eta(t)\lambda(t)+\zeta(t)}}+\frac{\theta(e^{r(T-s)}x_{s}+\frac{y_{s}}{\theta}e^{\xi(s)\lambda(s)^{2}+\eta(s)\lambda(s)+\zeta(s)}+\alpha(s)\lambda(s)+\beta(s))^{2}}{2e^{\xi(t)\lambda(t)^{2}+\eta(t)\lambda(t)+\zeta(t)}}, (35)

and the equilibrium strategies are of the precommitted feedback form

\displaystyle\begin{cases}\pi^{*}(t,x;s,x_{s},y_{s})=&\frac{\mu_{0}-r}{\sigma_{0}^{2}}\bigg{(}-x+x_{s}e^{r(t-s)}+\frac{y_{s}}{\theta}e^{\xi(s)\lambda(s)^{2}+\eta(s)\lambda(s)+\zeta(s)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(s)\lambda(s)+\beta(s))e^{-r(T-t)}\bigg{)},\\ u^{*}(t,x;s,x_{s},y_{s})=&\frac{\kappa_{r}\mu_{1}\lambda(t)}{\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}}\bigg{(}-x+x_{s}e^{r(t-s)}+\frac{y_{s}}{\theta}e^{\xi(s)\lambda(s)^{2}+\eta(s)\lambda(s)+\zeta(s)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(s)\lambda(s)+\beta(s))e^{-r(T-t)}\bigg{)},\\ v^{*}(t,x;s,x_{s},y_{s})=&\frac{1}{k}(\frac{\iota_{r}\mu_{2}}{\sigma_{2}^{2}}+2\xi(t)\lambda(t)+\eta(t))\bigg{(}-x+x_{s}e^{r(t-s)}+\frac{y_{s}}{\theta}e^{\xi(s)\lambda(s)^{2}+\eta(s)\lambda(s)+\zeta(s)}e^{-r(T-t)}\\ &-(\alpha(t)\lambda(t)+\beta(t))e^{-r(T-t)}+(\alpha(s)\lambda(s)+\beta(s))e^{-r(T-t)}\bigg{)}+\frac{1}{k}\alpha(t)e^{-r(T-t)},\end{cases} (36)

and

\displaystyle\begin{cases}&o^{*}(t)=-\frac{\mu_{0}-r}{\sigma_{0}},\\ &p^{*}(t,z)=\frac{2\kappa_{r}\mu_{1}\lambda}{\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}},\\ &q^{*}(t,z)=\frac{\iota_{r}\mu_{2}\sqrt{\rho}}{\sigma_{2}}.\end{cases} (37)
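The distortion controls (37) are deterministic given $\lambda$, so they are easy to evaluate numerically. The sketch below does so with $\mu_{0}$, $r$, $\sigma_{0}$, $\kappa_{r}$, $\iota_{r}$, $\rho$ and $\delta$ taken from the numerical example of Section 5; the values of $\sigma_{1}$, $\sigma_{2}$, $\mu_{1}$, $\mu_{2}$ and $\lambda$ are hypothetical, since they are not fixed there.

```python
import math

# From the numerical example of Section 5
mu0, r, sigma0 = 0.03, 0.01, 0.4
kappa_r, iota_r = 0.105, 0.12
rho, delta = 0.01, 0.01

# Hypothetical values (assumptions, not fixed by the numerical section)
mu1, mu2 = 5.0, 10.0 / 3.0
sigma1, sigma2 = 3.0, 2.0
lam = 1.0

# Equilibrium distortion controls (37)
o_star = -(mu0 - r) / sigma0                                        # stock component
p_star = 2 * kappa_r * mu1 * lam / (sigma1 * math.sqrt(rho * mu2 / delta))  # ordinary claims
q_star = iota_r * mu2 * math.sqrt(rho) / sigma2                     # catastrophe claims
print(o_star, p_star, q_star)  # o_star = -0.05 for these values
```

Note that $o^{*}$ is simply the negative of the market price of risk of the stock and does not depend on the insurance parameters.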

5 Numerical Examples

In this section, we assume that $U_{i}$ is exponentially distributed with parameter $\beta_{1}=0.2$ and that $V_{i}$ is exponentially distributed with parameter $\beta_{2}=0.3$. The other parameters are taken to be:

  • $T=100$ months is the planning horizon;

  • $s=0$ is the initial time;

  • $\mu_{0}=0.03$, $\sigma_{0}=0.4$ are the monthly expected return rate and the volatility of the stock, respectively;

  • $r=0.01$ is the monthly return of the risk-free asset;

  • $x_{0}=100$, $\lambda_{0}=1$ are the insurer's initial wealth and the initial intensity of the ordinary insurance claims, respectively;

  • $\theta=1$ is the risk-aversion factor in the MMV criterion;

  • $\kappa=0.1$, $\kappa_{r}=0.105$ ($\iota=0.1$, $\iota_{r}=0.12$) are the safety loadings of the insurer and the reinsurer for the ordinary insurance claims (catastrophe insurance claims), respectively;

  • $\rho=0.01$, $\delta=0.01$ are the intensity and the decay factor of the shot-noise process; they measure the frequency of catastrophes and the speed at which the effects of a catastrophe recede, respectively;

  • $k=10000$ is the ratio of the catastrophe claim size to the catastrophe impact.
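To make the comparative statics discussed in Figs. 1 and 2 concrete, the two premium rates that drive them, the ordinary premium $\frac{(1+\kappa)\lambda(t)}{\beta_{1}}$ and the catastrophe premium $\frac{(1+\iota)\rho k}{\beta_{2}}$ (both quoted from the discussion of the figures), can be computed directly from the listed parameters; here we evaluate the ordinary premium at $\lambda(t)=\lambda_{0}$.

```python
# Parameters of the numerical example
beta1, beta2 = 0.2, 0.3      # exponential parameters of U_i and V_i
kappa, iota = 0.1, 0.1       # insurer's safety loadings
rho, k, lam0 = 0.01, 10000, 1.0

# Ordinary premium rate (1 + kappa) * lambda / beta1 at lambda = lambda_0
ordinary_premium = (1 + kappa) * lam0 / beta1
# Catastrophe premium rate (1 + iota) * rho * k / beta2
cat_premium = (1 + iota) * rho * k / beta2

print(ordinary_premium, cat_premium)  # 5.5 and about 366.67
```

The catastrophe premium dwarfs the ordinary one at $\lambda_{0}=1$ because the scaling factor $k$ enters it linearly; after a catastrophe, $\lambda(t)$ jumps and the ordinary premium rises accordingly.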

We generate two graphs to visualize the sensitivity of the optimal strategies $u$ and $v$ with respect to changes in the catastrophe-related parameters. In Figs. 1 and 2, the red lines depict the optimal retention strategies for ordinary claims $u$ and for catastrophe claims $v$ as functions of the surplus $x$ under the given parameter values, while the other colored lines show how the optimal strategies vary when one parameter is altered at a time.

In Fig. 1, the green and dark green lines illustrate that as the frequency and intensity of catastrophes increase (indicated by larger values of $\rho$ or smaller values of $\beta_{2}$), the insurer adopts a more aggressive stance, leading to higher retention levels. This is because, with increased catastrophe frequency and intensity, the catastrophe insurance premium $\frac{(1+\iota)\rho k}{\beta_{2}}$ also rises, allowing the insurer to shoulder more risk.

In the same figure, the orange line shows that when the impact of a catastrophe fades faster (corresponding to a larger $\delta$), the insurer adopts a more conservative approach with smaller retention levels. This is because the insurer's ordinary insurance premium $\frac{(1+\kappa)\lambda(t)}{\beta_{1}}$ also diminishes more quickly for larger $\delta$.

Fig. 2 illustrates how the optimal strategy $v$ for catastrophe claims responds to variations in the parameters. The orange line indicates that a larger ratio $k$ results in reduced retention. Furthermore, the purple lines in both figures highlight that higher reinsurance costs lead to greater retained risks. Finally, the blue lines show that increased reinsurance premiums incentivize the insurer to adopt a more aggressive strategy.

Figure 1: The optimal retention level $u$ of the ordinary insurance when $t=1$ and $\lambda(1)=5$.
Figure 2: The optimal retention level $v$ of the catastrophe insurance when $t=1$ and $\lambda(1)=5$.

In Fig. 3, we plot the efficient frontiers for this problem at different time points; the top halves of the parabolas are the efficient frontiers. One can clearly see that, for a given level of risk (variance), the expected return becomes larger over time.

Figure 3: Efficient frontiers at different times $t$.

6 Conclusion

In this paper, we have studied the optimal investment and reinsurance problem for an insurer facing catastrophic risks. Our study covers both the jump model and its diffusion approximation, and the insurer's primary goal is to maximize the terminal surplus under the MMV criterion. We first formulate the original control problem as an auxiliary two-player zero-sum game, and then find equilibrium solutions in explicit form through dynamic programming and the solution of an HJBI equation. Furthermore, we have presented the efficient frontier under the MMV criterion. Future research may include: 1) integrating excess-of-loss reinsurance into the catastrophe model; 2) investigating scenarios where the insurer has access to only partial information about catastrophes, limiting its ability to design tailored premiums; and 3) exploring games involving multiple insurance companies, or games between insurers and SPVs.

Funding: This work was supported by the National Natural Science Foundation of China under Grants 11931018 and 12271274.

Declarations

Conflict of interest: The authors have no relevant financial or non-financial interests to disclose.


Appendix A Proofs of Statements in Section 3

A.1 Proof of Proposition 3.1

Proof.

We suppose that the solution to (9) takes the following form:

\displaystyle W(t,x,y,\lambda)=G(t)xy+H(t,\lambda)y^{2}+I(t,\lambda)y+K(t,\lambda). (38)

Substituting (38) into (10) yields

\displaystyle\mathcal{L}_{t}^{a,b}W(t,x,y,\lambda)= G_{t}xy+H_{t}y^{2}+I_{t}y+K_{t}+\biggl{\{}rx+(-\kappa_{r}+\kappa)\mu_{1}\lambda
\displaystyle+(-\iota_{r}+\iota)k\mu_{2}\rho\biggr{\}}G(t)y+(-\delta\lambda)(H_{\lambda}y^{2}+I_{\lambda}y+K_{\lambda})+\rho\int_{\mathbb{R}_{>0}}\{H_{z}y^{2}+I_{z}y+K_{z}\}F_{2}(dz)
\displaystyle+\pi(\mu_{0}-r)G(t)y+\pi\sigma_{0}yoG(t)+y^{2}o^{2}H(t,\lambda)
\displaystyle+\kappa_{r}u\mu_{1}\lambda G(t)y+\lambda\int_{\mathbb{R}_{>0}}\{-G(t)uzyp+H(t,\lambda)y^{2}p^{2}\}F_{1}(dz)
\displaystyle+\iota_{r}kv\mu_{2}\rho G(t)y+\rho\int_{\mathbb{R}_{>0}}\{(2H_{z}y+I_{z}-G(t)kvz)yq+H(t,\lambda+z)y^{2}q^{2}\}F_{2}(dz). (39)

By differentiating (39) with respect to $o$, $p$ and $q$ and setting the derivatives equal to 0, if $H(t,\lambda)>0$ holds, the minimum of (39) is attained at

\displaystyle o^{*}=-\frac{G(t)\pi\sigma_{0}}{2H(t,\lambda)y},
\displaystyle p^{*}(z)=\frac{G(t)uz}{2H(t,\lambda)y},
\displaystyle q^{*}(z)=-\frac{2H_{z}y+I_{z}-G(t)kvz}{2H(t,\lambda+z)y}.

We then plug $o^{*}$, $p^{*}(z)$ and $q^{*}(z)$ back into (39) and note that

\displaystyle\rho\int_{\mathbb{R}_{>0}}\{(2H_{z}y+I_{z}-G(t)kvz)yq^{*}(z)+H(t,\lambda+z)y^{2}q^{*}(z)^{2}\}F_{2}(dz)
\displaystyle=-2\rho y^{2}{\bf\widetilde{H}}(\frac{H_{z}^{2}}{z})-\frac{\rho}{2}{\bf\widetilde{H}}(\frac{I_{z}^{2}}{z})-\frac{\rho}{2}G(t)^{2}k^{2}v^{2}{\bf\widetilde{H}}(z)-2\rho y{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z})+2\rho G(t)ykv{\bf\widetilde{H}}(H_{z})+\rho G(t)kv{\bf\widetilde{H}}(I_{z}).

Therefore, (39) becomes

\displaystyle\mathcal{L}_{t}^{a,b^{*}}W(t,x,y,\lambda)=G_{t}xy+H_{t}y^{2}+I_{t}y+K_{t}+\biggl{\{}rx+(-\kappa_{r}+\kappa)\mu_{1}\lambda
\displaystyle+(-\iota_{r}+\iota)k\mu_{2}\rho\biggr{\}}G(t)y+(-\delta\lambda)(H_{\lambda}y^{2}+I_{\lambda}y+K_{\lambda})+\rho\int_{\mathbb{R}_{>0}}\{H_{z}y^{2}+I_{z}y+K_{z}\}F_{2}(dz)
\displaystyle+\pi(\mu_{0}-r)G(t)y-\frac{\pi^{2}\sigma_{0}^{2}G(t)^{2}}{4H(t,\lambda)}+\kappa_{r}u\mu_{1}\lambda G(t)y-\lambda\frac{G(t)^{2}u^{2}\sigma_{1}^{2}}{4H(t,\lambda)}
\displaystyle+\iota_{r}kv\mu_{2}\rho G(t)y-\frac{\rho}{2}G(t)^{2}k^{2}v^{2}{\bf\widetilde{H}}(z)+2\rho G(t)ykv{\bf\widetilde{H}}(H_{z})+\rho G(t)kv{\bf\widetilde{H}}(I_{z}) (40)
\displaystyle-2\rho y^{2}{\bf\widetilde{H}}(\frac{H_{z}^{2}}{z})-\frac{\rho}{2}{\bf\widetilde{H}}(\frac{I_{z}^{2}}{z})-2\rho y{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z}),

where

\displaystyle{\bf\widetilde{H}}(f):=\int_{\mathbb{R}_{>0}}\frac{f(z)z}{2H(t,\lambda+z)}F_{2}(dz).

Similarly, by differentiating (40) with respect to $\pi$, $u$ and $v$ and setting the derivatives equal to 0, if $G(t)>0$ and $H(t,\lambda)>0$ hold, the maximum of (40) is attained at

\displaystyle\pi^{*}=\frac{2H(t,\lambda)(\mu_{0}-r)y}{G(t)\sigma_{0}^{2}},
\displaystyle u^{*}=\frac{2H(t,\lambda)\kappa_{r}\mu_{1}y}{G(t)\sigma_{1}^{2}},
\displaystyle v^{*}=\frac{1}{kG(t)}\biggl{\{}\frac{\iota_{r}\mu_{2}y}{{\bf\widetilde{H}}(z)}+\frac{2y{\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}.

By plugging $\pi^{*}$, $u^{*}$ and $v^{*}$ back into (40), we obtain

\displaystyle\mathcal{L}_{t}^{a^{*},b^{*}}W(t,x,y,\lambda)=xy\biggl{\{}G_{t}+rG(t)\biggr{\}}
\displaystyle+y^{2}\biggl{\{}H_{t}-\delta\lambda H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)+\frac{H(t,\lambda)(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{H(t,\lambda)\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}
\displaystyle+\frac{\rho\iota_{r}^{2}\mu_{2}^{2}}{{\bf\widetilde{H}}(z)}+\frac{2{\bf\widetilde{H}}(H_{z})\rho\iota_{r}\mu_{2}}{{\bf\widetilde{H}}(z)}-\rho\biggl{\{}2{\bf\widetilde{H}}(\frac{H_{z}^{2}}{z})-2\frac{{\bf\widetilde{H}}(H_{z})^{2}}{{\bf\widetilde{H}}(z)}+\frac{\iota_{r}^{2}\mu_{2}^{2}}{2{\bf\widetilde{H}}(z)}\biggr{\}}\biggr{\}}
\displaystyle+y\biggl{\{}I_{t}-\delta\lambda I_{\lambda}+\rho\int_{\mathbb{R}_{>0}}I_{z}F_{2}(dz)+\bigg{(}(-\kappa_{r}+\kappa)\mu_{1}\lambda+(-\iota_{r}+\iota)k\mu_{2}\rho\bigg{)}G(t)
\displaystyle+\frac{{\bf\widetilde{H}}(I_{z})\rho\iota_{r}\mu_{2}}{{\bf\widetilde{H}}(z)}-\rho\biggl{\{}2{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z})-2\frac{{\bf\widetilde{H}}(H_{z}){\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}\biggr{\}}
\displaystyle+\biggl{\{}K_{t}-\delta\lambda K_{\lambda}+\rho\int_{\mathbb{R}_{>0}}K_{z}F_{2}(dz)-\frac{\rho}{2}\biggl{\{}{\bf\widetilde{H}}(\frac{I_{z}^{2}}{z})-\frac{{\bf\widetilde{H}}(I_{z})^{2}}{{\bf\widetilde{H}}(z)}\biggr{\}}\biggr{\}}.

By separation of variables, we obtain the partial differential equations (11) with the boundary conditions $G(T)=1$, $H(T,\lambda)=\frac{1}{2\theta}$ and $I(T,\lambda)=K(T,\lambda)=0$, which follow from $W(T,x,y,\lambda)=xy+\frac{1}{2\theta}y^{2}$.

A.2 Proof of Lemma 3.1

Proof.

It is sufficient to prove that

\displaystyle d\biggl{(}X^{*}(t)G(t)\biggr{)}=-2d\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}-dI(t,\lambda(t)).

Substituting (13) into (7a) and (7b) gives

\displaystyle dX^{*}(t)=\bigg{(}rX(t)-(\kappa_{r}-\kappa)\mu_{1}\lambda(t)-(\iota_{r}-\iota)k\mu_{2}\rho\bigg{)}dt
\displaystyle+\frac{2H(t,\lambda(t))(\mu_{0}-r)^{2}Y^{*}(t)}{G(t)\sigma_{0}^{2}}dt+\lambda(t)\frac{2H(t,\lambda(t))\kappa_{r}^{2}\mu_{1}^{2}Y^{*}(t)}{G(t)\sigma_{1}^{2}}dt
\displaystyle+\iota_{r}\mu_{2}\rho\frac{1}{G(t)}\biggl{\{}\frac{\iota_{r}\mu_{2}Y^{*}(t)}{{\bf\widetilde{H}}(z)}+\frac{2Y^{*}(t){\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}dt
\displaystyle+\frac{2H(t,\lambda(t))(\mu_{0}-r)Y^{*}(t)}{G(t)\sigma_{0}}dW_{0}(t)-\int_{\mathbb{R}_{>0}}z\frac{2H(t,\lambda(t))\kappa_{r}\mu_{1}Y^{*}(t-)}{G(t)\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)
\displaystyle-\int_{\mathbb{R}_{>0}}z\frac{1}{G(t)}\biggl{\{}\frac{\iota_{r}\mu_{2}Y^{*}(t-)}{{\bf\widetilde{H}}(z)}+\frac{2Y^{*}(t-){\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}\widetilde{N}_{2}(dt,dz),

and

\displaystyle dY^{*}(t)=-Y^{*}(t)\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}(t)+\int_{\mathbb{R}_{>0}}Y^{*}(t-)\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)
\displaystyle-\int_{\mathbb{R}_{>0}}\frac{1}{2H(t,\lambda(t)+z)}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t-)+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}\widetilde{N}_{2}(dt,dz).

By applying Itô's lemma to $H(t,\lambda)$ and $I(t,\lambda)$, we have

\displaystyle dI(t,\lambda(t))=\bigg{(}I_{t}-\delta\lambda(t)I_{\lambda}+\rho\int_{\mathbb{R}_{>0}}I_{z}F_{2}(dz)\bigg{)}dt+\int_{\mathbb{R}_{>0}}I_{z}\widetilde{N}_{2}(dt,dz), (41)

and

\displaystyle dH(t,\lambda(t))=\bigg{(}H_{t}-\delta\lambda(t)H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)\bigg{)}dt+\int_{\mathbb{R}_{>0}}H_{z}\widetilde{N}_{2}(dt,dz). (42)

By substituting (11) into (41) and (42), it follows that

\displaystyle dI(t,\lambda(t))=\bigg{(}(\kappa_{r}-\kappa)\mu_{1}\lambda+(\iota_{r}-\iota)k\mu_{2}\rho\bigg{)}G(t)dt-\frac{{\bf\widetilde{H}}(I_{z})\rho\iota_{r}\mu_{2}}{{\bf\widetilde{H}}(z)}dt
\displaystyle+\rho\biggl{\{}2{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z})-2\frac{{\bf\widetilde{H}}(H_{z}){\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}dt+\int_{\mathbb{R}_{>0}}I_{z}\widetilde{N}_{2}(dt,dz),

and

\displaystyle d\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}
\displaystyle=H(t,\lambda(t))dY^{*}(t)+Y^{*}(t)dH(t,\lambda(t))+d\bigg{[}H(\cdot,\lambda(\cdot)),Y\bigg{]}(t)
\displaystyle=-H(t,\lambda(t))Y^{*}(t)\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}(t)+\int_{\mathbb{R}_{>0}}H(t,\lambda(t-))Y^{*}(t-)\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)
\displaystyle-\int_{\mathbb{R}_{>0}}\frac{H(t,\lambda(t))}{2H(t,\lambda(t)+z)}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t-)+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}\widetilde{N}_{2}(dt,dz)
\displaystyle+\bigg{(}H_{t}-\delta\lambda(t)H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)\bigg{)}Y^{*}(t)dt+Y^{*}(t)\int_{\mathbb{R}_{>0}}H_{z}\widetilde{N}_{2}(dt,dz)
\displaystyle-\int_{\mathbb{R}_{>0}}\frac{H_{z}}{2H(t,\lambda(t)+z)}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t-)+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}N_{2}(dt,dz)
\displaystyle=-H(t,\lambda(t))Y^{*}(t)\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}(t)+\int_{\mathbb{R}_{>0}}H(t,\lambda(t-))Y^{*}(t-)\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)
\displaystyle+\bigg{(}H_{t}-\delta\lambda(t)H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)\bigg{)}Y^{*}(t)dt+Y^{*}(t)\int_{\mathbb{R}_{>0}}H_{z}\widetilde{N}_{2}(dt,dz)
\displaystyle-\int_{\mathbb{R}_{>0}}\frac{1}{2}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t-)+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}\widetilde{N}_{2}(dt,dz)
\displaystyle-\rho\int_{\mathbb{R}_{>0}}\frac{H_{z}}{2H(t,\lambda(t)+z)}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t-)+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}F_{2}(dz)dt
\displaystyle=-H(t,\lambda(t))Y^{*}(t)\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}(t)+\int_{\mathbb{R}_{>0}}H(t,\lambda(t-))Y^{*}(t-)\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)
\displaystyle+\frac{1}{2}\int_{\mathbb{R}_{>0}}\biggl{\{}2\frac{z{\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}Y^{*}(t-)+\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}Y^{*}(t-)-I_{z}+\frac{z{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}\widetilde{N}_{2}(dt,dz)
\displaystyle-\bigg{(}\frac{H(t,\lambda(t))(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{H(t,\lambda(t))\lambda(t)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{\rho\iota_{r}^{2}\mu_{2}^{2}}{2{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(H_{z})\rho\iota_{r}\mu_{2}}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t)dt
\displaystyle-\rho\biggl{\{}{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z})-\frac{{\bf\widetilde{H}}(H_{z}){\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}dt.

The last equality holds because

ρ>0Hz2H(t,λ(t)+z){(2(𝟏z𝐇~𝐇~(z))(Hz)ιrμ2z𝐇~(z))Y(t)+(𝟏z𝐇~𝐇~(z))(Iz)}F2(dz)\displaystyle\rho\int_{\mathbb{R}_{>0}}\frac{H_{z}}{2H(t,\lambda(t)+z)}\biggl{\{}\bigg{(}2({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(H_{z})-\frac{\iota_{r}\mu_{2}z}{{\bf\widetilde{H}}(z)}\bigg{)}Y^{*}(t)+({\bf 1}-\frac{z{\bf\widetilde{H}}}{{\bf\widetilde{H}}(z)})(I_{z})\biggr{\}}F_{2}(dz)
=\displaystyle= ρ{2Y(t)𝐇~(Hz2z)2Y(t)𝐇~(Hz)2𝐇~(z)Y(t)ιrμ2𝐇~(Hz)𝐇~(z)+𝐇~(HzIzz)𝐇~(Hz)𝐇~(Iz)𝐇~(z)}.\displaystyle\rho\biggl{\{}2Y^{*}(t){\bf\widetilde{H}}(\frac{H_{z}^{2}}{z})-2Y^{*}(t)\frac{{\bf\widetilde{H}}(H_{z})^{2}}{{\bf\widetilde{H}}(z)}-Y^{*}(t)\frac{\iota_{r}\mu_{2}{\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+{\bf\widetilde{H}}(\frac{H_{z}I_{z}}{z})-\frac{{\bf\widetilde{H}}(H_{z}){\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}.

Therefore,

d(X(t)G(t))=G(t)dX(t)G(t)rX(t)dt\displaystyle d\biggl{(}X^{*}(t)G(t)\biggr{)}=G(t)dX^{*}(t)-G(t)rX^{*}(t)dt
=\displaystyle= ((κrκ)μ1λ(t)+(ιrι)kμ2ρ)G(t)dt\displaystyle-\bigg{(}(\kappa_{r}-\kappa)\mu_{1}\lambda(t)+(\iota_{r}-\iota)k\mu_{2}\rho\bigg{)}G(t)dt
+2H(t,λ(t))(μ0r)2Y(t)σ02dt+λ(t)2H(t,λ(t))κr2μ12Y(t)σ12dt\displaystyle+\frac{2H(t,\lambda(t))(\mu_{0}-r)^{2}Y^{*}(t)}{\sigma_{0}^{2}}dt+\lambda(t)\frac{2H(t,\lambda(t))\kappa_{r}^{2}\mu_{1}^{2}Y^{*}(t)}{\sigma_{1}^{2}}dt
+ιrμ2ρ{ιrμ2Y(t)𝐇~(z)+2Y(t)𝐇~(Hz)𝐇~(z)+𝐇~(Iz)𝐇~(z)}dt\displaystyle+\iota_{r}\mu_{2}\rho\biggl{\{}\frac{\iota_{r}\mu_{2}Y^{*}(t)}{{\bf\widetilde{H}}(z)}+\frac{2Y^{*}(t){\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}dt
+2H(t,λ(t))(μ0r)Y(t)σ0dW0(t)>0z2H(t,λ(t))κrμ1Y(t)σ12N~1(dt,dz)\displaystyle+\frac{2H(t,\lambda(t))(\mu_{0}-r)Y^{*}(t)}{\sigma_{0}}dW_{0}(t)-\int_{\mathbb{R}_{>0}}z\frac{2H(t,\lambda(t-))\kappa_{r}\mu_{1}Y^{*}(t-)}{\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)
>0z{ιrμ2Y(t)𝐇~(z)+2Y(t)𝐇~(Hz)𝐇~(z)+𝐇~(Iz)𝐇~(z)}N~2(dt,dz)\displaystyle-\int_{\mathbb{R}_{>0}}z\biggl{\{}\frac{\iota_{r}\mu_{2}Y^{*}(t-)}{{\bf\widetilde{H}}(z)}+\frac{2Y^{*}(t-){\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}+\frac{{\bf\widetilde{H}}(I_{z})}{{\bf\widetilde{H}}(z)}\biggr{\}}\widetilde{N}_{2}(dt,dz)
=\displaystyle= 2d(Y(t)H(t,λ(t)))dI(t,λ(t)).\displaystyle-2d\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}-dI(t,\lambda(t)).

A.3 Proof of Lemma 3.2

Proof.

It is easy to obtain that

G(t)=er(Tt).\displaystyle G(t)=e^{r(T-t)}. (43)

Let us recall that H(t,λ)H(t,\lambda) satisfies

HtδλHλ+ρ>0HzF2(dz)+H(t,λ)(μ0r)2σ02+H(t,λ)λκr2μ12σ12+ρ(ιrμ2+2𝐇~(Hz))22𝐇~(z)2ρ𝐇~(Hz2z)=0.\displaystyle H_{t}-\delta\lambda H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)+\frac{H(t,\lambda)(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{H(t,\lambda)\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{\rho\bigg{(}\iota_{r}\mu_{2}+2{\bf\widetilde{H}}(H_{z})\bigg{)}^{2}}{2{\bf\widetilde{H}}(z)}-2\rho{\bf\widetilde{H}}(\frac{H_{z}^{2}}{z})=0.

We hypothesize that it takes the following form

H(t,λ)=12θeη(t)λ+ζ(t).\displaystyle H(t,\lambda)=\frac{1}{2\theta}e^{\eta(t)\lambda+\zeta(t)}. (44)

Substituting (44) into (11) and separating the variables, we obtain the following two ordinary differential equations

{η(t)δη(t)+κr2μ12σ12=0,ζ(t)+(μ0r)2σ02+ρ{((ιr+1)μ2>0zeη(t)zF2(dz))2>0z2eη(t)zF2(dz)>0eη(t)zF2(dz)+1}=0.\displaystyle\begin{cases}&\eta^{\prime}(t)-\delta\eta(t)+\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}=0,\\ &\zeta^{\prime}(t)+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}\\ &+\rho\bigg{\{}\frac{((\iota_{r}+1)\mu_{2}-\int_{\mathbb{R}_{>0}}ze^{-\eta(t)z}F_{2}(dz))^{2}}{\int_{\mathbb{R}_{>0}}z^{2}e^{-\eta(t)z}F_{2}(dz)}-\int_{\mathbb{R}_{>0}}e^{-\eta(t)z}F_{2}(dz)+1\bigg{\}}=0.\end{cases}

The solutions are given as follows

{η(t)=κr2μ12δσ12(1eδ(Tt)),ζ(t)=ρtT{((ιr+1)μ2>0zeη(s)zF2(dz))2>0z2eη(s)zF2(dz)>0eη(s)zF2(dz)}𝑑s+(ρ+(μ0r)2σ02)(Tt).\displaystyle\begin{cases}\eta(t)=&\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\delta\sigma_{1}^{2}}(1-e^{-\delta(T-t)}),\\ \zeta(t)=&\rho\int_{t}^{T}\biggl{\{}\frac{((\iota_{r}+1)\mu_{2}-\int_{\mathbb{R}_{>0}}ze^{-\eta(s)z}F_{2}(dz))^{2}}{\int_{\mathbb{R}_{>0}}z^{2}e^{-\eta(s)z}F_{2}(dz)}-\int_{\mathbb{R}_{>0}}e^{-\eta(s)z}F_{2}(dz)\biggr{\}}ds\\ &+(\rho+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}})(T-t).\end{cases}
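As a minimal numerical sanity check (not part of the proof), the closed form for η(t) above can be verified against its ODE η'(t) − δη(t) + κ_r²μ₁²/σ₁² = 0 with η(T) = 0. The parameter values below are purely illustrative placeholders, not calibrated quantities:

```python
import math

# Purely illustrative (hypothetical) parameter values
kappa_r, mu1, sigma1, delta, T = 0.3, 1.2, 0.8, 0.5, 10.0
c = kappa_r**2 * mu1**2 / sigma1**2  # constant term of the ODE

def eta(t):
    # Closed form: eta(t) = c/delta * (1 - exp(-delta*(T - t)))
    return c / delta * (1.0 - math.exp(-delta * (T - t)))

# Residual of eta'(t) - delta*eta(t) + c, with eta' by central differences
h = 1e-6
def residual(t):
    d_eta = (eta(t + h) - eta(t - h)) / (2.0 * h)
    return d_eta - delta * eta(t) + c

max_res = max(abs(residual(t)) for t in (0.0, 2.5, 5.0, 7.5, 9.9))
print(max_res, eta(T))
```

The residual vanishes at every test point and η(T) = 0, consistent with the terminal condition on H.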

For I(t,λ)I(t,\lambda), we suppose it has the following form

I(t,λ)=α(t)λ+β(t).\displaystyle I(t,\lambda)=\alpha(t)\lambda+\beta(t). (45)

Substituting (43) and (45) into (11) gives

{α(t)δα(t)(κrκ)μ1er(Tt)=0,β(t)+ρα(t)μ2(ιr+1)(ιrι)kμ2ρer(Tt)=0,\displaystyle\begin{cases}\alpha^{\prime}(t)-\delta\alpha(t)-(\kappa_{r}-\kappa)\mu_{1}e^{r(T-t)}=0,\\ \beta^{\prime}(t)+\rho\alpha(t)\mu_{2}(\iota_{r}+1)-(\iota_{r}-\iota)k\mu_{2}\rho e^{r(T-t)}=0,\end{cases}

with solutions as follows

{α(t)=(κrκ)μ1tTeδ(st)er(Ts)𝑑s,β(t)=ρtT{α(s)μ2(ιr+1)(ιrι)kμ2er(Ts)}𝑑s,\displaystyle\begin{cases}\alpha(t)=-(\kappa_{r}-\kappa)\mu_{1}\int_{t}^{T}e^{-\delta(s-t)}e^{r(T-s)}ds,\\ \beta(t)=\rho\int_{t}^{T}\biggl{\{}\alpha(s)\mu_{2}(\iota_{r}+1)-(\iota_{r}-\iota)k\mu_{2}e^{r(T-s)}\biggr{\}}ds,\end{cases}

that is,

{α(t)=(κrκ)μ1δ+r(eδ(Tt)er(Tt)),β(t)=ρ(κrκ)(ιr+1)μ1μ2δ+r(1δ(1eδ(Tt))+1r(1er(Tt)))+ρ(ιrι)kμ2r(1er(Tt)).\displaystyle\begin{cases}\alpha(t)=\frac{(\kappa_{r}-\kappa)\mu_{1}}{\delta+r}(e^{-\delta(T-t)}-e^{r(T-t)}),\\ \beta(t)=\frac{\rho(\kappa_{r}-\kappa)(\iota_{r}+1)\mu_{1}\mu_{2}}{\delta+r}\bigg{(}\frac{1}{\delta}(1-e^{-\delta(T-t)})+\frac{1}{r}(1-e^{r(T-t)})\bigg{)}+\frac{\rho(\iota_{r}-\iota)k\mu_{2}}{r}(1-e^{r(T-t)}).\end{cases}
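As a standalone numerical check (with purely illustrative, uncalibrated parameter values), the closed form for α(t) can be validated both against its ODE α'(t) − δα(t) − (κ_r − κ)μ₁e^{r(T−t)} = 0 and against the integral representation above via a midpoint quadrature; β(t) can be checked in the same way by quadrature of its own integral representation.

```python
import math

# Purely illustrative (hypothetical) parameter values
kappa_r, kappa, mu1, delta, r, T = 0.3, 0.2, 1.2, 0.5, 0.03, 10.0
c = (kappa_r - kappa) * mu1

def alpha(t):
    # Closed form: alpha(t) = c/(delta+r) * (exp(-delta*(T-t)) - exp(r*(T-t)))
    return c / (delta + r) * (math.exp(-delta * (T - t)) - math.exp(r * (T - t)))

def alpha_quad(t, n=100000):
    # Integral representation: -c * int_t^T exp(-delta*(s-t)) * exp(r*(T-s)) ds
    h = (T - t) / n
    total = 0.0
    for i in range(n):
        s = t + (i + 0.5) * h  # midpoint rule
        total += math.exp(-delta * (s - t)) * math.exp(r * (T - s))
    return -c * h * total

# ODE residual alpha'(t) - delta*alpha(t) - c*exp(r*(T-t)) via central differences
eps = 1e-6
def residual(t):
    d_alpha = (alpha(t + eps) - alpha(t - eps)) / (2.0 * eps)
    return d_alpha - delta * alpha(t) - c * math.exp(r * (T - t))

err_quad = abs(alpha(2.0) - alpha_quad(2.0))
max_res = max(abs(residual(t)) for t in (0.0, 2.5, 5.0, 7.5, 9.9))
print(err_quad, max_res)
```

Both the quadrature error and the ODE residual are at machine-precision scale, and α(T) = 0 holds exactly.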

Since

ρ2{𝐇~(Iz2z)𝐇~(Iz)2𝐇~(z)}=0,\displaystyle\frac{\rho}{2}\biggl{\{}{\bf\widetilde{H}}(\frac{I_{z}^{2}}{z})-\frac{{\bf\widetilde{H}}(I_{z})^{2}}{{\bf\widetilde{H}}(z)}\biggr{\}}=0,

and

K(T,λ)=0,\displaystyle K(T,\lambda)=0,

we have

K(t,λ)=0,t[0,T].\displaystyle K(t,\lambda)=0,\quad\forall\quad t\in[0,T].

A.4 Proof of Theorem 3.2

Proof.

We first prove that the strategies a(t)a^{*}(t) and b(t,z)b^{*}(t,z) defined by (21) and (22) belong to 𝒜[s,T]\mathcal{A}[s,T] and [s,T]\mathcal{B}[s,T], respectively. By Theorem 13 (2) and Lemma 7 (2) of [26], the SDE (7b) has a unique solution in the class of nonnegative local martingales, which we also denote by YY. Since b(t,z)b^{*}(t,z) is deterministic, bounded, and satisfies

p(t,z)1,\displaystyle p^{*}(t,z)\geq-1, (46)
q(t,z)1,\displaystyle q^{*}(t,z)\geq-1, (47)

YY is a square-integrable martingale. Consequently, b(t,z)[s,T]b^{*}(t,z)\in\mathcal{B}[s,T].

On the other hand, since

supstT𝔼PY(t)2<+,\sup_{s\leq t\leq T}\mathbb{E}^{P}Y(t)^{2}<+\infty,

it holds that

𝔼P[sTπ(t,x,y,λ)2+u(t,x,y,λ)2+v(t,x,y,λ)2dt]<+,\mathbb{E}^{P}\left[\int_{s}^{T}\pi^{*}(t,x,y,\lambda)^{2}+u^{*}(t,x,y,\lambda)^{2}+v^{*}(t,x,y,\lambda)^{2}dt\right]<+\infty,

which proves that a(t)𝒜[s,T]a^{*}(t)\in\mathcal{A}[s,T].

We then verify that (12) is the value function of Problem (8). By applying Itô’s lemma to W(t,x,y,λ)W(t,x,y,\lambda), we have

W(T,X(T),Y(T),λ(T))=W(s,x,y,λ)+sTta,bW(t,X(t),Y(t),λ(t))𝑑t\displaystyle W(T,X(T),Y(T),\lambda(T))=W(s,x,y,\lambda)+\int_{s}^{T}\mathcal{L}_{t}^{a^{*},b^{*}}W(t,X(t),Y(t),\lambda(t))dt
+sTπ(t)σ0Wx𝑑W0(s)+sTY(t)o(t)Wy𝑑W0(s)\displaystyle+\int_{s}^{T}\pi^{*}(t)\sigma_{0}\frac{\partial W}{\partial x}dW_{0}(s)+\int_{s}^{T}Y(t)o^{*}(t)\frac{\partial W}{\partial y}dW_{0}(s)
+sT>0W(t,X(t)u(t)z,Y(t)(1+p(t,z)),λ(t))W(t,X(t),Y(t),λ(t))N~1(dt,dz)\displaystyle+\int_{s}^{T}\int_{\mathbb{R}_{>0}}W(t,X(t-)-u^{*}(t)z,Y(t-)(1+p^{*}(t,z)),\lambda(t-))-W(t,X(t-),Y(t-),\lambda(t-))\widetilde{N}_{1}(dt,dz)
+sT>0W(t,X(t)kv(t)z,Y(t)(1+q(t,z)),λ(t)+z)W(t,X(t),Y(t),λ(t))N~2(dt,dz).\displaystyle+\int_{s}^{T}\int_{\mathbb{R}_{>0}}W(t,X(t-)-kv^{*}(t)z,Y(t-)(1+q^{*}(t,z)),\lambda(t-)+z)-W(t,X(t-),Y(t-),\lambda(t-))\widetilde{N}_{2}(dt,dz).

Let τB=inf{t>0:|π(t)σ0Wx|B}inf{t>0:|Y(t)o(t)|B}inf{t>0:st>0|W(r,X(r)u(r)z,Y(r)(1+p(r,z)),λ(r))|λ(r)F1(dz)𝑑rB}inf{t>0:st>0|W(r,X(r)kv(r)z,Y(r)(1+q(r,z)),λ(r)+z)|ρF2(dz)𝑑rB}\tau_{B}=\inf\bigg{\{}t>0:|\pi^{*}(t)\sigma_{0}\frac{\partial W}{\partial x}|\geq B\bigg{\}}\wedge\inf\bigg{\{}t>0:|Y(t)o^{*}(t)|\geq B\bigg{\}}\\ \wedge\inf\bigg{\{}t>0:\int_{s}^{t}\int_{\mathbb{R}_{>0}}|W(r,X(r-)-u^{*}(r)z,Y(r-)(1+p^{*}(r,z)),\lambda(r-))|\lambda(r)F_{1}(dz)dr\geq B\bigg{\}}\\ \wedge\inf\bigg{\{}t>0:\int_{s}^{t}\int_{\mathbb{R}_{>0}}|W(r,X(r-)-kv^{*}(r)z,Y(r-)(1+q^{*}(r,z)),\lambda(r-)+z)|\rho F_{2}(dz)dr\geq B\bigg{\}}. We have

𝔼P[W(TτB,X(TτB),Y(TτB),λ(TτB))]\mathbb{E}^{P}\left[W(T\wedge\tau_{B},X(T\wedge\tau_{B}),Y(T\wedge\tau_{B}),\lambda(T\wedge\tau_{B}))\right]
=W(s,x,y,λ)+𝔼PsTτBta,bW(t,X(t),Y(t),λ(t))𝑑t.=W(s,x,y,\lambda)+\mathbb{E}^{P}\int_{s}^{T\wedge\tau_{B}}\mathcal{L}_{t}^{a^{*},b^{*}}W(t,X(t),Y(t),\lambda(t))dt.

Since WW satisfies the HJBI equation, we obtain that

{𝔼P[W(TτB,Xa(TτB),Yb(TτB),λ(TτB))]=W(s,x,y,λ),𝔼P[W(TτB,Xa(TτB),Yb(TτB),λ(TτB))]W(s,x,y,λ),𝔼P[W(TτB,Xa(TτB),Yb(TτB),λ(TτB))]W(s,x,y,λ).\displaystyle\begin{cases}&\mathbb{E}^{P}\left[W(T\wedge\tau_{B},X^{a^{*}}(T\wedge\tau_{B}),Y^{b^{*}}(T\wedge\tau_{B}),\lambda(T\wedge\tau_{B}))\right]=W(s,x,y,\lambda),\\ &\mathbb{E}^{P}\left[W(T\wedge\tau_{B},X^{a^{*}}(T\wedge\tau_{B}),Y^{b}(T\wedge\tau_{B}),\lambda(T\wedge\tau_{B}))\right]\geq W(s,x,y,\lambda),\\ &\mathbb{E}^{P}\left[W(T\wedge\tau_{B},X^{a}(T\wedge\tau_{B}),Y^{b^{*}}(T\wedge\tau_{B}),\lambda(T\wedge\tau_{B}))\right]\leq W(s,x,y,\lambda).\end{cases}

By letting BB\to\infty, we obtain

Ja,b(s,x,y,λ)Ja,b(s,x,y,λ)=W(s,x,y,λ)Ja,b(s,x,y,λ).J^{a,b^{*}}(s,x,y,\lambda)\leq J^{a^{*},b^{*}}(s,x,y,\lambda)=W(s,x,y,\lambda)\leq J^{a^{*},b}(s,x,y,\lambda).

Consequently,

W(s,x,y,λ)=sup(π,u,v)𝒜[s,T](inf(o,p,q)[s,T]Ja,b(s,x,y,λ))=inf(o,p,q)[s,T](sup(π,u,v)𝒜[s,T]Ja,b(s,x,y,λ)),W(s,x,y,\lambda)=\sup_{(\pi,u,v)\in\mathcal{A}[s,T]}\left(\inf_{(o,p,q)\in\mathcal{B}[s,T]}J^{a,b}(s,x,y,\lambda)\right)=\inf_{(o,p,q)\in\mathcal{B}[s,T]}\left(\sup_{(\pi,u,v)\in\mathcal{A}[s,T]}J^{a,b}(s,x,y,\lambda)\right),

and (a(t),b(t,z))(a^{*}(t),b^{*}(t,z)) is a Nash equilibrium of Problem (8). ∎

A.5 Proof of Theorem 3.3

Proof.

By Theorem 3.1 and Lemma 3.2, to obtain 𝕍ars,x,y,λPX(t)\mathbb{V}ar_{s,x,y,\lambda}^{P}X^{*}(t), we only need to compute the values of

{𝔼s,x,y,λPλ(t),𝔼s,x,y,λPλ(t)2,𝔼s,x,y,λP[Y(t)H(t,λ(t))],𝔼s,x,y,λP[Y(t)2H(t,λ(t))2],𝔼s,x,y,λP[Y(t)H(t,λ(t))λ(t)].\displaystyle\begin{cases}\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t),\\ \mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t)^{2},\\ \mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))],\\ \mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)^{2}H(t,\lambda(t))^{2}],\\ \mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))\lambda(t)].\end{cases}

1) 𝔼s,x,y,λPλ(t)\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t) and 𝔼s,x,y,λPλ(t)2\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t)^{2}: By the Feynman-Kac theorem, 𝔼s,x,y,λPλ(t)\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t) and 𝔼s,x,y,λPλ(t)2\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t)^{2} are the solutions of the following PDE

LsδλLλ+ρ>0LzF2(dz)=0\displaystyle L_{s}-\delta\lambda L_{\lambda}+\rho\int_{\mathbb{R}_{>0}}L_{z}F_{2}(dz)=0

with terminal values L(t,λ)=λL(t,\lambda)=\lambda and L(t,λ)=λ2L(t,\lambda)=\lambda^{2}, respectively. It is easy to obtain that

{𝔼s,x,y,λPλ(t)=eδ(ts)λ+ρμ2δ(1eδ(ts)),𝔼s,x,y,λPλ(t)2=(eδ(ts)λ+ρμ2δ(1eδ(ts)))2+ρσ222δ(1e2δ(ts)).\displaystyle\begin{cases}\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t)=e^{-\delta(t-s)}\lambda+\frac{\rho\mu_{2}}{\delta}(1-e^{-\delta(t-s)}),\\ \mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t)^{2}=\bigg{(}e^{-\delta(t-s)}\lambda+\frac{\rho\mu_{2}}{\delta}(1-e^{-\delta(t-s)})\bigg{)}^{2}+\frac{\rho\sigma_{2}^{2}}{2\delta}(1-e^{-2\delta(t-s)}).\end{cases}
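These two moment formulas can be cross-checked numerically. Assuming (as the variance term suggests) that σ₂² denotes the second moment of F₂, the generator of λ yields the linear moment ODEs dE[λ]/dt = −δE[λ] + ρμ₂ and dE[λ²]/dt = −2δE[λ²] + 2ρμ₂E[λ] + ρσ₂², which a forward Euler scheme can integrate directly; the parameter values below are purely illustrative:

```python
import math

# Purely illustrative (hypothetical) parameters; sigma2sq is taken to be
# the second moment of F_2, consistent with the variance term above.
delta, rho, mu2, sigma2sq = 0.5, 2.0, 0.8, 1.5
lam0, s, t = 3.0, 0.0, 4.0
tau = t - s

# Closed forms from the text
m_cf = math.exp(-delta * tau) * lam0 + rho * mu2 / delta * (1 - math.exp(-delta * tau))
m2_cf = m_cf**2 + rho * sigma2sq / (2 * delta) * (1 - math.exp(-2 * delta * tau))

# Forward Euler on the moment ODEs implied by the generator of lambda
n = 400000
h = tau / n
m, m2 = lam0, lam0**2
for _ in range(n):
    m, m2 = (m + h * (-delta * m + rho * mu2),
             m2 + h * (-2 * delta * m2 + 2 * rho * mu2 * m + rho * sigma2sq))
print(abs(m - m_cf), abs(m2 - m2_cf))
```

Both Euler trajectories agree with the closed forms up to the O(h) discretization error.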

2) 𝔼s,x,y,λP[Y(t)H(t,λ(t))]\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))]: To compute this expectation, we first derive the SDE that H(t,λ(t))H(t,\lambda(t)) satisfies. By applying Itô’s lemma to H(t,λ(t))H(t,\lambda(t)), it follows that

dH(t,λ(t))=(Htδλ(t)Hλ+ρ>0HzF2(dz))dt+>0HzN~2(dt,dz).\displaystyle dH(t,\lambda(t))=\bigg{(}H_{t}-\delta\lambda(t)H_{\lambda}+\rho\int_{\mathbb{R}_{>0}}H_{z}F_{2}(dz)\bigg{)}dt+\int_{\mathbb{R}_{>0}}H_{z}\widetilde{N}_{2}(dt,dz).

Note that

12H(t,λ)(ιrμ2𝐇~(z)+2𝐇~(Hz)𝐇~(z))=ϕ(t),\displaystyle\frac{1}{2H(t,\lambda)}\bigg{(}\frac{\iota_{r}\mu_{2}}{{\bf\widetilde{H}}(z)}+\frac{2{\bf\widetilde{H}}(H_{z})}{{\bf\widetilde{H}}(z)}\bigg{)}=\phi(t),

and by (11), we have

dH(t,λ(t))=\displaystyle dH(t,\lambda(t))= (H(t,λ(t))(μ0r)2σ02+H(t,λ(t))λ(t)κr2μ12σ12+2ρ(H(t,λ(t))2ϕ(t)2𝐇~(z)𝐇~(Hz2z)))dt\displaystyle-\bigg{(}\frac{H(t,\lambda(t))(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{H(t,\lambda(t))\lambda(t)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+2\rho\bigg{(}H(t,\lambda(t))^{2}\phi(t)^{2}{\bf\widetilde{H}}(z)-{\bf\widetilde{H}}(\frac{H_{z}^{2}}{z})\bigg{)}\bigg{)}dt
+>0HzN~2(dt,dz).\displaystyle+\int_{\mathbb{R}_{>0}}H_{z}\widetilde{N}_{2}(dt,dz).

Again by Itô’s lemma, we have

dY(t)H(t,λ(t))=\displaystyle dY^{*}(t)H(t,\lambda(t))= Y(t)H(t,λ(t)){(μ0r)2σ02+λ(t)κr2μ12σ12+ριrμ2ϕ(t)}dt\displaystyle-Y^{*}(t)H(t,\lambda(t))\biggl{\{}\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\lambda(t)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\rho\iota_{r}\mu_{2}\phi(t)\biggr{\}}dt
+Y(t)H(t,λ(t)){μ0rσ0dW0(t)+>0κrμ1zσ12N~1(dt,dz)+>0ϕ(t)zN~2(dt,dz)}.\displaystyle+Y^{*}(t)H(t,\lambda(t))\biggl{\{}-\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}(t)+\int_{\mathbb{R}_{>0}}\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}(dt,dz)+\int_{\mathbb{R}_{>0}}\phi(t)z\widetilde{N}_{2}(dt,dz)\biggr{\}}.

Denote by

U1(t)=\displaystyle U_{1}(t)= Y(t)H(t,λ(t))yH(s,λ),\displaystyle\frac{Y^{*}(t)H(t,\lambda(t))}{yH(s,\lambda)},
A1(t)=\displaystyle A_{1}(t)= st{(μ0r)2σ02+λ(u)κr2μ12σ12+ριrμ2ϕ(u)}𝑑u,\displaystyle-\int_{s}^{t}\biggl{\{}\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\lambda(u)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\rho\iota_{r}\mu_{2}\phi(u)\biggr{\}}du,
M1(t)=\displaystyle M_{1}(t)= stμ0rσ0𝑑W0(u)+st>0κrμ1zσ12N~1(du,dz)+st>0ϕ(u)zN~2(du,dz).\displaystyle-\int_{s}^{t}\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}(u)+\int_{s}^{t}\int_{\mathbb{R}_{>0}}\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}(du,dz)+\int_{s}^{t}\int_{\mathbb{R}_{>0}}\phi(u)z\widetilde{N}_{2}(du,dz).

Then U1(t)U_{1}(t) satisfies

dU1(t)=U1(t)dA1(t)+U1(t)dM1(t),U1(s)=1.\displaystyle dU_{1}(t)=U_{1}(t)dA_{1}(t)+U_{1}(t)dM_{1}(t),\quad U_{1}(s)=1.

Since eA1(t)U1(t)e^{-A_{1}(t)}U_{1}(t) is a stochastic exponential satisfying

eA1(t)U1(t)=1+steA1(u)U1(u)𝑑M1(u),\displaystyle e^{-A_{1}(t)}U_{1}(t)=1+\int_{s}^{t}e^{-A_{1}(u)}U_{1}(u)dM_{1}(u),

we can define a new probability measure P1P_{1} by

dP1dP|t=eA1(t)U1(t).\displaystyle\frac{dP_{1}}{dP}\bigg{|}_{\mathcal{F}_{t}}=e^{-A_{1}(t)}U_{1}(t).

Under the probability measure P1P_{1}, the following compensated random measure is a martingale measure

N~2P1(dt,dz)=N~2(dt,dz)ϕ(t)zρF2(dz)dt,\displaystyle\widetilde{N}_{2}^{P_{1}}(dt,dz)=\widetilde{N}_{2}(dt,dz)-\phi(t)z\rho F_{2}(dz)dt,

such that

𝔼s,x,y,λP1N2(t)=st>0(ϕ(u)z+1)ρF2(dz)𝑑u.\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P_{1}}N_{2}(t)=\int_{s}^{t}\int_{\mathbb{R}_{>0}}(\phi(u)z+1)\rho F_{2}(dz)du.

Now the representation of λ\lambda under the new probability P1P_{1} is as follows

dλ(t)=(δλ(t)+ρμ2+ρσ22ϕ(t))dt+>0zN~2P1(dt,dz).\displaystyle d\lambda(t)=(-\delta\lambda(t)+\rho\mu_{2}+\rho\sigma_{2}^{2}\phi(t))dt+\int_{\mathbb{R}_{>0}}z\widetilde{N}_{2}^{P_{1}}(dt,dz).

By the Feynman-Kac theorem,

LYH(s,λ):=𝔼s,x,y,λP1est{(μ0r)2σ02+λ(u)κr2μ12σ12+ριrμ2ϕ(u)}𝑑u,\displaystyle L^{YH}(s,\lambda):=\mathbb{E}_{s,x,y,\lambda}^{P_{1}}e^{-\int_{s}^{t}\{\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\lambda(u)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\rho\iota_{r}\mu_{2}\phi(u)\}du},

is the unique probabilistic solution of the following PDE,

{LsYHδλLλYH+ρ>0LzYH(ϕ(s)z+1)F2(dz){(μ0r)2σ02+λκr2μ12σ12+ριrμ2ϕ(s)}LYH(s,λ)=0,LYH(t,λ)=1.\displaystyle\begin{cases}L^{YH}_{s}-\delta\lambda L^{YH}_{\lambda}+\rho\int_{\mathbb{R}_{>0}}L^{YH}_{z}(\phi(s)z+1)F_{2}(dz)-\biggl{\{}\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\rho\iota_{r}\mu_{2}\phi(s)\biggr{\}}L^{YH}(s,\lambda)=0,\\ L^{YH}(t,\lambda)=1.\end{cases} (48)

We solve it to obtain

LYH(s,λ)=eψ1(s,t)λ+ψ2(s,t)eη(s)λζ(s)=12θeψ1(s,t)λ+ψ2(s,t)H(s,λ)1,\displaystyle L^{YH}(s,\lambda)=e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}e^{-\eta(s)\lambda-\zeta(s)}=\frac{1}{2\theta}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}H(s,\lambda)^{-1}, (49)

where

{ψ1(s,t)=κr2μ12δσ12(eδ(ts)eδ(Ts)),ψ2(s,t)=ζ(t)+ρst>0eη(u)z(eψ1(u,t)z1)(ϕ(u)z+1)F2(dz)𝑑u.\displaystyle\begin{cases}\psi_{1}(s,t)=\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\delta\sigma_{1}^{2}}(e^{-\delta(t-s)}-e^{-\delta(T-s)}),\\ \psi_{2}(s,t)=\zeta(t)+\rho\int_{s}^{t}\int_{\mathbb{R}_{>0}}e^{-\eta(u)z}(e^{\psi_{1}(u,t)z}-1)(\phi(u)z+1)F_{2}(dz)du.\end{cases}
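A quick consistency check (not part of the proof): matching the λ-linear terms in (48), and using the ODE for η, shows that ψ₁ must solve ∂ψ₁(s,t)/∂s = δψ₁(s,t) with terminal value ψ₁(t,t) = η(t), and the closed form above does so. A numerical sketch with placeholder parameter values:

```python
import math

# Purely illustrative (hypothetical) parameter values
kappa_r, mu1, sigma1, delta, T = 0.3, 1.2, 0.8, 0.5, 10.0
c = kappa_r**2 * mu1**2 / (delta * sigma1**2)

def eta(s):
    return c * (1.0 - math.exp(-delta * (T - s)))

def psi1(s, t):
    return c * (math.exp(-delta * (t - s)) - math.exp(-delta * (T - s)))

t = 6.0
term_err = abs(psi1(t, t) - eta(t))  # terminal condition psi1(t, t) = eta(t)

# Residual of d/ds psi1(s, t) - delta * psi1(s, t) via central differences
h = 1e-6
def residual(s):
    d_psi1 = (psi1(s + h, t) - psi1(s - h, t)) / (2.0 * h)
    return d_psi1 - delta * psi1(s, t)

max_res = max(abs(residual(s)) for s in (0.0, 1.5, 3.0, 4.5, 5.9))
print(term_err, max_res)
```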

Therefore,

𝔼s,x,y,λP[Y(t)H(t,λ(t))]=\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))]= yH(s,λ)𝔼s,x,y,λPU1(t)\displaystyle yH(s,\lambda)\mathbb{E}_{s,x,y,\lambda}^{P}U_{1}(t)
=\displaystyle= yH(s,λ)𝔼s,x,y,λP[eA1(t)eA1(t)U1(t)]\displaystyle yH(s,\lambda)\mathbb{E}_{s,x,y,\lambda}^{P}[e^{A_{1}(t)}e^{-A_{1}(t)}U_{1}(t)]
=\displaystyle= yH(s,λ)LYH(s,λ)\displaystyle yH(s,\lambda)L^{YH}(s,\lambda)
=\displaystyle= y2θeψ1(s,t)λ+ψ2(s,t).\displaystyle\frac{y}{2\theta}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}.

3) 𝔼s,x,y,λP[Y(t)2H(t,λ(t))2]\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)^{2}H(t,\lambda(t))^{2}]: We use a method similar to that in 2). First, by Itô’s lemma,

d(Y(t)H(t,λ(t)))2=\displaystyle d\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}^{2}= (Y(t)H(t,λ(t)))2{(μ0r)2σ02λ(t)κr2μ12σ122ριrμ2ϕ(t)+ρσ22ϕ(t)2}dt\displaystyle\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}^{2}\biggl{\{}-\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}-\frac{\lambda(t)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}-2\rho\iota_{r}\mu_{2}\phi(t)+\rho\sigma_{2}^{2}\phi(t)^{2}\biggr{\}}dt
+(Y(t)H(t,λ(t)))2{2(μ0r)σ0dW0(t)+>0(2κrμ1zσ12+κr2μ12z2σ14)N~1(dt,dz)\displaystyle+\bigg{(}Y^{*}(t)H(t,\lambda(t))\bigg{)}^{2}\biggl{\{}-\frac{2(\mu_{0}-r)}{\sigma_{0}}dW_{0}(t)+\int_{\mathbb{R}_{>0}}(\frac{2\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}+\frac{\kappa_{r}^{2}\mu_{1}^{2}z^{2}}{\sigma_{1}^{4}})\widetilde{N}_{1}(dt,dz)
+>0(2ϕ(t)z+ϕ(t)2z2)N~2(dt,dz)}.\displaystyle+\int_{\mathbb{R}_{>0}}(2\phi(t)z+\phi(t)^{2}z^{2})\widetilde{N}_{2}(dt,dz)\biggr{\}}.

Let

U2(t)=\displaystyle U_{2}(t)= Y(t)2H(t,λ(t))2y2H(s,λ)2,\displaystyle\frac{Y^{*}(t)^{2}H(t,\lambda(t))^{2}}{y^{2}H(s,\lambda)^{2}},
A2(t)=\displaystyle A_{2}(t)= st{(μ0r)2σ02λ(u)κr2μ12σ122ριrμ2ϕ(u)+ρϕ(u)2σ22}𝑑u,\displaystyle\int_{s}^{t}\biggl{\{}-\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}-\frac{\lambda(u)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}-2\rho\iota_{r}\mu_{2}\phi(u)+\rho\phi(u)^{2}\sigma_{2}^{2}\biggr{\}}du,
M2(t)=\displaystyle M_{2}(t)= st2(μ0r)σ0𝑑W0(u)+st>0(2κrμ1zσ12+κr2μ12z2σ14)N~1(du,dz)+st>0(2ϕ(u)z+ϕ(u)2z2)N~2(du,dz).\displaystyle-\int_{s}^{t}\frac{2(\mu_{0}-r)}{\sigma_{0}}dW_{0}(u)+\int_{s}^{t}\int_{\mathbb{R}_{>0}}(\frac{2\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}+\frac{\kappa_{r}^{2}\mu_{1}^{2}z^{2}}{\sigma_{1}^{4}})\widetilde{N}_{1}(du,dz)+\int_{s}^{t}\int_{\mathbb{R}_{>0}}(2\phi(u)z+\phi(u)^{2}z^{2})\widetilde{N}_{2}(du,dz).

Thus U2(t)U_{2}(t) satisfies

dU2(t)=U2(t)dA2(t)+U2(t)dM2(t),U2(s)=1,\displaystyle dU_{2}(t)=U_{2}(t)dA_{2}(t)+U_{2}(t)dM_{2}(t),\quad U_{2}(s)=1,

and

eA2(t)U2(t)=1+steA2(u)U2(u)𝑑M2(u).\displaystyle e^{-A_{2}(t)}U_{2}(t)=1+\int_{s}^{t}e^{-A_{2}(u)}U_{2}(u)dM_{2}(u).

Define a new probability measure P2P_{2} by

dP2dP|t=eA2(t)U2(t).\displaystyle\frac{dP_{2}}{dP}\bigg{|}_{\mathcal{F}_{t}}=e^{-A_{2}(t)}U_{2}(t).

The compensated Poisson random measure of N2N_{2} under P2P_{2} is given by

N~2P2(dt,dz)=N~2(dt,dz)(2ϕ(t)z+ϕ(t)2z2)ρF2(dz)dt.\displaystyle\widetilde{N}_{2}^{P_{2}}(dt,dz)=\widetilde{N}_{2}(dt,dz)-(2\phi(t)z+\phi(t)^{2}z^{2})\rho F_{2}(dz)dt.

Moreover,

𝔼s,x,y,λP2N2(t)=st>0(ϕ(u)z+1)2ρF2(dz)𝑑u.\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P_{2}}N_{2}(t)=\int_{s}^{t}\int_{\mathbb{R}_{>0}}(\phi(u)z+1)^{2}\rho F_{2}(dz)du.

Hence the representation of λ\lambda under P2P_{2} is as follows

dλ(t)=(δλ(t)+ρμ2+2ρσ22ϕ(t)+ρϕ(t)2>0z3F2(dz))dt+>0zN~2P2(dt,dz).\displaystyle d\lambda(t)=(-\delta\lambda(t)+\rho\mu_{2}+2\rho\sigma_{2}^{2}\phi(t)+\rho\phi(t)^{2}\int_{\mathbb{R}_{>0}}z^{3}F_{2}(dz))dt+\int_{\mathbb{R}_{>0}}z\widetilde{N}_{2}^{P_{2}}(dt,dz).

By the Feynman-Kac theorem,

L(YH)2(s,λ):=𝔼s,x,y,λP2est{(μ0r)2σ02λ(u)κr2μ12σ122ριrμ2ϕ(u)+ρσ22ϕ(u)2}𝑑u\displaystyle L^{(YH)^{2}}(s,\lambda):=\mathbb{E}_{s,x,y,\lambda}^{P_{2}}e^{\int_{s}^{t}\{-\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}-\frac{\lambda(u)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}-2\rho\iota_{r}\mu_{2}\phi(u)+\rho\sigma_{2}^{2}\phi(u)^{2}\}du}

is the unique probabilistic solution of the following PDE

{Ls(YH)2δλLλ(YH)2+ρ>0Lz(YH)2(ϕ(s)z+1)2F2(dz)+{(μ0r)2σ02λκr2μ12σ122ριrμ2ϕ(s)+ρσ22ϕ(s)2}L(YH)2(s,λ)=0,L(YH)2(t,λ)=1,\displaystyle\begin{cases}L^{(YH)^{2}}_{s}-\delta\lambda L^{(YH)^{2}}_{\lambda}+\rho\int_{\mathbb{R}_{>0}}L^{(YH)^{2}}_{z}(\phi(s)z+1)^{2}F_{2}(dz)\\ +\biggl{\{}-\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}-\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}-2\rho\iota_{r}\mu_{2}\phi(s)+\rho\sigma_{2}^{2}\phi(s)^{2}\biggr{\}}L^{(YH)^{2}}(s,\lambda)=0,\\ L^{(YH)^{2}}(t,\lambda)=1,\end{cases}

in other words,

L(YH)2(s,λ)=eψ1(s,t)λ+ψ3(s,t)eη(s)λζ(s)=12θeψ1(s,t)λ+ψ3(s,t)H(s,λ)1,\displaystyle L^{(YH)^{2}}(s,\lambda)=e^{\psi_{1}(s,t)\lambda+\psi_{3}(s,t)}e^{-\eta(s)\lambda-\zeta(s)}=\frac{1}{2\theta}e^{\psi_{1}(s,t)\lambda+\psi_{3}(s,t)}H(s,\lambda)^{-1},

where

ψ3(s,t)=ζ(t)+ρst>0eη(u)z(eψ1(u,t)z1)(ϕ(u)z+1)2F2(dz)𝑑u.\displaystyle\psi_{3}(s,t)=\zeta(t)+\rho\int_{s}^{t}\int_{\mathbb{R}_{>0}}e^{-\eta(u)z}(e^{\psi_{1}(u,t)z}-1)(\phi(u)z+1)^{2}F_{2}(dz)du.

Therefore

𝔼s,x,y,λP[Y(t)2H(t,λ(t))2]=\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)^{2}H(t,\lambda(t))^{2}]= y2H(s,λ)2𝔼s,x,y,λPU2(t)\displaystyle y^{2}H(s,\lambda)^{2}\mathbb{E}_{s,x,y,\lambda}^{P}U_{2}(t)
=\displaystyle= y2H(s,λ)2𝔼s,x,y,λP[eA2(t)eA2(t)U2(t)]\displaystyle y^{2}H(s,\lambda)^{2}\mathbb{E}_{s,x,y,\lambda}^{P}[e^{A_{2}(t)}e^{-A_{2}(t)}U_{2}(t)]
=\displaystyle= y2H(s,λ)2L(YH)2(s,λ)\displaystyle y^{2}H(s,\lambda)^{2}L^{(YH)^{2}}(s,\lambda)
=\displaystyle= y22θeψ1(s,t)λ+ψ3(s,t)H(s,λ).\displaystyle\frac{y^{2}}{2\theta}e^{\psi_{1}(s,t)\lambda+\psi_{3}(s,t)}H(s,\lambda).

4) 𝔼s,x,y,λP[Y(t)H(t,λ(t))λ(t)]\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))\lambda(t)]: It can also be obtained by the Feynman-Kac theorem that

LλYH(s,λ):=𝔼s,x,y,λP1[λ(t)est{(μ0r)2σ02+λ(u)κr2μ12σ12+ριrμ2ϕ(u)}𝑑u]\displaystyle L^{\lambda YH}(s,\lambda):=\mathbb{E}_{s,x,y,\lambda}^{P_{1}}[\lambda(t)e^{-\int_{s}^{t}\{\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\lambda(u)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\rho\iota_{r}\mu_{2}\phi(u)\}du}] (50)

is the unique probabilistic solution of the following PDE

{LsλYHδλLλλYH+ρ>0LzλYH(ϕ(s)z+1)F2(dz){(μ0r)2σ02+λκr2μ12σ12+ριrμ2ϕ(s)}LλYH(s,λ)=0LλYH(t,λ)=λ.\displaystyle\begin{cases}L^{\lambda YH}_{s}-\delta\lambda L^{\lambda YH}_{\lambda}+\rho\int_{\mathbb{R}_{>0}}L^{\lambda YH}_{z}(\phi(s)z+1)F_{2}(dz)-\biggl{\{}\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\lambda\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}+\rho\iota_{r}\mu_{2}\phi(s)\biggr{\}}L^{\lambda YH}(s,\lambda)=0\\ L^{\lambda YH}(t,\lambda)=\lambda.\end{cases} (51)

To solve the PDE, we suppose that LλYH(s,λ)L^{\lambda YH}(s,\lambda) takes the form LλYH(s,λ):=LYH(s,λ)L~(s,λ)L^{\lambda YH}(s,\lambda):=L^{YH}(s,\lambda)\widetilde{L}(s,\lambda). Recall that LYH(s,λ)L^{YH}(s,\lambda) satisfies (48) and (49). By substituting LYH(s,λ)L~(s,λ)L^{YH}(s,\lambda)\widetilde{L}(s,\lambda) into (51), we see that L~(s,λ)\widetilde{L}(s,\lambda) satisfies

{L~sδλL~λ+ρ>0L~zeη(s)z(ϕ(s)z+1)F2(dz)=0,L~(t,λ)=λ,\displaystyle\begin{cases}\widetilde{L}_{s}-\delta\lambda\widetilde{L}_{\lambda}+\rho\int_{\mathbb{R}_{>0}}\widetilde{L}_{z}e^{-\eta(s)z}(\phi(s)z+1)F_{2}(dz)=0,\\ \widetilde{L}(t,\lambda)=\lambda,\end{cases}

which gives

L~(s,λ)=eδ(ts)λ+ρ(ιr+1)μ2δ(1eδ(ts)).\displaystyle\widetilde{L}(s,\lambda)=e^{-\delta(t-s)}\lambda+\frac{\rho(\iota_{r}+1)\mu_{2}}{\delta}(1-e^{-\delta(t-s)}).

Hence,

LλYH(s,λ)=\displaystyle L^{\lambda YH}(s,\lambda)= (eδ(ts)λ+ρ(ιr+1)μ2δ(1eδ(ts)))eψ1(s,t)λ+ψ2(s,t)eη(s)λζ(s)\displaystyle\bigg{(}e^{-\delta(t-s)}\lambda+\frac{\rho(\iota_{r}+1)\mu_{2}}{\delta}(1-e^{-\delta(t-s)})\bigg{)}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}e^{-\eta(s)\lambda-\zeta(s)}
=\displaystyle= 12θ(eδ(ts)λ+ρ(ιr+1)μ2δ(1eδ(ts)))eψ1(s,t)λ+ψ2(s,t)H(s,λ)1.\displaystyle\frac{1}{2\theta}\bigg{(}e^{-\delta(t-s)}\lambda+\frac{\rho(\iota_{r}+1)\mu_{2}}{\delta}(1-e^{-\delta(t-s)})\bigg{)}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}H(s,\lambda)^{-1}.

Therefore

𝔼s,x,y,λP[Y(t)H(t,λ(t))λ(t)]=\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))\lambda(t)]= yH(s,λ)𝔼s,x,y,λP[λ(t)U1(t)]\displaystyle yH(s,\lambda)\mathbb{E}_{s,x,y,\lambda}^{P}[\lambda(t)U_{1}(t)]
=\displaystyle= yH(s,λ)𝔼s,x,y,λP[λ(t)eA1(t)eA1(t)U1(t)]\displaystyle yH(s,\lambda)\mathbb{E}_{s,x,y,\lambda}^{P}[\lambda(t)e^{A_{1}(t)}e^{-A_{1}(t)}U_{1}(t)]
=\displaystyle= yH(s,λ)LλYH(s,λ)\displaystyle yH(s,\lambda)L^{\lambda YH}(s,\lambda)
=\displaystyle= y2θeψ1(s,t)λ+ψ2(s,t)(eδ(ts)λ+ρ(ιr+1)μ2δ(1eδ(ts))).\displaystyle\frac{y}{2\theta}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}\bigg{(}e^{-\delta(t-s)}\lambda+\frac{\rho(\iota_{r}+1)\mu_{2}}{\delta}(1-e^{-\delta(t-s)})\bigg{)}.

As a result, we obtain

𝕍ars,x,y,λP[Y(t)H(t,λ(t))]=y24θ2(2θeψ1(s,t)λ+ψ3(s,t)H(s,λ)e2ψ1(s,t)λ+2ψ2(s,t)),\displaystyle\mathbb{V}ar_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))]=\frac{y^{2}}{4\theta^{2}}\bigg{(}2\theta e^{\psi_{1}(s,t)\lambda+\psi_{3}(s,t)}H(s,\lambda)-e^{2\psi_{1}(s,t)\lambda+2\psi_{2}(s,t)}\bigg{)},
𝕍ars,x,y,λP[I(t,λ(t))]=α(t)2ρσ222δ(1e2δ(ts)),\displaystyle\mathbb{V}ar_{s,x,y,\lambda}^{P}[I(t,\lambda(t))]=\alpha(t)^{2}\frac{\rho\sigma_{2}^{2}}{2\delta}(1-e^{-2\delta(t-s)}),
Covs,x,y,λP[Y(t)H(t,λ(t)),I(t,λ(t))]=α(t)y2θeψ1(s,t)λ+ψ2(s,t)ριrμ2δ(1eδ(ts)).\displaystyle Cov_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t)),I(t,\lambda(t))]=\alpha(t)\frac{y}{2\theta}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}\frac{\rho\iota_{r}\mu_{2}}{\delta}(1-e^{-\delta(t-s)}).

Therefore

𝕍ars,x,y,λP[X(t)G(t)]=\displaystyle\mathbb{V}ar_{s,x,y,\lambda}^{P}[X^{*}(t)G(t)]= y2θ2(2θeψ1(s,t)λ+ψ3(s,t)H(s,λ)e2ψ1(s,t)λ+2ψ2(s,t))+α(t)2ρσ222δ(1e2δ(ts))\displaystyle\frac{y^{2}}{\theta^{2}}\bigg{(}2\theta e^{\psi_{1}(s,t)\lambda+\psi_{3}(s,t)}H(s,\lambda)-e^{2\psi_{1}(s,t)\lambda+2\psi_{2}(s,t)}\bigg{)}+\alpha(t)^{2}\frac{\rho\sigma_{2}^{2}}{2\delta}(1-e^{-2\delta(t-s)})
+2α(t)yθeψ1(s,t)λ+ψ2(s,t)ριrμ2δ(1eδ(ts)).\displaystyle+2\alpha(t)\frac{y}{\theta}e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}\frac{\rho\iota_{r}\mu_{2}}{\delta}(1-e^{-\delta(t-s)}). (52)

Moreover, by Theorem 3.1 and Lemma 3.2, we have

𝔼s,x,y,λP[X(t)G(t)]𝔼s,x,y,λQ[X(t)G(t)]\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P}[X^{*}(t)G(t)]-\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}[X^{*}(t)G(t)]
=\displaystyle= 2𝔼s,x,y,λQ[Y(t)H(t,λ(t))]2𝔼s,x,y,λP[Y(t)H(t,λ(t))]+α(t)𝔼s,x,y,λQλ(t)α(t)𝔼s,x,y,λPλ(t).\displaystyle 2\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}[Y^{*}(t)H(t,\lambda(t))]-2\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))]+\alpha(t)\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}\lambda(t)-\alpha(t)\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t).

Since 𝔼s,x,y,λP[Y(t)H(t,λ(t))]\mathbb{E}_{s,x,y,\lambda}^{P}[Y^{*}(t)H(t,\lambda(t))] and 𝔼s,x,y,λPλ(t)\mathbb{E}_{s,x,y,\lambda}^{P}\lambda(t) are derived in the first part of the proof, we only need to compute the values of 𝔼s,x,y,λQ[Y(t)H(t,λ(t))]\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}[Y^{*}(t)H(t,\lambda(t))] and 𝔼s,x,y,λQλ(t)\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}\lambda(t).

Denote by

dW0Q(t):=dW0(t)+μ0rσ0dt,\displaystyle dW_{0}^{Q^{*}}(t):=dW_{0}(t)+\frac{\mu_{0}-r}{\sigma_{0}}dt,
N~1Q(dt,dz):=N~1(dt,dz)λ(t)μ1κrzσ12F1(dz)dt,\displaystyle\widetilde{N}_{1}^{Q^{*}}(dt,dz):=\widetilde{N}_{1}(dt,dz)-\frac{\lambda(t)\mu_{1}\kappa_{r}z}{\sigma_{1}^{2}}F_{1}(dz)dt,
N~2Q(dt,dz):=N~2(dt,dz)ρ(eη(t)z+ϕ(t)eη(t)zz1)F2(dz)dt.\displaystyle\widetilde{N}_{2}^{Q^{*}}(dt,dz):=\widetilde{N}_{2}(dt,dz)-\rho\bigg{(}e^{-\eta(t)z}+\phi(t)e^{-\eta(t)z}z-1\bigg{)}F_{2}(dz)dt.

It follows from Theorem 10 and Theorem 11 in [26] that W0QW_{0}^{Q^{*}} is a standard Brownian motion, and that N~1Q\widetilde{N}_{1}^{Q^{*}} and N~2Q\widetilde{N}_{2}^{Q^{*}} are the compensated compound Poisson random measures associated with N1N_{1} and N2N_{2} under the probability measure QQ^{*}. Thus,

dY(t)H(t,λ(t))=\displaystyle dY^{*}(t)H(t,\lambda(t))= Y(t)H(t,λ(t)){μ0rσ0dW0Q(t)+>0κrμ1zσ12N~1Q(dt,dz)+>0ϕ(t)zN~2Q(dt,dz)},\displaystyle Y^{*}(t)H(t,\lambda(t))\biggl{\{}-\frac{\mu_{0}-r}{\sigma_{0}}dW_{0}^{Q^{*}}(t)+\int_{\mathbb{R}_{>0}}\frac{\kappa_{r}\mu_{1}z}{\sigma_{1}^{2}}\widetilde{N}_{1}^{Q^{*}}(dt,dz)+\int_{\mathbb{R}_{>0}}\phi(t)z\widetilde{N}_{2}^{Q^{*}}(dt,dz)\biggr{\}},

and

dλ(t)=(δλ(t)+ρ(ιr+1)μ2)dt+>0zN~2Q(dt,dz).\displaystyle d\lambda(t)=(-\delta\lambda(t)+\rho(\iota_{r}+1)\mu_{2})dt+\int_{\mathbb{R}_{>0}}z\widetilde{N}_{2}^{Q^{*}}(dt,dz).

Therefore, Y(t)H(t,λ(t))Y^{*}(t)H(t,\lambda(t)) is a martingale under QQ^{*} such that

𝔼s,x,y,λQ[Y(t)H(t,λ(t))]=yH(s,λ).\displaystyle\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}[Y^{*}(t)H(t,\lambda(t))]=yH(s,\lambda).

By the same method in 1), we also have

𝔼s,x,y,λQλ(t)=eδ(ts)λ+ρ(ιr+1)μ2δ(1eδ(ts)).\displaystyle\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}\lambda(t)=e^{-\delta(t-s)}\lambda+\frac{\rho(\iota_{r}+1)\mu_{2}}{\delta}(1-e^{-\delta(t-s)}).
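This closed form can be sanity-checked in the same way as the P-measure mean in 1): under Q* the drift of λ in the display above is −δλ + ρ(ι_r + 1)μ₂, so E^{Q*}[λ(t)] solves the linear ODE m' = −δm + ρ(ι_r + 1)μ₂, which forward Euler reproduces (purely illustrative parameter values):

```python
import math

# Purely illustrative (hypothetical) parameter values
delta, rho, mu2, iota_r = 0.5, 2.0, 0.8, 0.25
lam0, s, t = 3.0, 0.0, 4.0
tau = t - s

level = rho * (iota_r + 1) * mu2 / delta  # long-run reversion level under Q*
m_cf = math.exp(-delta * tau) * lam0 + level * (1 - math.exp(-delta * tau))

# Forward Euler on dE[lambda]/dt = -delta*E[lambda] + rho*(iota_r + 1)*mu2
n = 400000
h = tau / n
m = lam0
for _ in range(n):
    m += h * (-delta * m + rho * (iota_r + 1) * mu2)
print(abs(m - m_cf))
```

The reinsurance safety loading ι_r tilts the reversion level upward relative to the physical-measure level ρμ₂/δ.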

As a result,

𝔼s,x,y,λP[X(t)G(t)]𝔼s,x,y,λQ[X(t)G(t)]=\displaystyle\mathbb{E}_{s,x,y,\lambda}^{P}[X^{*}(t)G(t)]-\mathbb{E}_{s,x,y,\lambda}^{Q^{*}}[X^{*}(t)G(t)]= yθ(eη(s)λ+ζ(s)eψ1(s,t)λ+ψ2(s,t))+α(t)ριrμ2δ(1eδ(ts)).\displaystyle\frac{y}{\theta}\bigg{(}e^{\eta(s)\lambda+\zeta(s)}-e^{\psi_{1}(s,t)\lambda+\psi_{2}(s,t)}\bigg{)}+\alpha(t)\frac{\rho\iota_{r}\mu_{2}}{\delta}(1-e^{-\delta(t-s)}). (53)

To complete the proof, we only need to substitute (53) into (52) and eliminate θ\theta.

Appendix B Proofs of Statements in Section 4

B.1 Proof of Proposition 4.1

Proof.

We suppose that the solution to (26) takes the following form

W(t,x,y,λ)=G(t)xy+H(t,λ)y2+I(t,λ)y.\displaystyle W(t,x,y,\lambda)=G(t)xy+H(t,\lambda)y^{2}+I(t,\lambda)y. (54)

Substituting (54) into (27) gives

ta,bW(t,x,y,λ)=\displaystyle\mathcal{L}_{t}^{a,b}W(t,x,y,\lambda)= Gtxy+Hty2+Ity+{rx+(κr+κ)μ1λ+(ιr+ι)kμ2ρ}G(t)y\displaystyle G_{t}xy+H_{t}y^{2}+I_{t}y+\biggl{\{}rx+(-\kappa_{r}+\kappa)\mu_{1}\lambda+(-\iota_{r}+\iota)k\mu_{2}\rho\biggr{\}}G(t)y
+(δλ+ρμ2)(Hλy2+Iλy)+12ρσ22(Hλλy2+Iλλy)\displaystyle+(-\delta\lambda+\rho\mu_{2})(H_{\lambda}y^{2}+I_{\lambda}y)+\frac{1}{2}\rho\sigma_{2}^{2}(H_{\lambda\lambda}y^{2}+I_{\lambda\lambda}y)
+π(μ0r)G(t)y+πσ0yoG(t)+y2o2H(t,λ)\displaystyle+\pi(\mu_{0}-r)G(t)y+\pi\sigma_{0}yoG(t)+y^{2}o^{2}H(t,\lambda)
+κruμ1λG(t)yuσ1ρμ2δypG(t)+y2p2H(t,λ)\displaystyle+\kappa_{r}u\mu_{1}\lambda G(t)y-u\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}ypG(t)+y^{2}p^{2}H(t,\lambda)
+ιrkvμ2ρG(t)y+σ2ρyq(2Hλy+IλG(t)kv)+y2q2H(t,λ).\displaystyle+\iota_{r}kv\mu_{2}\rho G(t)y+\sigma_{2}\sqrt{\rho}yq(2H_{\lambda}y+I_{\lambda}-G(t)kv)+y^{2}q^{2}H(t,\lambda). (55)

If H(t,λ)>0H(t,\lambda)>0 holds, we differentiate (55) with respect to oo, pp and qq and set each derivative equal to zero; the minimum of (55) is then attained at

o=\displaystyle o^{*}= G(t)πσ02H(t,λ)y,\displaystyle-\frac{G(t)\pi\sigma_{0}}{2H(t,\lambda)y},
p=\displaystyle p^{*}= G(t)uσ1ρμ2δ2H(t,λ)y,\displaystyle\frac{G(t)u\sigma_{1}\sqrt{\frac{\rho\mu_{2}}{\delta}}}{2H(t,\lambda)y},
q=\displaystyle q^{*}= σ2ρ(2Hλy+IλG(t)kv)2H(t,λ)y.\displaystyle-\frac{\sigma_{2}\sqrt{\rho}(2H_{\lambda}y+I_{\lambda}-G(t)kv)}{2H(t,\lambda)y}.

Plugging oo^{*}, pp^{*} and qq^{*} back into (55) gives

ta,bW(t,x,y,λ)=\displaystyle\mathcal{L}_{t}^{a,b}W(t,x,y,\lambda)= Gtxy+Hty2+Ity+{rx+(κr+κ)μ1λ+(ιr+ι)kμ2ρ}G(t)y\displaystyle G_{t}xy+H_{t}y^{2}+I_{t}y+\biggl{\{}rx+(-\kappa_{r}+\kappa)\mu_{1}\lambda+(-\iota_{r}+\iota)k\mu_{2}\rho\biggr{\}}G(t)y
+(δλ+ρμ2)(Hλy2+Iλy)+12ρσ22(Hλλy2+Iλλy)\displaystyle+(-\delta\lambda+\rho\mu_{2})(H_{\lambda}y^{2}+I_{\lambda}y)+\frac{1}{2}\rho\sigma_{2}^{2}(H_{\lambda\lambda}y^{2}+I_{\lambda\lambda}y)
+π(μ0r)G(t)yG(t)2π2σ024H(t,λ)\displaystyle+\pi(\mu_{0}-r)G(t)y-\frac{G(t)^{2}\pi^{2}\sigma_{0}^{2}}{4H(t,\lambda)}
+κruμ1λG(t)yG(t)2u2σ12ρμ2δ4H(t,λ)\displaystyle+\kappa_{r}u\mu_{1}\lambda G(t)y-\frac{G(t)^{2}u^{2}\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}}{4H(t,\lambda)}
+ιrkvμ2ρG(t)yσ22ρ(2Hλy+IλG(t)kv)24H(t,λ).\displaystyle+\iota_{r}kv\mu_{2}\rho G(t)y-\frac{\sigma_{2}^{2}\rho(2H_{\lambda}y+I_{\lambda}-G(t)kv)^{2}}{4H(t,\lambda)}. (56)

If both G(t)>0G(t)>0 and H(t,λ)>0H(t,\lambda)>0 hold, differentiating (56) with respect to π\pi, uu and vv and setting each derivative equal to zero, we obtain

π=\displaystyle\pi^{*}= 2H(t,λ)(μ0r)yG(t)σ02,\displaystyle\frac{2H(t,\lambda)(\mu_{0}-r)y}{G(t)\sigma_{0}^{2}},
u=\displaystyle u^{*}= 2H(t,λ)κrμ1λyG(t)σ12ρμ2δ,\displaystyle\frac{2H(t,\lambda)\kappa_{r}\mu_{1}\lambda y}{G(t)\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}},
v=\displaystyle v^{*}= 1k(2H(t,λ)ιrμ2yG(t)σ22+2HλyG(t)+IλG(t)),\displaystyle\frac{1}{k}\bigg{(}\frac{2H(t,\lambda)\iota_{r}\mu_{2}y}{G(t)\sigma_{2}^{2}}+\frac{2H_{\lambda}y}{G(t)}+\frac{I_{\lambda}}{G(t)}\bigg{)},

and the maximum of (56) is attained at (π,u,v)(\pi^{*},u^{*},v^{*}).

Let us substitute π\pi^{*}, uu^{*} and vv^{*} back into (56),

ta,bW(t,x,y,λ)=\displaystyle\mathcal{L}_{t}^{a,b}W(t,x,y,\lambda)= Gtxy+Hty2+Ity+{rx+(κr+κ)μ1λ+(ιr+ι)kμ2ρ}G(t)y\displaystyle G_{t}xy+H_{t}y^{2}+I_{t}y+\biggl{\{}rx+(-\kappa_{r}+\kappa)\mu_{1}\lambda+(-\iota_{r}+\iota)k\mu_{2}\rho\biggr{\}}G(t)y
+(δλ+ρμ2)(Hλy2+Iλy)+12ρσ22(Hλλy2+Iλλy)\displaystyle+(-\delta\lambda+\rho\mu_{2})(H_{\lambda}y^{2}+I_{\lambda}y)+\frac{1}{2}\rho\sigma_{2}^{2}(H_{\lambda\lambda}y^{2}+I_{\lambda\lambda}y)
+H(t,λ)(μ0r)2y2σ02+H(t,λ)κr2μ12λ2y2σ12ρμ2δ+H(t,λ)ριr2μ22y2σ22\displaystyle+\frac{H(t,\lambda)(\mu_{0}-r)^{2}y^{2}}{\sigma_{0}^{2}}+\frac{H(t,\lambda)\kappa_{r}^{2}\mu_{1}^{2}\lambda^{2}y^{2}}{\sigma_{1}^{2}\frac{\rho\mu_{2}}{\delta}}+\frac{H(t,\lambda)\rho\iota_{r}^{2}\mu_{2}^{2}y^{2}}{\sigma_{2}^{2}}
+2Hλιrμ2ρy2+Iλιrμ2ρy,\displaystyle+2H_{\lambda}\iota_{r}\mu_{2}\rho y^{2}+I_{\lambda}\iota_{r}\mu_{2}\rho y,

then (29) is obtained by separation of variables. ∎

B.2 Proof of Lemma 4.2

Proof.

It is easy to obtain that

G(t)=er(Tt).\displaystyle G(t)=e^{r(T-t)}. (57)

For H(t,λ)H(t,\lambda), we suppose that it has the following form

H(t,λ)=12θeξ(t)λ2+η(t)λ+ζ(t).\displaystyle H(t,\lambda)=\frac{1}{2\theta}e^{\xi(t)\lambda^{2}+\eta(t)\lambda+\zeta(t)}. (58)

Substituting (58) into (29) gives

{ξ(t)+2ρσ22ξ(t)22δξ(t)+κr2μ12δρμ2σ12=0,η(t)(δ2ρσ22ξ(t))η(t)+2ρμ2(2ιr+1)ξ(t)=0,ζ(t)+ρμ2(2ιr+1)η(t)+12ρσ22η(t)2+ρσ22ξ(t)+(μ0r)2σ02+ριr2μ22σ22=0.\displaystyle\begin{cases}\xi^{\prime}(t)+2\rho\sigma_{2}^{2}\xi(t)^{2}-2\delta\xi(t)+\frac{\kappa_{r}^{2}\mu_{1}^{2}\delta}{\rho\mu_{2}\sigma_{1}^{2}}=0,\\ \eta^{\prime}(t)-(\delta-2\rho\sigma_{2}^{2}\xi(t))\eta(t)+2\rho\mu_{2}(2\iota_{r}+1)\xi(t)=0,\\ \zeta^{\prime}(t)+\rho\mu_{2}(2\iota_{r}+1)\eta(t)+\frac{1}{2}\rho\sigma_{2}^{2}\eta(t)^{2}+\rho\sigma_{2}^{2}\xi(t)+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}+\frac{\rho\iota_{r}^{2}\mu_{2}^{2}}{\sigma_{2}^{2}}=0.\end{cases}

For ξ(t)\xi(t), we have

(Tt)=\displaystyle(T-t)= tTdξ(s)2ρσ22ξ(s)22δξ(s)+κr2μ12δρμ2σ12.\displaystyle-\int_{t}^{T}\frac{d\xi(s)}{2\rho\sigma_{2}^{2}\xi(s)^{2}-2\delta\xi(s)+\frac{\kappa_{r}^{2}\mu_{1}^{2}\delta}{\rho\mu_{2}\sigma_{1}^{2}}}.

The characteristic equation is given by

2ρσ22d22δd+κr2μ12δρμ2σ12=0,\displaystyle 2\rho\sigma_{2}^{2}d^{2}-2\delta d+\frac{\kappa_{r}^{2}\mu_{1}^{2}\delta}{\rho\mu_{2}\sigma_{1}^{2}}=0,

which has two solutions, namely,

d1,2=2δ±Δ4ρσ22,\displaystyle d_{1,2}=\frac{2\delta\pm\sqrt{\Delta}}{4\rho\sigma_{2}^{2}},

where

Δ:=\displaystyle\Delta:= 4δ28κr2μ12σ22δμ2σ12.\displaystyle 4\delta^{2}-\frac{8\kappa_{r}^{2}\mu_{1}^{2}\sigma_{2}^{2}\delta}{\mu_{2}\sigma_{1}^{2}}.

If Δ=0\Delta=0, then d1=d2=δ2ρσ22d_{1}=d_{2}=\frac{\delta}{2\rho\sigma_{2}^{2}}, and

(Tt)=\displaystyle(T-t)= tTdξ(s)2ρσ22ξ(s)22δξ(s)+κr2μ12δρμ2σ12\displaystyle-\int_{t}^{T}\frac{d\xi(s)}{2\rho\sigma_{2}^{2}\xi(s)^{2}-2\delta\xi(s)+\frac{\kappa_{r}^{2}\mu_{1}^{2}\delta}{\rho\mu_{2}\sigma_{1}^{2}}}
=\displaystyle= 12ρσ22tT1(ξ(s)d1)2𝑑ξ(s)\displaystyle-\frac{1}{2\rho\sigma_{2}^{2}}\int_{t}^{T}\frac{1}{(\xi(s)-d_{1})^{2}}d\xi(s)
=\displaystyle= 12ρσ22(1d1+1ξ(t)d1),\displaystyle-\frac{1}{2\rho\sigma_{2}^{2}}\bigg{(}\frac{1}{d_{1}}+\frac{1}{\xi(t)-d_{1}}\bigg{)},

thus

ξ(t)=\displaystyle\xi(t)= κr2μ12σ12Ttρμ2(Tt)+ρμ2δ.\displaystyle\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{T-t}{\rho\mu_{2}(T-t)+\frac{\rho\mu_{2}}{\delta}}.
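As a quick numerical sanity check (not part of the proof), this closed form can be tested against the Riccati equation for ξ(t)\xi(t) by finite differences. All parameter values below are illustrative assumptions, with δ\delta set so that Δ=0\Delta=0:

```python
# Illustrative parameters; delta is chosen so that Delta = 0.
kappa_r, mu1, sigma1, sigma2, mu2, rho, T = 0.3, 1.2, 0.8, 0.5, 1.5, 0.9, 2.0
delta = 2 * kappa_r**2 * mu1**2 * sigma2**2 / (mu2 * sigma1**2)   # forces Delta = 0
c = kappa_r**2 * mu1**2 * delta / (rho * mu2 * sigma1**2)         # constant term of the Riccati ODE

def xi(t):
    """Closed form of xi(t) in the Delta = 0 case."""
    tau = T - t
    return (kappa_r**2 * mu1**2 / sigma1**2) * tau / (rho * mu2 * tau + rho * mu2 / delta)

# Residual of xi' + 2*rho*sigma2^2*xi^2 - 2*delta*xi + c at several times.
h = 1e-6
for t in (0.0, 0.7, 1.3, 1.9):
    xi_prime = (xi(t + h) - xi(t - h)) / (2 * h)                  # central difference
    res = xi_prime + 2 * rho * sigma2**2 * xi(t)**2 - 2 * delta * xi(t) + c
    assert abs(res) < 1e-8, res
assert xi(T) == 0.0                                               # terminal condition
print("Delta = 0: Riccati residual ~ 0, xi(T) = 0")
```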

If Δ>0\Delta>0,

Tt=\displaystyle T-t= tTdξ(s)2ρσ22ξ(s)22δξ(s)+κr2μ12δρμ2σ12\displaystyle-\int_{t}^{T}\frac{d\xi(s)}{2\rho\sigma_{2}^{2}\xi(s)^{2}-2\delta\xi(s)+\frac{\kappa_{r}^{2}\mu_{1}^{2}\delta}{\rho\mu_{2}\sigma_{1}^{2}}}
=\displaystyle= 12ρσ22(d1d2)tT1ξ(s)d11ξ(s)d2dξ(s)\displaystyle-\frac{1}{2\rho\sigma_{2}^{2}(d_{1}-d_{2})}\int_{t}^{T}\frac{1}{\xi(s)-d_{1}}-\frac{1}{\xi(s)-d_{2}}d\xi(s)
=\displaystyle= 1Δln|ξ(s)d1ξ(s)d2||tT.\displaystyle-\frac{1}{\sqrt{\Delta}}\ln\left|\frac{\xi(s)-d_{1}}{\xi(s)-d_{2}}\right||_{t}^{T}.

Since ξ(t)\xi(t) is a continuous function with boundary condition ξ(T)=0<d2<d1\xi(T)=0<d_{2}<d_{1}, we assert that ξ(t)<d2\xi(t)<d_{2} for all t[0,T]t\in[0,T]. Indeed, if {t:ξ(t)d2}[0,T]\{t:\xi(t)\geq d_{2}\}\cap[0,T]\neq\varnothing, then, by continuity, there exists a t0[0,T]t_{0}\in[0,T] such that ξ(t0)=d2\xi(t_{0})=d_{2}. In this case,

Δ(Tt0)=lnd1ξ(t0)d2ξ(t0)lnd1d2=+,\displaystyle\sqrt{\Delta}(T-t_{0})=\ln\frac{d_{1}-\xi(t_{0})}{d_{2}-\xi(t_{0})}-\ln\frac{d_{1}}{d_{2}}=+\infty,

which is a contradiction. Hence

Δ(Tt)=\displaystyle\sqrt{\Delta}(T-t)= lnd1ξ(t)d2ξ(t)lnd1d2,\displaystyle\ln\frac{d_{1}-\xi(t)}{d_{2}-\xi(t)}-\ln\frac{d_{1}}{d_{2}},

which gives

ξ(t)=d1d2(eΔ(Tt)1)d1eΔ(Tt)d2.\displaystyle\xi(t)=\frac{d_{1}d_{2}(e^{\sqrt{\Delta}(T-t)}-1)}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}.

Moreover, since d1>d2>0d_{1}>d_{2}>0, we have ξ(t)>0\xi(t)>0 for all t[0,T)t\in[0,T).
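The closed form for the case Δ>0\Delta>0 admits the same finite-difference sanity check; the parameter values below are again illustrative assumptions, now chosen so that Δ>0\Delta>0, and the check also confirms that ξ(t)\xi(t) stays in [0,d2)[0,d_{2}):

```python
import math

# Illustrative parameters chosen so that Delta > 0.
kappa_r, mu1, sigma1, sigma2, mu2, rho, delta, T = 0.3, 1.2, 0.8, 0.5, 1.5, 0.9, 1.0, 2.0
Delta = 4 * delta**2 - 8 * kappa_r**2 * mu1**2 * sigma2**2 * delta / (mu2 * sigma1**2)
assert Delta > 0
sq = math.sqrt(Delta)
d1 = (2 * delta + sq) / (4 * rho * sigma2**2)
d2 = (2 * delta - sq) / (4 * rho * sigma2**2)
c = kappa_r**2 * mu1**2 * delta / (rho * mu2 * sigma1**2)  # constant term of the Riccati ODE

def xi(t):
    """Closed form of xi(t) in the Delta > 0 case."""
    E = math.exp(sq * (T - t))
    return d1 * d2 * (E - 1) / (d1 * E - d2)

h = 1e-6
for t in (0.0, 0.5, 1.5, 1.99):
    xi_prime = (xi(t + h) - xi(t - h)) / (2 * h)           # central difference
    res = xi_prime + 2 * rho * sigma2**2 * xi(t)**2 - 2 * delta * xi(t) + c
    assert abs(res) < 1e-6, res
    assert 0 <= xi(t) < d2 < d1                            # xi stays below the smaller root
assert xi(T) == 0.0
print("Delta > 0: Riccati residual ~ 0 and 0 <= xi < d2")
```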

For η(t)\eta(t), by integration, we have

η(t)=\displaystyle\eta(t)= tTets(δ2ρσ22ξ(τ))𝑑τ2ρμ2(2ιr+1)ξ(s)𝑑s.\displaystyle\int_{t}^{T}e^{-\int_{t}^{s}\left(\delta-2\rho\sigma_{2}^{2}\xi(\tau)\right)d\tau}2\rho\mu_{2}(2\iota_{r}+1)\xi(s)ds.

If Δ=0\Delta=0, we have

δ=2κr2μ12σ22μ2σ12.\displaystyle\delta=\frac{2\kappa_{r}^{2}\mu_{1}^{2}\sigma_{2}^{2}}{\mu_{2}\sigma_{1}^{2}}.

Hence

2ρσ22tsξ(τ)𝑑τ=\displaystyle 2\rho\sigma_{2}^{2}\int_{t}^{s}\xi(\tau)d\tau= 2ρσ22κr2μ12σ12tsTτρμ2(Tτ)+ρμ2δ𝑑τ\displaystyle\frac{2\rho\sigma_{2}^{2}\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\int_{t}^{s}\frac{T-\tau}{\rho\mu_{2}(T-\tau)+\frac{\rho\mu_{2}}{\delta}}d\tau
=\displaystyle= 2σ22κr2μ12σ12μ2(st+1δln(δ(Ts)+1δ(Tt)+1))\displaystyle\frac{2\sigma_{2}^{2}\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}\mu_{2}}\left(s-t+\frac{1}{\delta}\ln\left(\frac{\delta(T-s)+1}{\delta(T-t)+1}\right)\right)
=\displaystyle= δ(st)+ln(δ(Ts)+1δ(Tt)+1),\displaystyle\delta(s-t)+\ln\left(\frac{\delta(T-s)+1}{\delta(T-t)+1}\right),

and

ets(δ2ρσ22ξ(τ))𝑑τ=\displaystyle e^{-\int_{t}^{s}\left(\delta-2\rho\sigma_{2}^{2}\xi(\tau)\right)d\tau}= δ(Ts)+1δ(Tt)+1.\displaystyle\frac{\delta(T-s)+1}{\delta(T-t)+1}.

Therefore,

η(t)=\displaystyle\eta(t)= tTets(δ2ρσ22ξ(τ))𝑑τ2ρμ2(2ιr+1)ξ(s)𝑑s\displaystyle\int_{t}^{T}e^{-\int_{t}^{s}\left(\delta-2\rho\sigma_{2}^{2}\xi(\tau)\right)d\tau}2\rho\mu_{2}(2\iota_{r}+1)\xi(s)ds
=\displaystyle= 2ρμ2(2ιr+1)tTδ(Ts)+1δ(Tt)+1κr2μ12σ12Tsρμ2(Ts)+ρμ2δ𝑑s\displaystyle 2\rho\mu_{2}(2\iota_{r}+1)\int_{t}^{T}\frac{\delta(T-s)+1}{\delta(T-t)+1}\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{T-s}{\rho\mu_{2}(T-s)+\frac{\rho\mu_{2}}{\delta}}ds
=\displaystyle= 2(2ιr+1)κr2μ12σ12tTδ(Ts)+1δ(Tt)+1δ(Ts)δ(Ts)+1𝑑s\displaystyle\frac{2(2\iota_{r}+1)\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\int_{t}^{T}\frac{\delta(T-s)+1}{\delta(T-t)+1}\frac{\delta(T-s)}{\delta(T-s)+1}ds
=\displaystyle= (2ιr+1)δκr2μ12σ12(Tt)2δ(Tt)+1.\displaystyle\frac{(2\iota_{r}+1)\delta\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{(T-t)^{2}}{\delta(T-t)+1}.
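A finite-difference check analogous to the one for ξ(t)\xi(t) confirms, under illustrative parameter values forcing Δ=0\Delta=0 (the value of ιr\iota_{r} is likewise an assumption), that this expression solves the linear ODE for η(t)\eta(t) with η(T)=0\eta(T)=0:

```python
# Illustrative parameters; delta is chosen so that Delta = 0, iota_r is assumed.
kappa_r, mu1, sigma1, sigma2, mu2, rho, iota_r, T = 0.3, 1.2, 0.8, 0.5, 1.5, 0.9, 0.2, 2.0
delta = 2 * kappa_r**2 * mu1**2 * sigma2**2 / (mu2 * sigma1**2)

def xi(t):
    """Closed form of xi(t) in the Delta = 0 case."""
    tau = T - t
    return (kappa_r**2 * mu1**2 / sigma1**2) * tau / (rho * mu2 * tau + rho * mu2 / delta)

def eta(t):
    """Closed form of eta(t) in the Delta = 0 case."""
    tau = T - t
    return (2 * iota_r + 1) * delta * kappa_r**2 * mu1**2 / sigma1**2 * tau**2 / (delta * tau + 1)

# Residual of eta' - (delta - 2*rho*sigma2^2*xi)*eta + 2*rho*mu2*(2*iota_r+1)*xi.
h = 1e-6
for t in (0.0, 0.8, 1.6):
    eta_prime = (eta(t + h) - eta(t - h)) / (2 * h)        # central difference
    res = eta_prime - (delta - 2 * rho * sigma2**2 * xi(t)) * eta(t) \
          + 2 * rho * mu2 * (2 * iota_r + 1) * xi(t)
    assert abs(res) < 1e-8, res
assert eta(T) == 0.0                                       # terminal condition
print("Delta = 0: eta ODE residual ~ 0, eta(T) = 0")
```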

If Δ>0\Delta>0, then

2ρσ22tsξ(τ)𝑑τ=\displaystyle 2\rho\sigma_{2}^{2}\int_{t}^{s}\xi(\tau)d\tau= 2ρσ22ts(d1+(d2d1)eΔ(Tτ)eΔ(Tτ)d2d1)𝑑τ\displaystyle 2\rho\sigma_{2}^{2}\int_{t}^{s}\left(d_{1}+(d_{2}-d_{1})\frac{e^{\sqrt{\Delta}(T-\tau)}}{e^{\sqrt{\Delta}(T-\tau)}-\frac{d_{2}}{d_{1}}}\right)d\tau
=\displaystyle= 2ρσ22tsd1𝑑τ2ρσ22ts((d1d2)eΔ(Tτ)eΔ(Tτ)d2d1)𝑑τ\displaystyle 2\rho\sigma_{2}^{2}\int_{t}^{s}d_{1}d\tau-2\rho\sigma_{2}^{2}\int_{t}^{s}\left((d_{1}-d_{2})\frac{e^{\sqrt{\Delta}(T-\tau)}}{e^{\sqrt{\Delta}(T-\tau)}-\frac{d_{2}}{d_{1}}}\right)d\tau
=\displaystyle= 2ρσ22d1(st)+ln(d1eΔ(Ts)d2d1eΔ(Tt)d2),\displaystyle 2\rho\sigma_{2}^{2}d_{1}(s-t)+\ln{\left(\frac{d_{1}e^{\sqrt{\Delta}(T-s)}-d_{2}}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}\right)},

and

ets(δ2ρσ22ξ(τ))𝑑τ=\displaystyle e^{-\int_{t}^{s}\left(\delta-2\rho\sigma_{2}^{2}\xi(\tau)\right)d\tau}= eΔ2(st)(d1eΔ(Ts)d2d1eΔ(Tt)d2).\displaystyle e^{\frac{\sqrt{\Delta}}{2}(s-t)}\left(\frac{d_{1}e^{\sqrt{\Delta}(T-s)}-d_{2}}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}\right).

Thus η(t)\eta(t) can be simplified as

η(t)=\displaystyle\eta(t)= tTets(δ2ρσ22ξ(τ))𝑑τ2ρμ2(2ιr+1)ξ(s)𝑑s\displaystyle\int_{t}^{T}e^{-\int_{t}^{s}\left(\delta-2\rho\sigma_{2}^{2}\xi(\tau)\right)d\tau}2\rho\mu_{2}(2\iota_{r}+1)\xi(s)ds
=\displaystyle= 2ρμ2(2ιr+1)tTeΔ2(st)(d1eΔ(Ts)d2d1eΔ(Tt)d2)(d1d2(eΔ(Ts)1)d1eΔ(Ts)d2)𝑑s\displaystyle 2\rho\mu_{2}(2\iota_{r}+1)\int_{t}^{T}e^{\frac{\sqrt{\Delta}}{2}(s-t)}\left(\frac{d_{1}e^{\sqrt{\Delta}(T-s)}-d_{2}}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}\right)\left(\frac{d_{1}d_{2}(e^{\sqrt{\Delta}(T-s)}-1)}{d_{1}e^{\sqrt{\Delta}(T-s)}-d_{2}}\right)ds
=\displaystyle= 4ρμ2(2ιr+1)d1d2Δ(eΔ2(Tt)1)2d1eΔ(Tt)d2.\displaystyle\frac{4\rho\mu_{2}(2\iota_{r}+1)d_{1}d_{2}}{\sqrt{\Delta}}\frac{\left(e^{\frac{\sqrt{\Delta}}{2}(T-t)}-1\right)^{2}}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}.

By integrating with respect to tt, ζ(t)\zeta(t) can be obtained as follows

ζ(t)=tT(ρμ2(2ιr+1)η(s)+12ρσ22η(s)2+ρσ22ξ(s))𝑑s+(μ0r)2σ02(Tt)+ριr2μ22σ22(Tt).\displaystyle\zeta(t)=\int_{t}^{T}\bigg{(}\rho\mu_{2}(2\iota_{r}+1)\eta(s)+\frac{1}{2}\rho\sigma_{2}^{2}\eta(s)^{2}+\rho\sigma_{2}^{2}\xi(s)\bigg{)}ds+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}(T-t)+\frac{\rho\iota_{r}^{2}\mu_{2}^{2}}{\sigma_{2}^{2}}(T-t).

To sum up, we have

{ξ(t)=κr2μ12σ12Ttρμ2(Tt)+ρμ2δ1{Δ=0}+d1d2(eΔ(Tt)1)d1eΔ(Tt)d21{Δ>0},η(t)=(2ιr+1)δκr2μ12σ12(Tt)2δ(Tt)+11{Δ=0}+4ρμ2(2ιr+1)d1d2Δ(eΔ2(Tt)1)2d1eΔ(Tt)d21{Δ>0},ζ(t)=tT(ρμ2(2ιr+1)η(s)+12ρσ22η(s)2+ρσ22ξ(s))𝑑s+(μ0r)2σ02(Tt)+ριr2μ22σ22(Tt),\displaystyle\begin{cases}\xi(t)=&\frac{\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{T-t}{\rho\mu_{2}(T-t)+\frac{\rho\mu_{2}}{\delta}}1_{\{\Delta=0\}}+\frac{d_{1}d_{2}(e^{\sqrt{\Delta}(T-t)}-1)}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}1_{\{\Delta>0\}},\\ \eta(t)=&\frac{(2\iota_{r}+1)\delta\kappa_{r}^{2}\mu_{1}^{2}}{\sigma_{1}^{2}}\frac{(T-t)^{2}}{\delta(T-t)+1}1_{\{\Delta=0\}}+\frac{4\rho\mu_{2}(2\iota_{r}+1)d_{1}d_{2}}{\sqrt{\Delta}}\frac{\left(e^{\frac{\sqrt{\Delta}}{2}(T-t)}-1\right)^{2}}{d_{1}e^{\sqrt{\Delta}(T-t)}-d_{2}}1_{\{\Delta>0\}},\\ \zeta(t)=&\int_{t}^{T}\bigg{(}\rho\mu_{2}(2\iota_{r}+1)\eta(s)+\frac{1}{2}\rho\sigma_{2}^{2}\eta(s)^{2}+\rho\sigma_{2}^{2}\xi(s)\bigg{)}ds+\frac{(\mu_{0}-r)^{2}}{\sigma_{0}^{2}}(T-t)+\frac{\rho\iota_{r}^{2}\mu_{2}^{2}}{\sigma_{2}^{2}}(T-t),\end{cases}

where

Δ=4δ28κr2μ12σ22δμ2σ12,d1,2=2δ±Δ4ρσ22.\displaystyle\Delta=4\delta^{2}-\frac{8\kappa_{r}^{2}\mu_{1}^{2}\sigma_{2}^{2}\delta}{\mu_{2}\sigma_{1}^{2}},\quad d_{1,2}=\frac{2\delta\pm\sqrt{\Delta}}{4\rho\sigma_{2}^{2}}.

We make the ansatz that I(t,λ)I(t,\lambda) has the following form

I(t,λ)=α(t)λ+β(t).\displaystyle I(t,\lambda)=\alpha(t)\lambda+\beta(t). (59)

Similarly, substituting (57) and (59) into (29) gives

{α(t)δα(t)(κrκ)μ1er(Tt)=0,β(t)(ιrι)kρμ2er(Tt)+(1+ιr)ρμ2α(t)=0,\displaystyle\begin{cases}\alpha^{\prime}(t)-\delta\alpha(t)-(\kappa_{r}-\kappa)\mu_{1}e^{r(T-t)}=0,\\ \beta^{\prime}(t)-(\iota_{r}-\iota)k\rho\mu_{2}e^{r(T-t)}+(1+\iota_{r})\rho\mu_{2}\alpha(t)=0,\end{cases}

which admit the solutions

{α(t)=(κrκ)μ1tTeδ(st)er(Ts)𝑑s,β(t)=ρtT{α(s)(1+ιr)μ2(ιrι)kμ2er(Ts)}𝑑s,\displaystyle\begin{cases}\alpha(t)=-(\kappa_{r}-\kappa)\mu_{1}\int_{t}^{T}e^{-\delta(s-t)}e^{r(T-s)}ds,\\ \beta(t)=\rho\int_{t}^{T}\biggl{\{}\alpha(s)(1+\iota_{r})\mu_{2}-(\iota_{r}-\iota)k\mu_{2}e^{r(T-s)}\biggr{\}}ds,\end{cases}

i.e.,

{α(t)=(κrκ)μ1δ+r(eδ(Tt)er(Tt)),β(t)=ρ(κrκ)(ιr+1)μ1μ2δ+r(1δ(1eδ(Tt))+1r(1er(Tt)))+ρ(ιrι)kμ2r(1er(Tt)).\displaystyle\begin{cases}\alpha(t)=\frac{(\kappa_{r}-\kappa)\mu_{1}}{\delta+r}(e^{-\delta(T-t)}-e^{r(T-t)}),\\ \beta(t)=\frac{\rho(\kappa_{r}-\kappa)(\iota_{r}+1)\mu_{1}\mu_{2}}{\delta+r}\bigg{(}\frac{1}{\delta}(1-e^{-\delta(T-t)})+\frac{1}{r}(1-e^{r(T-t)})\bigg{)}+\frac{\rho(\iota_{r}-\iota)k\mu_{2}}{r}(1-e^{r(T-t)}).\end{cases}
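These closed forms can again be sanity-checked numerically. The sketch below verifies, under assumed parameter values, that α(t)\alpha(t) solves its linear ODE with α(T)=0\alpha(T)=0; β(t)\beta(t) can be checked in the same way from its integral representation:

```python
import math

# Illustrative (assumed) parameter values for checking the closed form of alpha.
kappa_r, kappa, mu1, delta, r, T = 0.3, 0.2, 1.2, 0.6, 0.03, 2.0

def alpha(t):
    """Closed form of alpha(t)."""
    tau = T - t
    return (kappa_r - kappa) * mu1 / (delta + r) * (math.exp(-delta * tau) - math.exp(r * tau))

# alpha should satisfy alpha' - delta*alpha - (kappa_r - kappa)*mu1*e^{r(T-t)} = 0.
h = 1e-6
for t in (0.0, 1.0, 1.9):
    alpha_prime = (alpha(t + h) - alpha(t - h)) / (2 * h)  # central difference
    res = alpha_prime - delta * alpha(t) - (kappa_r - kappa) * mu1 * math.exp(r * (T - t))
    assert abs(res) < 1e-8, res
assert alpha(T) == 0.0                                     # terminal condition
print("alpha ODE residual ~ 0, alpha(T) = 0")
```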