
A Model of Distributed Disorders Detection

Abstract.

The paper deals with the detection of disorders in a multivariate stochastic process. We consider a multidimensional Poisson process or a multivariate renewal process. This class of processes can be used to describe a distributed detection system. The multivariate renewal process can be seen as a sequence of random vectors whose coordinates are the holding times, the sizes of the jumps, and the index of the stream at which the new event appears. It is assumed that at each stream two kinds of changes are possible: in the distribution of the holding times or in the distribution of the jump sizes. Various specific mutual relations between the change points are possible. The aim of the research is to derive detectors that realize the optimal value of a specified criterion. Estimates of the change point moments have been obtained in some cases. Difficulties appear for dependent streams with an unspecified order of the change points. The presented results suggest further research on the construction of detectors in the general model.

Key words and phrases:
Disorder problem, change-point problems, quickest detection, sequential procedure, stopping time, dynamic programming, Markov process.
1991 Mathematics Subject Classification:
Primary: 93E10, 62L15; Secondary: 93E03, 60G40.
Corresponding author: Krzysztof J. Szajowski

Krzysztof J. Szajowski

Wrocław University of Science and Technology

Faculty of Pure and Applied Mathematics

Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland


(Communicated by the associate editor Arnd Rösch)

1. Introduction

The subject of discussion is a model describing piecewise deterministic signals. The time intervals between jumps are random variables, and the sizes of the jumps are also random. For a random period the modeled object stays in a homogeneous state. At a random time the signal changes its nature: the times between jumps remain random but follow a different distribution, or the sizes of the jumps change their distribution. After a change the process is again time homogeneous until another change occurs at a random time. In the case considered here there may be one change in the distribution of the times between events and one change in the distribution of the jump sizes; after the second change the behavior of the process does not alter any further. The aim is to locate the two changes in real time. Signals of this nature appear in technical issues, medicine, and finance.

1.1. A preliminary consideration

The construction of a mathematical model of the phenomenon described above requires the determination of a probability space on which all random quantities, random variables and stochastic processes, are defined. Let $(\Omega,\mathcal{F},\mathbf{P}(\cdot))$ be a fixed probability space. The consideration will focus on the renewal–reward model of change point detection. A renewal–reward process is a jump process with general holding time distributions and a general distribution of jumps (see Brémaud [Brémaud(1981)], Jacobsen [Jacobsen(2006)]). Let $\{W_i\}_{i=1}^{\infty}$ denote a sequence of iid random variables (rewards) with finite expected value. The random variable $Y_t=\sum_{i=1}^{X_t}W_i$, where $X_t$ is a renewal process, is called a renewal–reward process. In turn, a renewal process is a pure jump process with a general distribution of the holding times. To describe this type of process, let us consider a sequence of positive iid random variables $\{S_i\}_{i=1}^{\infty}$ with finite and positive expectation. The renewal process is $X_t=\sum_{n=1}^{\infty}\mathbb{I}_{\{J_n\leq t\}}=\sup\{n:J_n\leq t\}$, where $J_n=\sum_{i=1}^{n}S_i$ for each $n>0$. The mentioned changes of distribution are assumed to appear in the sequences $\{S_i\}_{i=1}^{\infty}$ and $\{W_i\}_{i=1}^{\infty}$ at some random moments $\theta_1$ and $\theta_2$, respectively. General disorder detectors take values in $\Re^{+}$. In this approach the class of decision functions is restricted to the event (jump) moments, so the decision moments can be identified with the indices of the jump moments of the process. The underlying models can be represented as sequences of random vectors $\{(S_n,W_n)\}_{n=1}^{\infty}$. The random variables $W_n$ and $S_n$ need not be independent, in contrast to the classical theory of renewal–reward processes. The aim of the research is to formulate a rigorous model of the problem and to investigate it.
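To fix ideas, the following minimal sketch in Python (assuming numpy; the exponential holding times and Gaussian rewards are illustrative stand-ins, since the model above allows general and even dependent laws) simulates one trajectory of the renewal–reward process $Y_t=\sum_{i=1}^{X_t}W_i$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_renewal_reward(t_max, hold_mean=1.0, reward_mean=0.5, reward_sd=1.0):
    """Jump times J_n and cumulative rewards Y(J_n) on [0, t_max]."""
    jump_times, rewards = [], []
    t = rng.exponential(hold_mean)              # first holding time S_1
    while t <= t_max:
        jump_times.append(t)
        rewards.append(rng.normal(reward_mean, reward_sd))  # reward W_i at the jump
        t += rng.exponential(hold_mean)         # next holding time S_{i+1}
    return np.array(jump_times), np.cumsum(rewards)

J, Y = simulate_renewal_reward(t_max=100.0)
print(f"X_t = {len(J)} jumps by t = 100, Y_t = {Y[-1] if len(Y) else 0.0:.2f}")
```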

1.2. A motivation

The motivation for the project is the wide literature related to the optimal stopping problem and the scarce results for multiple stopping settings. The related questions for change point models are also stimulating. There is a satisfactory literature describing the state of the art in the disorder problem, or change-point problems, in the off-line and on-line setting. Let us mention the monograph by Brodsky and Darkhovsky [Brodsky and Darkhovsky(1993)] and the historical account by Shiryaev [Shiryaev(2006)].

The disorder problem has a long history (see Shiryaev [Shiryaev(2006)] and the Proceedings of the AMS-IMS-SIAM Summer Research Conference on "Change-point problems", E. Carlstein et al. (eds.) [Carlstein et al.(1994)Carlstein, Müller, and Siegmund]). It is known from the early papers by Page [Page(1954), Page(1955)] and Girshick and Rubin [Girshick and Rubin(1952)]. However, it was A.N. Kolmogorov who formulated a realistic and mathematically precise model of rapid detection of a change point or disorder. His student, A.N. Shiryaev [Shiryaev(1961), Shiryaev(1963)] (see the history described in [Shiryaev(2006)]), published the first important results for sequential problems of disorder detection in the Poisson distribution. This direction of research is the subject of intensive contemporary work. Let us mention the papers by Gal'chuk and Rozovskii [Galčuk and Rozovskiĭ(1971)], Davis [Davis(1974)], Peskir and Shiryaev [Peskir and Shiryaev(2002)] (see also Gapeev [Gapeev(2005)] and Bayraktar and Dayanik [Bayraktar and Dayanik(2006)]). The manifold attempts to formulate an adequate model of disorder for the more complex processes which appear in observed phenomena of nature and economy have revealed extreme difficulties (see Fuh [Fuh(2004)], Ivanoff and Merzbach [Ivanoff and Merzbach(2010)], Szajowski [Szajowski(2011)]).

This work continues the research described in the papers published by Sarnowski and the author [Sarnowski and Szajowski(2008)], [Sarnowski and Szajowski(2011)] on the change point problem for processes with unspecified, Markovian-type dynamics before and after the disorder. Related results can be found in the papers by Bojdecki and Hosza [Bojdecki and Hosza(1984)], Szajowski [Szajowski(1992)], Yoshida [Yoshida(1983)], Yakir [Yakir(1994)] and Moustakides [Moustakides(1998)]. The most important guidelines for the considerations of this work are contained in the author's work [Szajowski(2011)] on multivariate disorder detection.

The key technique used in the work is based on the multiple optimal stopping of a Markov process. Fundamental knowledge on the optimal stopping of random processes can be found in the monographs by Chow, Robbins and Siegmund [Chow et al.(1971)Chow, Robbins, and Siegmund], Shiryaev [Shiryaev(2008)] or Peskir and Shiryaev [Peskir and Shiryaev(2006)]. The first work devoted to the multiple stopping of discrete time sequences was published by Haggstrom [Haggstrom(1967)], and for Markov processes by Nikolaev [Nikolaev(1981)] (see also Eidukjavicjus [Eidukjavicjus(1979)], Nikolaev [Nikolaev(1998)]). The extension of the model to semi-Markov processes has been made by Stadje [Stadje(1985)]. Recent results for continuous time processes have been presented by Kobylanski, Quenez and Rouy-Mironescu [Kobylanski et al.(2010)Kobylanski, Quenez, and Rouy-Mironescu], and various applications of such an approach are described in the papers by Feng and Xiao [Feng and Xiao(2000b)], [Feng and Xiao(2000a)] and Karpowicz and Szajowski [Karpowicz and Szajowski(2007a), Karpowicz and Szajowski(2012)]. For the risk process such an extension can be found in the paper by Karpowicz and Szajowski [Karpowicz and Szajowski(2007b)]. However, in the approach applied here there is no need to refer to the results for semi-Markov processes. The analytical difficulties of such problems open various questions concerning the construction of algorithms, approximate solutions and Monte Carlo methods.

1.3. Variation of the basic problems

A general formulation starts from random sequences $\{S_i\}_{i=1}^{\infty}$ and $\{W_i\}_{i=1}^{\infty}$ whose segments are homogeneous Markov sequences. Each segment has its own transition probability law, and the length of each segment is unknown and random. This means that there are random moments $\theta_1$ and $\theta_2$ at which the source of observations changes. The transition probabilities of each process on a segment are chosen from a given set of distributions. A priori distributions of the disorder moments are given. The problem is to construct a detection algorithm for the disorders. The algorithm should detect the change points with a given precision and maximal probability.

To be more precise, let us consider examples. It is easy to ascertain that in the renewal–reward process the distribution of the holding time can change first, at the moment $\theta_1$, and next, at the moment $\theta_2\geq\theta_1$, the distribution of the random variables $W_n$ changes. One can formulate three such problems:

  • it is known that $\theta_1<\theta_2$, i.e., the holding time distribution changes earlier than the rewards distribution;

  • the order of the change points is reversed: the reward sequence is disordered first;

  • there is no information which sequence will change its distribution first.

In the Bayesian model the last case needs additional a priori information about the chance that the disorder of the rewards will appear before the holding time disorder.

If the process is a model of signals from distributed sources, then the disorders in the different sources are combined to construct the final decision about the reason for the non-homogeneous behavior of the process. Based on the idea of a simple game, a model of the fusion center is proposed. The strategy of detection at each segment (source) of the process is defined as the equilibrium in a non-cooperative game between the selfish sensors (see [Szajowski(2011)]).

For a single process having a structure similar to that of the generalized compound Poisson processes, a temporal disorder is possible. It is a natural problem, mentioned by researchers (see [Bayraktar and Poor(2007)], [Yoshida(1983)] and others). When the model of the process is equivalent to a sequence of random vectors, it is possible that each coordinate changes its distribution at some moment, which could be different for each coordinate.

Our environment is described by states of nature which vary over time. There are no objective boundaries in the state space defining a safe or unsafe world; these borders are marked by subjective knowledge. Once the task of maintaining a safe environment has been set, the aim is to observe and monitor the states in time so as to know their relation to the boundaries. The methodology presented in the paper allows one to prepare the description of the environment and the detectors of the borders of the areas. As an illustration, a variant of the environment was selected in which the states form a renewal–reward process. The areas of the state space, acceptable or not, are described by the observed distributions of the process: acceptable states of the system follow one distribution, unacceptable states another. Due to the multidimensional character of the process, the specified areas are determined by the boundaries of the components of the state vector. The model is analyzed and the boundaries are determined for the areas of warning states. The determination of the detector's boundaries indicates that correct detection requires a priori knowledge of the possible scenarios of reaching a state of emergency. Efficient detection of threats is only possible if nature realizes one of the provided scenarios. A strategy based on little knowledge about the possible scenario is far less effective and much more cumbersome to implement.

2. Random switching between Markov processes

Let us formulate the general detection problem for at most two change points, or the switching detection problem. Let us consider an observable sequence of random variables $(X_n)_{n\in\mathbb{N}}$ with a homogeneous structure on the time intervals $n\in[0,\theta_1-1]$, $[\theta_1,\theta_2-1]$ and $[\theta_2,\infty)$. The parameters $\theta_1$, $\theta_2$ are random variables with values in $\mathbb{N}$ having the distributions:

${\bf P}(\theta_1=j) = \mathbb{I}_{\{j=0\}}(j)\,\pi + \mathbb{I}_{\{j>0\}}(j)\,\bar{\pi}p_1^{j-1}q_1,$ (1)
${\bf P}(\theta_2=k\mid\theta_1=j) = \mathbb{I}_{\{k=j\}}(k)\,\rho + \mathbb{I}_{\{k>j\}}(k)\,\bar{\rho}p_2^{k-j-1}q_2,$ (2)

where $j=0,1,2,\ldots$, $k=j,j+1,j+2,\ldots$, $\bar{\pi}=1-\pi$, $\bar{\rho}=1-\rho$.
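The priors (1)–(2) are zero-modified geometric distributions, so they can be sampled directly. A minimal sketch in Python (assuming numpy; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_change_points(pi=0.1, q1=0.05, rho=0.2, q2=0.05):
    """One draw of (theta1, theta2) from the priors (1)-(2)."""
    # P(theta1 = j) = pi for j = 0, else (1-pi) p1^{j-1} q1 with p1 = 1 - q1
    theta1 = 0 if rng.random() < pi else int(rng.geometric(q1))
    # P(theta2 = k | theta1 = j) = rho at k = j, else (1-rho) p2^{k-j-1} q2
    theta2 = theta1 if rng.random() < rho else theta1 + int(rng.geometric(q2))
    return theta1, theta2

print(sample_change_points())
```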

Within the segments the distribution depends additionally on a parameter $\epsilon_i$ with values in the finite set $\mathcal{K}=\{1,2,\ldots,K\}$. It is assumed that ${\bf P}(\epsilon_i=s)=r^i_s$, $s\in\mathcal{K}$, $i\in\{0,1,2\}$.

In the sequel, if there is only one possible process in a segment, then the second index will be omitted.

Additionally, there are Markov processes $(X_n^{is},\mathcal{G}_n^{is},{\bf P}_x^{is})$, $i=0,1,2$, $s\in\mathcal{K}$, where the $\sigma$-fields $\mathcal{G}_n^{is}$ are the smallest $\sigma$-fields to which $(X_n^{is})_{n=0}^{\infty}$ are adapted, respectively. The process $(X_n)_{n\in\mathbb{N}}$ is connected with the random variables $\theta_1$, $\theta_2$, $\epsilon_i$ and the Markov processes $\{X_n^{is}\}_{n=0}^{\infty}$ as follows:

$X_n = X^{0s_0}_n\mathbb{I}_{\{\theta_1>n,\,\epsilon_0=s_0\}} + X^{1s_1}_{n-\theta_1+1}\mathbb{I}_{\{X^1_0=x^0_{\theta_1-1},\,\theta_1\leq n<\theta_2,\,\epsilon_1=s_1\}} + X^{2s_2}_{n-\theta_2+1}\mathbb{I}_{\{X^2_0=x^1_{\theta_2-\theta_1},\,\theta_2\leq n,\,\epsilon_2=s_2\}}.$
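A minimal sketch in Python of this switching mechanism (assuming numpy, $K=1$, and hypothetical Gaussian AR(1) stand-ins for the kernels of $X^0$, $X^1$, $X^2$); the step that generates $X_k$ uses the pre-change kernel for $k<\theta_1$, the middle one for $\theta_1\leq k<\theta_2$, and the post-change one afterwards:

```python
import numpy as np

rng = np.random.default_rng(2)

def step(x, regime):
    """One Markov transition; the drifts are illustrative stand-ins for f^0, f^1, f^2."""
    drift = {0: 0.0, 1: 0.7, 2: -0.7}[regime]
    return 0.5 * x + drift + rng.normal()

def simulate_switched(n, theta1, theta2, x0=0.0):
    xs = [x0]
    for k in range(1, n + 1):
        regime = 0 if k < theta1 else (1 if k < theta2 else 2)
        xs.append(step(xs[-1], regime))
    return np.array(xs)
```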

The observable sequence of random variables is defined on the space $(\Omega,\mathcal{F},{\bf P})$ with values in a Borel subset $(\mathbb{E},\mathcal{B})$, $\mathbb{E}\subset\mathbf{R}$, with a $\sigma$-additive measure $\mu$. The measures ${\bf P}^i_x(\cdot)$ on $\mathcal{F}$, $i=0,1,2$, have the following representation:

${\bf P}^{is}_x(\omega: X_1^{is}\in B) = \int_B f_x^{is}(y)\,\mu(dy) = \int_B \mu_x^{is}(dy) = \mu^{is}_x(B),$

for any $B\in\mathcal{B}$, where the $f_x^{is}(\cdot)$ are different and $f_x^{is_i}(y)/f_x^{((i+1)\bmod 3)s_{(i+1)\bmod 3}}(y)<\infty$ for $i=0,1,2$, $s_{\cdot}\in\mathcal{K}$ and all $x,y\in\mathbb{E}$.

2.1. Finite-dimensional distributions of the process

For any $D_n=\{\omega: X_i\in B_i,\ i=1,\ldots,n\}$, where $B_i\in\mathcal{B}$, and any $x\in\mathbb{E}$ define

${\bf P}_x(D_n) = \int_{\times_{i=1}^n B_i} S_n(x,\vec{y}_n)\,\mu(d\vec{y}_n) = \int_{\times_{i=1}^n B_i}\mu_x(d\vec{y}_n) = \mu_x(\times_{i=1}^n B_i).$

Let $\mathcal{S}$ be the set of all stopping times with respect to $(\mathcal{F}_n)$, $n=0,1,\ldots$, and $\mathcal{T}=\{(\tau,\sigma):\tau\leq\sigma,\ \tau,\sigma\in\mathcal{S}\}$.

2.2. Criteria of change–point detection

The aim of the DM is to indicate the moments of switching with given precisions $d_1,d_2$ (Problem $\mathrm{D}_{d_1 d_2}$). We want to determine a pair of stopping times $(\tau^*,\sigma^*)\in\mathcal{T}$ such that for every $x\in\mathbb{E}$

${\bf P}_x(-d_{1,l}\leq\tau^*-\theta_1\leq d_{1,r},\ -d_{2,l}\leq\sigma^*-\theta_2\leq d_{2,r}) = \sup_{\substack{(\tau,\sigma)\in\mathcal{T}\\ 0\leq\tau\leq\sigma<\infty}}{\bf P}_x(-d_{1,l}\leq\tau-\theta_1\leq d_{1,r},\ -d_{2,l}\leq\sigma-\theta_2\leq d_{2,r}).$ (4)

The problem with fixed transition distributions on each segment was formulated in [Szajowski(1996)] and extended to the case $0\leq\theta_1\leq\theta_2$ in [Szajowski(2011)]. The models investigated here assume that between $\theta_1$ and $\theta_2$ the distribution is chosen from a given set (for simplicity, having two elements only). The distribution is predetermined in two models and chosen randomly in one model.
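Criterion (4) also suggests a direct Monte Carlo check of candidate detectors. The following Python sketch is a hypothetical harness, not part of the paper's construction: `simulate` returns a trajectory together with the true $(\theta_1,\theta_2)$, while `tau_rule` and `sigma_rule` are user-supplied stopping rules with $\tau\leq\sigma$; the function estimates the probability in (4) for given tolerance windows.

```python
def detection_probability(simulate, tau_rule, sigma_rule,
                          d1l=0, d1r=0, d2l=0, d2r=0, n_rep=10_000):
    """Estimate P(-d1l <= tau - theta1 <= d1r, -d2l <= sigma - theta2 <= d2r)."""
    hits = 0
    for _ in range(n_rep):
        xs, theta1, theta2 = simulate()           # one trajectory and its change points
        tau, sigma = tau_rule(xs), sigma_rule(xs)
        if (-d1l <= tau - theta1 <= d1r) and (-d2l <= sigma - theta2 <= d2r):
            hits += 1
    return hits / n_rep
```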

Let us introduce the following notation:

$L_{st}^{u_1}(\underline{x}_{k,n}) = \prod_{r=k+1}^{n-s-t} f_{x_{r-1}}^{0}(x_r)\ \prod_{r=n-s-t+1}^{n-t} f_{x_{r-1}}^{1u_1}(x_r)\ \prod_{r=n-t+1}^{n} f_{x_{r-1}}^{2}(x_r),$
$\underline{A}_{k,n} = \times_{i=k}^n A_i = A_k\times A_{k+1}\times\ldots\times A_n,\quad A_i\in\mathcal{B},\ u_1\in\mathcal{K},$

where the convention $\prod_{i=j_1}^{j_2}x_i=1$ for $j_1>j_2$ is used.

Let $A_i\in\mathcal{B}$, $m\leq i\leq n$, assume $X_0=x$ and denote $\underline{D}_{m,n}=\{\omega: X_i(\omega)\in A_i,\ m\leq i\leq n\}$. For $D_i=\{\omega: X_i\in A_i\}\in\mathcal{F}_i$, $m\leq i\leq n$, we have, by the properties of the density function $S_n(\underline{x}_{1,n})$ with respect to the measure $\mu(\cdot)$,

$\int_{\underline{A}_{s,t}} L_{m,n}^{u_1}(\underline{x}_{s-1,t})\,\mu(d\underline{x}_{s,t}) = \mathfrak{P}_{m,n}^{u_1}(X_{s-1},\underline{D}_{s,t}),$

where $m+n\leq t-s+1$, $u_1\in\mathcal{K}$. Let us now define the functions $S_\cdot(\cdot)$ and $H_\cdot(\cdot,\cdot,\cdot,\cdot)$ and the sequence of functions $S_n:\times_{i=1}^n\mathbb{E}\rightarrow\Re$ as follows: $S_0(x_0)=1$ and for $n\geq 1$:

$S_n(\underline{x}_n) = f^{\epsilon_1,\theta_1\leq\theta_2\leq n}_x(\underline{x}_{1,n}) + f^{\epsilon_1,\theta_1\leq n<\theta_2}_x(\underline{x}_{1,n}) + f^{\epsilon_1,\theta_1=\theta_2>n}_x(\underline{x}_{1,n}) + f^{\epsilon_1,n<\theta_1<\theta_2}_x(\underline{x}_{1,n}).$

Additionally, we have

$\mathbf{H}(\cdot,\cdot,\cdot,\cdot) = f(x_{n+1}\mid x_n).$ (6)

For further calculations it is important to have the $n$-dimensional distributions for the various configurations of the disorders.

$f^{\epsilon_1,\theta_1\leq\theta_2\leq n}_x(\underline{x}_{1,n}) = \sum_{u\in\mathcal{K}}r^1_u\Big\{\bar{\pi}\rho\sum_{j=1}^n p_1^{j-1}q_1\,L^u_{0,n-j+1}(\underline{x}_{0,n}) + \bar{\pi}\bar{\rho}\sum_{j=1}^{n-1}\sum_{k=j+1}^n p_1^{j-1}q_1\,p_2^{k-j-1}q_2\,L^u_{k-j,n-k+1}(\underline{x}_{0,n}) + \pi\rho\,L^u_{0,n}(\underline{x}_{0,n})\Big\}$ (7)
$f^{\epsilon_1,\theta_1\leq n<\theta_2}_x(\underline{x}_{1,n}) = \bar{\rho}\sum_{u\in\mathcal{K}}r^1_u\Big[\bar{\pi}\sum_{j=1}^n p_1^{j-1}q_1\,p_2^{n-j}\,L^u_{n-j+1,0}(\underline{x}_{0,n}) + \pi\sum_{j=1}^n p_2^{j-1}q_2\,L^u_{j-1,n-j+1}(\underline{x}_{0,n})\Big]$ (8)
$f^{\epsilon_1,\theta_1=\theta_2>n}_x(\underline{x}_{1,n}) = \rho\bar{\pi}p_1^n\sum_{u\in\mathcal{K}}r^1_u\,L^u_{0,0}(\underline{x}_{0,n})$ (9)
$f^{\epsilon_1,n<\theta_1<\theta_2}_x(\underline{x}_{1,n}) = \bar{\rho}\bar{\pi}p_1^n\sum_{u\in\mathcal{K}}r^1_u\,L^u_{0,0}(\underline{x}_{0,n}).$ (10)

Denote $\langle\underline{u},\underline{v}\rangle=\sum_{i=1}^d u_i v_i$ and $\mathbb{1}=(1,1,\ldots,1)\in\Re^K$. We have (cf. [Sarnowski and Szajowski(2008)], [Dube and Mazumdar(2001)])

Lemma 2.1.

For $n>0$ the function $S_n(\underline{x}_{1,n})$ satisfies the recursion

$S_{n+1}(\underline{x}_{1,n+1}) = \mathbf{H}(x_n,x_{n+1},\overrightarrow{\Pi}_n,\Upsilon_n)\,S_n(\underline{x}_{1,n})$ (11)

where, for $\vec{\pi}=(\alpha,\beta,\gamma)$ and $\upsilon=(\upsilon^1,\upsilon^2,\ldots,\upsilon^K)$,

$\mathbf{H}(x,y,\vec{\pi},\upsilon) = (1-\alpha)p_1 f^0_x(y) + [q_2\alpha+p_2\beta+q_1\gamma]f^2_x(y) + \langle p_2(\alpha-\beta)+q_1(\upsilon-\alpha-\gamma),\,\underline{f^1_x}(y)\rangle.$ (12)

Here $\overrightarrow{\Pi}_n=(\underline{\Pi^1_n},\underline{\Pi^2_n},\underline{\Pi^{12}_n})$ and $\Upsilon_n=(\Upsilon^1_n,\Upsilon^2_n,\ldots,\Upsilon^K_n)$. One can write $\mathbf{H}(x,y,\vec{\pi},\upsilon)=\langle\mathbb{1},\underline{\mathbf{H}}\rangle=\sum_{u\in\mathcal{K}}\underline{\mathbf{H}}^u(x,y,\vec{\pi}^u,\upsilon^u)$. Let us assume $0\leq\theta_1\leq\theta_2$, suppose that $B_i\in\mathcal{B}$, $1\leq i\leq n+1$, and $X_0=x$, and denote $D_n=\{\omega: X_i(\omega)\in B_i,\ 1\leq i\leq n\}$. For $A_i=\{\omega: X_i\in B_i\}\in\mathcal{F}_i$, $1\leq i\leq n+1$, we have, by the properties of the density function $S_n(\underline{x}_{1,n})$ with respect to the measure $\mu(\cdot)$,

$\int_{D_{n+1}}d{\bf P}_x = \int_{D_n}\mathbb{I}_{A_{n+1}}\,d{\bf P}_x.$

Now we split the conditional probability of $\{X_{n+1}\in A_{n+1},\epsilon_1=u\}$ into the following parts:

${\bf P}_x(X_{n+1}\in A_{n+1},\epsilon_1=u\mid\mathcal{F}_n)$
$\quad= {\bf P}_x(n<\theta_1<\theta_2,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n)$ (13)
$\quad+ {\bf P}_x(\theta_1\leq n<\theta_2,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n)$ (14)
$\quad+ {\bf P}_x(n<\theta_1=\theta_2,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n)$ (15)
$\quad+ {\bf P}_x(\theta_1\leq\theta_2\leq n,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n)$ (16)
For (13) we have:

$\int_{D_n}{\bf P}_x(\theta_2>\theta_1>n,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n)\,d{\bf P}_x$
$\quad= \int_{\times_{i=1}^n B_i}\Big(f^{\epsilon_1=u,n<\theta_1<\theta_2}_x(\underline{x}_{1,n})\int_{B_{n+1}}\big(p_1 f^0_{x_n}(x_{n+1})+q_1 f^{1,u}_{x_n}(x_{n+1})\big)\mu(dx_{n+1})\Big)\mu(d\underline{x}_{1,n})$
$\quad= \int_{D_n}{\bf P}_x(\theta_2>\theta_1>n,\,\epsilon_1=u\mid\mathcal{F}_n)\big[p_1{\bf P}^0_{X_n}(A_{n+1})+q_1{\bf P}^{1,u}_{X_n}(A_{n+1})\big]\,d{\bf P}_x.$
For (14), by similar arguments as for (13), we get

${\bf P}_x(\theta_1\leq n<\theta_2,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n)$
$\quad= \big({\bf P}_x(\theta_1\leq n,\epsilon_1=u\mid\mathcal{F}_n)-{\bf P}_x(\theta_2\leq n,\epsilon_1=u\mid\mathcal{F}_n)\big)\times\big[q_2{\bf P}^2_{X_n}(A_{n+1})+p_2{\bf P}^{1,u}_{X_n}(A_{n+1})\big].$
For (16), this part has the form:

${\bf P}_x(\theta_2\leq n,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n) = {\bf P}_x(\theta_2\leq n,\epsilon_1=u\mid\mathcal{F}_n)\,{\bf P}^2_{X_n}(A_{n+1}).$
For (15), the conditional probability is equal to

${\bf P}_x(\theta_1=\theta_2>n,\,X_{n+1}\in A_{n+1},\,\epsilon_1=u\mid\mathcal{F}_n) = {\bf P}_x(\theta_1=\theta_2>n,\epsilon_1=u\mid\mathcal{F}_n)\big[q_1{\bf P}^2_{X_n}(A_{n+1})+p_1{\bf P}^0_{X_n}(A_{n+1})\big].$

These formulae lead to

$f(X_{n+1}\mid\underline{X}_{1,n}) = \mathbf{H}(X_n,X_{n+1},\Pi^1_n,\Pi^2_n,\Pi^{12}_n,\Upsilon_n).$

2.3. $n$-dimensional vs. $(n+d)$-dimensional distributions

It is not too difficult to get a recursive formula for $\mathbf{H}(\cdot)$:

$\mathbf{H}(x,y,\vec{\pi},\upsilon) = \sum_{u\in\mathcal{K}}\underline{\mathbf{H}}^u(x,y,\vec{\pi}^u,\upsilon^u)$ (17)
$\underline{\mathbf{H}}^u(x,y,\vec{\pi}^u,\upsilon^u) = (\upsilon^u-\alpha^u)p_1 f^0_x(y) + [p_2(\alpha^u-\beta^u)+q_1(\upsilon^u-\alpha^u-\gamma^u)]f^{1,u}_x(y) + [q_2\alpha^u+p_2\beta^u+q_1\gamma^u]f^2_x(y)$ (18)
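For numerical work, (17)–(18) translate directly into code. A minimal Python sketch (the callables `f0`, `f1[u]`, `f2` stand for the densities $f^0_x(y)$, $f^{1,u}_x(y)$, $f^2_x(y)$, and `alpha`, `beta`, `gamma`, `upsilon` for the current values of $\underline{\Pi}^1_n$, $\underline{\Pi}^2_n$, $\underline{\Pi}^{12}_n$, $\Upsilon_n$; all names are illustrative):

```python
def predictive_H(x, y, alpha, beta, gamma, upsilon, f0, f1, f2, p1, q1, p2, q2):
    """One-step predictive density H(x,y,pi,upsilon) as the sum (17) of the
    per-regime terms (18)."""
    H = 0.0
    for u in range(len(upsilon)):   # u runs over the K regimes
        H += ((upsilon[u] - alpha[u]) * p1 * f0(x, y)
              + (p2 * (alpha[u] - beta[u])
                 + q1 * (upsilon[u] - alpha[u] - gamma[u])) * f1[u](x, y)
              + (q2 * alpha[u] + p2 * beta[u] + q1 * gamma[u]) * f2(x, y))
    return H
```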
Lemma 2.2.

For $n>0$ the density function $S_n(\underline{x}_{1,n})$ satisfies the recursion

$S_{n+d}(\underline{x}_{1,n+d}) = \mathbf{G}_d(\underline{x}_{n,n+d},\overrightarrow{\Pi}_n,\Upsilon_n)\,S_n(\underline{x}_{0,n})$ (19)

where

$\mathbf{G}_d(\underline{x}_{n,n+d},\overrightarrow{\Pi}_n,\Upsilon_n) = f(\underline{x}_{n+1,n+d}\mid\underline{x}_{0,n}).$ (20)

Now we split the conditional probability of $\underline{A}_{n+1,n+d}$ into the following parts:

${\bf P}_x(\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d}\mid\mathcal{F}_n)$
$\quad= \sum_{u\in\mathcal{K}}\big[{\bf P}_x(n<\theta_1<\theta_2,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)$
$\quad+ {\bf P}_x(\theta_1\leq n<\theta_2,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)$
$\quad+ {\bf P}_x(n<\theta_1=\theta_2,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)$
$\quad+ {\bf P}_x(\theta_1\leq\theta_2\leq n,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)\big].$

For the first part we have

$\int_{D_n}{\bf P}_x(\theta_2>\theta_1>n,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)\,d{\bf P}_x$
$\quad= \int_{D_n}{\bf P}_x(\theta_2>\theta_1>n,\,\epsilon_1=u\mid\mathcal{F}_n)\Big[\sum_{j=1}^d p_1^{j-1}q_1\,\mathfrak{P}^u_{j,d-j+1}(X_n,\underline{A}_{n+1,n+d}) + p_1^d\,\mathfrak{P}^u_{0,0}(X_n,\underline{A}_{n+1,n+d})\Big]\,d{\bf P}_x.$

Let us calculate ${\bf P}_x(\theta_1\leq n<\theta_2,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)$. This probability can be calculated as follows:

$= \sum_{j=1}^d{\bf P}_x(\theta_1\leq n<\theta_2,\,\theta_2=n+j,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)$
$\quad+ {\bf P}_x(\theta_1\leq n<\theta_2,\,\theta_2>n+d,\,\underline{X}_{n+1,n+d}\in\underline{B}_{n+1,n+d},\,\epsilon_1=u\mid\mathcal{F}_n)$
$= \big({\bf P}_x(\theta_1\leq n,\epsilon_1=u\mid\mathcal{F}_n)-{\bf P}_x(\theta_2\leq n,\epsilon_1=u\mid\mathcal{F}_n)\big)\times\Big[\sum_{j=1}^d p_2^{j-1}q_2\,\mathfrak{P}^u_{j,d-j+1}(X_n,\underline{A}_{n+1,n+d}) + p_2^d\,{\bf P}^{1,u}_{X_n}(A_{n+1})\Big].$

3. On some a posteriori processes

For (4) the following a posteriori processes are crucial (cf. [Yoshida(1983)], [Szajowski(1992)]).

$\Pi^{i,u}_n = {\bf P}_x(\theta_i\leq n,\,\epsilon_1=u\mid\mathcal{F}_n),$ (21)
$\Pi^i_n = \langle\mathbb{1},\underline{\Pi^i_n}\rangle = \sum_{u\in\mathcal{K}}\Pi^{i,u}_n = {\bf P}_x(\theta_i\leq n\mid\mathcal{F}_n),$ (22)
$\Pi^{12,u}_n = {\bf P}_x(\theta_1=\theta_2>n,\,\epsilon_1=u\mid\mathcal{F}_n),$ (23)
$\Pi^{12}_n = \langle\mathbb{1},\underline{\Pi^{12}_n}\rangle = {\bf P}_x(\theta_1=\theta_2>n\mid\mathcal{F}_n),$ (24)
$\Pi^u_{mn} = {\bf P}_x(\theta_1=m,\,\theta_2>n,\,\epsilon_1=u\mid\mathcal{F}_{mn}),$ (25)
$\Pi_{mn} = \langle\mathbb{1},\underline{\Pi_{mn}}\rangle = {\bf P}_x(\theta_1=m,\,\theta_2>n\mid\mathcal{F}_{mn}),$ (26)
$\Upsilon^u_n = {\bf P}_x(\epsilon_1=u\mid\mathcal{F}_n),$ (27)
for $m,n=1,2,\ldots$, $m<n$, $i=1,2$. Here $\mathcal{F}_n=\mathcal{F}_{nn}$.

3.1. Recursive form of posteriors

The a posteriori processes $\Pi^{1,u}_n$, $\Pi^{2,u}_n$, $\Pi^{12,u}_n$, $\underline{\Pi}^u_{m\,n}$ can be calculated from the following formulae:

$\overline{\underline{\Pi}}^{1,u}_{n+1} = \overline{\underline{\Pi}}^{1,u}_n\,\dfrac{p_1 f^0_{x_n}(x_{n+1})}{\mathbf{H}(x_n,x_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)},$ (28)
$\underline{\Pi}^{2,u}_{n+1} = \dfrac{(q_2\underline{\Pi}^{1,u}_n+p_2\underline{\Pi}^{2,u}_n+q_1\underline{\Pi}^{12,u}_n)\,f^2_{x_n}(x_{n+1})}{\mathbf{H}(x_n,x_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)},$ (29)
$\underline{\Pi}^{12,u}_{n+1} = \dfrac{p_1\underline{\Pi}^{12,u}_n\,f^0_{x_n}(x_{n+1})}{\mathbf{H}(x_n,x_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)},$ (30)
$\underline{\Pi}^u_{m\,n+1} = \dfrac{p_2\underline{\Pi}^u_{m\,n}\,f^{1,u}_{x_n}(x_{n+1})}{\mathbf{H}(x_n,x_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)},$ (31)

for $m,n=1,2,\ldots$, $m<n$, $i=1,2$.

3.2. Markov processes based on observations

Let us define $\Upsilon^u_n$ in a recursive way.

$\Upsilon^u_n = \Pi^{1,u}_n + \overline{\Pi}^{1,u}_n,$ (32)
$\Upsilon^u_{n+1} = \dfrac{f^0_{x_n}(x_{n+1})[\Upsilon^u_n-q_1\underline{\Pi}^{12,u}_n] + f^{1,u}_{x_n}(x_{n+1})\,p_2[\underline{\Pi}^{1,u}_n-\underline{\Pi}^{2,u}_n] + f^2_{x_n}(x_{n+1})[q_2(\underline{\Pi}^{1,u}_n+\underline{\Pi}^{2,u}_n)+\underline{\Pi}^{2,u}_n+q_1\underline{\Pi}^{12,u}_n]}{\mathbf{H}(x_n,x_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)}.$

For a recursive representation of (28)–(31) and of the recursion for $\Upsilon^u_n$ above, we need the following functions:

$\underline{\Pi}^{1,u}(x,y,\alpha,\beta,\gamma,\upsilon) = \upsilon^u - [p_1(\upsilon^u-\alpha^u)f^0_x(y)]\,\mathbf{H}^{-1}(x,y,\alpha,\beta,\gamma,\upsilon)$
$\underline{\Pi}^{2,u}(x,y,\alpha,\beta,\gamma,\upsilon) = [(q_2\alpha^u+p_2\beta^u+q_1\gamma^u)f^2_x(y)]\,\mathbf{H}^{-1}(x,y,\alpha,\beta,\gamma,\upsilon)$
$\underline{\Pi}^{12,u}(x,y,\alpha,\beta,\gamma,\upsilon) = p_1\gamma^u f^0_x(y)\,\mathbf{H}^{-1}(x,y,\alpha,\beta,\gamma,\upsilon)$
$\underline{\Pi}^u(x,y,\alpha,\beta,\gamma,\delta,\upsilon) = p_2\delta^u f^{1,u}_x(y)\,\mathbf{H}^{-1}(x,y,\alpha,\beta,\gamma,\upsilon).$

The functions $\underline{\mathbf{H}}^u$ and $\mathbf{H}$ will be used in the representations of the posteriors.

$\underline{\mathbf{H}}^u(x,y,\alpha,\beta,\gamma,\upsilon) = (\upsilon^u-\alpha^u)p_1 f^0_x(y) + [p_2(\alpha^u-\beta^u)+q_1(\upsilon^u-\alpha^u-\gamma^u)]f^{1,u}_x(y) + [q_2\alpha^u+p_2\beta^u+q_1\gamma^u]f^2_x(y).$

There is $\mathbf{H}(x,y,\alpha,\beta,\gamma,\upsilon)=\langle\mathbb{1},\underline{\mathbf{H}}\rangle$. In the sequel we adopt the following notation: $\overrightarrow{\alpha}=(\alpha,\beta,\gamma)$ and $\overrightarrow{\Pi}^u_n=(\underline{\Pi}^{1,u}_n,\underline{\Pi}^{2,u}_n,\underline{\Pi}^{12,u}_n)$.
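Collecting the formulae of this section, one full filtering step can be sketched as follows (in Python, for $K$ regimes; all names are illustrative). The sketch derives each posterior by Bayes' rule from the one-step segment masses (cf. Lemma 4.2 below) combined with the transition densities; it reproduces (29) and (30), and it updates $\Pi^{1,u}$ and $\Upsilon^u$ by summing the appropriate masses, taking $\Upsilon^u_n$ as the total mass of regime $u$.

```python
def filter_step(x, y, pi1, pi2, pi12, ups, f0, f1, f2, p1, q1, p2, q2):
    """One Bayes update of (Pi^1, Pi^2, Pi^12, Upsilon) after observing y at n+1."""
    K = len(ups)
    pre = []  # per-regime numerators: theta1 > n+1 / theta1 <= n+1 < theta2 / theta2 <= n+1
    for u in range(K):
        a = p1 * (ups[u] - pi1[u]) * f0(x, y)
        b = (q1 * (ups[u] - pi1[u] - pi12[u])
             + p2 * (pi1[u] - pi2[u])) * f1[u](x, y)
        c = (q2 * pi1[u] + p2 * pi2[u] + q1 * pi12[u]) * f2(x, y)
        pre.append((a, b, c))
    H = sum(a + b + c for a, b, c in pre)            # normalizer, cf. (17)-(18)
    new_pi1 = [(b + c) / H for a, b, c in pre]       # theta_1 <= n+1
    new_pi2 = [c / H for a, b, c in pre]             # theta_2 <= n+1, cf. (29)
    new_pi12 = [p1 * pi12[u] * f0(x, y) / H for u in range(K)]   # cf. (30)
    new_ups = [(a + b + c) / H for a, b, c in pre]   # epsilon_1 = u
    return new_pi1, new_pi2, new_pi12, new_ups
```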

4. The disorder problem vs. the stopping problem

The basic formulae used in the transformation of the disorder problems to the stopping problems are given in the following

Lemma 4.1.

For each $x\in\mathbb{E}$ and each Borel function $u:\Re\rightarrow\Re$ the following formulae hold for $m,n=1,2,\ldots$, $m<n$, $i=1,2$:

$\Pi^{i,u}_{n+1} = \underline{\Pi}^{i,u}(X_n,X_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)$ (34)
$\Pi^{12,u}_{n+1} = \underline{\Pi}^{12,u}(X_n,X_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\Upsilon_n)$ (35)
$\Pi^u_{m\,n+1} = \underline{\Pi}^u(X_n,X_{n+1},\underline{\Pi}^1_n,\underline{\Pi}^2_n,\underline{\Pi}^{12}_n,\underline{\Pi}_{m\,n},\Upsilon_n)$ (36)

with the boundary conditions $\Pi^{1,u}_0=\pi r_u$, $\Pi^{2,u}_0(x)=\pi r_u\rho$ and $\underline{\Pi}^u_{m\,m}=(1-\rho)\frac{q_1 f^{1,u}_{X_{m-1}}(X_m)}{p_1 f^0_{X_{m-1}}(X_m)}(\Upsilon^u_m-\underline{\Pi}^{1,u}_m)$.

Lemma 4.2.

For the problem (4) the following formulae are valid:

  1. ${\bf P}_x(\theta_2=\theta_1>n+1,\epsilon=u\mid\mathcal{F}_n)=p_1\underline{\Pi}^{12,u}_n$;

  2. ${\bf P}_x(\theta_2>\theta_1>n+1,\epsilon=u\mid\mathcal{F}_n)=p_1(\Upsilon^u_n-\underline{\Pi}^{1,u}_n-\underline{\Pi}^{12,u}_n)$;

  3. ${\bf P}_x(\theta_1\leq n+1,\epsilon=u\mid\mathcal{F}_n)={\bf P}(\theta_1\leq n+1<\theta_2,\epsilon=u\mid\mathcal{F}_n)+{\bf P}(\theta_2\leq n+1,\epsilon=u\mid\mathcal{F}_n)$;

  4. ${\bf P}(\theta_1\leq n+1<\theta_2,\epsilon=u\mid\mathcal{F}_n)=q_1(\Upsilon^u_n-\underline{\Pi}^{1,u}_n-\underline{\Pi}^{12,u}_n)+p_2(\underline{\Pi}^{1,u}_n-\underline{\Pi}^{2,u}_n)$;

  5. ${\bf P}_x(\theta_2\leq n+1,\epsilon=u\mid\mathcal{F}_n)=q_2\underline{\Pi}^{1,u}_n+p_2\underline{\Pi}^{2,u}_n+q_1\underline{\Pi}^{12,u}_n$.

Lemma 4.3.

For each $x\in\mathbb{E}$ and each Borel function $u:\mathbf{R}\rightarrow\mathbf{R}$ the following equations are fulfilled:

${\bf E}_x\big(u(X_{n+1})(\Upsilon^u_n-\underline{\Pi}^{1,u}_{n+1})\mid\mathcal{F}_n\big) = (\Upsilon^u_n-\underline{\Pi}^{1,u}_n-\underline{\Pi}^{12,u}_n)\,p_1\int_{\mathbb{E}}u(y)f^0_{X_n}(y)\,\mu_{X_n}(dy),$ (37)
${\bf E}_x\big(u(X_{n+1})(\underline{\Pi}^{1,u}_{n+1}-\underline{\Pi}^{2,u}_{n+1})\mid\mathcal{F}_n\big) = \big[q_1(\Upsilon^u_n-\underline{\Pi}^{1,u}_n-\underline{\Pi}^{12,u}_n)+p_2(\underline{\Pi}^{1,u}_n-\underline{\Pi}^{2,u}_n)\big]\int_{\mathbb{E}}u(y)f^{1,u}_{X_n}(y)\,\mu_{X_n}(dy),$ (38)
${\bf E}_x\big(u(X_{n+1})\underline{\Pi}^{2,u}_{n+1}\mid\mathcal{F}_n\big) = \big[q_2\underline{\Pi}^{1,u}_n+p_2\underline{\Pi}^{2,u}_n+q_1\underline{\Pi}^{12,u}_n\big]\int_{\mathbb{E}}u(y)f^2_{X_n}(y)\,\mu_{X_n}(dy),$ (39)
${\bf E}_x\big(u(X_{n+1})\underline{\Pi}^{12,u}_{n+1}\mid\mathcal{F}_n\big) = \big[p_1\underline{\Pi}^{12,u}_n\big]\int_{\mathbb{E}}u(y)f^0_{X_n}(y)\,\mu_{X_n}(dy),$ (40)
${\bf E}_x\big(u(X_{n+1})\mid\mathcal{F}_n\big) = \int_{\mathbb{E}}u(y)\,\mathbf{H}(X_n,y,\overrightarrow{\Pi}_n(x))\,\mu_{X_n}(dy).$ (41)

5. An equivalent issue – the double optimal stopping problem

A compound stopping variable is a pair $(\tau,\sigma)$ of stopping times such that $\tau\leq\sigma$ a.e. Denote $\mathcal{T}_m=\{(\tau,\sigma)\in\mathcal{T}:\tau\geq m\}$, $\mathcal{T}_{mn}=\{(\tau,\sigma)\in\mathcal{T}:\tau=m,\ \sigma\geq n\}$ and $\mathcal{S}_m=\{\tau\in\mathcal{S}:\tau\geq m\}$ ($\mathcal{F}_{mn}=\mathcal{F}_n$, $m,n\in\mathbb{N}$, $m\leq n$).

We define the two-parameter stochastic sequence $\xi(x)=\{\xi_{mn}\}_{m,n\in\mathbb{N},\ m<n}$, $x\in\mathbb{E}$, where $\xi_{mn}={\bf P}_x(\theta_1=m,\theta_2=n\mid\mathcal{F}_{mn})$.

5.1. The optimal compound stopping variable

For every $x\in\mathbb{E}$, $m,n\in\mathbb{N}$, $m<n$, let us define the optimal stopping problem of $\xi(x)$ on $\mathcal{T}^+_{mn}=\{(\tau,\sigma)\in\mathcal{T}_{mn}:\tau<\sigma\}$. A compound stopping variable $(\tau^*,\sigma^*)$ is said to be optimal in $\mathcal{T}^+_m$ if ${\bf E}_x\xi_{\tau^*\sigma^*}=\sup_{(\tau,\sigma)\in\mathcal{T}^+_m}{\bf E}_x\xi_{\tau\sigma}$ (or in $\mathcal{T}^+_{mn}$ if ${\bf E}_x\xi_{\tau^*\sigma^*}=\sup_{(\tau,\sigma)\in\mathcal{T}^+_{mn}}{\bf E}_x\xi_{\tau\sigma}$). Let us define

$\eta_{mn}=\operatorname*{ess\,sup}_{(\tau,\sigma)\in\mathcal{T}^+_{mn}}{\bf E}_x(\xi_{\tau\sigma}\mid\mathcal{F}_{mn}).$ (42)

If we put $\xi_{m\infty}=0$, then

$\eta_{mn}=\operatorname*{ess\,sup}_{(\tau,\sigma)\in\mathcal{T}^+_{mn}}{\bf P}_x(\theta_1=\tau,\theta_2=\sigma\mid\mathcal{F}_{mn}).$

From the theory of optimal stopping for double indexed processes (cf. [Haggstrom(1967)], [Nikolaev(1981)]) the sequence $\eta_{mn}$ satisfies

$\eta_{mn}=\max\{\xi_{mn},{\bf E}(\eta_{m\,n+1}\mid\mathcal{F}_{mn})\}.$

If $\sigma^*_m=\inf\{n>m:\eta_{mn}=\xi_{mn}\}$, then $(m,\sigma^*_m)$ is optimal in $\mathcal{T}^+_{mn}$ and $\eta_{mn}={\bf E}_x(\xi_{m\sigma^*_m}\mid\mathcal{F}_{mn})$ a.e. Define $\hat{\eta}_{mn}=\max\{\xi_{mn},{\bf E}(\eta_{m\,n+1}\mid\mathcal{F}_{mn})\}$ for $n\geq m$. If $\hat{\sigma}^*_m=\inf\{n\geq m:\hat{\eta}_{mn}=\xi_{mn}\}$, then $(m,\hat{\sigma}^*_m)$ is optimal in $\mathcal{T}_{mn}$ and $\hat{\eta}_{mm}={\bf E}_x(\xi_{m\hat{\sigma}^*_m}\mid\mathcal{F}_{mm})$ a.e.

Lemma 5.1.

The stopping time $\sigma^*_m$ is optimal for every stopping problem (42).

What is left is to consider the optimal stopping problem for $(\eta_{mn})_{m=0,n=m}^{\infty,\infty}$ on $(\mathcal{T}_{mn})_{m=0,n=m}^{\infty,\infty}$. For further considerations denote $\eta_m={\bf E}_x(\eta_{m\,m+1}\mid\mathcal{F}_m)$. The first-stop payoff will now be determined. Let us define:

$V_m=\operatorname*{ess\,sup}_{\tau\in\mathcal{S}_m}{\bf E}_x(\eta_\tau\mid\mathcal{F}_m).$ (43)

Then $V_m=\max\{\eta_m,{\bf E}_x(V_{m+1}\mid\mathcal{F}_m)\}$ a.e., and we define $\tau^*_n=\inf\{k\geq n:V_k=\eta_k\}$.

Lemma 5.2.

The strategy $\tau^*_0$ is the optimal strategy of the first stop.

5.2. Solution of the equivalent double stopping problem

For this presentation the case $d_1=d_2=0$ is considered. Let us construct multidimensional Markov chains such that $\xi_{mn}$ and $\eta_m$ are functions of their states. By considering the a posteriori processes we get $\xi_{00}=\pi\rho$ and, for $m\leq n$,

$\xi^x_{m\,n}\overset{\text{L.4.2}}{=}{\bf P}_x(\theta_1=m,\theta_2=n\mid\mathcal{F}_{m\,n}) = \begin{cases}\frac{q_2}{p_2}\langle\underline{\Pi}_{m\,n}(x),\frac{f^2_{X_{n-1}}(X_n)}{\underline{f}^1_{X_{n-1}}(X_n)}\rangle & \text{for } m<n,\\[1ex] \rho\frac{q_1}{p_1}\frac{f^2_{X_{m-1}}(X_m)}{f^0_{X_{m-1}}(X_m)}(1-\Pi^1_m) & \text{for } n=m.\end{cases}$ (44)

The vector $(X_n,X_{n+1},\overrightarrow{\Pi}_n,\underline{\Pi}_{m\,n},\Upsilon_n)$ for $n=m+1,m+2,\ldots$ is a function of $(X_{n-1},X_n,\overrightarrow{\Pi}_{n-1},\underline{\Pi}_{m\,n-1},\Upsilon_{n-1})$ and $X_{n+1}$. Besides, the conditional distribution of $X_{n+1}$ given $\mathcal{F}_n$ depends on $X_n$, $\underline{\Pi}^1_n(x)$, $\underline{\Pi}^2_n(x)$ and $\Upsilon_n$ only.

These facts imply that $\{(X_n,X_{n+1},\overrightarrow{\Pi}_n,\underline{\Pi}_{m\,n},\Upsilon_n)\}_{n=m+1}^{\infty}$ forms a homogeneous Markov process. This allows us to reduce the basic problem (42), for each $m$, to the optimal stopping problem of the Markov process $Z_m(x)=\{(X_{n-1},X_n,\overrightarrow{\Pi}_n,\underline{\Pi}_{m\,n},\Upsilon_n),\ m,n\in\mathbb{N},\ m<n,\ x\in\mathbb{E}\}$ with the reward function $h(t,u,\overrightarrow{\alpha},\delta,\upsilon)=\frac{q_2}{p_2}\langle\underline{\delta},\frac{f^2_t(u)}{\underline{f}^1_t(u)}\rangle$.

Lemma 5.3.

A solution of the optimal stopping problem (42) for $m=1,2,\ldots$ has the form

$\sigma^*_m=\inf\{n>m:\langle\frac{\underline{\Pi}_{m\,n}}{\Pi_{m\,n}},\frac{f^2_{X_{n-1}}(X_n)}{\underline{f}^1_{X_{n-1}}(X_n)}\rangle\geq R^*(X_n,\underline{\Pi}_{m\,n})\}$

where $R^*(t,\underline{\delta})=p_2\int_{\mathbb{E}}\langle r^*(t,s,\underline{\delta}),\underline{f}^1_t(s)\rangle\,\mu_t(ds)$ and the function $r^*(t,u,\underline{\delta})$ satisfies the equation $r^*(t,u,\underline{\delta})=\max\{\langle\frac{\underline{\delta}}{\delta},\frac{f^2_t(u)}{\underline{f}^1_t(u)}\rangle,\ p_2\int_{\mathbb{E}}\langle r^*(u,s,\underline{\delta}),\underline{f}^1_u(s)\rangle\,\mu_u(ds)\}$. The value of the problem is equal to

$\eta_m={\bf E}_x(\eta_{m\,m+1}\mid\mathcal{F}_m)=\frac{q_1}{p_1}\langle\frac{\underline{f}^1_{X_{m-1}}(X_m)}{f^0_{X_{m-1}}(X_m)},\overline{\underline{\Pi}}^1_m\rangle R^{\star}_{\rho}(X_{m-1},X_m,\underline{\Pi}_{m\,m}),$

where $R^{\star}_{\rho}(t,u,\underline{\delta})=\max\{\rho\langle\frac{\underline{\delta}}{\delta},\frac{f^2_t(u)}{\underline{f}^1_t(u)}\rangle,\ \frac{q_2}{p_2}(1-\rho)R^{\star}(t,\underline{\delta})\}$.
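Since the continuation operator in the equation for $r^*$ is a contraction with modulus $p_2<1$ in the supremum norm, $r^*$ can be computed by successive approximations. A minimal Python sketch (assuming numpy, $K=1$, and a finite discretized state space on which the row-stochastic matrices `f1` and `f2` stand in for the transition densities, with `f1` strictly positive; all names are illustrative):

```python
import numpy as np

def solve_r_star(f1, f2, p2, tol=1e-10, max_iter=100_000):
    """Successive approximations for r(t,u) = max{ g(t,u), p2 * sum_s r(u,s) f1[u,s] },
    where g(t,u) = f2[t,u] / f1[t,u] is the immediate payoff (K = 1)."""
    g = f2 / f1
    r = g.copy()
    for _ in range(max_iter):
        cont = p2 * (r * f1).sum(axis=1)       # continuation value, depends on u only
        r_new = np.maximum(g, cont[np.newaxis, :])
        if np.abs(r_new - r).max() < tol:
            break
        r = r_new
    return r   # R*(u) = p2 * (r * f1).sum(axis=1); stop as soon as g(t,u) >= R*(u)
```

The first-stop function $v^*$ of Theorem 5.4 below admits the same treatment, with $p_1$ playing the role of the contraction modulus.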

Based on the results of Lemma 5.3 and the properties of the a posteriori process $\Pi_{mn}$ we have the optimal second moment

$\hat{\sigma}^{\star}_0=\begin{cases}0 & \text{if } \pi\rho\geq q_1(1-\pi)\int_{\mathbb{E}}f^1_x(u)R^{\star}_{\rho}(x,u,\underline{\delta})\,\mu_x(du),\\ \sigma^{\star}_0 & \text{otherwise.}\end{cases}$

By Lemmas 5.3 and 4.1 (formula (36)), the optimal stopping problem (43) has been transformed into the optimal stopping problem for the homogeneous Markov process $W=\{(X_{m-1},X_m,\overrightarrow{\Pi}_m,\Pi^{12}_m,\Upsilon_m),\ m\in\mathbb{N},\ x\in\mathbb{E}\}$ with the reward function

$f(t,u,\overrightarrow{\alpha},\upsilon)=\frac{q_1}{p_1}\langle\frac{\underline{f}^1_t(u)}{f^0_t(u)},\bar{\alpha}\rangle R^{\star}_{\rho}(t,u,\underline{\delta}).$
Theorem 5.4.

A solution of the optimal stopping problem (43) for $n=1,2,\ldots$ has the form

$\tau^*_n=\inf\{k\geq n:(X_{k-1},X_k,\overrightarrow{\Pi}_k,\upsilon_k)\in B^*\}$ (45)

where $B^*=\{(t,u,\overrightarrow{\alpha},\upsilon):\langle\underline{\bar{\alpha}},\frac{f^2_t(u)}{\underline{f}^1_t(u)}\rangle R^{\star}_{\rho}(t,u,\underline{\delta})\geq p_1\int_{\mathbb{E}}v^*(u,s,\overrightarrow{\alpha},\upsilon)f^0_u(s)\,\mu_u(ds)\}$. The function $v^*(t,u,\overrightarrow{\alpha},\upsilon)=\lim_{n\rightarrow\infty}v_n(t,u,\overrightarrow{\alpha},\upsilon)$, where $v_0(t,u,\overrightarrow{\alpha},\upsilon)=R^{\star}_{\rho}(t,u,\underline{\delta})$ and

$v_{n+1}(t,u,\overrightarrow{\alpha},\upsilon)=\max\Big\{\langle\underline{\bar{\alpha}},\frac{f^2_t(u)}{f^1_t(u)}\rangle R^{\star}_{\rho}(t,u,\underline{\delta}),\ p_1\int_{\mathbb{E}}v_n(u,s,\overrightarrow{\alpha},\upsilon)\langle\underline{\bar{\alpha}},\underline{f}^1_u(s)\rangle\,\mu_u(ds)\Big\},$

so that $v^*(t,u,\overrightarrow{\alpha},\upsilon)$ satisfies the corresponding fixed-point equation. The value of the problem is $V_n=v^*(X_{n-1},X_n,\overrightarrow{\Pi}_n,\Upsilon_n)$.

References

  • [Bayraktar and Dayanik(2006)] E. Bayraktar and S. Dayanik. Poisson disorder problem with exponential penalty for delay. Math. Oper. Res., 31(2):217–233, 2006. 10.1287/moor.1060.0190.
  • [Bayraktar and Poor(2007)] E. Bayraktar and H. Poor. Quickest detection of a minimum of two Poisson disorder times. SIAM J. Control Optim., 46(1):308–331, 2007. 10.1137/050630933.
  • [Bojdecki and Hosza(1984)] T. Bojdecki and J. Hosza. On a generalized disorder problem. Stochastic Processes Appl., 18:349–359, 1984.
  • [Brémaud(1981)] P. Brémaud. Point processes and queues. Springer-Verlag, New York-Berlin, 1981. ISBN 0-387-90536-7. Martingale dynamics, Springer Series in Statistics.
  • [Brodsky and Darkhovsky(1993)] B. Brodsky and B. Darkhovsky. Nonparametric Methods in Change-Point Problems. Mathematics and its Applications (Dordrecht). 243. Dordrecht: Kluwer Academic Publishers. 224 p., Dordrecht, 1993.
  • [Carlstein et al.(1994)Carlstein, Múller, and Siegmund] E. Carlstein, H. Müller, and D. Siegmund, editors. Change-point Problems, volume 23 of IMS Lecture Notes-Monograph Series, pages 78–92. Institute of Mathematical Statistics, Hayward, California, 1994.
  • [Chow et al.(1971)Chow, Robbins, and Siegmund] Y. Chow, H. Robbins, and D. Siegmund. Great expectations: The theory of optimal stopping. Houghton Mifflin Company, Boston MA, USA, 1971.
  • [Davis(1974)] M. Davis. A note on the Poisson disorder problem. In Proc. Int. Conf. on Control Theory, Zakopane, Poland, volume 1 of Banach Center Publications, pages 65–72, Warszawa, 1974. PWN.
  • [Dube and Mazumdar(2001)] P. Dube and R. Mazumdar. A framework for quickest detection of traffic anomalies in networks. Technical report, Electrical and Computer Engineering, Purdue University, November 2001. citeseer.ist.psu.edu/506551.html.
  • [Eidukjavicjus(1979)] R. Eidukjavicjus. Optimalna ostanovka markovskoj cepi dvumia momentami ostanovki. Lit. Mat. Sb., 13:181–183, 1979.
  • [Feng and Xiao(2000a)] Y. Feng and B. Xiao. Optimal policies of yield management with multiple predetermined prices. Oper. Res., 48(2):332–343, 2000a.
  • [Feng and Xiao(2000b)] Y. Feng and B. Xiao. A continuous-time yield management model with multiple prices and reversible price changes. Manage. Sci., 46(5):644–657, 2000b. ISSN 0025-1909.
  • [Fuh(2004)] C.-D. Fuh. Asymptotic operating characteristics of an optimal change point detection in hidden Markov models. Ann. Stat., 32(5):2305–2339, 2004. 10.1214/009053604000000580.
  • [Galčuk and Rozovskiĭ(1971)] L. I. Galčuk and B. L. Rozovskiĭ. The problem of “disorder” for a Poisson process. Teor. Verojatnost. i Primenen., 16:729–734, 1971. ISSN 0040-361x.
  • [Gapeev(2005)] P. V. Gapeev. The disorder problem for compound Poisson processes with exponential jumps. Ann. Appl. Probab., 15(1A):487–499, 2005. ISSN 1050-5164. 10.1214/105051604000000981.
  • [Girshick and Rubin(1952)] M. Girshick and H. Rubin. A Bayes approach to a quality control model. Ann. Math. Stat., 23:114–125, 1952.
  • [Haggstrom(1967)] G. Haggstrom. Optimal sequential procedures when more than one stop is required. Ann. Math. Statist., 38:1618–1626, 1967.
  • [Ivanoff and Merzbach(2010)] B. Ivanoff and E. Merzbach. Optimal detection of a change-set in a spatial Poisson process. Ann. Appl. Probab., 20(2):640–659, 2010. 10.1214/09-AAP629.
  • [Jacobsen(2006)] M. Jacobsen. Point process theory and applications. Marked point and piecewise deterministic processes., volume 7 of Probability and Its Applications. Birkhäuser, Boston, 2006. ISBN 978-0-8176-4215-0; 0-8176-4215-3.
  • [Karpowicz and Szajowski(2007a)] A. Karpowicz and K. Szajowski. Double optimal stopping times and dynamic pricing problem: description of the mathematical model. Math. Methods Oper. Res., 66(2):235–253, 2007a. ISSN 1432-2994. 10.1007/s00186-006-0132-y.
  • [Karpowicz and Szajowski(2007b)] A. Karpowicz and K. Szajowski. Double optimal stopping of a risk process. Stochastics: An International Journal of Probability and Stochastic Processes, 79(1-2):155–167, 2007b. 10.1080/17442500601084204.
  • [Karpowicz and Szajowski(2012)] A. Karpowicz and K. Szajowski. Anglers’ fishing problem. In Advances in dynamic games, volume 12 of Ann. Internat. Soc. Dynam. Games, pages 327–349. Birkhäuser/Springer, New York, 2012.
  • [Kobylanski et al.(2010)Kobylanski, Quenez, and Rouy-Mironescu] M. Kobylanski, M.-C. Quenez, and E. Rouy-Mironescu. Optimal double stopping time problem. C. R., Math., Acad. Sci. Paris, 348(1-2):65–69, 2010. 10.1016/j.crma.2009.11.020.
  • [Moustakides(1998)] G. Moustakides. Quickest detection of abrupt changes for a class of random processes. IEEE Trans. Inf. Theory, 44(5):1965–1968, 1998.
  • [Nikolaev(1981)] M. Nikolaev. O kriterii optimalnosti obobshchennoĭ posledovatelnoĭ procedury. Litovskiĭ Matematicheskiĭ Sbornik, 21:75–82, 1981.
  • [Nikolaev(1998)] M. Nikolaev. Optimal multi-stopping rules. Obozr. Prikl. Prom. Mat., 5(2):309–348, 1998.
  • [Page(1954)] E. Page. Continuous inspection schemes. Biometrika, 41:100–115, 1954.
  • [Page(1955)] E. Page. A test for a change in a parameter occurring at an unknown point. Biometrika, 42:523–527, 1955.
  • [Peskir and Shiryaev(2006)] G. Peskir and A. Shiryaev. Optimal stopping and free-boundary problems. Lectures in Mathematics, ETH Zürich. Birkhäuser, Basel, 2006.
  • [Peskir and Shiryaev(2002)] G. Peskir and A. N. Shiryaev. Solving the Poisson disorder problem. In P. Schönbucher and K. Sandmann, editors, Advances in finance and stochastics. Essays in honour of Dieter Sondermann, pages 295–312. Springer-Verlag, Heidelberg, 2002.
  • [Sarnowski and Szajowski(2008)] W. Sarnowski and K. Szajowski. On-line detection of a part of a sequence with unspecified distribution. Stat. Probab. Lett., 78(15):2511–2516, 2008. doi:10.1016/j.spl.2008.02.040.
  • [Sarnowski and Szajowski(2011)] W. Sarnowski and K. Szajowski. Optimal detection of transition probability change in random sequence. Stochastics, 83(4-6):569–581, 2011. 10.1080/17442508.2010.540015.
  • [Shiryaev(1961)] A. Shiryaev. The problem of the most rapid detection of a disturbance in a stationary process. Sov. Math., Dokl., 2:795–799, 1961. translation from Dokl. Akad. Nauk SSSR 138, 1039-1042 (1961).
  • [Shiryaev(1963)] A. Shiryaev. On optimal methods in the quickest detection problems. Teor. Veroyatn. Primen., 8(1):26–51, 1963. translation from Teor. Veroyatn. Primen. 8, 26-51 (1963).
  • [Shiryaev(2006)] A. Shiryaev. From “disorder” to nonlinear filtering and martingale theory. In A. Bolibruch, Y. Osipov, Y. Sinai, V. I. Arnold, L. Faddeev, Y. I. Manin, V. Philippov, V. Tikhomirov, and A. M. Vershik, editors, Mathematical events of the twentieth century, pages 371–397. Springer, Berlin, 2006. Transl. from the Russian, publ. PHASIS Moscow.
  • [Shiryaev(2008)] A. N. Shiryaev. Optimal stopping rules, volume 8 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2008. ISBN 978-3-540-74010-0. Translated from the 1976 Russian second edition by A. B. Aries, Reprint of the 1978 translation.
  • [Stadje(1985)] W. Stadje. On multiple stopping rules. Optimization, 16:401–418, 1985.
  • [Szajowski(1992)] K. Szajowski. Optimal on-line detection of outside observation. J.Stat. Planning and Inference, 30:413–426, 1992.
  • [Szajowski(1996)] K. Szajowski. A two-disorder detection problem. Appl. Math., 24(2):231–241, 1996.
  • [Szajowski(2011)] K. Szajowski. Multi-variate quickest detection of significant change process. In J. S. Baras, J. Katz, and E. Altman, editors, Decision and Game Theory for Security. Second international conference, GameSec 2011, College Park, MD, USA, November 14–15, 2011, volume 7037 of Lecture Notes in Computer Science, pages 56–66, Berlin, 2011. Springer. 10.1007/978-3-642-25280-8_7.
  • [Szajowski(2011)] K. Szajowski. On a random number of disorders. Probab. Math. Stat., 31(1):17–45, 2011. ISSN 0208-4147.
  • [Yakir(1994)] B. Yakir. Optimal detection of a change in distribution when the observations form a Markov chain with a finite state space. In E. Carlstein, H.-G. Müller, and D. Siegmund, editors, Change-point Problems. Papers from the AMS-IMS-SIAM Summer Research Conference held at Mt. Holyoke College, South Hadley, MA, USA, July 11–16, 1992, volume 23 of IMS Lecture Notes-Monograph Series, pages 346–358, Hayward, California, 1994. Institute of Mathematical Statistics.
  • [Yoshida(1983)] M. Yoshida. Probability maximizing approach for a quickest detection problem with complicated Markov chain. J. Inform. Optimization Sci., 4:127–145, 1983.

Received 31st of March 2019; revised 31st of January 2020.