
A unified study for estimation of order restricted location/scale parameters under the generalized Pitman nearness criterion

Naresh Garg and Neeraj Misra
Department of Mathematics and Statistics
Indian Institute of Technology Kanpur
Kanpur-208016, Uttar Pradesh, India

Abstract

We consider component-wise estimation of order restricted location/scale parameters of a general bivariate location/scale distribution under the generalized Pitman nearness (GPN) criterion. We develop some general results that, in many situations, are useful in finding improvements over location/scale equivariant estimators. In particular, under certain conditions, these general results provide improvements over the unrestricted Pitman nearest location/scale equivariant estimators and the restricted maximum likelihood estimators. The usefulness of the obtained results is illustrated through their applications to specific probability models. A simulation study is carried out to compare the performances of various competing estimators under the GPN criterion with a specific loss function.
 
Keywords: Generalised Pitman Nearness (GPN) Criterion; Location equivariant estimator; Pitman Nearness (PN) Criterion; Restricted Parameter Space; Scale equivariant estimator; Unrestricted Parameter Space.

1. Introduction

The problem of estimating real-valued parameters of a set of distributions, when it is known a priori that these parameters follow certain order restrictions, is of great relevance. For example, in a clinical trial where the goal is to estimate the average blood pressures of two groups of hypertension patients, one treated with a standard drug and the other with a placebo, it can be assumed that the average blood pressure of patients treated with the standard drug is lower than that of patients treated with the placebo. Such estimation problems have been extensively studied in the literature. For a detailed account of work carried out in this area, one may refer to Barlow et al. (1972), Robertson et al. (1988) and van Eeden (2006).

Early work on estimation of order restricted parameters deals with obtaining isotonic regression estimators or restricted maximum likelihood estimators (MLEs) (see Brunk (1955), van Eeden (1957, 1958) and Robertson et al. (1988)). Subsequently, a lot of work was carried out using the decision theoretic approach under different loss functions. Some of the key contributions in this direction are due to Katz (1963), Cohen and Sackrowitz (1970), Brewster and Zidek (1974), Lee (1981), Kumar and Sharma (1988, 1989, 1992), Kelly (1989), Kushary and Cohen (1989), Kaur and Singh (1991), Gupta and Singh (1992), Vijayasree and Singh (1993), Hwang and Peddada (1994), Kubokawa and Saleh (1994), Vijayasree et al. (1995), Misra and Dhariyal (1995), Garren (2000), Misra et al. (2002, 2004), Peddada et al. (2005), Chang and Shinozaki (2015) and Patra (2017).

A popular alternative criterion for comparing estimators is the Pitman nearness criterion, due to Pitman (1937). This criterion compares two estimators through the probability that one estimator is closer to the estimand than the other under the $L_{1}$ distance (i.e., the absolute error loss function). Rao (1981) pointed out advantages of the Pitman nearness (PN) criterion over the mean squared error criterion. Keating (1985) further advocated Rao's findings through certain examples. Keating and Mason (1985) provided some practical examples where the PN criterion is more relevant than minimizing the risk function. Also, Peddada (1985) and Rao et al. (1986) extended the notion of the PN criterion by defining the generalized Pitman nearness (GPN) criterion based on a general loss function (in place of the absolute error loss function). For a detailed study of the PN criterion, one may refer to the monograph by Keating et al. (1993).

The PN criterion has been extensively used in the literature for different estimation problems (see Nayak (1990) and Keating (1993)). However, there are only a limited number of studies on the use of the PN criterion in estimating order restricted parameters. For component-wise estimation of order restricted means of two independent normal distributions having a common unknown variance, Gupta and Singh (1992) showed that the restricted MLEs are nearer to the respective population means than the unrestricted MLEs under the PN criterion. An analogous result was also proved for estimation of the common variance, a problem also considered in Misra et al. (2004). Chang et al. (2017, 2020) considered estimation of order restricted means of a bivariate normal distribution having a known covariance matrix, and established that, under a modified PN criterion, the restricted MLEs are nearer to the respective population means than some of the estimators proposed by Hwang and Peddada (1994) and Tan and Peddada (2000). Ma and Liu (2014) considered the problem of estimating order restricted scale parameters of two independent gamma distributions. Under the PN criterion, they compared the restricted MLEs of the scale parameters with the best unbiased estimators. Some other studies in this direction are due to Misra and van der Meulen (1997), Misra et al. (2004) and Chang and Shinozaki (2015). Most of these studies are centered around specific probability distributions (mostly normal and gamma) and the absolute error loss in the PN criterion. In this paper, we aim to unify these studies by considering the problem of estimating order restricted location/scale parameters of a general bivariate location/scale model under the GPN criterion with a general loss function. We develop some general results that, in certain situations, are useful in finding improvements over location/scale equivariant estimators. As a consequence of these general results, we obtain estimators improving upon the unrestricted Pitman nearest location/scale equivariant estimators (PNLEE/PNSEE). We also consider some applications of these general results to specific probability models and obtain improvements over the PNLEE/PNSEE and the restricted MLEs.

The rest of the paper is organized as follows. In Section 2, we introduce some useful notations, definitions, and results that are used later in the paper. Sections 3.1 and 3.2 (4.1 and 4.2), respectively, deal with estimating the smaller and larger location (scale) parameters under the GPN criterion. In Section 5, we present a simulation study to compare performances of various competing estimators.

2. Some Useful Notations, Definitions and Results

The following notion of the Pitman nearness criterion was first introduced by Pitman (1937).
 
Definition 2.1 Let $\mathbb{X}$ be a random vector having a distribution involving an unknown parameter $\boldsymbol{\theta}\in\Theta$ ($\boldsymbol{\theta}$ may be vector valued). Let $\delta_{1}$ and $\delta_{2}$ be two estimators of a real-valued estimand $\tau(\boldsymbol{\theta})$. Then, the Pitman nearness (PN) of $\delta_{1}$ relative to $\delta_{2}$ is defined by

$$PN(\delta_{1},\delta_{2};\boldsymbol{\theta})=P_{\boldsymbol{\theta}}[|\delta_{1}-\tau(\boldsymbol{\theta})|<|\delta_{2}-\tau(\boldsymbol{\theta})|],\quad\boldsymbol{\theta}\in\Theta,$$

and the estimator $\delta_{1}$ is said to be nearer to $\tau(\boldsymbol{\theta})$ than $\delta_{2}$ if $PN(\delta_{1},\delta_{2};\boldsymbol{\theta})\geq\frac{1}{2}$ for all $\boldsymbol{\theta}\in\Theta$, with strict inequality for some $\boldsymbol{\theta}\in\Theta$.

Two drawbacks of the above criterion are that it does not take into account that the estimators $\delta_{1}$ and $\delta_{2}$ may coincide over a subset of the sample space, and that it is based only on the $L_{1}$ distance (absolute error loss). To take care of these deficiencies, Nayak (1990) and Kubokawa (1991) modified the Pitman (1937) nearness criterion and defined the generalized Pitman nearness (GPN) criterion based on a general loss function $L(\boldsymbol{\theta},\delta)$.
 
Definition 2.2 Let $\mathbb{X}$ be a random vector having a distribution involving an unknown parameter $\boldsymbol{\theta}\in\Theta$ and let $\tau(\boldsymbol{\theta})$ be a real-valued estimand. Let $\delta_{1}$ and $\delta_{2}$ be two estimators of the estimand $\tau(\boldsymbol{\theta})$. Also, let $L(\boldsymbol{\theta},a)$ be a specified loss function for estimating $\tau(\boldsymbol{\theta})$. Then, the generalized Pitman nearness (GPN) of $\delta_{1}$ relative to $\delta_{2}$ is defined by

$$GPN(\delta_{1},\delta_{2};\boldsymbol{\theta})=P_{\boldsymbol{\theta}}[L(\boldsymbol{\theta},\delta_{1})<L(\boldsymbol{\theta},\delta_{2})]+\frac{1}{2}P_{\boldsymbol{\theta}}[L(\boldsymbol{\theta},\delta_{1})=L(\boldsymbol{\theta},\delta_{2})],\quad\boldsymbol{\theta}\in\Theta.$$

The estimator $\delta_{1}$ is said to be nearer to $\tau(\boldsymbol{\theta})$ than $\delta_{2}$, under the GPN criterion, if $GPN(\delta_{1},\delta_{2};\boldsymbol{\theta})\geq\frac{1}{2}$ for all $\boldsymbol{\theta}\in\Theta$, with strict inequality for some $\boldsymbol{\theta}\in\Theta$.
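The GPN of Definition 2.2 is straightforward to approximate by simulation. The following minimal Python sketch is purely illustrative (a single normal observation, squared error loss and all names are our own assumptions, not part of the paper's setup):

```python
# Monte Carlo approximation of the GPN of Definition 2.2 (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def gpn(delta1, delta2, theta, n_rep=200_000):
    """Approximate GPN(delta1, delta2; theta) under squared error loss."""
    x = rng.normal(theta, 1.0, size=n_rep)   # one N(theta, 1) draw per replication
    loss1 = (delta1(x) - theta) ** 2
    loss2 = (delta2(x) - theta) ** 2
    # Ties receive weight 1/2, exactly as in Definition 2.2.
    return np.mean(loss1 < loss2) + 0.5 * np.mean(loss1 == loss2)

# Example: a shrinkage estimator versus the raw observation, at theta = 0.2.
print(gpn(lambda x: 0.5 * x, lambda x: x, theta=0.2))
```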
 
Definition 2.3 Let $\mathcal{D}$ be a class of estimators of a real-valued estimand $\tau(\boldsymbol{\theta})$. Let $L(\boldsymbol{\theta},a)$ be a given loss function. Then, an estimator $\delta^{*}$ is said to be the Pitman nearest within the class $\mathcal{D}$ if

$$GPN(\delta^{*},\delta;\boldsymbol{\theta})\geq\frac{1}{2},\quad\forall\;\delta\in\mathcal{D},\;\boldsymbol{\theta}\in\Theta,$$

with strict inequality for some $\boldsymbol{\theta}\in\Theta$.

The following result, famously known as Chebyshev’s inequality, will be used in our study (see Marshall and Olkin (2007)).
 
Proposition 2.1 Let $S$ be a random variable (r.v.) and let $k_{1}(\cdot)$ and $k_{2}(\cdot)$ be real-valued monotonic functions defined on the distributional support of the r.v. $S$. If $k_{1}(\cdot)$ and $k_{2}(\cdot)$ are monotonic functions of the same (opposite) type, then

$$E[k_{1}(S)k_{2}(S)]\geq(\leq)\,E[k_{1}(S)]E[k_{2}(S)],$$

provided the above expectations exist.
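As a quick numerical illustration (our own example, not from the paper): for $S\sim\mathrm{Exp}(1)$ and the same-type increasing functions $k_{1}(s)=s$ and $k_{2}(s)=s^{2}$, one has $E[k_{1}(S)k_{2}(S)]=E[S^{3}]=6$ and $E[k_{1}(S)]E[k_{2}(S)]=E[S]E[S^{2}]=2$, consistent with the stated inequality:

```python
# Numerical check of Proposition 2.1 for k1(s) = s and k2(s) = s^2, S ~ Exp(1).
import numpy as np

s = np.random.default_rng(1).exponential(1.0, size=1_000_000)
print(np.mean(s**3), np.mean(s) * np.mean(s**2))  # approx. 6 >= 2
```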

3. Improved Estimators for Restricted Location Parameters

Let $\mathbb{X}=(X_{1},X_{2})$ be a random vector with a joint probability density function (p.d.f.)

(3.1) $$f_{\boldsymbol{\theta}}(x_{1},x_{2})=f(x_{1}-\theta_{1},x_{2}-\theta_{2}),\quad(x_{1},x_{2})\in\Re^{2},$$

where $f(\cdot,\cdot)$ is a specified Lebesgue p.d.f. on $\Re^{2}$ and $\boldsymbol{\theta}=(\theta_{1},\theta_{2})\in\Theta_{0}=\{(t_{1},t_{2})\in\Re^{2}:t_{1}\leq t_{2}\}$ is the vector of unknown restricted location parameters; here $\Re$ denotes the real line and $\Re^{2}=\Re\times\Re$. Generally, $\mathbb{X}=(X_{1},X_{2})$ would be a minimal sufficient statistic based on a bivariate random sample or two independent random samples, as the case may be.

Consider estimation of the location parameter $\theta_{i}$ under the GPN criterion with a general loss function $L_{i}(\boldsymbol{\theta},a)=W(a-\theta_{i})$, $\boldsymbol{\theta}\in\Theta_{0}$, $a\in\mathcal{A}$, $i=1,2$, where $\mathcal{A}=\Re$ and $W:\Re\rightarrow[0,\infty)$ is a specified non-negative function such that $W(0)=0$, $W(t)$ is strictly decreasing on $(-\infty,0)$ and strictly increasing on $(0,\infty)$. Throughout this section, whenever the term "general loss function" is used, it refers to a loss function having the above properties. Also, in this section, the GPN criterion is considered with a general loss function as described above.

The problem of estimating the restricted location parameter $\theta_{i}$ $(i=1,2)$, under the GPN criterion, is invariant under the group of transformations $\mathcal{G}=\{g_{c}:c\in\Re\}$, where $g_{c}(x_{1},x_{2})=(x_{1}+c,x_{2}+c)$, $(x_{1},x_{2})\in\Re^{2}$, $c\in\Re$. Under the group of transformations $\mathcal{G}$, any location equivariant estimator of $\theta_{i}$ has the form

(3.2) $$\delta_{\psi}(\mathbb{X})=X_{i}-\psi(D),$$

for some function $\psi:\Re\rightarrow\Re$, $i=1,2$, where $D=X_{2}-X_{1}$. Let $f_{D}(t|\lambda)$ be the p.d.f. of the r.v. $D=X_{2}-X_{1}$, where $\lambda=\theta_{2}-\theta_{1}\in[0,\infty)$. Note that the distribution of $D$ depends on $\boldsymbol{\theta}\in\Theta_{0}$ only through $\lambda=\theta_{2}-\theta_{1}\in[0,\infty)$. Exploiting the prior information of the order restriction $\theta_{1}\leq\theta_{2}$, we aim to obtain estimators that are Pitman nearer to $\theta_{i}$, $i=1,2$.

The following lemma will be useful in proving the main results of this section (also see Nayak (1990) and Zhou and Nayak (2012)).
 
Lemma 3.1 Let $Y$ be a r.v. having a Lebesgue p.d.f. and let $m_{Y}$ be the median of $Y$. Let $W:\Re\rightarrow[0,\infty)$ be a function such that $W(0)=0$, $W(t)$ is strictly decreasing on $(-\infty,0)$ and strictly increasing on $(0,\infty)$. Then, for $-\infty<c_{1}<c_{2}\leq m_{Y}$ or $-\infty<m_{Y}\leq c_{2}<c_{1}$,

$$GPN=P[W(Y-c_{2})<W(Y-c_{1})]+\frac{1}{2}P[W(Y-c_{2})=W(Y-c_{1})]>\frac{1}{2}.$$

Proof. We have the following two cases:

Case 1: $-\infty<c_{1}<c_{2}\leq m_{Y}<\infty$

In this case $Y-m_{Y}\leq Y-c_{2}<Y-c_{1}$ and, thus, $Y-m_{Y}\geq 0$ implies that $W(Y-c_{2})<W(Y-c_{1})$. Consequently,

$$GPN\geq P[W(Y-c_{2})<W(Y-c_{1})]>P[Y\geq m_{Y}]=\frac{1}{2}.$$

Case 2: $-\infty<m_{Y}\leq c_{2}<c_{1}<\infty$

In this case $Y-m_{Y}\geq Y-c_{2}>Y-c_{1}$. Thus, $Y-m_{Y}\leq 0$ implies that $W(Y-c_{2})<W(Y-c_{1})$. Hence

$$GPN\geq P[W(Y-c_{2})<W(Y-c_{1})]>P[Y\leq m_{Y}]=\frac{1}{2}.$$ ∎

Note that, in the unrestricted case (parameter space $\Theta=\Re^{2}$), the problem of estimating $\theta_{i}$, $i=1,2$, under the GPN criterion is invariant under the group of transformations $\mathcal{G}_{0}=\{g_{c_{1},c_{2}}:(c_{1},c_{2})\in\Re^{2}\}$, where $g_{c_{1},c_{2}}(x_{1},x_{2})=(x_{1}+c_{1},x_{2}+c_{2})$, $(x_{1},x_{2})\in\Re^{2}$, $(c_{1},c_{2})\in\Re^{2}$. Any location equivariant estimator is of the form $\delta_{i,c}(\mathbb{X})=X_{i}-c$, $c\in\Re$, $i=1,2$. An immediate consequence of Lemma 3.1 is that, under the unrestricted parameter space $\Theta=\Re^{2}$, the Pitman nearest location equivariant estimator (PNLEE) of $\theta_{i}$, under the GPN criterion, is $\delta_{i,PNLEE}(\mathbb{X})=X_{i}-m_{0,i}$, where $m_{0,i}$ is the median of the r.v. $Z_{i}=X_{i}-\theta_{i}$, $i=1,2$.

3.1. Estimation of the Smaller Location Parameter $\theta_{1}$

Let $Z_{1}=X_{1}-\theta_{1}$, $\lambda=\theta_{2}-\theta_{1}$ and let $f_{D}(t|\lambda)$ be the p.d.f. of the r.v. $D=X_{2}-X_{1}$. Let $\delta_{\xi}(\mathbb{X})=X_{1}-\xi(D)$ and $\delta_{\psi}(\mathbb{X})=X_{1}-\psi(D)$ be two location equivariant estimators of $\theta_{1}$, where $\xi$ and $\psi$ are real-valued functions defined on $\Re$. Then, the GPN of $\delta_{\xi}(\mathbb{X})$ relative to $\delta_{\psi}(\mathbb{X})$ is given by

$$GPN(\delta_{\xi},\delta_{\psi};\boldsymbol{\theta})=P_{\boldsymbol{\theta}}[W(Z_{1}-\xi(D))<W(Z_{1}-\psi(D))]+\frac{1}{2}P_{\boldsymbol{\theta}}[W(Z_{1}-\xi(D))=W(Z_{1}-\psi(D))]=\int_{-\infty}^{\infty}g_{1,\lambda}(\xi(t),\psi(t),t)f_{D}(t|\lambda)\,dt,\quad\boldsymbol{\theta}\in\Theta_{0},\;\lambda\geq 0,$$

where, for $\lambda\geq 0$ and fixed $t$ in the support of the distribution of the r.v. $D$,

(3.3) $$g_{1,\lambda}(\xi(t),\psi(t),t)=P_{\boldsymbol{\theta}}[W(Z_{1}-\xi(t))<W(Z_{1}-\psi(t))|D=t]+\frac{1}{2}P_{\boldsymbol{\theta}}[W(Z_{1}-\xi(t))=W(Z_{1}-\psi(t))|D=t].$$

For any fixed $\lambda\geq 0$ and $t$, let $m_{\lambda}^{(1)}(t)$ denote the median of the conditional distribution of $Z_{1}$ given $D=t$. For any fixed $t$, the conditional p.d.f. of $Z_{1}$ given $D=t$ is $f_{\lambda}(s|t)=\frac{f(s,s+t-\lambda)}{f_{D}(t|\lambda)}$, where $f_{D}(t|\lambda)=\int_{-\infty}^{\infty}f(y,y+t-\lambda)\,dy$, $\lambda\geq 0$. Thus, $\int_{-\infty}^{m_{\lambda}^{(1)}(t)}f(s,s+t-\lambda)\,ds=\frac{1}{2}\int_{-\infty}^{\infty}f(s,s+t-\lambda)\,ds$. For any fixed $t$, using Lemma 3.1, we have $g_{1,\lambda}(\xi(t),\psi(t),t)>\frac{1}{2}$ for all $\lambda\geq 0$, if $\psi(t)<\xi(t)\leq m_{\lambda}^{(1)}(t)$ for all $\lambda\geq 0$, or if $m_{\lambda}^{(1)}(t)\leq\xi(t)<\psi(t)$ for all $\lambda\geq 0$. Also, note that, for any fixed $t$, $g_{1,\lambda}(\psi(t),\psi(t),t)=\frac{1}{2}$ for all $\lambda\geq 0$. These observations, along with Lemma 3.1, yield the following result.

Theorem 3.1.1. Let $\delta_{\psi}(\mathbb{X})=X_{1}-\psi(D)$ be a location equivariant estimator of $\theta_{1}$, where $\psi:\Re\rightarrow\Re$. Let $l^{(1)}(t)$ and $u^{(1)}(t)$ be functions such that $l^{(1)}(t)\leq m_{\lambda}^{(1)}(t)\leq u^{(1)}(t)$, for all $\lambda\geq 0$ and any $t$. For any fixed $t$, define $\psi^{*}(t)=\max\{l^{(1)}(t),\min\{\psi(t),u^{(1)}(t)\}\}$. Then, under the GPN criterion with a general loss function, the estimator $\delta_{\psi^{*}}(\mathbb{X})=X_{1}-\psi^{*}(D)$ is Pitman nearer to $\theta_{1}$ than the estimator $\delta_{\psi}(\mathbb{X})=X_{1}-\psi(D)$, for all $\boldsymbol{\theta}\in\Theta_{0}$, provided $P_{\boldsymbol{\theta}}[l^{(1)}(D)\leq\psi(D)\leq u^{(1)}(D)]<1$ for all $\boldsymbol{\theta}\in\Theta_{0}$.

Proof. The GPN of the estimator $\delta_{\psi^{*}}(\mathbb{X})=X_{1}-\psi^{*}(D)$ relative to $\delta_{\psi}(\mathbb{X})=X_{1}-\psi(D)$ can be written as

$$GPN(\delta_{\psi^{*}},\delta_{\psi};\boldsymbol{\theta})=\int_{-\infty}^{\infty}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)\,dt,\quad\lambda\geq 0,$$

where $g_{1,\lambda}(\cdot,\cdot,\cdot)$ is defined by (3.3).

Let $A=\{t:\psi(t)<l^{(1)}(t)\}$, $B=\{t:l^{(1)}(t)\leq\psi(t)\leq u^{(1)}(t)\}$ and $C=\{t:\psi(t)>u^{(1)}(t)\}$. Clearly,

$$\psi^{*}(t)=\begin{cases}l^{(1)}(t),&t\in A\\ \psi(t),&t\in B\\ u^{(1)}(t),&t\in C.\end{cases}$$

Since $l^{(1)}(t)\leq m_{\lambda}^{(1)}(t)\leq u^{(1)}(t)$ for all $\lambda\geq 0$ and $t$, using Lemma 3.1, we have $g_{1,\lambda}(\psi^{*}(t),\psi(t),t)>\frac{1}{2}$ for all $\lambda\geq 0$, provided $t\in A\cup C$. Also, for $t\in B$, $g_{1,\lambda}(\psi^{*}(t),\psi(t),t)=\frac{1}{2}$ for all $\lambda\geq 0$. Since $P_{\boldsymbol{\theta}}(A\cup C)>0$ for all $\boldsymbol{\theta}\in\Theta_{0}$, we have

$$GPN(\delta_{\psi^{*}},\delta_{\psi};\boldsymbol{\theta})=\int_{A}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)\,dt+\int_{B}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)\,dt+\int_{C}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)\,dt>\frac{1}{2},\quad\boldsymbol{\theta}\in\Theta_{0}.$$ ∎
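Computationally, the improvement of Theorem 3.1.1 is a clipping of $\psi(D)$ to the band $[l^{(1)}(D),u^{(1)}(D)]$. A minimal sketch (names and the functional arguments are our own; $l$, $u$ and $\psi$ must be supplied for the model at hand):

```python
# Clipped estimator of Theorem 3.1.1: X1 - max{l(D), min{psi(D), u(D)}}.
import numpy as np

def psi_star(psi, l, u):
    """Return the clipped function t -> max{l(t), min{psi(t), u(t)}}."""
    return lambda t: np.maximum(l(t), np.minimum(psi(t), u(t)))

def improved_estimator(x1, x2, psi, l, u):
    d = x2 - x1                          # D = X2 - X1
    return x1 - psi_star(psi, l, u)(d)   # delta_{psi*}(X) = X1 - psi*(D)
```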

Using arguments similar to those used in the proof of Theorem 3.1.1, one can, in fact, obtain a class of estimators improving over an arbitrary location equivariant estimator, under certain conditions. The proof of the following corollary is contained in the proof of Theorem 3.1.1, and hence skipped.
 
Corollary 3.1.1. Let $\delta_{\psi}(\mathbb{X})=X_{1}-\psi(D)$ be a location equivariant estimator of $\theta_{1}$ such that $P_{\boldsymbol{\theta}}[l^{(1)}(D)\leq\psi(D)\leq u^{(1)}(D)]<1$ for all $\boldsymbol{\theta}\in\Theta_{0}$, where $l^{(1)}(\cdot)$ and $u^{(1)}(\cdot)$ are as defined in Theorem 3.1.1. Let $\psi_{1,0}:\Re\rightarrow\Re$ be such that $\psi(t)<\psi_{1,0}(t)\leq l^{(1)}(t)$ whenever $\psi(t)<l^{(1)}(t)$, and $u^{(1)}(t)\leq\psi_{1,0}(t)<\psi(t)$ whenever $u^{(1)}(t)<\psi(t)$. Also, let $\psi_{1,0}(t)=\psi(t)$ whenever $l^{(1)}(t)\leq\psi(t)\leq u^{(1)}(t)$. Let $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$. Then, $GPN(\delta_{\psi_{1,0}},\delta_{\psi};\boldsymbol{\theta})>\frac{1}{2}$ for all $\boldsymbol{\theta}\in\Theta_{0}$.

Note that the result stated in Corollary 3.1.1 is more general than the one stated in Theorem 3.1.1. However, among all the improved estimators provided through Corollary 3.1.1, the maximum improvement is provided by the one covered under Theorem 3.1.1.

The following corollary provides improvements over the unrestricted PNLEE $\delta_{1,PNLEE}(\mathbb{X})=X_{1}-m_{0,1}$, under the restricted parameter space $\Theta_{0}$.
 
Corollary 3.1.2. Let $l^{(1)}(t)$ and $u^{(1)}(t)$ be as defined in Theorem 3.1.1. Suppose that $P_{\boldsymbol{\theta}}[l^{(1)}(D)\leq m_{0,1}\leq u^{(1)}(D)]<1$ for all $\boldsymbol{\theta}\in\Theta_{0}$. Define, for any fixed $t$, $\xi^{*}(t)=\max\{l^{(1)}(t),\min\{m_{0,1},u^{(1)}(t)\}\}$. Then, for every $\boldsymbol{\theta}\in\Theta_{0}$, the estimator $\delta_{\xi^{*}}(\mathbb{X})=X_{1}-\xi^{*}(D)$ is Pitman nearer to $\theta_{1}$ than the PNLEE $\delta_{1,PNLEE}(\mathbb{X})=X_{1}-m_{0,1}$, under the GPN criterion.

In order to identify $l^{(1)}(t)$ and $u^{(1)}(t)$ satisfying $l^{(1)}(t)\leq m_{\lambda}^{(1)}(t)\leq u^{(1)}(t)$, for all $\lambda\geq 0$ and $t$, the following lemma will be useful.
 
Lemma 3.1.1. If, for every fixed $\lambda\geq 0$ and $t$, $f(s,s+t-\lambda)/f(s,s+t)$ is increasing (decreasing) in $s$ (wherever the ratio is not of the form $0/0$), then, for every fixed $t$, $m_{\lambda}^{(1)}(t)$ is an increasing (decreasing) function of $\lambda\in[0,\infty)$.

Proof. Let us fix $t$, $\lambda_{1}$ and $\lambda_{2}$ such that $0\leq\lambda_{1}<\lambda_{2}<\infty$. Then, the hypothesis of the lemma implies that $f_{\lambda_{2}}(s|t)/f_{\lambda_{1}}(s|t)$ is increasing (decreasing) in $s$. Take $k_{1}(s)=I_{(-\infty,m_{\lambda_{2}}^{(1)}(t))}(s)$ and $k_{2}(s)=f_{\lambda_{2}}(s|t)/f_{\lambda_{1}}(s|t)$, where $I_{A}(\cdot)$ denotes the indicator function of the set $A\subseteq\Re$. Here $k_{1}(s)$ is decreasing in $s$ and $k_{2}(s)$ is increasing (decreasing) in $s$. Using Proposition 2.1, we get

$$\frac{1}{2}=\int_{-\infty}^{\infty}k_{1}(s)k_{2}(s)f_{\lambda_{1}}(s|t)\,ds\leq(\geq)\left(\int_{-\infty}^{\infty}k_{1}(s)f_{\lambda_{1}}(s|t)\,ds\right)\left(\int_{-\infty}^{\infty}k_{2}(s)f_{\lambda_{1}}(s|t)\,ds\right)$$
$$\implies\int_{-\infty}^{m_{\lambda_{2}}^{(1)}(t)}f_{\lambda_{2}}(s|t)\,ds=\int_{-\infty}^{m_{\lambda_{1}}^{(1)}(t)}f_{\lambda_{1}}(s|t)\,ds=\frac{1}{2}\leq(\geq)\int_{-\infty}^{m_{\lambda_{2}}^{(1)}(t)}f_{\lambda_{1}}(s|t)\,ds$$
$$\implies m_{\lambda_{1}}^{(1)}(t)\leq(\geq)\,m_{\lambda_{2}}^{(1)}(t),$$

establishing the assertion. ∎

Under the assumptions of Lemma 3.1.1, for any fixed $t$, one may take

(3.4) $$l^{(1)}(t)=\inf_{\lambda\geq 0}m_{\lambda}^{(1)}(t)=m_{0}^{(1)}(t)\;\big(=\lim_{\lambda\to\infty}m_{\lambda}^{(1)}(t)\big)$$
(3.5) $$\text{and}\quad u^{(1)}(t)=\sup_{\lambda\geq 0}m_{\lambda}^{(1)}(t)=\lim_{\lambda\to\infty}m_{\lambda}^{(1)}(t)\;\big(=m_{0}^{(1)}(t)\big),$$

while applying Theorem 3.1.1 and Corollary 3.1.1; the expressions in parentheses correspond to the case in which $m_{\lambda}^{(1)}(t)$ is decreasing in $\lambda$.

While applying Theorem 3.1.1 and Corollaries 3.1.1-3.1.2, for any fixed $t$, the commonly used choice for $(l^{(1)}(t),u^{(1)}(t))$ is thus $l^{(1)}(t)=\inf_{\lambda\geq 0}m_{\lambda}^{(1)}(t)$ and $u^{(1)}(t)=\sup_{\lambda\geq 0}m_{\lambda}^{(1)}(t)$. We now provide some applications of Theorem 3.1.1 and Corollaries 3.1.1-3.1.2.
 
Example 3.1.1. Let $\mathbb{X}=(X_{1},X_{2})$ follow a bivariate normal distribution with joint p.d.f. (3.1), where, for known positive real numbers $\sigma_{1}$ and $\sigma_{2}$ and known $\rho\in(-1,1)$, the joint p.d.f. of $\mathbb{Z}=(Z_{1},Z_{2})=(X_{1}-\theta_{1},X_{2}-\theta_{2})$ is

$$f(z_{1},z_{2})=\frac{1}{2\pi\sigma_{1}\sigma_{2}\sqrt{1-\rho^{2}}}\,e^{-\frac{1}{2(1-\rho^{2})}\left[\frac{z_{1}^{2}}{\sigma_{1}^{2}}-2\rho\frac{z_{1}z_{2}}{\sigma_{1}\sigma_{2}}+\frac{z_{2}^{2}}{\sigma_{2}^{2}}\right]},\quad(z_{1},z_{2})\in\Re^{2}.$$

Consider estimation of $\theta_{1}$ under the GPN criterion with a general loss function (i.e., $L_{1}(\boldsymbol{\theta},a)=W(a-\theta_{1})$, $\boldsymbol{\theta}\in\Theta_{0}$, $a\in\Re$, where $W(0)=0$, $W(t)$ is strictly decreasing on $(-\infty,0)$ and strictly increasing on $(0,\infty)$). In this case, the PNLEE is $\delta_{1,PNLEE}(\mathbb{X})=X_{1}$. Also, for any fixed $t\in\Re$ and $\lambda\geq 0$, the conditional distribution of $Z_{1}$ given $D=t$ is $N\left((1-\alpha)(\lambda-t),\frac{(1-\rho^{2})\sigma_{1}^{2}\sigma_{2}^{2}}{\tau^{2}}\right)$, where $\tau^{2}=\sigma_{1}^{2}+\sigma_{2}^{2}-2\rho\sigma_{1}\sigma_{2}$ and $\alpha=\frac{\sigma_{2}(\sigma_{2}-\rho\sigma_{1})}{\tau^{2}}$.

Here the restricted MLE is $\delta_{1,RMLE}(\mathbb{X})=X_{1}-(1-\alpha)\max\{0,-D\}$. Hwang and Peddada (1994) and Tan and Peddada (2000) proposed alternative estimators of $\theta_{1}$ given by $\delta_{1,HP}(\mathbb{X})=\min\{X_{1},\alpha X_{1}+(1-\alpha)X_{2}\}=X_{1}-\max\{0,(\alpha-1)D\}$ and $\delta_{1,PDT}(\mathbb{X})=\min\{X_{1},\beta(\alpha)X_{1}+(1-\beta(\alpha))X_{2}\}=X_{1}-(1-\beta(\alpha))\max\{0,-D\}$, respectively, where $\beta(\alpha)=\min\{1,\max\{0,\alpha\}\}$, $\alpha\in\Re$.

The median of the conditional distribution of the random variable $Z_{1}$, given $D=t$, is $m_{\lambda}^{(1)}(t)=(1-\alpha)(\lambda-t)$, $t\in\Re$, $\lambda\geq 0$. Clearly, $m_{\lambda}^{(1)}(t)$ is increasing in $\lambda\in[0,\infty)$ if $\alpha<1$, and decreasing in $\lambda\in[0,\infty)$ if $\alpha>1$. Thus, for $t\in\Re$, as in (3.4) and (3.5), we may take

$$l^{(1)}(t)=\inf_{\lambda\geq 0}m_{\lambda}^{(1)}(t)=\begin{cases}-(1-\alpha)t,&\alpha\leq 1\\ -\infty,&\alpha>1\end{cases}\qquad\text{and}\qquad u^{(1)}(t)=\sup_{\lambda\geq 0}m_{\lambda}^{(1)}(t)=\begin{cases}\infty,&\alpha<1\\ -(1-\alpha)t,&\alpha\geq 1.\end{cases}$$

Consider the following cases:

Case-I: $0\leq\alpha<1$

In this case, the Hwang and Peddada (1994) estimator $\delta_{1,HP}(\mathbb{X})$, the Tan and Peddada (2000) estimator $\delta_{1,PDT}(\mathbb{X})$ and the restricted MLE $\delta_{1,RMLE}(\mathbb{X})$ coincide, and no improvement over these estimators is possible using our results.

We have $l^{(1)}(t)=-(1-\alpha)t$ and $u^{(1)}(t)=\infty$, $t\in\Re$. Use of Theorem 3.1.1 and Corollary 3.1.1 leads us to the following conclusions:

(i) The estimator $\delta_{1,RMLE}(\mathbb{X})=X_{1}-(1-\alpha)\max\{0,-D\}$ $(=\delta_{1,HP}(\mathbb{X})=\delta_{1,PDT}(\mathbb{X}))$ is nearer to $\theta_{1}$ than the estimator $\delta_{1,PNLEE}(\mathbb{X})=X_{1}$.

(ii) Let $\psi_{1,0}(\cdot)$ be such that $0<\psi_{1,0}(t)\leq-(1-\alpha)t$ for all $t<0$, and $\psi_{1,0}(t)=0$ for all $t\geq 0$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,PNLEE}(\mathbb{X})$. In particular, for the choice

$$\psi_{1,\nu}(t)=\begin{cases}-(1-\nu)t,&t\leq 0\\ 0,&t>0,\end{cases}\qquad\alpha\leq\nu<1,$$

the estimator $\delta_{\psi_{1,\nu}}(\mathbb{X})=X_{1}-\psi_{1,\nu}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,PNLEE}(\mathbb{X})$.
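Conclusion (i) of Case-I is easy to check by simulation. A sketch under the illustrative assumptions $\sigma_{1}=\sigma_{2}=1$, $\rho=0$ (so that $\alpha=1/2$) and $W(t)=|t|$:

```python
# Monte Carlo check of Case-I (i): delta_{1,RMLE} versus delta_{1,PNLEE} = X1.
import numpy as np

rng = np.random.default_rng(2)
theta1, theta2, alpha, n = 0.0, 0.3, 0.5, 1_000_000   # any theta1 <= theta2
x1 = rng.normal(theta1, 1.0, n)
x2 = rng.normal(theta2, 1.0, n)
rmle = x1 - (1 - alpha) * np.maximum(0.0, -(x2 - x1)) # X1 - (1-alpha) max{0, -D}
l1, l2 = np.abs(rmle - theta1), np.abs(x1 - theta1)
print(np.mean(l1 < l2) + 0.5 * np.mean(l1 == l2))     # should exceed 1/2
```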

Case-II: $\alpha=1$

In this case, $\delta_{1,RMLE}(\mathbb{X})=\delta_{1,HP}(\mathbb{X})=\delta_{1,PDT}(\mathbb{X})=\delta_{1,PNLEE}(\mathbb{X})=X_{1}$. Also, $l^{(1)}(t)=u^{(1)}(t)=0$ for all $t\in\Re$. Each of the above estimators is nearer to $\theta_{1}$ than any other location equivariant estimator.

Case-III: $\alpha>1$

In this case, $\delta_{1,RMLE}(\mathbb{X})=X_{1}-(1-\alpha)\max\{0,-D\}$, $\delta_{1,HP}(\mathbb{X})=X_{1}-(\alpha-1)\max\{0,D\}$ and $\delta_{1,PDT}(\mathbb{X})=\delta_{1,PNLEE}(\mathbb{X})=X_{1}$. Also, $l^{(1)}(t)=-\infty$ and $u^{(1)}(t)=-(1-\alpha)t$ for all $t\in\Re$. We have the following consequences of Theorem 3.1.1 and Corollary 3.1.1:

(i) The estimators $\delta_{1,RMLE}(\mathbb{X})$ and $\delta_{1,HP}(\mathbb{X})$ are nearer to $\theta_{1}$ than the estimator $\delta_{1,PDT}(\mathbb{X})$ $(=\delta_{1,PNLEE}(\mathbb{X}))$.

(ii) The estimator $\delta_{1,HP}^{*}(\mathbb{X})=\alpha X_{1}+(1-\alpha)X_{2}$ is nearer to $\theta_{1}$ than the estimator $\delta_{1,HP}(\mathbb{X})$.

(iii) Let $\psi_{1,0}(\cdot)$ be such that $-(1-\alpha)t\leq\psi_{1,0}(t)<0$ for all $t<0$, and $\psi_{1,0}(t)=0$ for all $t\geq 0$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,PNLEE}(\mathbb{X})$. In particular, for the choice

$$\psi_{1,\nu}(t)=\begin{cases}-(1-\nu)t,&t\leq 0\\ 0,&t\geq 0,\end{cases}\qquad 1<\nu\leq\alpha,$$

the estimator $\delta_{\psi_{1,\nu}}(\mathbb{X})=X_{1}-\psi_{1,\nu}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,PNLEE}(\mathbb{X})$.

(iv) Let $\psi_{1,0}(\cdot)$ be such that $-(1-\alpha)t\leq\psi_{1,0}(t)\leq 0$ for all $t<0$, and $\psi_{1,0}(t)=-(1-\alpha)t$ for all $t\geq 0$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,HP}(\mathbb{X})$. In particular, for the choice

$$\psi_{1,\nu}(t)=\begin{cases}-(1-\nu)t,&t\leq 0\\ -(1-\alpha)t,&t>0,\end{cases}\qquad 1<\nu\leq\alpha,$$

the estimator

$$\delta_{\psi_{1,\nu}}(\mathbb{X})=\begin{cases}\nu X_{1}+(1-\nu)X_{2},&X_{1}\geq X_{2}\\ \alpha X_{1}+(1-\alpha)X_{2},&X_{1}<X_{2}\end{cases}$$

is nearer to $\theta_{1}$ than $\delta_{1,HP}(\mathbb{X})$.

Case-IV: $\alpha<0$

Here $\delta_{1,RMLE}(\mathbb{X})=\delta_{1,HP}(\mathbb{X})=X_{1}-(1-\alpha)\max\{0,-D\}$ and $\delta_{1,PDT}(\mathbb{X})=X_{1}-\max\{0,-D\}$. Also, we have $l^{(1)}(t)=-(1-\alpha)t$ and $u^{(1)}(t)=\infty$ for all $t\in\Re$. The following observations are evident from Theorem 3.1.1 and Corollary 3.1.1:

(i) The estimators $\delta_{1,RMLE}(\mathbb{X})$ $(=\delta_{1,HP}(\mathbb{X}))$ and $\delta_{1,PDT}(\mathbb{X})$ are nearer to $\theta_{1}$ than the estimator $\delta_{1,PNLEE}(\mathbb{X})$.

(ii) The estimator $\delta_{1,RMLE}(\mathbb{X})$ $(=\delta_{1,HP}(\mathbb{X}))$ is nearer to $\theta_{1}$ than $\delta_{1,PDT}(\mathbb{X})$.

(iii) Let $\psi_{1,0}(\cdot)$ be such that $0<\psi_{1,0}(t)\leq-(1-\alpha)t$ for all $t<0$, and $\psi_{1,0}(t)=0$ for all $t\geq 0$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,PNLEE}(\mathbb{X})=X_{1}$. In particular, for the choice

$$\psi_{1,\nu}(t)=\begin{cases}-(1-\nu)t,&t\leq 0\\ 0,&t\geq 0,\end{cases}\qquad\alpha\leq\nu<0,$$

the estimator

$$\delta_{\psi_{1,\nu}}(\mathbb{X})=\begin{cases}\nu X_{1}+(1-\nu)X_{2},&X_{1}>X_{2}\\ X_{1},&X_{1}\leq X_{2}\end{cases}$$

is nearer to $\theta_{1}$ than $\delta_{1,PNLEE}(\mathbb{X})=X_{1}$.

(iv) Let $\psi_{1,0}(\cdot)$ be such that $-t\leq\psi_{1,0}(t)\leq-(1-\alpha)t$ for all $t<0$, and $\psi_{1,0}(t)=0$ for all $t\geq 0$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than $\delta_{1,PDT}(\mathbb{X})$. In particular, for the choice

$$\psi_{1,\nu}(t)=\begin{cases}-(1-\nu)t,&t\leq 0\\ 0,&t\geq 0,\end{cases}\qquad\alpha\leq\nu<0,$$

the estimator

$$\delta_{\psi_{1,\nu}}(\mathbb{X})=\begin{cases}\nu X_{1}+(1-\nu)X_{2},&X_{1}>X_{2}\\ X_{1},&X_{1}\leq X_{2}\end{cases}$$

is nearer to $\theta_{1}$ than $\delta_{1,PDT}(\mathbb{X})$.

For the bivariate normal model, some of the above results have also been reported in Chang et al. (2017, 2020) for the specific choice $W(t)=|t|$, $t\in\Re$. The findings reported in the above example hold for any general $W(\cdot)$ such that $W(0)=0$, $W(t)$ is strictly increasing on $(0,\infty)$ and strictly decreasing on $(-\infty,0)$.

Example 3.1.2. Let $X_{1}$ and $X_{2}$ be independent random variables with joint p.d.f. $f(x_{1}-\theta_{1},x_{2}-\theta_{2})=\frac{1}{\sigma_{1}\sigma_{2}}\,e^{-\frac{x_{1}-\theta_{1}}{\sigma_{1}}}e^{-\frac{x_{2}-\theta_{2}}{\sigma_{2}}}$, $x_{1}>\theta_{1}$, $x_{2}>\theta_{2}$, $(\theta_{1},\theta_{2})\in\Theta_{0}$, where $\sigma_{1}$ and $\sigma_{2}$ are known positive constants. Here the PNLEE is $\delta_{1,PNLEE}(\mathbb{X})=X_{1}-\sigma_{1}\ln(2)$ and the restricted MLE is $\delta_{1,RMLE}(\mathbb{X})=\min\{X_{1},X_{2}\}=X_{1}-\max\{0,-D\}$.

For any fixed $t\in\Re$, the conditional p.d.f. of $Z_{1}$ given $D=t$ is

$$f_{\lambda}(s|t)=\frac{\sigma_{1}+\sigma_{2}}{\sigma_{1}\sigma_{2}}\,e^{-\left(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\right)(s-\max\{-t+\lambda,0\})},\quad s\geq\max\{-t+\lambda,0\},$$

so that

$$m_{\lambda}^{(1)}(t)=\max\{-t+\lambda,0\}+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2),\quad t\in\Re.$$

Clearly, for every fixed $t\in\Re$, the median $m_{\lambda}^{(1)}(t)$ is an increasing function of $\lambda\in[0,\infty)$ (this also follows from Lemma 3.1.1 since, for every fixed $\lambda\geq 0$ and $t\in\Re$, $f(s,s+t-\lambda)/f(s,s+t)$ is increasing in $s\in(0,\infty)$). Thus, we may take $l^{(1)}(t)=\inf_{\lambda\geq 0}m_{\lambda}^{(1)}(t)=\max\{0,-t\}+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$ and $u^{(1)}(t)=\sup_{\lambda\geq 0}m_{\lambda}^{(1)}(t)=\infty$, $t\in\Re$.

The following conclusions immediately follow from Theorem 3.1.1 and Corollary 3.1.1:
 
(i) The estimator $\delta_{1,PNLEE}^{*}(\mathbb{X})=\min\big\{X_{2}-\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2),\,X_{1}-\sigma_{1}\ln(2)\big\}$ is nearer to $\theta_{1}$ than the PNLEE $\delta_{1,PNLEE}(\mathbb{X})$.
 
(ii) The estimator $\delta_{1,RMLE}^{*}(\mathbb{X})=\min\{X_{1},X_{2}\}-\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$ is nearer to $\theta_{1}$ than the restricted MLE $\delta_{1,RMLE}$.
 
(iii) Let $\psi_{1,0}(t)$ be such that $\sigma_{1}\ln(2)<\psi_{1,0}(t)\leq-t+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$ for all $t\leq\frac{-\sigma_{1}^{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$, and $\psi_{1,0}(t)=\sigma_{1}\ln(2)$ for all $t>\frac{-\sigma_{1}^{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than the PNLEE $\delta_{1,PNLEE}(\mathbb{X})=X_{1}-\sigma_{1}\ln(2)$.
 
(iv) Let $\psi_{1,0}(t)$ be such that $-t\leq\psi_{1,0}(t)\leq-t+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$ for all $t\leq 0$, and $0\leq\psi_{1,0}(t)\leq\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$ for all $t>0$. Then the estimator $\delta_{\psi_{1,0}}(\mathbb{X})=X_{1}-\psi_{1,0}(D)$ is nearer to $\theta_{1}$ than the restricted MLE $\delta_{1,RMLE}(\mathbb{X})=\min\{X_{1},X_{2}\}$.
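Conclusion (i) above can be verified numerically as well. A sketch under the illustrative assumptions $\sigma_{1}=\sigma_{2}=1$ and $W(t)=|t|$:

```python
# Monte Carlo check of Example 3.1.2 (i): min{X2 - ln(2)/2, X1 - ln 2} vs X1 - ln 2.
import numpy as np

rng = np.random.default_rng(3)
theta1, theta2, n = 0.0, 0.2, 1_000_000
x1 = theta1 + rng.exponential(1.0, n)
x2 = theta2 + rng.exponential(1.0, n)
pnlee = x1 - np.log(2.0)
improved = np.minimum(x2 - 0.5 * np.log(2.0), pnlee)  # delta*_{1,PNLEE}
l1, l2 = np.abs(improved - theta1), np.abs(pnlee - theta1)
print(np.mean(l1 < l2) + 0.5 * np.mean(l1 == l2))     # should exceed 1/2
```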

It is worth mentioning that the findings of Theorem 3.1.1 and Corollaries 3.1.1-3.1.2, and hence those of Examples 3.1.1 and 3.1.2, hold under any general loss function $L_{1}(\boldsymbol{\theta},a)=W(a-\theta_{1})$, $\boldsymbol{\theta}\in\Theta_{0}$, $a\in\mathcal{A}=\Re$, where $W:\Re\rightarrow[0,\infty)$ is such that $W(0)=0$, $W(t)$ is strictly decreasing on $(-\infty,0)$ and strictly increasing on $(0,\infty)$.

3.2. Estimation of the Larger Location Parameter $\theta_{2}$

Consider estimation of the larger location parameter $\theta_{2}$ under the GPN criterion with the general loss function $L_{2}(\boldsymbol{\theta},a)=W(a-\theta_{2})$, $\boldsymbol{\theta}\in\Theta_{0}$, $a\in\mathcal{A}=\Re$, when it is known a priori that $\boldsymbol{\theta}\in\Theta_{0}$. Any location equivariant estimator of $\theta_{2}$ has the form $\delta_{\psi}(\mathbb{X})=X_{2}-\psi(D)$, for some function $\psi:\Re\rightarrow\Re$, where $D=X_{2}-X_{1}$.

Let $Z_{2}=X_{2}-\theta_{2}$, $\lambda=\theta_{2}-\theta_{1}$ and let $f_{D}(t|\lambda)$ be the p.d.f. of the r.v. $D=X_{2}-X_{1}$. Let $\delta_{\xi}(\mathbb{X})=X_{2}-\xi(D)$ and $\delta_{\psi}(\mathbb{X})=X_{2}-\psi(D)$ be two location equivariant estimators of $\theta_{2}$. Then, the GPN of $\delta_{\xi}(\mathbb{X})$ relative to $\delta_{\psi}(\mathbb{X})$ is given by

$$GPN(\delta_{\xi},\delta_{\psi};\boldsymbol{\theta})=\int_{-\infty}^{\infty}g_{2,\lambda}(\xi(t),\psi(t),t)f_{D}(t|\lambda)\,dt,\quad\lambda\geq 0,$$

where, for $\lambda\geq 0$,

$$g_{2,\lambda}(\xi(t),\psi(t),t)=P_{\boldsymbol{\theta}}[W(Z_{2}-\xi(t))<W(Z_{2}-\psi(t))|D=t]+\frac{1}{2}P_{\boldsymbol{\theta}}[W(Z_{2}-\xi(t))=W(Z_{2}-\psi(t))|D=t].$$

Let $m_{\lambda}^{(2)}(t)$ denote the median of the conditional distribution of $Z_{2}$ given $D=t$, where $\lambda\geq 0$ and $t$ belongs to the support of the r.v. $D$. For any fixed $t\in\Re$, the conditional p.d.f. of $Z_{2}$ given $D=t$ is $f_{\lambda}(s|t)=\frac{f(s-t+\lambda,s)}{f_{D}(t|\lambda)}$, where $f_{D}(t|\lambda)=\int_{-\infty}^{\infty}f(y-t+\lambda,y)\,dy$, $\lambda\geq 0$. Thus, $\int_{-\infty}^{m_{\lambda}^{(2)}(t)}f(s-t+\lambda,s)\,ds=\frac{1}{2}\int_{-\infty}^{\infty}f(s-t+\lambda,s)\,ds$. For any fixed $t$ and $\lambda\geq 0$, using Lemma 3.1, we have $g_{2,\lambda}(\xi(t),\psi(t),t)>\frac{1}{2}$, provided $\psi(t)<\xi(t)\leq m_{\lambda}^{(2)}(t)$ or $m_{\lambda}^{(2)}(t)\leq\xi(t)<\psi(t)$. Also, for any fixed $t$, $g_{2,\lambda}(\psi(t),\psi(t),t)=\frac{1}{2}$ for all $\lambda\geq 0$. Now, using arguments similar to the ones used in proving Theorem 3.1.1, we get the following results.

Theorem 3.2.1. Let $\delta_{\psi}(\mathbb{X})=X_{2}-\psi(D)$ be a location equivariant estimator of $\theta_{2}$. Let $l^{(2)}(t)$ and $u^{(2)}(t)$ be functions such that $l^{(2)}(t)\leq m_{\lambda}^{(2)}(t)\leq u^{(2)}(t)$, for all $\lambda\geq 0$ and any $t$. For any fixed $t$, define $\psi^{*}(t)=\max\{l^{(2)}(t),\min\{\psi(t),u^{(2)}(t)\}\}$ and let $\delta_{\psi^{*}}(\mathbb{X})=X_{2}-\psi^{*}(D)$. Then, $GPN(\delta_{\psi^{*}},\delta_{\psi};\boldsymbol{\theta})>\frac{1}{2}$ for all $\boldsymbol{\theta}\in\Theta_{0}$, provided $P_{\boldsymbol{\theta}}[l^{(2)}(D)\leq\psi(D)\leq u^{(2)}(D)]<1$ for all $\boldsymbol{\theta}\in\Theta_{0}$.

Corollary 3.2.1. Let $\delta_{\psi}(\mathbb{X})=X_{2}-\psi(D)$ be a location equivariant estimator of $\theta_{2}$ such that $P_{\boldsymbol{\theta}}[l^{(2)}(D)\leq\psi(D)\leq u^{(2)}(D)]<1$ for all $\boldsymbol{\theta}\in\Theta_{0}$, where $l^{(2)}(\cdot)$ and $u^{(2)}(\cdot)$ are as defined in Theorem 3.2.1. Let $\psi_{2,0}:\Re\rightarrow\Re$ be such that $\psi(t)<\psi_{2,0}(t)\leq l^{(2)}(t)$ whenever $\psi(t)<l^{(2)}(t)$, and $u^{(2)}(t)\leq\psi_{2,0}(t)<\psi(t)$ whenever $u^{(2)}(t)<\psi(t)$. Also, let $\psi_{2,0}(t)=\psi(t)$ whenever $l^{(2)}(t)\leq\psi(t)\leq u^{(2)}(t)$. Let $\delta_{\psi_{2,0}}(\mathbb{X})=X_{2}-\psi_{2,0}(D)$. Then, $GPN(\delta_{\psi_{2,0}},\delta_{\psi};\boldsymbol{\theta})>\frac{1}{2}$ for all $\boldsymbol{\theta}\in\Theta_{0}$.

The following corollary provides improvements over the PNLEE $\delta_{2,PNLEE}(\mathbb{X})=X_{2}-m_{0,2}$, under the restricted parameter space $\Theta_{0}$.
 
Corollary 3.2.2. Let $\xi^{*}(t)=\max\{l^{(2)}(t),\min\{m_{0,2},u^{(2)}(t)\}\}$, $t\in\Re$, where $l^{(2)}(\cdot)$ and $u^{(2)}(\cdot)$ are as defined in Theorem 3.2.1, and let $\delta_{\xi^{*}}(\mathbb{X})=X_{2}-\xi^{*}(D)$. Then, $GPN(\delta_{\xi^{*}},\delta_{2,PNLEE};\boldsymbol{\theta})>\frac{1}{2}$ for all $\boldsymbol{\theta}\in\Theta_{0}$, provided $P_{\boldsymbol{\theta}}[l^{(2)}(D)\leq m_{0,2}\leq u^{(2)}(D)]<1$ for all $\boldsymbol{\theta}\in\Theta_{0}$.

The following lemma describes the behaviour of $m_{\lambda}^{(2)}(t)$, for any fixed $t$. The proof of the lemma, being similar to the proof of Lemma 3.1.1, is skipped.

Lemma 3.2.1. If, for every fixed $\lambda\geq 0$ and $t$, $f(s-t+\lambda,s)/f(s-t,s)$ is increasing (decreasing) in $s$ (wherever the ratio is not of the form $0/0$), then, for every fixed $t$, $m_{\lambda}^{(2)}(t)$ is an increasing (decreasing) function of $\lambda\in[0,\infty)$.

Under the assumptions of Lemma 3.2.1, one may take, for any fixed $t$,

(3.6) $$l^{(2)}(t)=\inf_{\lambda\geq 0}m_{\lambda}^{(2)}(t)=m_{0}^{(2)}(t)\;\big(=\lim_{\lambda\to\infty}m_{\lambda}^{(2)}(t)\big)$$
(3.7) $$\text{and}\quad u^{(2)}(t)=\sup_{\lambda\geq 0}m_{\lambda}^{(2)}(t)=\lim_{\lambda\to\infty}m_{\lambda}^{(2)}(t)\;\big(=m_{0}^{(2)}(t)\big),$$

while applying Theorem 3.2.1 and Corollary 3.2.1; the expressions in parentheses correspond to the case in which $m_{\lambda}^{(2)}(t)$ is decreasing in $\lambda$.

As in Section 3.1, we now apply Theorem 3.2.1 and Corollaries 3.2.1-3.2.2 to the estimation of the larger location parameter $\theta_{2}$ in the probability models considered in Examples 3.1.1-3.1.2.
 
Example 3.2.1. Let $\mathbb{X}=(X_{1},X_{2})$ have a bivariate normal distribution as described in Example 3.1.1. Consider estimation of $\theta_{2}$ under the GPN criterion with a general loss function $L_{2}(\boldsymbol{\theta},a)=W(a-\theta_{2})$, $\boldsymbol{\theta}\in\Theta_{0}$, $a\in\Re$, where $W(0)=0$, $W(t)$ is strictly decreasing on $(-\infty,0)$ and strictly increasing on $(0,\infty)$. Here, for any fixed $t\in\Re$, $Z_{2}|D=t\sim N\left(\alpha(t-\lambda),\frac{(1-\rho^{2})\sigma_{1}^{2}\sigma_{2}^{2}}{\tau^{2}}\right)$, where $\tau^{2}=\sigma_{1}^{2}+\sigma_{2}^{2}-2\rho\sigma_{1}\sigma_{2}$ and $\alpha=\frac{\sigma_{2}(\sigma_{2}-\rho\sigma_{1})}{\tau^{2}}$. Thus, for $\lambda\geq 0$, $m_{\lambda}^{(2)}(t)=\alpha(t-\lambda)$, $t\in\Re$, and, as in (3.6) and (3.7), we may take

$$l^{(2)}(t)=\begin{cases}\alpha t,&\alpha\leq 0\\ -\infty,&\alpha>0\end{cases}\qquad\text{and}\qquad u^{(2)}(t)=\begin{cases}\infty,&\alpha\leq 0\\ \alpha t,&\alpha>0.\end{cases}$$

The unrestricted PNLEE of $\theta_{2}$ is $\delta_{2,PNLEE}(\mathbb{X})=X_{2}$ (as $m_{0,2}=0$) and the restricted MLE of $\theta_{2}$ is $\delta_{2,RMLE}(\mathbb{X})=X_{2}+\alpha\max\{0,-D\}$. Hwang and Peddada (1994) and Tan and Peddada (2000) proposed alternative estimators of $\theta_{2}$ given by $\delta_{2,HP}(\mathbb{X})=X_{2}+\max\{0,-\alpha D\}$ and $\delta_{2,PDT}(\mathbb{X})=X_{2}+\beta(\alpha)\max\{0,-D\}$, respectively, where $\beta(\alpha)=\min\{1,\max\{0,\alpha\}\}$, $\alpha\in\Re$.

Using Corollary 3.2.2, we conclude that, under the GPN criterion, the estimator

$$\delta_{2,PNLEE}^{*}(\mathbb{X})=\delta_{2,RMLE}^{*}(\mathbb{X})=\begin{cases}\min\{X_{2},\alpha X_{1}+(1-\alpha)X_{2}\},&\alpha\leq 0\\ \max\{X_{2},\alpha X_{1}+(1-\alpha)X_{2}\},&\alpha>0\end{cases}$$

is nearer to $\theta_{2}$ than $\delta_{2,PNLEE}(\mathbb{X})=X_{2}$. Along the same lines as in Example 3.1.1, estimators dominating $\delta_{2,HP}(\mathbb{X})$ and $\delta_{2,PDT}(\mathbb{X})$ can be obtained in certain situations.
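A simulation sketch of this dominance, under the illustrative assumptions $\sigma_{1}=\sigma_{2}=1$, $\rho=0$ (so that $\alpha=1/2>0$) and $W(t)=|t|$:

```python
# Monte Carlo check of Example 3.2.1: max{X2, (X1+X2)/2} vs delta_{2,PNLEE} = X2.
import numpy as np

rng = np.random.default_rng(4)
theta1, theta2, n = 0.0, 0.1, 1_000_000
x1 = rng.normal(theta1, 1.0, n)
x2 = rng.normal(theta2, 1.0, n)
improved = np.maximum(x2, 0.5 * (x1 + x2))        # the alpha = 1/2 > 0 case
l1, l2 = np.abs(improved - theta2), np.abs(x2 - theta2)
print(np.mean(l1 < l2) + 0.5 * np.mean(l1 == l2)) # should exceed 1/2
```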

Example 3.2.2. Let $X_{1}$ and $X_{2}$ be independent exponential random variables as considered in Example 3.1.2. Consider estimation of $\theta_{2}$ under the GPN criterion. Here the PNLEE of $\theta_{2}$ is $\delta_{2,PNLEE}(\mathbb{X})=X_{2}-\sigma_{2}\ln(2)$ and the restricted MLE of $\theta_{2}$ is $\delta_{2,RMLE}(\mathbb{X})=X_{2}$. Also, for any fixed $\lambda\geq 0$ and $t\in\Re$, the conditional p.d.f. of $Z_{2}$ given $D=t$ is

$$f_{\lambda}(s|t)=\frac{\sigma_{1}+\sigma_{2}}{\sigma_{1}\sigma_{2}}\,e^{-\left(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\right)(s-\max\{t-\lambda,0\})},\quad s\geq\max\{t-\lambda,0\}.$$

Consequently,

$$m_{\lambda}^{(2)}(t)=\max\{0,t-\lambda\}+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2),\quad t\in\Re,\;\lambda\geq 0,$$

and, as in (3.6) and (3.7), we may take $l^{(2)}(t)=\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$, $t\in\Re$, and $u^{(2)}(t)=\max\{t,0\}+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$, $t\in\Re$.

The following conclusions are evident from Theorem 3.2.1 and Corollary 3.2.1:
 
(i) The estimator $\delta_{2,RMLE}^{*}(\mathbb{X})=X_{2}-\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$ is nearer to $\theta_{2}$ than the restricted MLE $\delta_{2,RMLE}(\mathbb{X})=X_{2}$.
 
(ii) The estimator

$$\delta_{2,PNLEE}^{*}(\mathbb{X})=\begin{cases}X_{2}-\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2),&\text{if }X_{2}<X_{1}\\ X_{1}-\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2),&\text{if }X_{1}\leq X_{2}<X_{1}+\frac{\sigma_{2}^{2}}{\sigma_{1}+\sigma_{2}}\ln(2)\\ X_{2}-\sigma_{2}\ln(2),&\text{if }X_{2}\geq X_{1}+\frac{\sigma_{2}^{2}}{\sigma_{1}+\sigma_{2}}\ln(2)\end{cases}$$

is nearer to $\theta_{2}$ than $\delta_{2,PNLEE}(\mathbb{X})$.
 
(iii) Let $\psi_{2,0}(t)$ be such that $\max\{0,t\}+\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\ln(2)\leq\psi_{2,0}(t)<\sigma_{2}\ln(2)$ for all $t\leq\frac{\sigma_{2}^{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$, and $\psi_{2,0}(t)=\sigma_{2}\ln(2)$ for all $t>\frac{\sigma_{2}^{2}}{\sigma_{1}+\sigma_{2}}\ln(2)$. Then the estimator $\delta_{\psi_{2,0}}(\mathbb{X})=X_{2}-\psi_{2,0}(D)$ is nearer to $\theta_{2}$ than the PNLEE $\delta_{2,PNLEE}(\mathbb{X})=X_{2}-\sigma_{2}\ln(2)$.
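Conclusion (ii) can also be checked by simulation. A sketch under the illustrative assumptions $\sigma_{1}=\sigma_{2}=1$ and $W(t)=|t|$:

```python
# Monte Carlo check of Example 3.2.2 (ii): the three-piece estimator vs X2 - ln 2.
import numpy as np

rng = np.random.default_rng(5)
theta1, theta2, n = 0.0, 0.2, 1_000_000
x1 = theta1 + rng.exponential(1.0, n)
x2 = theta2 + rng.exponential(1.0, n)
c = 0.5 * np.log(2.0)    # sigma1*sigma2/(sigma1+sigma2) * ln 2 when sigma1 = sigma2 = 1
pnlee = x2 - np.log(2.0)
improved = np.where(x2 < x1, x2 - c,
                    np.where(x2 < x1 + c, x1 - c, pnlee))
l1, l2 = np.abs(improved - theta2), np.abs(pnlee - theta2)
print(np.mean(l1 < l2) + 0.5 * np.mean(l1 == l2))   # should exceed 1/2
```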

4. Improved Estimators for Restricted Scale Parameters

Let $\mathbb{X}=(X_{1},X_{2})$ be a random vector having a joint p.d.f.

(4.1) $$f_{\boldsymbol{\theta}}(x_{1},x_{2})=\frac{1}{\theta_{1}\theta_{2}}f\left(\frac{x_{1}}{\theta_{1}},\frac{x_{2}}{\theta_{2}}\right),\quad(x_{1},x_{2})\in\Re^{2},$$

where $f(\cdot,\cdot)$ is a specified Lebesgue p.d.f. and $\boldsymbol{\theta}=(\theta_{1},\theta_{2})\in\Theta_{0}=\{(t_{1},t_{2})\in\Re_{++}^{2}:t_{1}\leq t_{2}\}$ is the vector of unknown restricted scale parameters; here $\Re_{++}=(0,\infty)$ and $\Re_{++}^{2}=\Re_{++}\times\Re_{++}$. For the sake of simplicity, throughout this section, we assume that the distributional support of $\mathbb{X}=(X_{1},X_{2})$ is a subset of $\Re_{++}^{2}$. Generally, $\mathbb{X}=(X_{1},X_{2})$ would be a minimal sufficient statistic based on a bivariate random sample or two independent random samples.

For estimation of $\theta_{i}$, under the restricted parameter space $\Theta_{0}$, we consider the GPN criterion with the loss function $L_{i}(\boldsymbol{\theta},a)=W\!\left(\frac{a}{\theta_{i}}\right)$, $\boldsymbol{\theta}\in\Theta_{0}$, $a\in\mathcal{A}=\Re_{++}$, $i=1,2$, where $W:\Re_{++}\rightarrow[0,\infty)$ is a function such that $W(1)=0$, $W(t)$ is strictly decreasing on $(0,1)$ and strictly increasing on $(1,\infty)$. Whenever the term "general loss function" is used in this section, it refers to a loss function as defined above.

The problem of estimating $\theta_{i}$, under the restricted parameter space $\Theta_{0}$ and under the GPN criterion with a general loss function as defined above, is invariant under the group of transformations $\mathcal{G}=\{g_{b}:b\in(0,\infty)\}$, where $g_{b}(x_{1},x_{2})=(bx_{1},bx_{2})$, $(x_{1},x_{2})\in\Re^{2}$, $b\in(0,\infty)$. Any scale equivariant estimator of $\theta_{i}$ has the form

$$\delta_{\psi}(\mathbb{X})=\psi(D)X_{i},$$

for some function $\psi:\Re_{++}\rightarrow\Re_{++}$, $i=1,2$, where $D=\frac{X_{2}}{X_{1}}$. Let $f_{D}(t|\lambda)$ be the p.d.f. of the r.v. $D=\frac{X_{2}}{X_{1}}$, where $\lambda=\frac{\theta_{2}}{\theta_{1}}\in[1,\infty)$. Note that the distribution of $D$ depends on $\boldsymbol{\theta}\in\Theta_{0}$ only through $\lambda=\frac{\theta_{2}}{\theta_{1}}\in[1,\infty)$. Exploiting the prior information of the order restriction $\theta_{1}\leq\theta_{2}$, our aim is to obtain estimators that are nearer to $\theta_{i}$, $i=1,2$.

The following lemma, whose proof is similar to that of Lemma 3.1, will play an important role in proving the main results of this section.
 
Lemma 4.1 Let $Y$ be a positive r.v. ($P(Y>0)=1$) having a Lebesgue p.d.f. and let $m_{Y}>0$ be the median of $Y$. Let $W:\Re_{++}\rightarrow[0,\infty)$ be a function such that $W(1)=0$, $W(t)$ is strictly decreasing on $(0,1)$ and strictly increasing on $(1,\infty)$. Then, for $0<c_{1}<c_{2}\leq m_{Y}$ or $0<m_{Y}\leq c_{2}<c_{1}$,

$$GPN=P\left[W\left(\tfrac{Y}{c_{2}}\right)<W\left(\tfrac{Y}{c_{1}}\right)\right]+\frac{1}{2}P\left[W\left(\tfrac{Y}{c_{2}}\right)=W\left(\tfrac{Y}{c_{1}}\right)\right]>\frac{1}{2}.$$

Note that, in the unrestricted case (parameter space $\Theta=\Re_{++}^{2}$), the problem of estimating the scale parameter $\theta_{i}$, $i=1,2$, under the GPN criterion with a general loss function, is invariant under the multiplicative group of transformations $\mathcal{G}_{0}=\{g_{c_{1},c_{2}}:(c_{1},c_{2})\in\Re_{++}^{2}\}$, where $g_{c_{1},c_{2}}(x_{1},x_{2})=(c_{1}x_{1},c_{2}x_{2})$, $(x_{1},x_{2})\in\Re^{2}$, $(c_{1},c_{2})\in\Re_{++}^{2}$. Any scale equivariant estimator is of the form $\delta_{i,c}(\mathbb{X})=cX_{i}$, $c\in\Re_{++}$, $i=1,2$. Using Lemma 4.1, the unrestricted PNSEE of $\theta_{i}$ is $\delta_{i,PNSEE}(\mathbb{X})=\frac{X_{i}}{m_{0,i}}$, where $m_{0,i}>0$ is the median of the r.v. $Z_{i}=\frac{X_{i}}{\theta_{i}}$, $i=1,2$.

In the following subsections, we consider component-wise estimation of the order restricted scale parameters $\theta_{1}$ and $\theta_{2}$, under the GPN criterion with a general loss function, and derive some general results. Applications of the main results are illustrated through various examples dealing with specific probability models.

4.1. Estimation of the Smaller Scale Parameter $\theta_{1}$

Define $Z_{1}=\frac{X_{1}}{\theta_{1}}$ and $\lambda=\frac{\theta_{2}}{\theta_{1}}$, so that $\lambda\geq 1$. Let $f_{D}(t|\lambda)$ be the p.d.f. of the r.v. $D=\frac{X_{2}}{X_{1}}$. Let $\delta_{\psi}(\mathbb{X})=\psi(D)X_{1}$ and $\delta_{\xi}(\mathbb{X})=\xi(D)X_{1}$ be two scale equivariant estimators of $\theta_{1}$, where $\psi:\Re_{++}\rightarrow\Re_{++}$ and $\xi:\Re_{++}\rightarrow\Re_{++}$ are specified functions. Then, the GPN of $\delta_{\xi}(\mathbb{X})=\xi(D)X_{1}$ relative to $\delta_{\psi}(\mathbb{X})=\psi(D)X_{1}$ is given by

$$GPN(\delta_{\xi},\delta_{\psi};\boldsymbol{\theta})=\int_{0}^{\infty}g_{1,\lambda}(\xi(t),\psi(t),t)f_{D}(t|\lambda)\,dt,\quad\lambda\geq 1,$$

where, for $\lambda\geq 1$ and fixed $t$ in the support of the r.v. $D$,

(4.2) $$g_{1,\lambda}(\xi(t),\psi(t),t)=P_{\boldsymbol{\theta}}[W(\xi(t)Z_{1})<W(\psi(t)Z_{1})|D=t]+\frac{1}{2}P_{\boldsymbol{\theta}}[W(\xi(t)Z_{1})=W(\psi(t)Z_{1})|D=t].$$

For any fixed $\lambda\geq 1$ and $t$, let $m_{\lambda}^{(1)}(t)$ denote the median of the conditional distribution of $Z_{1}$ given $D=t$. For any fixed $t$, the conditional p.d.f. of $Z_{1}$ given $D=t$ is $f_{\lambda}(s|t)=\frac{\frac{s}{\lambda}f(s,\frac{st}{\lambda})}{f_{D}(t|\lambda)}$, where $f_{D}(t|\lambda)=\int_{0}^{\infty}\frac{y}{\lambda}f(y,\frac{yt}{\lambda})\,dy$, $\lambda\geq 1$. Then $\int_{0}^{m_{\lambda}^{(1)}(t)}sf(s,\frac{st}{\lambda})\,ds=\frac{1}{2}\int_{0}^{\infty}sf(s,\frac{st}{\lambda})\,ds$. It follows from Lemma 4.1 that, for any fixed $t$ and $\lambda\geq 1$, $g_{1,\lambda}(\xi(t),\psi(t),t)>\frac{1}{2}$, provided $m_{\lambda}^{(1)}(t)\leq\frac{1}{\xi(t)}<\frac{1}{\psi(t)}$ or $\frac{1}{\psi(t)}<\frac{1}{\xi(t)}\leq m_{\lambda}^{(1)}(t)$. Also, for any fixed $t$, $g_{1,\lambda}(\psi(t),\psi(t),t)=\frac{1}{2}$ for all $\lambda\geq 1$.

Along the same lines as Theorem 3.1.1, the following theorem provides, under certain conditions, shrinkage-type improvements over an arbitrary scale equivariant estimator under the GPN criterion with a general loss function.

Theorem 4.1.1. Let δψ(𝕏)=ψ(D)X1\delta_{\psi}(\mathbb{X})=\psi(D)X_{1} be a scale equivariant estimator of θ1\theta_{1}. Let l(1)(t)l^{(1)}(t) and u(1)(t)u^{(1)}(t) be functions such that 0<l(1)(t)mλ(1)(t)u(1)(t),λ10<l^{(1)}(t)\leq m_{\lambda}^{(1)}(t)\leq u^{(1)}(t),\;\forall\;\lambda\geq 1 and any tt. For any fixed tt, define ψ(t)=max{1u(1)(t),min{ψ(t),1l(1)(t)}}\psi^{*}(t)\!=\!\max\{\frac{1}{u^{(1)}(t)},\min\{\psi(t),\frac{1}{l^{(1)}(t)}\}\}. Then, under the GPN criterion, the estimator δψ(𝕏)=ψ(D)X1\delta_{\psi^{*}}(\mathbb{X})\!=\!\psi^{*}(D)X_{1} is nearer to θ1\theta_{1} than the estimator δψ(𝕏)=ψ(D)X1\delta_{\psi}(\mathbb{X})=\psi(D)X_{1}, provided P𝜽[1u(1)(D)ψ(D)1l(1)(D)]<1,𝜽Θ0P_{\boldsymbol{\theta}}\left[\frac{1}{u^{(1)}(D)}\leq\psi(D)\leq\frac{1}{l^{(1)}(D)}\right]<1,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}.

Proof. The GPN of the estimator δψ(𝕏)=ψ(D)X1\delta_{\psi^{*}}(\mathbb{X})=\psi^{*}(D)X_{1} relative to δψ(𝕏)=ψ(D)X1\delta_{\psi}(\mathbb{X})=\psi(D)X_{1} can be written as

GPN(δψ,δψ;𝜽)=0g1,λ(ψ(t),ψ(t),t)fD(t|λ)𝑑t,λ1,\displaystyle GPN(\delta_{\psi^{*}},\delta_{\psi};\boldsymbol{\theta})=\int_{0}^{\infty}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)dt,\;\;\lambda\geq 1,

where, for λ1\lambda\geq 1 and tt in the support of the r.v. DD, g1,λ(,,)g_{1,\lambda}(\cdot,\cdot,\cdot) is defined by (4.2).

Let A={t:ψ(t)<1u(1)(t)}A=\{t:\psi(t)<\frac{1}{u^{(1)}(t)}\}, B={t:1u(1)(t)ψ(t)1l(1)(t)}B=\{t:\frac{1}{u^{(1)}(t)}\leq\psi(t)\leq\frac{1}{l^{(1)}(t)}\} and C={t:ψ(t)>1l(1)(t)}C=\{t:\psi(t)>\frac{1}{l^{(1)}(t)}\}. Then

ψ(t)={1u(1)(t),tAψ(t),tB1l(1)(t),tC.\psi^{*}(t)=\begin{cases}\frac{1}{u^{(1)}(t)},&t\in A\\ \psi(t),&t\in B\\ \frac{1}{l^{(1)}(t)},&t\in C\end{cases}.

Since l(1)(t)mλ(1)(t)u(1)(t)(or 1u(1)(t)1mλ(1)(t)1l(1)(t)),λ1l^{(1)}(t)\leq m_{\lambda}^{(1)}(t)\leq u^{(1)}(t)\;(\text{or }\frac{1}{u^{(1)}(t)}\leq\frac{1}{m_{\lambda}^{(1)}(t)}\leq\frac{1}{l^{(1)}(t)}),\;\forall\;\lambda\geq 1 and tt, using Lemma 4.1, we have g1,λ(ψ(t),ψ(t),t)>12,λ1g_{1,\lambda}(\psi^{*}(t),\psi(t),t)>\frac{1}{2},\;\forall\;\lambda\geq 1, whenever tACt\in A\cup C. Also, for tBt\in B, g1,λ(ψ(t),ψ(t),t)=12,λ1g_{1,\lambda}(\psi^{*}(t),\psi(t),t)=\frac{1}{2},\;\forall\;\lambda\geq 1. Since P𝜽(AC)>0,𝜽Θ0P_{\boldsymbol{\theta}}(A\cup C)>0,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}, we conclude that
 
GPN(\delta_{\psi^{*}},\delta_{\psi};\boldsymbol{\theta})=\int_{A}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)dt+\int_{B}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)dt+\int_{C}g_{1,\lambda}(\psi^{*}(t),\psi(t),t)f_{D}(t|\lambda)dt>\frac{1}{2},\;\forall\;\boldsymbol{\theta}\in\Theta_{0}.

The proof of the following corollary is contained in the proof of Theorem 4.1.1, and is hence omitted.
 
Corollary 4.1.1. Let δψ(𝕏)=ψ(D)X1\delta_{\psi}(\mathbb{X})=\psi(D)X_{1} be a scale equivariant estimator of θ1\theta_{1}. Let ψ1,0:++++\psi_{1,0}:\Re_{++}\rightarrow\Re_{++} be such that ψ(t)<ψ1,0(t)1u(1)(t)\psi(t)<\psi_{1,0}(t)\leq\frac{1}{u^{(1)}(t)}, whenever ψ(t)<1u(1)(t)\psi(t)<\frac{1}{u^{(1)}(t)}, and 1l(1)(t)ψ1,0(t)<ψ(t)\frac{1}{l^{(1)}(t)}\leq\psi_{1,0}(t)<\psi(t), whenever 1l(1)(t)<ψ(t)\frac{1}{l^{(1)}(t)}<\psi(t), where l(1)()l^{(1)}(\cdot) and u(1)()u^{(1)}(\cdot) are as defined in Theorem 4.1.1. Also let ψ1,0(t)=ψ(t)\psi_{1,0}(t)=\psi(t), whenever 1u(1)(t)ψ(t)1l(1)(t)\frac{1}{u^{(1)}(t)}\leq\psi(t)\leq\frac{1}{l^{(1)}(t)}. Then, the estimator δψ1,0(𝕏)=ψ1,0(D)X1\delta_{\psi_{1,0}}(\mathbb{X})=\psi_{1,0}(D)X_{1} is nearer to θ1\theta_{1} than δψ(𝕏)=ψ(D)X1\delta_{\psi}(\mathbb{X})=\psi(D)X_{1}, provided P𝜽[1u(1)(D)ψ(D)1l(1)(D)]<1,𝜽Θ0P_{\boldsymbol{\theta}}\left[\frac{1}{u^{(1)}(D)}\leq\psi(D)\leq\frac{1}{l^{(1)}(D)}\right]<1,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}.

The following corollary provides improvements over the PNSEE δ1,PNSEE(𝕏)=X1m0,1\delta_{1,PNSEE}(\mathbb{X})=\frac{X_{1}}{m_{0,1}}, under the restricted parameter space Θ0\Theta_{0}.
 
Corollary 4.1.2. Let ξ(t)=max{1u(1)(t),min{1m0,1,1l(1)(t)}}\xi^{*}(t)\!=\!\max\{\frac{1}{u^{(1)}(t)},\min\{\frac{1}{m_{0,1}},\frac{1}{l^{(1)}(t)}\}\}, where l(1)()l^{(1)}(\cdot) and u(1)()u^{(1)}(\cdot) are as defined in Theorem 4.1.1. Then, under the GPN criterion, the estimator δξ(𝕏)=ξ(D)X1\delta_{\xi^{*}}(\mathbb{X})\!=\!\xi^{*}(D)X_{1} is nearer to θ1\theta_{1} than the PNSEE δ1,PNSEE(𝕏)=X1m0,1\delta_{1,PNSEE}(\mathbb{X})=\frac{X_{1}}{m_{0,1}}, provided P𝜽[l(1)(D)m0,1u(1)(D)]<1,𝜽Θ0P_{\boldsymbol{\theta}}\left[l^{(1)}(D)\leq m_{0,1}\leq u^{(1)}(D)\right]<1,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}.

In various applications of Theorem 4.1.1 and Corollaries 4.1.1-4.1.2, a common choice for (l(1)(t),u(1)(t))(l^{(1)}(t),u^{(1)}(t)) is given by l(1)(t)=infλ1mλ(1)(t)l^{(1)}(t)=\inf_{\lambda\geq 1}m_{\lambda}^{(1)}(t) and u(1)(t)=supλ1mλ(1)(t)u^{(1)}(t)=\sup_{\lambda\geq 1}m_{\lambda}^{(1)}(t).

In order to identify the behaviour of the function mλ(1)(t)m_{\lambda}^{(1)}(t), for any fixed tt, the following lemma is useful in many situations. Since its proof runs along the same lines as that of Lemma 3.1.1, it is omitted.
 
Lemma 4.1.1. If, for every fixed λ1\lambda\geq 1 and tt, f(s,stλ)/f(s,st)f(s,\frac{st}{\lambda})/f(s,st) is increasing (decreasing) in ss (wherever the ratio is not of the form 0/00/0), then, for every fixed tt, mλ(1)(t)m_{\lambda}^{(1)}(t) is an increasing (decreasing) function of λ[1,)\lambda\in[1,\infty).

Under the assumptions of Lemma 4.1.1, one may take, for any fixed tt,

(4.3) l(1)(t)\displaystyle l^{(1)}(t) =infλ1mλ(1)(t)=m1(1)(t)(=limλmλ(1)(t))\displaystyle=\inf_{\lambda\geq 1}m_{\lambda}^{(1)}(t)=m_{1}^{(1)}(t)\;(=\lim_{\lambda\to\infty}m_{\lambda}^{(1)}(t))
(4.4) andu(1)(t)\displaystyle\;\text{and}\;\;u^{(1)}(t) =supλ1mλ(1)(t)=limλmλ(1)(t)(=m1(1)(t)),\displaystyle=\sup_{\lambda\geq 1}m_{\lambda}^{(1)}(t)=\lim_{\lambda\to\infty}m_{\lambda}^{(1)}(t)\;(=m_{1}^{(1)}(t)),

while applying Theorem 4.1.1 and Corollary 4.1.1.
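Computationally, the improvement of Theorem 4.1.1 is a pointwise truncation of the equivariant multiplier. A minimal sketch, in which psi, l1 and u1 are placeholders for a user-supplied multiplier ψ\psi and bounds l(1)l^{(1)}, u(1)u^{(1)} as in (4.3)-(4.4):

```python
import numpy as np

def improve_multiplier(psi, l1, u1):
    """Theorem 4.1.1: return psi*(t) = max{1/u1(t), min{psi(t), 1/l1(t)}},
    so that the improved estimator is delta*(X) = psi*(D) * X1."""
    def psi_star(t):
        t = np.asarray(t, dtype=float)
        return np.maximum(1.0 / u1(t), np.minimum(psi(t), 1.0 / l1(t)))
    return psi_star
```

The same truncation, with (l(2),u(2))(l^{(2)},u^{(2)}) in place of (l(1),u(1))(l^{(1)},u^{(1)}) and X2X_{2} in place of X1X_{1}, yields the improvements of Theorem 4.2.1 in Section 4.2.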

Now we will consider some applications of Theorem 4.1.1 and Corollaries 4.1.1-4.1.2 to specific probability models.

Example 4.1.1. Let X1X_{1} and X2X_{2} be independent gamma random variables with joint p.d.f. (4.1), where f(z1,z2)=z1α11z2α21ez1ez2Γ(α1)Γ(α2),(z1,z2)++2f(z_{1},z_{2})=\frac{z_{1}^{\alpha_{1}-1}z_{2}^{\alpha_{2}-1}e^{-z_{1}}e^{-z_{2}}}{\Gamma(\alpha_{1})\Gamma(\alpha_{2})},\;(z_{1},z_{2})\in\Re_{++}^{2}, for known positive constants α1\alpha_{1} and α2\alpha_{2}.

Consider estimation of the smaller scale parameter θ1\theta_{1}, under the GPN criterion with a general loss function, when it is known apriori that 𝜽Θ0\boldsymbol{\theta}\in\Theta_{0}. Here the restricted MLE of θ1\theta_{1} is δ1,RMLE(𝕏)=min{X1α1,X1+X2α1+α2}=X1ψ1,RMLE(D)\delta_{1,RMLE}(\mathbb{X})\!=\!\min\big{\{}\!\frac{X_{1}}{\alpha_{1}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\!\big{\}}=X_{1}\psi_{1,RMLE}(D), where ψ1,RMLE(D)=min{1α1,1+Dα1+α2}\psi_{1,RMLE}(D)=\min\big{\{}\frac{1}{\alpha_{1}},\frac{1+D}{\alpha_{1}+\alpha_{2}}\} and the unrestricted PNSEE of θ1 is δ1,PNSEE(𝕏)=X1m0,1\theta_{1}\text{ is }\delta_{1,PNSEE}(\mathbb{X})=\frac{X_{1}}{m_{0,1}}, where m0,1m_{0,1} is such that 1Γ(α1)0m0,1tα11et𝑑t=12\frac{1}{\Gamma(\alpha_{1})}\int_{0}^{m_{0,1}}t^{\alpha_{1}-1}e^{-t}\,dt=\frac{1}{2}.
 
For t++t\in\Re_{++} and λ1\lambda\geq 1, the conditional p.d.f. of Z1Z_{1} given D=tD=t is

fλ(s|t)={(1+tλ)α1+α2sα1+α21e(1+tλ)sΓ(α1+α2), if  0<s<0, otherwise.\displaystyle f_{\lambda}(s|t)=\begin{cases}\frac{(1+\frac{t}{\lambda})^{\alpha_{1}+\alpha_{2}}\,s^{\alpha_{1}+\alpha_{2}-1}\,e^{-(1+\frac{t}{\lambda})s}}{\Gamma{(\alpha_{1}+\alpha_{2})}},&\text{ if}\;\;0<s<\infty\\ 0,&\text{ otherwise}\end{cases}.

For t(0,)t\in(0,\infty) and λ1\lambda\geq 1, let mλ(1)(t)m_{\lambda}^{(1)}(t) be the median of the p.d.f. fλ(s|t)f_{\lambda}(s|t). For α>0\alpha>0, let ν(α)\nu(\alpha) denote the median of the Gamma(α\alpha,1) distribution, i.e. 1Γ(α)0ν(α)tα1et𝑑t=12\frac{1}{\Gamma(\alpha)}\int_{0}^{\nu(\alpha)}t^{\alpha-1}e^{-t}\,dt=\frac{1}{2}. Then m0,1=ν(α1)m_{0,1}=\nu(\alpha_{1}) and, for any t>0t>0 and λ1\lambda\geq 1, mλ(1)(t)=λν(α1+α2)λ+tm_{\lambda}^{(1)}(t)=\frac{\lambda\,\nu(\alpha_{1}+\alpha_{2})}{\lambda+t}, so that m1(1)(t)=ν(α1+α2)1+tm_{1}^{(1)}(t)=\frac{\nu(\alpha_{1}+\alpha_{2})}{1+t} and limλmλ(1)(t)=ν(α1+α2)\lim_{\lambda\to\infty}m_{\lambda}^{(1)}(t)=\nu(\alpha_{1}+\alpha_{2}). From Chen and Rubin (1986), we have α1+α213<ν(α1+α2)<α1+α2\alpha_{1}+\alpha_{2}-\frac{1}{3}<\nu(\alpha_{1}+\alpha_{2})<\alpha_{1}+\alpha_{2}.

Thus, as in (4.3) and (4.4), we may take l(1)(t)=m1(1)(t)=ν(α1+α2)1+t and u(1)(t)=limλmλ(1)(t)=ν(α1+α2).l^{(1)}(t)=m_{1}^{(1)}(t)=\frac{\nu(\alpha_{1}+\alpha_{2})}{1+t}\text{ and }u^{(1)}(t)\!=\lim\limits_{\lambda\to\infty}m_{\lambda}^{(1)}(t)=\nu(\alpha_{1}+\alpha_{2}).

The following conclusions are immediate from Theorem 4.1.1 and Corollaries 4.1.1-4.1.2:
 
(i) The estimator δ1,RMLE(𝕏)=max{X1ν(α1+α2),min{X1α1,X1+X2α1+α2}}\delta_{1,RMLE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{1}}{\nu(\alpha_{1}+\alpha_{2})},\min\big{\{}\frac{X_{1}}{\alpha_{1}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\big{\}}\big{\}} is nearer to θ1\theta_{1} than the restricted MLE δ1,RMLE(𝕏)=min{X1α1,X1+X2α1+α2}\delta_{1,RMLE}(\mathbb{X})=\min\{\frac{X_{1}}{\alpha_{1}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\}. Clearly, for ν(α1+α2)α1\nu(\alpha_{1}+\alpha_{2})\geq\alpha_{1},

δ1,RMLE(𝕏)={X1ν(α1+α2), if 0<X2X1α1+α2ν(α1+α2)1X1+X2α1+α2, if α1+α2ν(α1+α2)1<X2X1α2α1X1α1, if X2X1>α2α1\delta_{1,RMLE}^{*}(\mathbb{X})=\begin{cases}\frac{X_{1}}{\nu(\alpha_{1}+\alpha_{2})},&\text{ if }0<\frac{X_{2}}{X_{1}}\leq\frac{\alpha_{1}+\alpha_{2}}{\nu(\alpha_{1}+\alpha_{2})}-1\\ \frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}},&\text{ if }\frac{\alpha_{1}+\alpha_{2}}{\nu(\alpha_{1}+\alpha_{2})}-1<\frac{X_{2}}{X_{1}}\leq\frac{\alpha_{2}}{\alpha_{1}}\\ \frac{X_{1}}{\alpha_{1}},&\text{ if }\frac{X_{2}}{X_{1}}>\frac{\alpha_{2}}{\alpha_{1}}\end{cases}

and, for ν(α1+α2)<α1\nu(\alpha_{1}+\alpha_{2})<\alpha_{1}, δ1,RMLE(𝕏)=X1ν(α1+α2)\delta_{1,RMLE}^{*}(\mathbb{X})=\frac{X_{1}}{\nu(\alpha_{1}+\alpha_{2})}. Since ν(α1+α2)>α1+α213\nu(\alpha_{1}+\alpha_{2})>\alpha_{1}+\alpha_{2}-\frac{1}{3}, we have ν(α1+α2)>α1\nu(\alpha_{1}+\alpha_{2})>\alpha_{1} whenever α213\alpha_{2}\geq\frac{1}{3}.
 
(ii) The estimator δ1,PNSEE(𝕏)=max{X1ν(α1+α2),min{X1ν(α1),X1+X2ν(α1+α2)}}\delta_{1,PNSEE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{1}}{\nu(\alpha_{1}+\alpha_{2})},\min\big{\{}\frac{X_{1}}{\nu(\alpha_{1})},\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})}\big{\}}\big{\}} is nearer to θ1\theta_{1} than δ1,PNSEE(𝕏)=X1ν(α1)\delta_{1,PNSEE}(\mathbb{X})=\frac{X_{1}}{\nu(\alpha_{1})}. Clearly

δ1,PNSEE(𝕏)={X1+X2ν(α1+α2), if 0<X2X1ν(α1+α2)ν(α1)1X1ν(α1), if X2X1>ν(α1+α2)ν(α1)1.\delta_{1,PNSEE}^{*}(\mathbb{X})=\begin{cases}\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})},&\text{ if }0<\frac{X_{2}}{X_{1}}\leq\frac{\nu(\alpha_{1}+\alpha_{2})}{\nu(\alpha_{1})}-1\\ \frac{X_{1}}{\nu(\alpha_{1})},&\text{ if }\frac{X_{2}}{X_{1}}>\frac{\nu(\alpha_{1}+\alpha_{2})}{\nu(\alpha_{1})}-1\end{cases}.

(iii) The estimator δ1,UE(𝕏)=max{X1ν(α1+α2),min{X1α1,X1+X2ν(α1+α2)}}\delta_{1,UE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{1}}{\nu(\alpha_{1}+\alpha_{2})},\min\big{\{}\frac{X_{1}}{\alpha_{1}},\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})}\big{\}}\big{\}} is nearer to θ1\theta_{1} than the unbiased estimator δ1,UE(𝕏)=X1α1\delta_{1,UE}(\mathbb{X})=\frac{X_{1}}{\alpha_{1}}. Clearly, for ν(α1+α2)α1\nu(\alpha_{1}+\alpha_{2})\geq\alpha_{1},

δ1,UE(𝕏)={X1+X2ν(α1+α2), if 0X2X1<ν(α1+α2)α11X1α1, if X2X1ν(α1+α2)α11\delta_{1,UE}^{*}(\mathbb{X})=\begin{cases}\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})},&\text{ if }0\leq\frac{X_{2}}{X_{1}}<\frac{\nu(\alpha_{1}+\alpha_{2})}{\alpha_{1}}-1\\ \frac{X_{1}}{\alpha_{1}},&\text{ if }\frac{X_{2}}{X_{1}}\geq\frac{\nu(\alpha_{1}+\alpha_{2})}{\alpha_{1}}-1\end{cases}

and, for ν(α1+α2)<α1\nu(\alpha_{1}+\alpha_{2})<\alpha_{1}, δ1,UE(𝕏)=X1ν(α1+α2)\delta_{1,UE}^{*}(\mathbb{X})=\frac{X_{1}}{\nu(\alpha_{1}+\alpha_{2})}.

(iv) For ν(α1+α2)α1\nu(\alpha_{1}+\alpha_{2})\leq\alpha_{1}, the estimator δ1,UE(𝕏)=X1α1\delta_{1,UE}(\mathbb{X})=\frac{X_{1}}{\alpha_{1}} is nearer to θ1\theta_{1} than the restricted MLE δ1,RMLE(𝕏)=min{X1α1,X1+X2α1+α2}\delta_{1,RMLE}(\mathbb{X})=\min\{\frac{X_{1}}{\alpha_{1}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\}. Ma and Liu (2014) proved a similar result under the GPN criterion with a specific loss function L1(𝜽,a)=|aθ11|,𝜽Θ0,a>0L_{1}(\boldsymbol{\theta},a)=|\frac{a}{\theta_{1}}-1|,\;\boldsymbol{\theta}\in\Theta_{0},\;a>0. In fact, several results reported in Ma and Liu (2014) can be obtained as particular cases of Corollary 4.1.1. A small numerical sketch of the estimators in conclusions (i)-(iii) is given below.
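The sketch below collects the estimators appearing in conclusions (i)-(iii), with ν()\nu(\cdot) computed via scipy's gamma quantile; the function name and test values are illustrative.

```python
from scipy.stats import gamma

nu = lambda a: gamma.ppf(0.5, a)   # median of the Gamma(a, 1) distribution

def theta1_estimators(x1, x2, a1, a2):
    """Estimators of theta1 from Example 4.1.1 (names illustrative)."""
    A = a1 + a2
    rmle = min(x1 / a1, (x1 + x2) / A)            # restricted MLE
    return {
        "RMLE": rmle,
        "RMLE*": max(x1 / nu(A), rmle),           # conclusion (i)
        "PNSEE": x1 / nu(a1),
        "PNSEE*": max(x1 / nu(A), min(x1 / nu(a1), (x1 + x2) / nu(A))),  # (ii)
        "UE": x1 / a1,
        "UE*": max(x1 / nu(A), min(x1 / a1, (x1 + x2) / nu(A))),         # (iii)
    }

print(theta1_estimators(1.2, 0.8, a1=2.0, a2=3.0))
```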

Example 4.1.2. Let X1X_{1} and X2X_{2} be independent random variables with joint p.d.f. (4.1), where 𝜽=(θ1,θ2)Θ0\boldsymbol{\theta}=(\theta_{1},\theta_{2})\in\Theta_{0} and f(z1,z2)=α1α2z1α11z2α21, if 0<z1<1, 0<z2<1;=0, otherwisef\left(z_{1},z_{2}\right)=\alpha_{1}\alpha_{2}z_{1}^{\alpha_{1}-1}z_{2}^{\alpha_{2}-1},\text{ if }0<z_{1}<1,\,0<z_{2}<1;=0,\text{ otherwise}, for known positive constants α1\alpha_{1} and α2\alpha_{2}.

Consider estimation of θ1\theta_{1} under the GPN criterion with a general loss function. Here the unrestricted PNSEE of θ1\theta_{1} is δ1,PNSEE(𝕏)=21α1X1\delta_{1,PNSEE}(\mathbb{X})=2^{\frac{1}{\alpha_{1}}}X_{1}. Also, for t++t\in\Re_{++}, the conditional p.d.f. of Z1Z_{1} given D=tD=t is

fλ(s|t)=(α1+α2)sα1+α21(min{1,λt})α1+α2, if 0<s<min{1,λt},λ1.\displaystyle f_{\lambda}(s|t)=\frac{(\alpha_{1}+\alpha_{2})s^{\alpha_{1}+\alpha_{2}-1}}{\left(\min\{1,\frac{\lambda}{t}\}\right)^{\alpha_{1}+\alpha_{2}}},\text{ if}\;0<s<\min\bigg{\{}1,\frac{\lambda}{t}\bigg{\}},\;\lambda\geq 1.

The median of the above density is mλ(1)(t)=21α1+α2min{1,λt},t++m_{\lambda}^{(1)}(t)=2^{\frac{-1}{\alpha_{1}+\alpha_{2}}}\min\big{\{}1,\frac{\lambda}{t}\big{\}},\;\forall\;t\in\Re_{++} and λ1.\lambda\geq 1. Clearly, for t++t\in\Re_{++}, mλ(1)(t)m_{\lambda}^{(1)}(t) is increasing in λ[1,)\lambda\in[1,\infty). As in (4.3) and (4.4), we take l(1)(t)=m1(1)(t)=21α1+α2min{1,1t} and u(1)(t)=limλmλ(1)(t)=21α1+α2.l^{(1)}(t)=m_{1}^{(1)}(t)=2^{\frac{-1}{\alpha_{1}+\alpha_{2}}}\min\big{\{}1,\frac{1}{t}\big{\}}\text{ and }u^{(1)}(t)\!=\lim\limits_{\lambda\to\infty}m_{\lambda}^{(1)}(t)=2^{\frac{-1}{\alpha_{1}+\alpha_{2}}}.

The following conclusions immediately follow from Theorem 4.1.1 and Corollary 4.1.1:
 
(i) Define δ1,PNSEE(𝕏)=max{21α1+α2,min{21α1+α2max{1,D},21α1}}X1=min{21α1+α2max{1,D},21α1}X1\delta_{1,PNSEE}^{*}(\mathbb{X})=\max\big{\{}2^{\frac{1}{\alpha_{1}+\alpha_{2}}},\min\{2^{\frac{1}{\alpha_{1}+\alpha_{2}}}\max\{1,D\},2^{\frac{1}{\alpha_{1}}}\}\big{\}}X_{1}=\min\{2^{\frac{1}{\alpha_{1}+\alpha_{2}}}\max\{1,D\},2^{\frac{1}{\alpha_{1}}}\}X_{1}. Then the estimator δ1,PNSEE(𝕏)\delta_{1,PNSEE}^{*}(\mathbb{X}) is nearer to θ1\theta_{1} than the PNSEE δ1,PNSEE(𝕏)\delta_{1,PNSEE}(\mathbb{X}). It is easy to verify that

δ1,PNSEE(𝕏)={21α1+α2X1,if 0<X2X1<121α1+α2X2,if 1X2X1<2α2α1(α1+α2)21α1X1,if X2X12α2α1(α1+α2).\delta_{1,PNSEE}^{*}(\mathbb{X})=\begin{cases}2^{\frac{1}{\alpha_{1}+\alpha_{2}}}X_{1},&\text{if }0<\frac{X_{2}}{X_{1}}<1\\ 2^{\frac{1}{\alpha_{1}+\alpha_{2}}}X_{2},&\text{if }1\leq\frac{X_{2}}{X_{1}}<2^{\frac{\alpha_{2}}{\alpha_{1}(\alpha_{1}+\alpha_{2})}}\\ 2^{\frac{1}{\alpha_{1}}}X_{1},&\text{if }\frac{X_{2}}{X_{1}}\geq 2^{\frac{\alpha_{2}}{\alpha_{1}(\alpha_{1}+\alpha_{2})}}\end{cases}.


(ii) Let ψ1,0(t)\psi_{1,0}(t) be such that 21α1+α2max{1,t}ψ1,0(t)<21α1,t2α2α1(α1+α2)2^{\frac{1}{\alpha_{1}+\alpha_{2}}}\max\{1,t\}\leq\psi_{1,0}(t)<2^{\frac{1}{\alpha_{1}}},\;\forall\;t\leq 2^{\frac{\alpha_{2}}{\alpha_{1}(\alpha_{1}+\alpha_{2})}}, and ψ1,0(t)=21α1,t>2α2α1(α1+α2)\psi_{1,0}(t)=2^{\frac{1}{\alpha_{1}}},\;\forall\;t>2^{\frac{\alpha_{2}}{\alpha_{1}(\alpha_{1}+\alpha_{2})}}. Then the estimator δψ1,0(𝕏)=ψ1,0(D)X1\delta_{\psi_{1,0}}(\mathbb{X})=\psi_{1,0}(D)X_{1} is nearer to θ1\theta_{1} than the PNSEE δ1,PNSEE(𝕏)\delta_{1,PNSEE}(\mathbb{X}). A minimal sketch of the improved estimator in conclusion (i) is given below.
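The sketch assumes the piecewise form verified in conclusion (i); inputs are illustrative.

```python
def delta1_pnsee_star(x1, x2, a1, a2):
    """Improved PNSEE of theta1 in conclusion (i):
    min{2^(1/(a1+a2)) * max{1, D}, 2^(1/a1)} * X1, with D = X2/X1."""
    d = x2 / x1
    return min(2.0 ** (1.0 / (a1 + a2)) * max(1.0, d),
               2.0 ** (1.0 / a1)) * x1

print(delta1_pnsee_star(0.6, 0.9, a1=1.0, a2=2.0))
```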

4.2. Estimation of The Larger Scale Parameter θ2\theta_{2}

In this section, we consider estimation of the larger scale parameter θ2\theta_{2} under the GPN criterion with a general loss function L2(𝜽,a)=W(aθ2)L_{2}(\boldsymbol{\theta},a)=W(\frac{a}{\theta_{2}}), when it is known that 0<θ1θ2<0<\theta_{1}\leq\theta_{2}<\infty (i.e., 𝜽Θ0\boldsymbol{\theta}\in\Theta_{0}). Here W:++[0,)W:\Re_{++}\rightarrow[0,\infty) is such that W(1)=0W(1)=0, W(t)W(t) is strictly decreasing on (0,1)(0,1) and strictly increasing on (1,)(1,\infty). Any scale equivariant estimator of θ2\theta_{2} is of the form δψ(𝕏)=ψ(D)X2,\delta_{\psi}(\mathbb{X})=\psi(D)X_{2}, for some function ψ:++++\psi:\,\Re_{++}\rightarrow\Re_{++}, where D=X2X1D=\frac{X_{2}}{X_{1}}. Define Z2=X2θ2Z_{2}=\frac{X_{2}}{\theta_{2}} and λ=θ2θ1\lambda=\frac{\theta_{2}}{\theta_{1}}, and let fD(t|λ)f_{D}(t|\lambda) denote the p.d.f. of the r.v. DD. Let δξ(𝕏)=ξ(D)X2\delta_{\xi}(\mathbb{X})=\xi(D)X_{2} and δψ(𝕏)=ψ(D)X2\delta_{\psi}(\mathbb{X})=\psi(D)X_{2} be two scale equivariant estimators of θ2\theta_{2}. Then, the GPN of δξ(𝕏)=ξ(D)X2\delta_{\xi}(\mathbb{X})=\xi(D)X_{2} relative to δψ(𝕏)=ψ(D)X2\delta_{\psi}(\mathbb{X})=\psi(D)X_{2} can be written as

GPN(δξ,δψ;𝜽)\displaystyle GPN(\delta_{\xi},\delta_{\psi};\boldsymbol{\theta}) =0g2,λ(ξ(t),ψ(t),t)fD(t|λ)𝑑t,𝜽Θ0,\displaystyle=\int_{0}^{\infty}g_{2,\lambda}(\xi(t),\psi(t),t)f_{D}(t|\lambda)dt,\;\;\boldsymbol{\theta}\in\Theta_{0},

where, for λ1\lambda\geq 1, g_{2,\lambda}(\xi(t),\psi(t),t)=P_{\boldsymbol{\theta}}[W(\xi(t)Z_{2})<W(\psi(t)Z_{2})|D=t]+\frac{1}{2}P_{\boldsymbol{\theta}}[W(\xi(t)Z_{2})=W(\psi(t)Z_{2})|D=t]. For any fixed λ1\lambda\geq 1 and tt, let mλ(2)(t)m_{\lambda}^{(2)}(t) denote the median of the conditional distribution of Z2Z_{2} given D=tD=t. For any fixed tt, the conditional p.d.f. of Z2Z_{2} given D=tD=t is fλ(s|t)=λst2f(λst,s)fD(t|λ)f_{\lambda}(s|t)=\frac{\frac{\lambda s}{t^{2}}f(\frac{\lambda s}{t},s)}{f_{D}(t|\lambda)} and fD(t|λ)=0λyt2f(λyt,y)𝑑yf_{D}(t|\lambda)=\int_{0}^{\infty}\frac{\lambda y}{t^{2}}f(\frac{\lambda y}{t},y)dy, λ1\lambda\geq 1. Thus 0mλ(2)(t)sf(λst,s)𝑑s=120sf(λst,s)𝑑s\int_{0}^{m_{\lambda}^{(2)}(t)}sf(\frac{\lambda s}{t},s)ds=\frac{1}{2}\int_{0}^{\infty}sf(\frac{\lambda s}{t},s)ds. Using Lemma 4.1, we have, for any fixed tt and λ1\lambda\geq 1, g2,λ(ξ(t),ψ(t),t)>12g_{2,\lambda}(\xi(t),\psi(t),t)>\frac{1}{2}, provided mλ(2)(t)1ξ(t)<1ψ(t)m_{\lambda}^{(2)}(t)\leq\frac{1}{\xi(t)}<\frac{1}{\psi(t)} or 1ψ(t)<1ξ(t)mλ(2)(t)\frac{1}{\psi(t)}<\frac{1}{\xi(t)}\leq m_{\lambda}^{(2)}(t). Moreover, for any fixed tt, g2,λ(ψ(t),ψ(t),t)=12,λ1g_{2,\lambda}(\psi(t),\psi(t),t)=\frac{1}{2},\;\forall\;\lambda\geq 1. These arguments lead to the following results.

Theorem 4.2.1. Let δψ(𝕏)=ψ(D)X2\delta_{\psi}(\mathbb{X})=\psi(D)X_{2} be a scale equivariant estimator of θ2\theta_{2}. Let l(2)(t)l^{(2)}(t) and u(2)(t)u^{(2)}(t) be functions such that 0<l(2)(t)mλ(2)(t)u(2)(t),λ10<l^{(2)}(t)\leq m_{\lambda}^{(2)}(t)\leq u^{(2)}(t),\;\forall\;\lambda\geq 1 and any tt. For any fixed tt, define ψ(t)=max{1u(2)(t),min{ψ(t),1l(2)(t)}}\psi^{*}(t)\!=\!\max\{\frac{1}{u^{(2)}(t)},\min\{\psi(t),\frac{1}{l^{(2)}(t)}\}\}. Then, under the GPN criterion, the estimator δψ(𝕏)=ψ(D)X2\delta_{\psi^{*}}(\mathbb{X})\!=\!\psi^{*}(D)X_{2} is nearer to θ2\theta_{2} than the estimator δψ(𝕏)=ψ(D)X2\delta_{\psi}(\mathbb{X})=\psi(D)X_{2}, provided P𝜽[1u(2)(D)ψ(D)1l(2)(D)]<1,𝜽Θ0P_{\boldsymbol{\theta}}\left[\frac{1}{u^{(2)}(D)}\leq\psi(D)\leq\frac{1}{l^{(2)}(D)}\right]<1,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}.

Corollary 4.2.1. Let δψ(𝕏)=ψ(D)X2\delta_{\psi}(\mathbb{X})=\psi(D)X_{2} be a scale equivariant estimator of θ2\theta_{2}. Let ψ2,0:++++\psi_{2,0}:\Re_{++}\rightarrow\Re_{++} be such that ψ(t)<ψ2,0(t)1u(2)(t)\psi(t)<\psi_{2,0}(t)\leq\frac{1}{u^{(2)}(t)}, whenever ψ(t)<1u(2)(t)\psi(t)<\frac{1}{u^{(2)}(t)}, and 1l(2)(t)ψ2,0(t)<ψ(t)\frac{1}{l^{(2)}(t)}\leq\psi_{2,0}(t)<\psi(t), whenever 1l(2)(t)<ψ(t)\frac{1}{l^{(2)}(t)}<\psi(t), where l(2)()l^{(2)}(\cdot) and u(2)()u^{(2)}(\cdot) are as defined in Theorem 4.2.1. Also let ψ2,0(t)=ψ(t)\psi_{2,0}(t)=\psi(t), whenever 1u(2)(t)ψ(t)1l(2)(t)\frac{1}{u^{(2)}(t)}\leq\psi(t)\leq\frac{1}{l^{(2)}(t)}. Then, GPN(δψ2,0,δψ;𝜽)>12,𝜽Θ0,GPN(\delta_{\psi_{2,0}},\delta_{\psi};\boldsymbol{\theta})>\frac{1}{2},\;\forall\;\boldsymbol{\theta}\in\Theta_{0}, provided P𝜽[1u(2)(D)ψ(D)1l(2)(D)]<1,𝜽Θ0P_{\boldsymbol{\theta}}\left[\frac{1}{u^{(2)}(D)}\leq\psi(D)\leq\frac{1}{l^{(2)}(D)}\right]<1,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}, where δψ2,0(𝕏)=ψ2,0(D)X2\delta_{\psi_{2,0}}(\mathbb{X})=\psi_{2,0}(D)X_{2}.

Note that, in the unrestricted case Θ=++2\Theta\!=\!\Re_{++}^{2}, the PNSEE of θ2\theta_{2} is δ2,PNSEE(𝕏)=X2m0,2\delta_{2,PNSEE}(\mathbb{X})=\frac{X_{2}}{m_{0,2}}, where m0,2>0m_{0,2}>0 is the median of the r.v. Z2=X2θ2Z_{2}=\frac{X_{2}}{\theta_{2}}. The following corollary provides improvements over the PNSEE under the restricted parameter space.
 
Corollary 4.2.2. Let ξ(t)=max{1u(2)(t),min{1m0,2,1l(2)(t)}}\xi^{*}(t)\!=\!\max\{\frac{1}{u^{(2)}(t)},\min\{\frac{1}{m_{0,2}},\frac{1}{l^{(2)}(t)}\}\}, where l(2)()l^{(2)}(\cdot) and u(2)()u^{(2)}(\cdot) are as defined in Theorem 4.2.1. Then, under the GPN criterion, the estimator δξ(𝕏)=ξ(D)X2\delta_{\xi^{*}}(\mathbb{X})\!=\!\xi^{*}(D)X_{2} is nearer to θ2\theta_{2} than the PNSEE δ2,PNSEE(𝕏)=X2m0,2\delta_{2,PNSEE}(\mathbb{X})=\frac{X_{2}}{m_{0,2}}, provided P𝜽[l(2)(D)m0,2u(2)(D)]<1,𝜽Θ0P_{\boldsymbol{\theta}}\left[l^{(2)}(D)\leq m_{0,2}\leq u^{(2)}(D)\right]<1,\;\forall\;\boldsymbol{\theta}\in\Theta_{0}.

In order to identify the behaviour of mλ(2)(t)m_{\lambda}^{(2)}(t), for any fixed tt, we have the following lemma, along the lines of Lemma 4.1.1.
 
Lemma 4.2.1. If, for every fixed λ1\lambda\geq 1 and tt, f(λst,s)/f(st,s)f(\frac{\lambda s}{t},s)/f(\frac{s}{t},s) is increasing (decreasing) in ss (wherever the ratio is not of the form 0/00/0), then, for every fixed tt, mλ(2)(t)m_{\lambda}^{(2)}(t) is an increasing (decreasing) function of λ[1,)\lambda\in[1,\infty).

Under the assumptions of Lemma 4.2.1, one may take, for any fixed tt,

(4.5) l(2)(t)\displaystyle l^{(2)}(t) =infλ1mλ(2)(t)=m1(2)(t)(=limλmλ(2)(t))\displaystyle=\inf_{\lambda\geq 1}m_{\lambda}^{(2)}(t)=m_{1}^{(2)}(t)\;(=\lim_{\lambda\to\infty}m_{\lambda}^{(2)}(t))
(4.6) andu(2)(t)\displaystyle\;\text{and}\;\;u^{(2)}(t) =supλ1mλ(2)(t)=limλmλ(2)(t)(=m1(2)(t)),\displaystyle=\sup_{\lambda\geq 1}m_{\lambda}^{(2)}(t)=\lim_{\lambda\to\infty}m_{\lambda}^{(2)}(t)\;(=m_{1}^{(2)}(t)),

while applying Theorem 4.2.1 and Corollary 4.2.1.

As in Section 4.1, we will now apply Theorem 4.2.1 and Corollaries 4.2.1-4.2.2 to estimation of the larger scale parameter θ2\theta_{2} in scale probability models considered in Examples 4.1.1-4.1.2.

Example 4.2.1. Let X1X_{1} and X2X_{2} be independent gamma random variables as defined in Example 4.1.1. Consider estimation of θ2\theta_{2}, under the GPN criterion with a general loss function.
 
For t++t\in\Re_{++} and λ1\lambda\geq 1, the conditional p.d.f. of Z2Z_{2} given D=tD=t is

fλ(s|t)={(1+λt)α1+α2sα1+α21e(1+λt)sΓ(α1+α2), if  0<s<0, otherwise.\displaystyle f_{\lambda}(s|t)=\begin{cases}\frac{(1+\frac{\lambda}{t})^{\alpha_{1}+\alpha_{2}}\,s^{\alpha_{1}+\alpha_{2}-1}\,e^{-(1+\frac{\lambda}{t})s}}{\Gamma{(\alpha_{1}+\alpha_{2})}},&\text{ if}\;\;0<s<\infty\\ 0,&\text{ otherwise}\end{cases}.

Let ν(α)\nu(\alpha) denote the median of the Gamma(α\alpha,1) distribution. Then mλ(2)(t)=(tλ+t)ν(α1+α2)m_{\lambda}^{(2)}(t)=\left(\frac{t}{\lambda+t}\right)\nu(\alpha_{1}+\alpha_{2}), λ1,t>0\lambda\geq 1,\;t>0, and, as in (4.5) and (4.6), we may take l(2)(t)=0,t>0l^{(2)}(t)=0,\;t>0 and u(2)(t)=t1+tν(α1+α2),t>0u^{(2)}(t)=\frac{t}{1+t}\,\nu(\alpha_{1}+\alpha_{2}),\;t>0. Also m0,2=ν(α2)m_{0,2}=\nu(\alpha_{2}) and the PNSEE of θ2\theta_{2} is δ2,PNSEE(𝕏)=X2ν(α2)\delta_{2,PNSEE}(\mathbb{X})=\frac{X_{2}}{\nu(\alpha_{2})}. The restricted MLE of θ2\theta_{2} is δ2,RMLE(𝕏)=max{X2α2,X1+X2α1+α2}=ψ2,R(D)X2\delta_{2,RMLE}(\mathbb{X})=\max\{\frac{X_{2}}{\alpha_{2}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\}=\psi_{2,R}(D)X_{2}, where ψ2,R(t)=max{1α2,1+tt(α1+α2)},t>0\psi_{2,R}(t)=\max\{\frac{1}{\alpha_{2}},\frac{1+t}{t(\alpha_{1}+\alpha_{2})}\},\;t>0. Using Theorem 4.2.1 and Corollaries 4.2.1-4.2.2, the following conclusions are evident:
 
(i) The estimator δ2,RMLE(𝕏)=max{X2α2,X1+X2ν(α1+α2)}\delta_{2,RMLE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{2}}{\alpha_{2}},\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})}\big{\}} is nearer to θ2\theta_{2} than the restricted MLE δ2,RMLE(𝕏)=max{X2α2,X1+X2α1+α2}\delta_{2,RMLE}(\mathbb{X})=\max\{\frac{X_{2}}{\alpha_{2}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\}.
 
(ii) The estimator δ2,PNSEE(𝕏)=max{X2ν(α2),X1+X2ν(α1+α2)}\delta_{2,PNSEE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{2}}{\nu(\alpha_{2})},\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})}\big{\}} is nearer to θ2\theta_{2} than the PNSEE δ2,PNSEE(𝕏)=X2ν(α2)\delta_{2,PNSEE}(\mathbb{X})=\frac{X_{2}}{\nu(\alpha_{2})}.
 
(iii) The restricted MLE δ2,RMLE(𝕏)=max{X2α2,X1+X2α1+α2}\delta_{2,RMLE}(\mathbb{X})=\max\{\frac{X_{2}}{\alpha_{2}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\} is nearer to θ2\theta_{2} than the unbiased estimator δ2,UE(𝕏)=X2α2\delta_{2,UE}(\mathbb{X})=\frac{X_{2}}{\alpha_{2}}. This result, under the specific loss function L2(𝜽,a)=|aθ21|,𝜽Θ0,a>0L_{2}(\boldsymbol{\theta},a)=|\frac{a}{\theta_{2}}-1|,\;\boldsymbol{\theta}\in\Theta_{0},\;a>0, was proved by Ma and Liu (2014). A small numerical sketch of the estimators in conclusions (i)-(iii) is given below.
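The sketch below collects the estimators of θ2\theta_{2} appearing above, again with ν()\nu(\cdot) computed via scipy's gamma quantile; names and test values are illustrative.

```python
from scipy.stats import gamma

nu = lambda a: gamma.ppf(0.5, a)   # median of the Gamma(a, 1) distribution

def theta2_estimators(x1, x2, a1, a2):
    """Estimators of theta2 from Example 4.2.1 (names illustrative)."""
    A = a1 + a2
    return {
        "RMLE": max(x2 / a2, (x1 + x2) / A),
        "RMLE*": max(x2 / a2, (x1 + x2) / nu(A)),        # conclusion (i)
        "PNSEE": x2 / nu(a2),
        "PNSEE*": max(x2 / nu(a2), (x1 + x2) / nu(A)),   # conclusion (ii)
        "UE": x2 / a2,
    }

print(theta2_estimators(1.2, 0.8, a1=2.0, a2=3.0))
```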

Example 4.2.2. Let X1X_{1} and X2X_{2} be independent random variables as described in Example 4.1.2. Consider estimation of θ2\theta_{2} under the GPN criterion with a general loss function. Here the PNSEE of θ2\theta_{2} is δ2,PNSEE(𝕏)=21α2X2\delta_{2,PNSEE}(\mathbb{X})=2^{\frac{1}{\alpha_{2}}}X_{2}. Also, for λ1\lambda\geq 1 and t++t\in\Re_{++}, the conditional p.d.f. of Z2Z_{2} given D=tD=t is

fλ(s|t)=(α1+α2)sα1+α21(min{1,tλ})α1+α2, if 0<s<min{1,tλ}.\displaystyle f_{\lambda}(s|t)=\frac{(\alpha_{1}+\alpha_{2})s^{\alpha_{1}+\alpha_{2}-1}}{\left(\min\{1,\frac{t}{\lambda}\}\right)^{\alpha_{1}+\alpha_{2}}},\text{ if}\;0<s<\min\bigg{\{}1,\frac{t}{\lambda}\bigg{\}}.

The median of the above density is mλ(2)(t)=21α1+α2min{1,tλ},t++m_{\lambda}^{(2)}(t)=2^{\frac{-1}{\alpha_{1}+\alpha_{2}}}\min\big{\{}1,\frac{t}{\lambda}\big{\}},\;\forall\;t\in\Re_{++} and λ1.\lambda\geq 1. Clearly, for t++t\in\Re_{++}, mλ(2)(t)m_{\lambda}^{(2)}(t) is decreasing in λ[1,)\lambda\in[1,\infty). As in (4.5) and (4.6), we may take l(2)(t)=limλmλ(2)(t)=0 and u(2)(t)=m1(2)(t)=21α1+α2min{1,t}.l^{(2)}(t)=\lim\limits_{\lambda\to\infty}m_{\lambda}^{(2)}(t)=0\text{ and }u^{(2)}(t)\!=m_{1}^{(2)}(t)=2^{\frac{-1}{\alpha_{1}+\alpha_{2}}}\min\{1,t\}.

The following conclusions immediately follow from Theorem 4.2.1 and Corollary 4.2.1:
 
(i) The estimator δ2,PNSEE(𝕏)=max{21α1+α2X1,21α2X2}\delta_{2,PNSEE}^{*}(\mathbb{X})=\max\big{\{}2^{\frac{1}{\alpha_{1}+\alpha_{2}}}X_{1},2^{\frac{1}{\alpha_{2}}}X_{2}\big{\}} is nearer to θ2\theta_{2} than the PNSEE δ2,PNSEE(𝕏)=21α2X2\delta_{2,PNSEE}(\mathbb{X})=2^{\frac{1}{\alpha_{2}}}X_{2}.
 
(ii) Let ψ2,0(t)\psi_{2,0}(t) be such that 21α2<ψ2,0(t)21α1+α2max{1,1t},t2α1α2(α1+α2)2^{\frac{1}{\alpha_{2}}}<\psi_{2,0}(t)\leq 2^{\frac{1}{\alpha_{1}+\alpha_{2}}}\max\big{\{}1,\frac{1}{t}\big{\}},\;\forall\;t\leq 2^{\frac{-\alpha_{1}}{\alpha_{2}(\alpha_{1}+\alpha_{2})}}, and ψ2,0(t)=21α2,t>2α1α2(α1+α2)\psi_{2,0}(t)=2^{\frac{1}{\alpha_{2}}},\;\forall\;t>2^{\frac{-\alpha_{1}}{\alpha_{2}(\alpha_{1}+\alpha_{2})}}. Then the estimator δψ2,0(𝕏)=ψ2,0(D)X2\delta_{\psi_{2,0}}(\mathbb{X})=\psi_{2,0}(D)X_{2} is nearer to θ2\theta_{2} than the PNSEE δ2,PNSEE(𝕏)=21α2X2\delta_{2,PNSEE}(\mathbb{X})=2^{\frac{1}{\alpha_{2}}}X_{2}.

5. Simulation Study

5.1. For Smaller Location Parameter θ1\theta_{1}

In Example 3.1.1, we considered a bivariate normal distribution with unknown means θ1\theta_{1} and θ2\theta_{2} (<θ1θ2<-\infty<\theta_{1}\leq\theta_{2}<\infty), known variances σ12>0\sigma_{1}^{2}>0 and σ22>0\sigma_{2}^{2}>0, and known correlation coefficient ρ\rho (1<ρ<1-1<\rho<1), and studied various estimators of θ1\theta_{1} under the GPN criterion. To further evaluate the performances of these estimators, in this section we compare them numerically under the GPN criterion with the absolute error loss (i.e., W(t)=|t|,tW(t)=|t|,\;t\in\Re). For the simulations, 10,000 random samples of size 1 were generated from the relevant bivariate normal distribution. For various configurations of α=σ2(σ2ρσ1)σ12+σ222ρσ1σ2\alpha=\frac{\sigma_{2}(\sigma_{2}-\rho\sigma_{1})}{\sigma_{1}^{2}+\sigma_{2}^{2}-2\rho\sigma_{1}\sigma_{2}}, using Monte Carlo simulations, we obtained the GPN values of the restricted MLE (δ1,RMLE(𝕏)=X1(1α)max{0,D}\delta_{1,RMLE}(\mathbb{X})=X_{1}-(1-\alpha)\max\{0,-D\}) relative to the PNLEE (δ1,PNLEE(𝕏)=X1\delta_{1,PNLEE}(\mathbb{X})=X_{1}), of the improved Hwang and Peddada (HP) estimator (δ1,HP(𝕏)=αX1+(1α)X2\delta_{1,HP}^{*}(\mathbb{X})=\alpha X_{1}+(1-\alpha)X_{2}) relative to the HP estimator (δ1,HP(𝕏)=X1max{0,(α1)D}\delta_{1,HP}(\mathbb{X})=X_{1}-\max\{0,(\alpha-1)D\}), and of the restricted MLE relative to the Tan and Peddada (PDT) estimator (δ1,PDT(𝕏)=X1max{0,D}\delta_{1,PDT}(\mathbb{X})=X_{1}-\max\{0,-D\}). These values are tabulated in Tables 1-3; a sketch of the simulation scheme is given after Table 3. The following observations are evident from Tables 1-3:

(i) All the GPN values are greater than 0.50.5, which is in conformity with the theoretical findings of Example 3.1.1.
(ii) From Table 1, we observe that the GPN values are higher when σ1\sigma_{1} is substantially larger than σ2\sigma_{2}; they are also higher for negative ρ\rho.
(iii) From Table 2, we see that the GPN value increases as ρσ2σ1\rho\sigma_{2}-\sigma_{1} (>0>0) increases; similarly, from Table 3, the GPN value increases as ρσ1σ2\rho\sigma_{1}-\sigma_{2} (>0>0) increases.

Table 1. The GPN values of the restricted MLE (δ1,RMLE\delta_{1,RMLE}) relative to the PNLEE (δ1,PNLEE\delta_{1,PNLEE}); columns indexed by (σ1,σ2,ρ)(\sigma_{1},\sigma_{2},\rho)
θ2θ1\theta_{2}-\theta_{1}   (3,0.5,-0.9)   (0.5,5,-0.5)   (1,1,0)   (15,2,0.2)   (1,30,0.5)   (30,1,0.9)
0.0 0.743 0.557 0.560 0.708 0.549 0.740
0.5 0.713 0.577 0.609 0.718 0.537 0.749
1.0 0.693 0.548 0.580 0.722 0.545 0.740
1.5 0.662 0.558 0.547 0.719 0.535 0.740
2.0 0.626 0.566 0.540 0.717 0.546 0.753
2.5 0.611 0.570 0.516 0.721 0.557 0.741
3.0 0.584 0.575 0.507 0.709 0.556 0.732
Table 2. The GPN values of the improved HP estimator (δ1,HP\delta_{1,HP}^{*}) relative to the HP estimator (δ1,HP\delta_{1,HP}) when α>1\alpha>1 (i.e., σ1<ρσ2\sigma_{1}<\rho\sigma_{2}); columns indexed by (σ1,σ2,ρ)(\sigma_{1},\sigma_{2},\rho)
θ2θ1\theta_{2}-\theta_{1}   (0.1,5,0.2)   (1,25,0.2)   (0.5,2,0.5)   (5,15,0.5)   (0.5,5,0.9)   (2,15,0.9)
0 0.520 0.502 0.524 0.512 0.620 0.619
0.5 0.515 0.514 0.523 0.521 0.631 0.620
1 0.516 0.508 0.532 0.518 0.633 0.621
1.5 0.520 0.502 0.528 0.520 0.639 0.625
2 0.524 0.511 0.521 0.514 0.626 0.634
2.5 0.520 0.520 0.518 0.513 0.621 0.630
3 0.513 0.515 0.510 0.516 0.609 0.629
Table 3. The GPN values of the restricted MLE (δ1,RMLE\delta_{1,RMLE}) relative to the PDT estimator (δ1,PDT\delta_{1,PDT}) when α<0\alpha<0 (i.e., σ2<ρσ1\sigma_{2}<\rho\sigma_{1}); columns indexed by (σ1,σ2,ρ)(\sigma_{1},\sigma_{2},\rho)
θ2θ1\theta_{2}-\theta_{1}   (5,0.1,0.2)   (25,1,0.2)   (2,0.5,0.5)   (15,5,0.5)   (5,0.5,0.9)   (15,2,0.9)
0 0.512 0.516 0.522 0.516 0.617 0.619
0.5 0.733 0.613 0.650 0.526 0.723 0.679
1 0.706 0.677 0.640 0.550 0.708 0.712
1.5 0.692 0.713 0.598 0.577 0.683 0.722
2 0.672 0.729 0.569 0.588 0.667 0.720
2.5 0.653 0.730 0.539 0.597 0.644 0.710
3 0.633 0.725 0.522 0.611 0.630 0.706
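For reference, the following minimal sketch outlines the Monte Carlo scheme described above for one column of Table 1; it assumes D=X2X1D=X_{2}-X_{1} (the notation of Section 3), and the seed and replication count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2024)

def gpn_rmle_vs_pnlee(s1, s2, rho, gap, n=10_000):
    """Estimated GPN of the restricted MLE relative to the PNLEE of theta1,
    under absolute error loss, for theta2 - theta1 = gap >= 0."""
    th1, th2 = 0.0, gap
    cov = [[s1 ** 2, rho * s1 * s2], [rho * s1 * s2, s2 ** 2]]
    x = rng.multivariate_normal([th1, th2], cov, size=n)
    x1, x2 = x[:, 0], x[:, 1]
    d = x2 - x1                                  # assumed: D = X2 - X1
    alpha = s2 * (s2 - rho * s1) / (s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2)
    rmle = x1 - (1 - alpha) * np.maximum(0.0, -d)
    pnlee = x1
    w1, w2 = np.abs(rmle - th1), np.abs(pnlee - th1)
    return np.mean(w1 < w2) + 0.5 * np.mean(w1 == w2)

# e.g. the (sigma1, sigma2, rho) = (3, 0.5, -0.9) column of Table 1:
for gap in (0.0, 0.5, 1.0):
    print(gap, round(gpn_rmle_vs_pnlee(3.0, 0.5, -0.9, gap), 3))
```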

5.2. For Larger Scale Parameter θ2\theta_{2}

In this section, for estimation of the larger scale parameter θ2\theta_{2} under the GPN criterion with the loss function L2(𝜽,a)=|aθ21|,a𝒜=(0,),𝜽Θ0L_{2}(\boldsymbol{\theta},a)=|\frac{a}{\theta_{2}}-1|,\;a\in\mathcal{A}=(0,\infty),\;\boldsymbol{\theta}\in\Theta_{0}, we numerically compare various estimators considered in Example 4.2.1. For the simulations, 10,000 random samples of size 1 were generated from the relevant gamma distributions. Using Monte Carlo simulations, we obtained the GPN values of the improved restricted MLE (δ2,RMLE(𝕏)=max{X2α2,X1+X2ν(α1+α2)}\delta_{2,RMLE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{2}}{\alpha_{2}},\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})}\big{\}}) relative to the restricted MLE (δ2,RMLE(𝕏)=max{X2α2,X1+X2α1+α2}\delta_{2,RMLE}(\mathbb{X})=\max\big{\{}\frac{X_{2}}{\alpha_{2}},\frac{X_{1}+X_{2}}{\alpha_{1}+\alpha_{2}}\big{\}}), of the improved PNSEE (δ2,PNSEE(𝕏)=max{X2ν(α2),X1+X2ν(α1+α2)}\delta_{2,PNSEE}^{*}(\mathbb{X})=\max\big{\{}\frac{X_{2}}{\nu(\alpha_{2})},\frac{X_{1}+X_{2}}{\nu(\alpha_{1}+\alpha_{2})}\big{\}}) relative to the PNSEE (δ2,PNSEE(𝕏)=X2ν(α2)\delta_{2,PNSEE}(\mathbb{X})=\frac{X_{2}}{\nu(\alpha_{2})}), and of the restricted MLE relative to the unbiased estimator (δ2,UE(𝕏)=X2α2\delta_{2,UE}(\mathbb{X})=\frac{X_{2}}{\alpha_{2}}), as reported in Tables 4-6, respectively; a sketch of the simulation scheme is given after Table 6. The following observations are evident from Tables 4-6:

(i) All the GPN values are greater than 0.50.5, confirming our theoretical findings of Example 4.2.1.
(ii) The GPN values are higher when α1\alpha_{1} is substantially larger than α2\alpha_{2}.

Table 4. The GPN values of the modified restricted MLE (δ2,RMLE\delta_{2,RMLE}^{*}) relative to the restricted MLE (δ2,RMLE\delta_{2,RMLE}); columns indexed by (α1,α2)(\alpha_{1},\alpha_{2})
θ2/θ1\theta_{2}/\theta_{1}   (0.5,0.2)   (0.2,0.8)   (1,1)   (5,2)   (1,30)   (30,1)
1 0.552 0.538 0.518 0.515 0.508 0.503
1.5 0.613 0.567 0.603 0.640 0.519 0.736
2 0.664 0.580 0.627 0.635 0.522 0.699
2.5 0.692 0.592 0.634 0.605 0.522 0.666
3 0.715 0.600 0.635 0.586 0.522 0.643
3.5 0.731 0.595 0.627 0.568 0.519 0.622
4 0.740 0.606 0.623 0.554 0.515 0.610
Table 5. The GPN values of the modified PNSEE (δ2,PNSEE\delta_{2,PNSEE}^{*}) relative to the PNSEE (δ2,PNSEE\delta_{2,PNSEE}); columns indexed by (α1,α2)(\alpha_{1},\alpha_{2})
θ2/θ1\theta_{2}/\theta_{1}   (0.5,0.2)   (0.2,0.8)   (1,1)   (5,2)   (1,30)   (30,1)
1 0.573 0.522 0.568 0.600 0.514 0.695
1.5 0.612 0.543 0.595 0.631 0.522 0.686
2 0.632 0.545 0.601 0.599 0.519 0.648
2.5 0.645 0.548 0.596 0.575 0.514 0.620
3 0.650 0.551 0.585 0.558 0.510 0.602
3.5 0.659 0.549 0.581 0.544 0.508 0.591
4 0.656 0.549 0.571 0.537 0.506 0.580
Table 6. The GPN values of the restricted MLE (δ2,RMLE\delta_{2,RMLE}) relative to the unbiased estimator (δ2,UE\delta_{2,UE}); columns indexed by (α1,α2)(\alpha_{1},\alpha_{2})
θ2/θ1\theta_{2}/\theta_{1}   (0.5,0.2)   (0.2,0.8)   (1,1)   (5,2)   (1,30)   (30,1)
1 0.702 0.569 0.628 0.648 0.523 0.758
1.5 0.736 0.579 0.642 0.668 0.529 0.744
2 0.745 0.579 0.641 0.627 0.522 0.698
2.5 0.756 0.575 0.633 0.598 0.516 0.664
3 0.751 0.578 0.616 0.575 0.512 0.640
3.5 0.748 0.573 0.610 0.559 0.509 0.625
4 0.744 0.572 0.597 0.550 0.506 0.611
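Analogously, the following sketch outlines the gamma simulation for one column of Table 4 (GPN of δ2,RMLE\delta_{2,RMLE}^{*} relative to δ2,RMLE\delta_{2,RMLE}); settings are illustrative.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)
nu = lambda a: gamma.ppf(0.5, a)   # median of the Gamma(a, 1) distribution

def gpn_rmle_star_vs_rmle(a1, a2, ratio, n=10_000):
    """Estimated GPN of delta2,RMLE* relative to delta2,RMLE under
    L2 = |a/theta2 - 1|, for theta2/theta1 = ratio >= 1."""
    th1, th2 = 1.0, ratio
    x1 = th1 * gamma.rvs(a1, size=n, random_state=rng)
    x2 = th2 * gamma.rvs(a2, size=n, random_state=rng)
    A = a1 + a2
    rmle = np.maximum(x2 / a2, (x1 + x2) / A)
    rmle_star = np.maximum(x2 / a2, (x1 + x2) / nu(A))
    w1, w2 = np.abs(rmle_star / th2 - 1.0), np.abs(rmle / th2 - 1.0)
    return np.mean(w1 < w2) + 0.5 * np.mean(w1 == w2)

# e.g. the (alpha1, alpha2) = (0.5, 0.2) column of Table 4:
for r in (1.0, 1.5, 2.0):
    print(r, round(gpn_rmle_star_vs_rmle(0.5, 0.2, r), 3))
```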

Funding

This work was supported by the [Council of Scientific and Industrial Research (CSIR)] under Grant [number 09/092(0986)/2018].

References

  • Barlow et al., (1972) Barlow, R. E., Bartholomew, D. J., Bremner, J. M., and Brunk, H. D. (1972). Statistical inference under order restrictions: The theory and application of isotonic regression. John Wiley & Sons.
  • Brewster and Zidek, (1974) Brewster, J. F. and Zidek, J. V. (1974). Improving on equivariant estimators. Ann. Statist., 2:21–38.
  • Brunk, (1955) Brunk, H. D. (1955). Maximum likelihood estimates of monotone parameters. Ann. Math. Statist., 26:607–616.
  • Chang et al., (2017) Chang, Y.-T., Fukuda, K., and Shinozaki, N. (2017). Estimation of two ordered normal means when a covariance matrix is known. Statistics, 51(5):1095–1104.
  • Chang and Shinozaki, (2015) Chang, Y.-T. and Shinozaki, N. (2015). Estimation of two ordered normal means under modified Pitman nearness criterion. Annals of the Institute of Statistical Mathematics, 67(5):863–883.
  • Chang et al., (2020) Chang, Y.-T., Shinozaki, N., and Strawderman, W. E. (2020). Pitman closeness domination in predictive density estimation for two-ordered normal means under α\alpha-divergence loss. Japanese Journal of Statistics and Data Science, 3(1):1–21.
  • Chen and Rubin, (1986) Chen, J. and Rubin, H. (1986). Bounds for the difference between median and mean of gamma and Poisson distributions. Statist. Probab. Lett., 4(6):281–283.
  • Cohen and Sackrowitz, (1970) Cohen, A. and Sackrowitz, H. B. (1970). Estimation of the last mean of a monotone sequence. Ann. Math. Statist., 41:2021–2034.
  • Garren, (2000) Garren, S. T. (2000). On the improved estimation of location parameters subject to order restrictions in location-scale families. Sankhyā Ser. B, 62(2):189–201.
  • Gupta and Singh, (1992) Gupta, R. D. and Singh, H. (1992). Pitman nearness comparisons of estimates of two ordered normal means. Austral. J. Statist., 34(3):407–414.
  • Hwang and Peddada, (1994) Hwang, J. T. G. and Peddada, S. D. (1994). Confidence interval estimation subject to order restrictions. Ann. Statist., 22(1):67–93.
  • Katz, (1963) Katz, M. W. (1963). Estimating ordered probabilities. Ann. Math. Statist., 34:967–972.
  • Kaur and Singh, (1991) Kaur, A. and Singh, H. (1991). On the estimation of ordered means of two exponential populations. Ann. Inst. Statist. Math., 43(2):347–356.
  • Keating, (1985) Keating, J. P. (1985). More on Rao’s phenomenon. Sankhyā Ser. B, 47(1):18–21.
  • Keating and Mason, (1985) Keating, J. P. and Mason, R. L. (1985). Practical relevance of an alternative criterion in estimation. The American Statistician, 39(3):203–205.
  • Keating et al., (1993) Keating, J. P., Mason, R. L., and Sen, P. K. (1993). Pitman’s measure of closeness: a comparison of statistical estimators. SIAM.
  • Kelly, (1989) Kelly, R. E. (1989). Stochastic reduction of loss in estimating normal means by isotonic regression. Ann. Statist., 17(2):937–940.
  • Kubokawa, (1991) Kubokawa, T. (1991). Equivariant estimation under the Pitman closeness criterion. Communications in Statistics-Theory and Methods, 20(11):3499–3523.
  • Kubokawa and Saleh, (1994) Kubokawa, T. and Saleh, A. K. M. E. (1994). Estimation of location and scale parameters under order restrictions. J. Statist. Res., 28(1-2):41–51.
  • Kumar and Sharma, (1988) Kumar, S. and Sharma, D. (1988). Simultaneous estimation of ordered parameters. Comm. Statist. Theory Methods, 17(12):4315–4336.
  • Kumar and Sharma, (1989) Kumar, S. and Sharma, D. (1989). On the Pitman estimator of ordered normal means. Comm. Statist. Theory Methods, 18(11):4163–4175.
  • Kumar and Sharma, (1992) Kumar, S. and Sharma, D. (1992). An inadmissibility result for affine equivariant estimators. Statist. Decisions, 10(1-2):87–97.
  • Kushary and Cohen, (1989) Kushary, D. and Cohen, A. (1989). Estimating ordered location and scale parameters. Statist. Decisions, 7(3):201–213.
  • Lee, (1981) Lee, C. I. C. (1981). The quadratic loss of isotonic regression under normality. Ann. Statist., 9(3):686–688.
  • Ma and Liu, (2014) Ma, T. F. and Liu, S. (2014). Pitman closeness of the class of isotonic estimators for ordered scale parameters of two Gamma distributions. Statist. Papers, 55(3):615–625.
  • Marshall and Olkin, (2007) Marshall, A. W. and Olkin, I. (2007). Characterizations of distributions through coincidences of semiparametric families. J. Statist. Plann. Inference, 137(11):3618–3625.
  • Misra and Dhariyal, (1995) Misra, N. and Dhariyal, I. D. (1995). Some inadmissibility results for estimating ordered uniform scale parameters. Comm. Statist. Theory Methods, 24(3):675–685.
  • Misra et al., (2002) Misra, N., Dhariyal, I. D., and Kundu, D. (2002). Natural estimators for the larger of two exponential location parameters with a common unknown scale parameter. Statist. Decisions, 20(1):67–80.
  • Misra et al., (2004) Misra, N., Iyer, S. K., and Singh, H. (2004). The LINEX risk of maximum likelihood estimators of parameters of normal populations having order restricted means. Sankhyā, 66(4):652–677.
  • Misra and van der Meulen, (1997) Misra, N. and van der Meulen, E. C. (1997). On estimation of the common mean of k(2)k\;(\geq 2) normal populations with order restricted variances. Statistics & probability letters, 36(3):261–267.
  • Nayak, (1990) Nayak, T. K. (1990). Estimation of location and scale parameters using generalized Pitman nearness criterion. Journal of Statistical Planning and Inference, 24(2):259–268.
  • Patra and Kumar, (2017) Patra, L. K. and Kumar, S. (2017). Estimating ordered means of a bivariate normal distribution. American Journal of Mathematical and Management Sciences, 36(2):118–136.
  • Peddada, (1985) Peddada, S. D. (1985). A short note on Pitman’s measure of nearness. Amer. Statist., 39(4, part 1):298–299.
  • Peddada et al., (2005) Peddada, S. D., Dunson, D. B., and Tan, X. (2005). Estimation of order-restricted means from correlated data. Biometrika, 92(3):703–715.
  • Pitman, (1937) Pitman, E. J. (1937). The “closest” estimates of statistical parameters. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 33, pages 212–222. Cambridge University Press.
  • Rao, (1981) Rao, C. R. (1981). Some comments on the minimum mean square error as a criterion of estimation. In Statistics and related topics (Ottawa, Ont., 1980), pages 123–143. North-Holland, Amsterdam-New York.
  • Rao et al., (1986) Rao, C. R., Keating, J. P., and Mason, R. L. (1986). The Pitman nearness criterion and its determination. Comm. Statist. A—Theory Methods, 15(11):3173–3191.
  • Robertson et al., (1988) Robertson, T., Wright, F. T., and Dykstra, R. L. (1988). Order restricted statistical inference. John Wiley & Sons.
  • Tan and Peddada, (2000) Tan, X. and Peddada, S. (2000). Asymptotic distribution of some estimators for parameters subject to order restrictions. Stat Appl, 2:7–25.
  • van Eeden, (1956a) van Eeden, C. (1956a). Maximum likelihood estimation of ordered probabilities. Nederl. Akad. Wetensch. Proc. Ser. A. 59 Indag. Math., 18:444–455.
  • van Eeden, (1956b) van Eeden, C. (1956b). Maximum likelihood estimation of partially or completely ordered parameters. Statist. Afdeling. Rep. S 207 (VP 9). Math. Centrum Amsterdam.
  • van Eeden, (1957) van Eeden, C. (1957). Maximum likelihood estimation of partially or completely ordered parameters. II. Nederl. Akad. Wetensch. Proc. Ser. A. 60 Indag. Math., 19:201–211.
  • van Eeden, (1958) van Eeden, C. (1958). Testing and estimating ordered parameters of probability distributions. Mathematical Centre, Amsterdam.
  • van Eeden, (2006) van Eeden, C. (2006). Restricted parameter space estimation problems. Admissibility and minimaxity properties, volume 188 of Lecture Notes in Statistics. Springer, New York.
  • Vijayasree et al., (1995) Vijayasree, G., Misra, N., and Singh, H. (1995). Componentwise estimation of ordered parameters of k(2)k\;(\geq 2) exponential populations. Ann. Inst. Statist. Math., 47(2):287–307.
  • Vijayasree and Singh, (1993) Vijayasree, G. and Singh, H. (1993). Mixed estimators of two ordered exponential means. J. Statist. Plann. Inference, 35(1):47–53.
  • Zhou and Nayak, (2012) Zhou, H. and Nayak, T. K. (2012). Pitman closest equivariant estimators and predictors under location–scale models. Journal of Statistical Planning and Inference, 142(6):1367–1377.