
On extropy of past lifetime distribution

Osman Kamari
University of Human Development
Sulaymaniyah, Iraq
   Francesco Buono
Università di Napoli Federico II
Italy
Abstract

Recently, Qiu and Jia (2018) introduced the residual extropy as a measure of uncertainty in residual lifetime distributions, analogous to the residual entropy of Ebrahimi (1996), and obtained several of its properties and applications. In this paper, we study the extropy as a measure of uncertainty in a past lifetime distribution; this measure is called past extropy. We also prove a characterization result based on the past extropy of the largest order statistic.

Keywords: Reversed residual lifetime, Past extropy, Characterization, Order statistics.

AMS Subject Classification: 94A17, 62B10, 62G30

1 Introduction

The concept of Shannon entropy as a seminal measure of uncertainty for a random variable was proposed by Shannon (1948). The Shannon entropy $H(f)$ of a non-negative and absolutely continuous random variable $X$ is defined as follows:

H\left(f\right) = -\mathbb{E}[\log f(X)] = -\int_{0}^{+\infty} f\left(x\right)\log f\left(x\right)\,\mathrm{d}x,  (1)

where $F$ and $f$ are the cumulative distribution function (CDF) and the probability density function (pdf) of $X$, respectively. There is a vast literature devoted to the applications, generalizations and properties of Shannon entropy (see, e.g., Cover and Thomas, 2006).

Recently, a new measure of uncertainty called extropy was proposed by Lad et al. (2015) as a complementary dual of Shannon entropy. For a non-negative random variable $X$ the extropy is defined as follows:

J\left(X\right) = -\frac{1}{2}\int_{0}^{+\infty} f^{2}(x)\,\mathrm{d}x.  (2)

It is obvious that $J\left(X\right)\leq 0$.

One of the statistical applications of extropy is to score the forecasting distributions using the total log scoring rule.
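As an illustrative aside (not part of the original development), the integral in (2) can be approximated numerically. The following minimal Python sketch, assuming NumPy and SciPy are available, checks that for an exponential density the extropy is close to $-\lambda/4$; the parameter value is arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def extropy(pdf, lower=0.0, upper=np.inf):
    """Numerical approximation of J(X) = -1/2 * int f^2(x) dx, eq. (2)."""
    value, _ = quad(lambda x: pdf(x) ** 2, lower, upper)
    return -0.5 * value

lam = 2.0
exp_pdf = lambda x: lam * np.exp(-lam * x)   # Exp(lam) density

print(extropy(exp_pdf))   # approximately -lam / 4 = -0.5
```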

The study of durations is a subject of interest in many fields of science such as reliability, survival analysis, and forensic science. In these areas, the additional lifetime, given that the component, system or living organism has survived up to time $t$, is termed the residual life function of the component. If $X$ is the lifetime of a component, then $X_{t}=\left(X-t\,|\,X>t\right)$ is called the residual life function. If a component is known to have survived to age $t$, then the extropy is no longer useful to measure the uncertainty of the remaining lifetime of the component.

Therefore, Ebrahimi (1996) defined the entropy of the residual lifetime $X_{t}=\left(X-t\,|\,X>t\right)$ as a dynamic form of uncertainty, called the residual entropy at time $t$ and given by

H(X;t) = -\int_{t}^{+\infty} \frac{f(x)}{\overline{F}(t)} \log\frac{f(x)}{\overline{F}(t)}\,\mathrm{d}x,

where $\overline{F}(t)=\mathbb{P}(X>t)=1-F(t)$ is the survival (reliability) function of $X$.

Analogous to the residual entropy, Qiu and Jia (2018) defined the extropy of the residual lifetime $X_{t}$, called the residual extropy at time $t$ and given by

J\left(X_{t}\right) = -\frac{1}{2}\int_{0}^{+\infty} f_{X_{t}}^{2}(x)\,\mathrm{d}x = -\frac{1}{2\overline{F}^{2}(t)}\int_{t}^{+\infty} f^{2}(x)\,\mathrm{d}x.  (3)

In many situations, uncertainty can also refer to the past. Suppose the random variable $X$ is the lifetime of a component, system or living organism, having an absolutely continuous distribution function $F_{X}(t)$ and density function $f_{X}(t)$. For $t>0$, let the random variable ${}_{t}X=(t-X\,|\,X<t)$ be the time elapsed after failure until time $t$, given that the component has already failed by time $t$. We call the random variable ${}_{t}X$ the reversed residual life (past lifetime). For instance, suppose that at time $t$ one has undergone a medical test to check for a certain disease, and that the test result is positive. If $X$ is the age at which the patient was infected, then it is known that $X<t$. The question is: how much time has elapsed since the patient was infected by this disease? Based on this idea, Di Crescenzo and Longobardi (2002) introduced the entropy of the reversed residual lifetime ${}_{t}X$ as a dynamic measure of uncertainty, called the past entropy, as follows:

H\left(X;[t]\right) = -\int_{0}^{t} \frac{f(x)}{F(t)} \log\frac{f(x)}{F(t)}\,\mathrm{d}x.

This measure is the dual of the residual entropy introduced by Ebrahimi (1996).

In this paper, we study the extropy of ${}_{t}X$, the dual of the residual extropy, which is called past extropy and is defined as follows (see also Krishnan et al. (2020)):

J\left({}_{t}X\right) = -\frac{1}{2}\int_{0}^{+\infty} f_{{}_{t}X}^{2}(x)\,\mathrm{d}x = -\frac{1}{2F^{2}(t)}\int_{0}^{t} f^{2}(x)\,\mathrm{d}x,  (4)

where $f_{{}_{t}X}(x)=\frac{f(t-x)}{F(t)}$, for $x\in(0,t)$. It can be seen that, for $t\geq 0$, $J\left({}_{t}X\right)$ possesses all the properties of $J(X)$.

Remark 1.

It is clear that $J\left({}_{+\infty}X\right)=J(X)$.

Past extropy has applications in information theory, reliability and survival analysis, insurance, forensic science and other related fields, because in these areas a lifetime distribution truncated from above is of utmost importance.

The paper is organized as follows. In Section 2, an approach to measuring uncertainty in a past lifetime distribution is proposed, and a characterization result involving the reversed failure rate is studied. In Section 3, a characterization result is given based on the past extropy of the largest order statistic.

2 Past extropy and some characterizations

Analogous to the residual extropy (Qiu and Jia, 2018), the extropy of ${}_{t}X$ is called past extropy and, for a non-negative random variable $X$, is given by

J\left({}_{t}X\right) = -\frac{1}{2}\int_{0}^{+\infty} f_{{}_{t}X}^{2}(x)\,\mathrm{d}x = -\frac{1}{2F^{2}(t)}\int_{0}^{t} f^{2}(x)\,\mathrm{d}x,  (5)

where $f_{{}_{t}X}(x)=\frac{f(t-x)}{F(t)}$, for $x\in(0,t)$, is the density function of ${}_{t}X$. It is clear that $J({}_{t}X)\leq 0$, while the residual entropy of a continuous distribution may take any value in $[-\infty,+\infty]$. Moreover, $J\left({}_{+\infty}X\right)=J(X)$.

Example 1.
  • a)

    If $X\sim Exp(\lambda)$, then $J\left({}_{t}X\right)=-\frac{\lambda}{4}\,\frac{1+\mathrm{e}^{-\lambda t}}{1-\mathrm{e}^{-\lambda t}}$ for $t>0$. This shows that the past extropy of the exponential distribution is an increasing function of $t$.

  • b)

    If $X\sim U(0,b)$, then $J\left({}_{t}X\right)=-\frac{1}{2t}$, for $0<t<b$.

  • c)

    If $X$ has a power distribution with parameter $\alpha>0$, i.e. $f(x)=\alpha x^{\alpha-1}$, $0<x<1$, then $J\left({}_{t}X\right)=-\frac{\alpha^{2}}{2(2\alpha-1)t}$.

  • d)

    If $X$ has a Pareto distribution with parameters $\theta>0$, $x_{0}>0$, i.e. $f(x)=\frac{\theta x_{0}^{\theta}}{x^{\theta+1}}$, $x>x_{0}$, then $J\left({}_{t}X\right)=\frac{\theta^{2}}{2(2\theta+1)(t^{\theta}-x_{0}^{\theta})^{2}}\left[\frac{x_{0}^{2\theta}}{t}-\frac{t^{2\theta}}{x_{0}}\right]$, for $t>x_{0}$.
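The closed forms in items a)–c) can be checked against direct numerical integration of (4). The following sketch is purely illustrative (not part of the paper): SciPy is assumed and the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def past_extropy(pdf, cdf, t, lower=0.0):
    """Numerical version of (4): J(tX) = -1/(2 F(t)^2) * int_0^t f^2(x) dx."""
    value, _ = quad(lambda x: pdf(x) ** 2, lower, t)
    return -value / (2.0 * cdf(t) ** 2)

t, lam, b, alpha = 0.7, 2.0, 3.0, 1.5

# a) Exp(lam): closed form -(lam/4)(1 + e^{-lam t})/(1 - e^{-lam t})
num = past_extropy(lambda x: lam * np.exp(-lam * x),
                   lambda x: 1 - np.exp(-lam * x), t)
closed = -lam / 4 * (1 + np.exp(-lam * t)) / (1 - np.exp(-lam * t))
print(num, closed)

# b) U(0, b): closed form -1/(2t), for 0 < t < b
print(past_extropy(lambda x: 1 / b, lambda x: x / b, t), -1 / (2 * t))

# c) Power(alpha) on (0, 1): closed form -alpha^2 / (2 (2 alpha - 1) t)
print(past_extropy(lambda x: alpha * x ** (alpha - 1),
                   lambda x: x ** alpha, t),
      -alpha ** 2 / (2 * (2 * alpha - 1) * t))
```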

There is a functional relation between past extropy and residual extropy as follows:

J(X) = F^{2}(t)\,J\left({}_{t}X\right) + \overline{F}^{2}(t)\,J\left(X_{t}\right), \qquad \forall\, t>0.

In fact

F^{2}(t)J\left({}_{t}X\right) + \overline{F}^{2}(t)J\left(X_{t}\right) = -\frac{1}{2}\int_{t}^{+\infty}f^{2}(x)\,\mathrm{d}x - \frac{1}{2}\int_{0}^{t}f^{2}(x)\,\mathrm{d}x = -\frac{1}{2}\int_{0}^{+\infty}f^{2}(x)\,\mathrm{d}x = J(X).
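This decomposition is easy to verify numerically. A minimal sketch follows (illustrative only, not part of the paper), assuming a Weibull density and an arbitrary truncation point $t$; SciPy is assumed.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative lifetime: Weibull(2, 1), f(x) = 2x e^{-x^2}, F(x) = 1 - e^{-x^2}
pdf = lambda x: 2 * x * np.exp(-x ** 2)
cdf = lambda x: 1 - np.exp(-x ** 2)

t = 0.8
int_0_t, _ = quad(lambda x: pdf(x) ** 2, 0, t)
int_t_inf, _ = quad(lambda x: pdf(x) ** 2, t, np.inf)

J_past = -int_0_t / (2 * cdf(t) ** 2)             # J(tX), eq. (4)
J_resid = -int_t_inf / (2 * (1 - cdf(t)) ** 2)    # J(X_t), eq. (3)
J_total = -(int_0_t + int_t_inf) / 2              # J(X),  eq. (2)

# both printed values should agree: J(X) = F(t)^2 J(tX) + (1-F(t))^2 J(X_t)
print(J_total, cdf(t) ** 2 * J_past + (1 - cdf(t)) ** 2 * J_resid)
```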

From (4) we can also write the past extropy as

J\left({}_{t}X\right) = -\frac{\tau^{2}(t)}{2f^{2}(t)}\int_{0}^{t}f^{2}(x)\,\mathrm{d}x,

where $\tau(t)=\frac{f(t)}{F(t)}$ is the reversed failure rate.

Definition 1.

A random variable is said to be increasing (decreasing) in past extropy if $J\left({}_{t}X\right)$ is an increasing (decreasing) function of $t$.

Theorem 2.1.

$J\left({}_{t}X\right)$ is increasing (decreasing) in $t$ if and only if $J\left({}_{t}X\right)\leq(\geq)-\frac{1}{4}\tau(t)$.

Proof.

From (5) we get

\frac{\mathrm{d}}{\mathrm{d}t}J\left({}_{t}X\right) = -2\tau(t)J\left({}_{t}X\right) - \frac{1}{2}\tau^{2}(t).

Then $J\left({}_{t}X\right)$ is increasing if and only if

2\tau(t)J\left({}_{t}X\right) + \frac{1}{2}\tau^{2}(t)\leq 0,

and, since $\tau(t)\geq 0$, this is equivalent to

J\left({}_{t}X\right) \leq -\frac{1}{4}\tau(t). ∎
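For the exponential distribution of Example 1 a), the criterion of Theorem 2.1 is easy to check over a grid of $t$ values. The sketch below (illustrative only, arbitrary $\lambda$, NumPy assumed) confirms that $J\left({}_{t}X\right)\leq-\tau(t)/4$ everywhere, consistent with the past extropy being increasing.

```python
import numpy as np

# Exp(lam): J(tX) = -(lam/4)(1 + e^{-lam t})/(1 - e^{-lam t}),
#           tau(t) = lam e^{-lam t} / (1 - e^{-lam t})
lam = 1.5
t = np.linspace(0.05, 5, 200)
J_past = -lam / 4 * (1 + np.exp(-lam * t)) / (1 - np.exp(-lam * t))
tau = lam * np.exp(-lam * t) / (1 - np.exp(-lam * t))

# criterion of Theorem 2.1: J(tX) <= -tau(t)/4 on the whole grid
print(np.all(J_past <= -tau / 4))   # True
```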

Theorem 2.2.

The past extropy $J\left({}_{t}X\right)$ of $X$ is uniquely determined by $\tau(t)$.

Proof.

From (5) we get

\frac{\mathrm{d}}{\mathrm{d}t}J\left({}_{t}X\right) = -2\tau(t)J\left({}_{t}X\right) - \frac{1}{2}\tau^{2}(t).

This is a first-order linear differential equation, which can be solved as

J\left({}_{t}X\right) = \mathrm{e}^{-2\int_{t_{0}}^{t}\tau(s)\,\mathrm{d}s}\left[J\left({}_{t_{0}}X\right) - \int_{t_{0}}^{t}\frac{1}{2}\tau^{2}(s)\,\mathrm{e}^{2\int_{t_{0}}^{s}\tau(y)\,\mathrm{d}y}\,\mathrm{d}s\right],

where we can use the boundary condition $J\left({}_{+\infty}X\right)=J(X)$, so we get

J\left({}_{t}X\right) = \mathrm{e}^{2\int_{t}^{+\infty}\tau(s)\,\mathrm{d}s}\left[J\left(X\right) + \int_{t}^{+\infty}\frac{1}{2}\tau^{2}(s)\,\mathrm{e}^{-2\int_{s}^{+\infty}\tau(y)\,\mathrm{d}y}\,\mathrm{d}s\right].  (6)  ∎

Example 2.

Let $X\sim Exp(\lambda)$, with reversed failure rate $\tau(t)=\frac{\lambda\mathrm{e}^{-\lambda t}}{1-\mathrm{e}^{-\lambda t}}$. It follows from (6) that

J\left({}_{t}X\right) = \mathrm{e}^{2\int_{t}^{+\infty}\frac{\lambda\mathrm{e}^{-\lambda s}}{1-\mathrm{e}^{-\lambda s}}\,\mathrm{d}s}\left[J\left(X\right)+\int_{t}^{+\infty}\frac{1}{2}\,\frac{\lambda^{2}\mathrm{e}^{-2\lambda s}}{\left(1-\mathrm{e}^{-\lambda s}\right)^{2}}\,\mathrm{e}^{-2\int_{s}^{+\infty}\frac{\lambda\mathrm{e}^{-\lambda y}}{1-\mathrm{e}^{-\lambda y}}\,\mathrm{d}y}\,\mathrm{d}s\right]
= \left(1-\mathrm{e}^{-\lambda t}\right)^{-2}\left[J(X)+\frac{1}{2}\int_{t}^{+\infty}\lambda^{2}\mathrm{e}^{-2\lambda s}\,\mathrm{d}s\right]
= \frac{\lambda}{4}\,\frac{\mathrm{e}^{-2\lambda t}-1}{\left(1-\mathrm{e}^{-\lambda t}\right)^{2}} = -\frac{\lambda}{4}\,\frac{1+\mathrm{e}^{-\lambda t}}{1-\mathrm{e}^{-\lambda t}},

recovering the result of Example 1.
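Formula (6) can also be evaluated numerically from $\tau(t)$ alone and compared with this closed form. The sketch below (illustrative only, SciPy assumed) truncates the improper integrals at a large finite bound, so it is an approximation rather than an exact evaluation.

```python
import numpy as np
from scipy.integrate import quad

lam = 1.0
tau = lambda s: lam * np.exp(-lam * s) / (1 - np.exp(-lam * s))
J_X = -lam / 4                                     # extropy of Exp(lam)

def J_past_from_tau(t, upper=50.0):
    """Right-hand side of (6), with +infinity truncated at `upper`."""
    outer = np.exp(2 * quad(tau, t, upper)[0])
    inner = quad(lambda s: 0.5 * tau(s) ** 2
                 * np.exp(-2 * quad(tau, s, upper)[0]), t, upper)[0]
    return outer * (J_X + inner)

t = 0.7
direct = -lam / 4 * (1 + np.exp(-lam * t)) / (1 - np.exp(-lam * t))
print(J_past_from_tau(t), direct)   # the two values agree
```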

Using the following definition (see Shaked and Shanthikumar, 2007), we give a sufficient condition under which $J\left({}_{t}X\right)$ is increasing in $t$.

Definition 2.

Let $X$ and $Y$ be two non-negative random variables with reliability functions $\overline{F},\overline{G}$ and pdfs $f,g$, respectively. $X$ is said to be smaller than $Y$

  • a)

    in the likelihood ratio order, denoted by $X\leq_{lr}Y$, if $\frac{f(x)}{g(x)}$ is decreasing in $x\geq 0$;

  • b)

    in the usual stochastic order, denoted by $X\leq_{st}Y$, if $\overline{F}(x)\leq\overline{G}(x)$ for all $x\geq 0$.

Remark 2.

It is well known that if $X\leq_{lr}Y$ then $X\leq_{st}Y$, and that $X\leq_{st}Y$ if and only if $\mathbb{E}(\varphi(Y))\leq(\geq)\,\mathbb{E}(\varphi(X))$ for any decreasing (increasing) function $\varphi$.

Theorem 2.3.

Let $X$ be a random variable with CDF $F$ and pdf $f$. If $f\left(F^{-1}(x)\right)$ is decreasing in $x\in(0,1)$, then $J\left({}_{t}X\right)$ is increasing in $t\geq 0$.

Proof.

Let $U_{t}$ be a random variable uniformly distributed on $(0,F(t))$, with pdf $g_{t}(x)=\frac{1}{F(t)}$ for $x\in(0,F(t))$. Then, based on (4), we have

J\left({}_{t}X\right) = -\frac{1}{2F^{2}(t)}\int_{0}^{F(t)}f\left(F^{-1}(u)\right)\mathrm{d}u = -\frac{1}{2F(t)}\int_{0}^{F(t)}g_{t}(u)f\left(F^{-1}(u)\right)\mathrm{d}u = -\frac{1}{2F(t)}\,\mathbb{E}\left[f\left(F^{-1}(U_{t})\right)\right].

Let $0\leq t_{1}\leq t_{2}$. If $0<x\leq F(t_{1})$, then $\frac{g_{t_{1}}(x)}{g_{t_{2}}(x)}=\frac{F(t_{2})}{F(t_{1})}$ is a non-negative constant. If $F(t_{1})<x\leq F(t_{2})$, then $\frac{g_{t_{1}}(x)}{g_{t_{2}}(x)}=0$. Therefore $\frac{g_{t_{1}}(x)}{g_{t_{2}}(x)}$ is decreasing in $x\in(0,F(t_{2}))$, which implies $U_{t_{1}}\leq_{lr}U_{t_{2}}$. Hence $U_{t_{1}}\leq_{st}U_{t_{2}}$, and so

0 \leq \mathbb{E}\left[f\left(F^{-1}(U_{t_{2}})\right)\right] \leq \mathbb{E}\left[f\left(F^{-1}(U_{t_{1}})\right)\right],

using the assumption that $f\left(F^{-1}(x)\right)$ is a decreasing function. Since $0\leq\frac{1}{F(t_{2})}\leq\frac{1}{F(t_{1})}$, it follows that

J\left({}_{t_{1}}X\right) = -\frac{1}{2F(t_{1})}\mathbb{E}\left[f\left(F^{-1}(U_{t_{1}})\right)\right] \leq -\frac{1}{2F(t_{2})}\mathbb{E}\left[f\left(F^{-1}(U_{t_{2}})\right)\right] = J\left({}_{t_{2}}X\right). ∎

Remark 3.

Let $X$ be a random variable with CDF $F(x)=x^{2}$, for $x\in(0,1)$. Then $f\left(F^{-1}(x)\right)=2\sqrt{x}$ is increasing in $x\in(0,1)$. Nevertheless, $J\left({}_{t}X\right)=-\frac{2}{3t}$ is increasing in $t\in(0,1)$. Hence the condition of Theorem 2.3 that $f\left(F^{-1}(x)\right)$ be decreasing in $x$ is sufficient but not necessary.
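The counterexample of Remark 3 is easily reproduced numerically; a minimal sketch (illustrative only, SciPy assumed) follows.

```python
import numpy as np
from scipy.integrate import quad

def past_extropy(pdf, cdf, t):
    """Eq. (4): J(tX) = -1/(2 F(t)^2) * int_0^t f^2(x) dx."""
    value, _ = quad(lambda x: pdf(x) ** 2, 0, t)
    return -value / (2 * cdf(t) ** 2)

# Remark 3: F(x) = x^2 on (0, 1), so f(F^{-1}(x)) = 2 sqrt(x) is increasing,
# yet J(tX) = -2/(3t) is still increasing in t
pdf = lambda x: 2 * x
cdf = lambda x: x ** 2
for t in (0.2, 0.5, 0.9):
    print(t, past_extropy(pdf, cdf, t), -2 / (3 * t))
```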

3 Past extropy of order statistics

Let $X_{1},X_{2},\dots,X_{n}$ be a random sample from a distribution function $F$; the order statistics of the sample, obtained by arranging $X_{1},X_{2},\dots,X_{n}$ from the minimum to the maximum, are denoted by $X_{(1)},X_{(2)},\dots,X_{(n)}$. Qiu and Jia (2018) defined the residual extropy of the $i$-th order statistic and showed that the residual extropy of order statistics determines the underlying distribution uniquely. Let $X_{1},X_{2},\dots,X_{n}$ be continuous i.i.d. random variables with CDF $F$ denoting the lifetimes of the $n$ components of a parallel system, and let $X_{1:n},X_{2:n},\dots,X_{n:n}$ be the ordered lifetimes of the components. Then $X_{n:n}$ represents the lifetime of the parallel system, with CDF $F_{X_{n:n}}(x)=(F(x))^{n}$, $x>0$. The CDF of $\left[t-X_{n:n}\,|\,X_{n:n}<t\right]$, called the reversed residual lifetime of the system, is $1-\left(\frac{F(t-x)}{F(t)}\right)^{n}$. The past extropy of the reversed residual lifetime of the parallel system with distribution function $F_{X_{n:n}}(x)$ is therefore

J\left({}_{t}X_{n:n}\right) = -\frac{n^{2}}{2(F(t))^{2n}}\int_{0}^{t}f^{2}(x)\,[F(x)]^{2n-2}\,\mathrm{d}x.
Theorem 3.1.

If $X$ has an increasing pdf $f$ on $[0,T]$, with $T>t$, then $J\left({}_{t}X_{n:n}\right)$ is decreasing in $n\geq 1$.

Proof.

The pdf of $(X_{n:n}\,|\,X_{n:n}\leq t)$ can be expressed as

g_{n:n}^{t}(x) = \frac{nf(x)F^{n-1}(x)}{F^{n}(t)}, \quad x\leq t.

We note that

\frac{g_{2n-1:2n-1}^{t}(x)}{g_{2n+1:2n+1}^{t}(x)} = \frac{2n-1}{2n+1}\,\frac{F^{2}(t)}{F^{2}(x)}

is decreasing in $x\in[0,t]$, so that $(X_{2n-1:2n-1}\,|\,X_{2n-1:2n-1}\leq t)\leq_{lr}(X_{2n+1:2n+1}\,|\,X_{2n+1:2n+1}\leq t)$, which implies $(X_{2n-1:2n-1}\,|\,X_{2n-1:2n-1}\leq t)\leq_{st}(X_{2n+1:2n+1}\,|\,X_{2n+1:2n+1}\leq t)$. If $f$ is increasing on $[0,T]$, we have

\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)\,|\,X_{2n-1:2n-1}\leq t\right] \leq \mathbb{E}\left[f\left(X_{2n+1:2n+1}\right)\,|\,X_{2n+1:2n+1}\leq t\right].

From the definition of the past extropy it follows that

J\left({}_{t}X_{n:n}\right) = -\frac{n^{2}}{2F^{2n}(t)}\int_{0}^{t}f^{2}(x)F^{2n-2}(x)\,\mathrm{d}x = -\frac{n^{2}}{2(2n-1)F(t)}\int_{0}^{t}\frac{(2n-1)F^{2n-2}(x)f(x)}{F^{2n-1}(t)}\,f(x)\,\mathrm{d}x = -\frac{n^{2}}{2(2n-1)F(t)}\,\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)\,|\,X_{2n-1:2n-1}\leq t\right].

Then it follows that

\frac{J\left({}_{t}X_{n:n}\right)}{J\left({}_{t}X_{n+1:n+1}\right)} = \frac{n^{2}}{(n+1)^{2}}\,\frac{2n+1}{2n-1}\,\frac{\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)\,|\,X_{2n-1:2n-1}\leq t\right]}{\mathbb{E}\left[f\left(X_{2n+1:2n+1}\right)\,|\,X_{2n+1:2n+1}\leq t\right]} \leq \frac{\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)\,|\,X_{2n-1:2n-1}\leq t\right]}{\mathbb{E}\left[f\left(X_{2n+1:2n+1}\right)\,|\,X_{2n+1:2n+1}\leq t\right]} \leq 1,
where the first inequality follows since $n^{2}(2n+1)\leq(n+1)^{2}(2n-1)$ for $n\geq 1$.

Since the past extropy of a random variable is non-positive, we have $J\left({}_{t}X_{n:n}\right)\geq J\left({}_{t}X_{n+1:n+1}\right)$, and the proof is complete. ∎

Example 3.

Let $X$ be a random variable with a two-parameter Weibull distribution, $X\sim W2(\alpha,\lambda)$, i.e. $f(x)=\lambda\alpha x^{\alpha-1}\exp\left(-\lambda x^{\alpha}\right)$. It can be shown that for $\alpha>1$ this pdf has a maximum at $T=\left(\frac{\alpha-1}{\lambda\alpha}\right)^{\frac{1}{\alpha}}$. Consider the case in which $X$ has a Weibull distribution with parameters $\alpha=2$ and $\lambda=1$, $X\sim W2(2,1)$, so that $T=\frac{\sqrt{2}}{2}$. The hypotheses of Theorem 3.1 are satisfied for $t=0.5<T=\frac{\sqrt{2}}{2}$. Figure 1 shows that $J\left({}_{0.5}X_{n:n}\right)$ is decreasing in $n\in\{1,2,\dots,10\}$. Moreover, the result of Theorem 3.1 does not hold for the smallest order statistic, as shown in Figure 2.

Figure 1: $J\left({}_{0.5}X_{n:n}\right)$ of a $W2(2,1)$ for $n=1,2,\dots,10$
Figure 2: $J\left({}_{0.5}X_{1:n}\right)$ of a $W2(2,1)$ for $n=1,2,\dots,10$
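The behaviour reported in Figure 1 can be reproduced directly from the formula for $J\left({}_{t}X_{n:n}\right)$; the following numerical sketch for the $W2(2,1)$ case with $t=0.5$ is illustrative only (SciPy assumed).

```python
import numpy as np
from scipy.integrate import quad

# Weibull W2(alpha=2, lam=1): f(x) = 2x e^{-x^2}, F(x) = 1 - e^{-x^2}
pdf = lambda x: 2 * x * np.exp(-x ** 2)
cdf = lambda x: 1 - np.exp(-x ** 2)

def past_extropy_max(n, t):
    """J(tX_{n:n}) = -n^2 / (2 F(t)^{2n}) * int_0^t f^2(x) F(x)^{2n-2} dx."""
    value, _ = quad(lambda x: pdf(x) ** 2 * cdf(x) ** (2 * n - 2), 0, t)
    return -n ** 2 * value / (2 * cdf(t) ** (2 * n))

t = 0.5
print([round(past_extropy_max(n, t), 4) for n in range(1, 11)])
# the sequence is decreasing in n, in line with Theorem 3.1 and Figure 1
```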

In the case in which $X$ has an increasing pdf on $[0,T]$ with $T>t$, we give a lower bound for $J\left({}_{t}X\right)$.

Theorem 3.2.

If $X$ has an increasing pdf $f$ on $[0,T]$, with $T>t$, then $J\left({}_{t}X\right)\geq-\frac{\tau(t)}{2}$.

Proof.

From the definition and an integration by parts we get

J\left({}_{t}X\right) = -\frac{1}{2F^{2}(t)}\int_{0}^{t}f^{2}(x)\,\mathrm{d}x = -\frac{f(t)}{2F(t)} + \frac{1}{2F^{2}(t)}\int_{0}^{t}F(x)f^{\prime}(x)\,\mathrm{d}x \geq -\frac{\tau(t)}{2},
since $f^{\prime}\geq 0$ on $[0,t]\subseteq[0,T]$. ∎

Example 4.

Let $X\sim W2(2,1)$, as in Example 3, so that its pdf is increasing on $[0,T]$ with $T=\frac{\sqrt{2}}{2}$. The hypotheses of Theorem 3.2 are satisfied for $t<T=\frac{\sqrt{2}}{2}$. Figure 3 shows that the function $-\frac{\tau(t)}{2}$ (in red) is a lower bound for the past extropy (in black). We remark that the theorem gives information only for $t\in[0,T]$; in fact, for larger values of $t$ the function $-\frac{\tau(t)}{2}$ may no longer be a lower bound, as shown in Figure 3.

Figure 3: $J\left({}_{t}X\right)$ (in black) and $-\frac{\tau(t)}{2}$ (in red) of a $W2(2,1)$
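The lower bound of Theorem 3.2, illustrated in Figure 3, can also be checked numerically on $(0,T]$; a minimal sketch for the $W2(2,1)$ example (illustrative only, SciPy assumed) follows.

```python
import numpy as np
from scipy.integrate import quad

pdf = lambda x: 2 * x * np.exp(-x ** 2)        # W2(2, 1) density
cdf = lambda x: 1 - np.exp(-x ** 2)

def past_extropy(t):
    """Eq. (4) for the W2(2, 1) distribution."""
    value, _ = quad(lambda x: pdf(x) ** 2, 0, t)
    return -value / (2 * cdf(t) ** 2)

T = np.sqrt(2) / 2
ts = np.linspace(0.05, T, 50)
bound_holds = [past_extropy(t) >= -pdf(t) / (2 * cdf(t)) for t in ts]
print(all(bound_holds))   # True: -tau(t)/2 is a lower bound on (0, T]
```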

Qiu (2017) and Qiu and Jia (2018) showed that the extropy of the $i$-th order statistic and the residual extropy of the $i$-th order statistic characterize the underlying distribution uniquely. In the following theorem, whose proof requires the next lemma, we show that the past extropy of the largest order statistic uniquely characterizes the underlying distribution.

Lemma 3.1.

Let $X$ and $Y$ be non-negative random variables such that $J\left(X_{n:n}\right)=J\left(Y_{n:n}\right)$ for all $n\geq 1$. Then $X\overset{d}{=}Y$.

Proof.

From the definition of the extropy, $J\left(X_{n:n}\right)=J\left(Y_{n:n}\right)$ holds if and only if

\int_{0}^{+\infty}F_{X}^{2n-2}(x)f_{X}^{2}(x)\,\mathrm{d}x = \int_{0}^{+\infty}F_{Y}^{2n-2}(x)f_{Y}^{2}(x)\,\mathrm{d}x,

i.e. if and only if

\int_{0}^{+\infty}F_{X}^{2n-2}(x)\tau_{X}(x)\,\mathrm{d}F^{2}_{X}(x) = \int_{0}^{+\infty}F_{Y}^{2n-2}(x)\tau_{Y}(x)\,\mathrm{d}F^{2}_{Y}(x).

Putting $u=F^{2}_{X}(x)$ on the left-hand side of the above equation and $u=F^{2}_{Y}(x)$ on the right-hand side, we have

\int_{0}^{1}u^{n-1}\tau_{X}\left(F_{X}^{-1}(\sqrt{u})\right)\mathrm{d}u = \int_{0}^{1}u^{n-1}\tau_{Y}\left(F_{Y}^{-1}(\sqrt{u})\right)\mathrm{d}u,

which is equivalent to

\int_{0}^{1}u^{n-1}\left[\tau_{X}\left(F_{X}^{-1}(\sqrt{u})\right)-\tau_{Y}\left(F_{Y}^{-1}(\sqrt{u})\right)\right]\mathrm{d}u = 0 \quad \forall\, n\geq 1.

Then, from Lemma 3.1 of Qiu (2017), we get $\tau_{X}\left(F_{X}^{-1}(\sqrt{u})\right)=\tau_{Y}\left(F_{Y}^{-1}(\sqrt{u})\right)$ for all $u\in(0,1)$. Taking $\sqrt{u}=v$, we have $\tau_{X}\left(F_{X}^{-1}(v)\right)=\tau_{Y}\left(F_{Y}^{-1}(v)\right)$ and hence $f_{X}\left(F_{X}^{-1}(v)\right)=f_{Y}\left(F_{Y}^{-1}(v)\right)$ for all $v\in(0,1)$. This is equivalent to $(F_{X}^{-1})^{\prime}(v)=(F_{Y}^{-1})^{\prime}(v)$, i.e. $F_{X}^{-1}(v)=F_{Y}^{-1}(v)+C$ for all $v\in(0,1)$, with $C$ a constant. But for $v=0$ we have $F_{X}^{-1}(0)=F_{Y}^{-1}(0)=0$, and so $C=0$. ∎

Theorem 3.3.

Let $X$ and $Y$ be two non-negative random variables with cumulative distribution functions $F(x)$ and $G(x)$, respectively. Then $F$ and $G$ belong to the same family of distributions if and only if, for $t\geq 0$ and $n\geq 1$,

J\left({}_{t}X_{n:n}\right) = J\left({}_{t}Y_{n:n}\right).
Proof.

It suffices to prove the sufficiency. $J\left({}_{t}X_{n:n}\right)$ is the past extropy of $X_{n:n}$, but it is also the extropy of the variable ${}_{t}X_{n:n}$. Hence, by Lemma 3.1, we get ${}_{t}X\overset{d}{=}{}_{t}Y$, and therefore $\frac{F(t-x)}{F(t)}=\frac{G(t-x)}{G(t)}$ for $x\in(0,t)$. If there exists $t^{\prime}$ such that $F(t^{\prime})\neq G(t^{\prime})$, then on $(0,t^{\prime})$ we have $F(x)=\alpha G(x)$ with $\alpha\neq 1$. But for every $t>t^{\prime}$ there exists $x\in(0,t)$ such that $t-x=t^{\prime}$, so that $F(t)\neq G(t)$ and, as in the previous step, $F(x)=\alpha G(x)$ for $x\in(0,t)$. Letting $t$ tend to $+\infty$ we obtain a contradiction, because $F$ and $G$ are both distribution functions and their limit is 1. ∎

4 Conclusion

In this paper we studied a measure of uncertainty, the past extropy, which is the extropy of the inactivity time. It is relevant when, upon inspection, a system is found to be down and we want to investigate how much time has elapsed since its failure. Moreover, we studied some connections with the largest order statistic.

5 Acknowledgement

Francesco Buono is partially supported by the GNAMPA research group of INdAM (Istituto Nazionale di Alta Matematica) and MIUR-PRIN 2017, Project “Stochastic Models for Complex Systems” (No. 2017 JFFHSH).

On behalf of all authors, the corresponding author states that there is no conflict of interest.

References

  • [1] Cover, T. M. and Thomas, J. A., 2006. Elements of Information Theory, (2nd edn). Wiley, New York.
  • [2] Di Crescenzo, A., Longobardi, M., 2002. Entropy-based measure of uncertainty in past lifetime distributions. Journal of Applied Probability, 39, 434–440.
  • [3] Ebrahimi, N., 1996. How to measure uncertainty in the residual life time distribution. Sankhya: The Indian Journal of Statistics, Series A, 58, 48–56.
  • [4] Krishnan, A. S., Sunoj S. M., Nair N. U., 2020. Some reliability properties of extropy for residual and past lifetime random variables. Journal of the Korean Statistical Society. https://doi.org/10.1007/s42952-019-00023-x.
  • [5] Lad, F., Sanfilippo, G., Agrò, G., 2015. Extropy: complementary dual of entropy. Statistical Science 30, 40–58.
  • [6] Qiu, G., 2017. The extropy of order statistics and record values. Statistics & Probability Letters, 120, 52–60.
  • [7] Qiu, G., Jia, K., 2018. The residual extropy of order statistics. Statistics & Probability Letters, 133, 15–22.
  • [8] Shaked, M., Shanthikumar, J. G., 2007. Stochastic orders. Springer Science & Business Media.
  • [9] Shannon, C. E., 1948. A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.