
Sequential measurements on qubits by multiple observers: Joint best guess strategy

Dov Fields1,2, Árpád Varga3, and János A. Bergou1,2
1Department of Physics and Astronomy, Hunter College of the City University of New York,
695 Park Avenue, New York, NY, USA 10065
2Physics Program, Graduate Center of the City University of New York,
365 Fifth Avenue, New York, NY 10016
and
3Institute of Physics, University of Pécs,
H-7624 Pécs, Ifjúság útja 6, Hungary
Emails: [email protected], [email protected], [email protected]
Abstract

We study sequential state discrimination measurements performed on the same qubit by subsequent observers. Specifically, we focus on the case where the observers perform a minimum-error type state-discriminating measurement, and the goal of the observers is to maximize their joint probability of successfully guessing the state in which the qubit was initially prepared. We call this the joint best guess strategy. In this scheme, Alice prepares a qubit in one of two possible states. The qubit is first sent to Bob, who measures it, then passed on to Charlie, and so on, through altogether N consecutive receivers who all perform measurements on it. The goal of every observer is to determine which state Alice sent. In the joint best guess strategy, every observer who receives the system is required to make a guess, aided by the measurement, about its state. The price to pay for this requirement is that errors must be permitted: each guess can be correct or in error. There is a nonzero probability for all the receivers to identify the initially prepared state successfully, and we maximize this joint probability of success. This work is a step toward developing a theory of nondestructive sequential quantum measurements and could be useful in multiparty quantum communication schemes based on communicating with single qubits, particularly in schemes employing continuous-variable states. It also represents a case where subsequent observers can probabilistically and optimally get around both the collapse postulate and the no-broadcasting theorem.

I Overview

Quantum state discrimination is an essential topic in quantum information theory, with applications including, but not limited to, the secure distribution of information [1] and the design of experimental measurement schemes [2]. While the laws of quantum mechanics do not allow perfect discrimination between two arbitrary nonorthogonal quantum states, these states can still be resolved probabilistically. There are a variety of possible criteria for quantum state discrimination; for recent reviews of state discrimination, see [3, 4, 5]. In this paper, we focus on one particular discrimination procedure, minimum-error (ME) state discrimination. The ME strategy was first introduced in [6, 7, 8] as a protocol between two parties, Alice and Bob, involving two states (pure or mixed). In this strategy, Alice sends one of the two predetermined states to Bob according to a predetermined probability, and Bob performs a measurement with two possible outcomes. By correlating the outcomes of his measurement with the states that Alice sent, Bob can guess which state was sent by Alice, at the cost of sometimes misidentifying it. The goal of the ME strategy is for Bob to optimize his choice of measurement so as to minimize the average probability that he erroneously identifies the state sent by Alice. Further work on minimum-error discrimination has focused on deriving the necessary and sufficient conditions for ME discrimination [9] and utilizing these conditions to understand ME discrimination between more than two states [10].

One recent development in the field of state discrimination is the sequential discrimination of states by multiple observers. The first extension of standard unambiguous state discrimination (USD) to its sequential counterpart was proposed in Ref. [11]. That work, complemented by Pang et al. [12], focused on optimizing the sequential unambiguous state discrimination (SUD) problem for the case of equal prior probabilities. These works were further extended from qubits to qudits [13] and led to investigations of the roles of quantum information [14] and quantum correlations and discord [15] in sequential measurements.

In a recent paper [16], we completed the analysis of the SUD problem by generalizing the optimal solution to arbitrary prior probabilities. The present paper serves as a counterpoint to that work, in that we extend the ME strategy to its sequential equivalent for arbitrary priors. The sequential extension of the ME strategy is the joint best guess (JBG) strategy. The JBG protocol involves multiple parties, Alice, Bob, Charlie, etc. Alice sends Bob one of two predetermined states, and Bob chooses a positive operator-valued measure (POVM) such that he has some probability of successfully determining the state sent by Alice, at the cost of some probability of error. Bob can design his POVM in such a way that when he sends his post-measurement states to Charlie, Charlie is able to discriminate between these post-measurement states. This procedure gives Charlie an opportunity to also learn about the state sent by Alice. Additionally, this process can be repeated an arbitrary number of times, so that there is a chain of N receivers who each, in sequence, receive the post-measurement states from their predecessor and discriminate between them. The goal of this strategy is for all the receivers to optimize their joint probability of success.

The goal of this paper is to solve the JBG strategy for N receivers and arbitrary priors. For simplicity, this analysis is restricted to pure post-measurement states, so that each participant only needs to discriminate between two pure states. In the first section of this paper, we show how one can set up Bob's POVM so that Charlie can gain information from Bob's post-measurement states. Afterwards, we show how this can be extended to the problem of N receivers, and derive the optimization problem and the related constraints for this setup. In the second part of the paper, we use basic optimization techniques to show that the general problem can be reduced to a simplified one. In the last section, we derive an analytic solution for equal priors for the N-receiver problem and describe the numerical solution for arbitrary priors. We also consider the regions where these solutions are valid. Finally, we briefly consider an alternate strategy where, instead of maximizing the joint probability of success, each participant optimizes their individual probability of success.

II Setting up the POVM for Minimum Error

II-A Two receivers

For the problem of JBG discrimination between two nonorthogonal pure states, Alice sends the state $\ket{\psi_{1}}$ or $\ket{\psi_{2}}$ with probability $\eta_{1}$ or $\eta_{2}$, respectively. These states then get passed sequentially through a number of receivers, each of whom performs their own POVM on the state they receive from their predecessor, and then passes their post-measurement state along to the next link in the chain. The goal in this problem is to maximize the average probability that all of them succeed. To see how this sequence can be set up, it helps to start with the case of only two receivers, Bob and Charlie. In this situation, Bob's POVM takes the following form:

\sum_{i=1}^{2}\Pi_{bi} = I, \qquad (1)
\Pi_{bi} \geq 0 \quad \text{for}\ i=1,2. \qquad (2)

Then $\langle\psi_{i}|\Pi_{bi}|\psi_{i}\rangle=p_{bi}$ is Bob's probability of correctly determining that state $i$ was sent, and $\langle\psi_{i}|\Pi_{bj}|\psi_{i}\rangle=r_{bi}$ for $i\neq j$ is Bob's probability of making an erroneous identification. Because the POVM elements, $\Pi_{bi}$, are positive operators, we can write them in the form $\Pi_{bi}=B_{i}^{\dagger}B_{i}$. The detection operators, $B_{i}$, determine the effect of Bob's POVM on the input states:

B_{1} = \beta_{11}\ket{v_{11}}\bra{\psi_{2}^{\perp}}+\beta_{12}\ket{v_{12}}\bra{\psi_{1}^{\perp}}, \qquad (3)
B_{2} = \beta_{21}\ket{v_{21}}\bra{\psi_{2}^{\perp}}+\beta_{22}\ket{v_{22}}\bra{\psi_{1}^{\perp}}. \qquad (4)

Here $\ket{\psi_{i}^{\perp}}$ is the state orthogonal to $\ket{\psi_{i}}$, $\langle\psi_{i}^{\perp}|\psi_{i}\rangle=0$.

While the problem can be completely determined from these conditions, it is convenient to represent Bob’s POVM through the Neumark representation [17, 18]:

U_{b}\ket{\psi_{1}}\ket{0} = \sqrt{p_{b1}}\ket{v_{11}}\ket{1}+\sqrt{r_{b1}}\ket{v_{12}}\ket{2}, \qquad (5)
U_{b}\ket{\psi_{2}}\ket{0} = \sqrt{r_{b2}}\ket{v_{21}}\ket{1}+\sqrt{p_{b2}}\ket{v_{22}}\ket{2}. \qquad (6)

Bob's measurement consists of entangling his qubit with an ancilla, $\ket{0}$ being a generic initial state of the ancilla, and then measuring the ancilla. If he finds the ancilla in the state $\ket{i}$, corresponding to the detector $\Pi_{bi}$, Bob concludes that the state sent was $\ket{\psi_{i}}$ ($i=1,2$). Bob's probability of being correct, given that state $\ket{\psi_{i}}$ was sent, is $p_{bi}$. From unitarity, $p_{bi}+r_{bi}=1$ follows (again $i=1,2$). Taking the inner product of Eqs. (5) and (6), and making use of unitarity, one can derive the following constraint on Bob's probabilities:

\langle\psi_{2}|\psi_{1}\rangle = \sqrt{p_{b1}r_{b2}}\langle v_{21}|v_{11}\rangle+\sqrt{p_{b2}r_{b1}}\langle v_{22}|v_{12}\rangle. \qquad (7)

After Bob's measurement, the qubit is in one of two mixed states, $\rho_{i}=p_{bi}\ket{v_{ii}}\bra{v_{ii}}+r_{bi}\ket{v_{ij}}\bra{v_{ij}}$ ($i=1,2$, $i\neq j$), depending on which state Alice sent. These states can then be discriminated by Charlie. If Bob chooses his POVM such that $\ket{v_{11}}=\ket{v_{12}}=\ket{v_{21}}=\ket{v_{22}}$, then Bob performs an optimal ME discrimination and leaves no possibility for Charlie to perform any type of discrimination. Alternatively, Bob can choose his POVM such that $\ket{v_{11}}=\ket{v_{12}}$ and $\ket{v_{22}}=\ket{v_{21}}$, ensuring that his output to Charlie is always a pure state. In this case, Bob has denied Charlie any possibility to learn the outcome of Bob's measurement, while still allowing Charlie a chance to guess the state initially sent by Alice. While Bob could also send Charlie a set of mixed states to discriminate, this paper focuses on the case where Charlie only has to discriminate between pure states.
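As a sanity check, Bob's pure-output POVM can be constructed explicitly. The following sketch (Python with NumPy; the symmetric choice $p_{b1}=p_{b2}$, the value $s=0.5$, and the intermediate overlap $t=\sqrt{s}$ are assumptions made for the example) writes $\ket{v_{1}}\equiv\ket{v_{11}}=\ket{v_{12}}$ and $\ket{v_{2}}\equiv\ket{v_{21}}=\ket{v_{22}}$, builds the detection operators $B_{i}$ from the reciprocal basis of $\{\ket{\psi_{1}},\ket{\psi_{2}}\}$, and verifies completeness and the purity of the outputs:

```python
import numpy as np

s = 0.5                    # overlap <psi1|psi2> (sample value)
t = np.sqrt(s)             # overlap of Bob's pure post-measurement states
# symmetric success probabilities satisfying s/t = 2*sqrt(p(1-p))
p = 0.5 * (1 + np.sqrt(1 - (s / t) ** 2))

def pair(overlap):
    """Two real unit vectors with the given overlap."""
    th = 0.5 * np.arccos(overlap)
    return (np.array([np.cos(th), np.sin(th)]),
            np.array([np.cos(th), -np.sin(th)]))

psi1, psi2 = pair(s)
v1, v2 = pair(t)

# Reciprocal (dual) basis: d_i . psi_j = delta_ij
dual = np.linalg.inv(np.column_stack([psi1, psi2]))
d1, d2 = dual[0], dual[1]

# Detection operators: B1 maps psi1 -> sqrt(p) v1 and psi2 -> sqrt(1-p) v2, etc.
B1 = np.sqrt(p) * np.outer(v1, d1) + np.sqrt(1 - p) * np.outer(v2, d2)
B2 = np.sqrt(1 - p) * np.outer(v1, d1) + np.sqrt(p) * np.outer(v2, d2)

# Completeness, Eq. (1): Pi_b1 + Pi_b2 = I
assert np.allclose(B1.T @ B1 + B2.T @ B2, np.eye(2))
# Either outcome leaves the qubit in the same pure state |v_i>
out = B1 @ psi1
assert np.allclose(out / np.linalg.norm(out), v1)
```

Completeness holds precisely because the chosen probabilities satisfy the constraint of Eq. (12) below with $t=\sqrt{s}$; any other feasible pair $(p_{b1},p_{b2})$ works the same way.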

After Bob's measurement, Charlie can perform a ME discrimination on the states resulting from Bob's POVM. In order for Charlie's measurement to be optimal, his POVM, $\Pi_{c}$, must be such that there is only one output state of his measurement. The remaining problem lies in choosing $\Pi_{b}$ and $\Pi_{c}$ such that the joint probability of both Bob and Charlie successfully identifying the state sent by Alice is optimized. Formally, one can state the problem as follows. Find the maximum of:

P_{ss}=\sum_{i=1}^{2}\eta_{i}\langle\psi_{i}|\Pi_{bi}|\psi_{i}\rangle\langle v_{i}|\Pi_{ci}|v_{i}\rangle, \qquad (8)

subject to Eqs. (1) and (2) and their analogs for Charlie:

\sum_{i=1}^{2}\Pi_{ci} = I, \qquad (9)
\Pi_{ci} \geq 0 \quad \text{for}\ i=1,2. \qquad (10)

In this setup, Bob chooses a POVM such that when Alice sends $\ket{\psi_{i}}$ the output state is $\ket{v_{i}}$. This problem can be equivalently formulated in the following way. Find the maximum of:

P_{ss} = \sum_{i=1}^{2}\eta_{i}p_{bi}p_{ci}, \qquad (11)

subject to:

\frac{s}{t} = \sqrt{p_{b1}\left(1-p_{b2}\right)}+\sqrt{p_{b2}\left(1-p_{b1}\right)}, \qquad (12)
t = \sqrt{p_{c1}\left(1-p_{c2}\right)}+\sqrt{p_{c2}\left(1-p_{c1}\right)}, \qquad (13)

where $p_{bi}$ and $p_{ci}$ are Bob's and Charlie's probabilities of correctly identifying that Alice sent state $\ket{\psi_{i}}$, and where $s\equiv\langle\psi_{1}|\psi_{2}\rangle$ and $t\equiv\langle v_{1}|v_{2}\rangle$. Since there are only two states being discriminated, we can assume, without any loss of generality, that the overlaps between the states are real.
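This optimization can be probed with a brute-force numerical search. In the sketch below (Python; the grid resolution, the sample values $s=0.5$, $\eta_{1}=\eta_{2}=1/2$, and the restriction to the principal branch of each constraint are assumptions), the substitution $p=\cos^{2}\theta$ turns Eqs. (12) and (13) into conditions on sums of angles, leaving three free parameters to scan:

```python
import numpy as np

s, eta1 = 0.5, 0.5          # overlap and prior (sample values)
eta2 = 1 - eta1

best = 0.0
# With p = cos^2(theta), sqrt(p1(1-p2)) + sqrt(p2(1-p1)) = sin(theta1 + theta2),
# so Eq. (12) fixes theta_b1 + theta_b2 and Eq. (13) fixes theta_c1 + theta_c2.
for t in np.linspace(s, 1.0, 60):          # intermediate overlap, s <= t <= 1
    ab = np.arcsin(min(s / t, 1.0))        # theta_b1 + theta_b2
    ac = np.arcsin(t)                      # theta_c1 + theta_c2
    for tb in np.linspace(0.0, ab, 60):
        pb1, pb2 = np.cos(tb) ** 2, np.cos(ab - tb) ** 2
        for tc in np.linspace(0.0, ac, 60):
            pc1, pc2 = np.cos(tc) ** 2, np.cos(ac - tc) ** 2
            best = max(best, eta1 * pb1 * pc1 + eta2 * pb2 * pc2)

print(best)   # close to 0.7286 for these sample values
```

For these equal-prior sample values, the search lands close to the symmetric value $\left[\frac{1}{2}\left(1+\sqrt{1-s}\right)\right]^{2}\approx 0.7286$ obtained analytically later in the paper.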

II-B N receivers

From this example, it is clear how to extend the problem to $N$ sequential receivers. Instead of choosing the optimal measurement, Charlie also needs to set up his POVM so that the post-measurement states of his measurement are not identical. The same holds for the first $N-1$ receivers, while the last receiver in the chain performs the optimal ME measurement on the states received from the previous observer. Assuming that the post-measurement states are restricted to pure states, the problem can then be formulated as follows. Find the maximum of the weighted joint best guess probability,

P^{N}_{JBG}=\eta_{1}\prod^{N}_{n=1}p_{1n}+\eta_{2}\prod^{N}_{n=1}p_{2n}, \qquad (14)

subject to the constraints

\frac{t_{n}}{t_{n+1}} = \sqrt{p_{1n}\left(1-p_{2n}\right)}+\sqrt{p_{2n}\left(1-p_{1n}\right)}, \quad n\leq N-1,
t_{N} = \sqrt{p_{1N}\left(1-p_{2N}\right)}+\sqrt{p_{2N}\left(1-p_{1N}\right)}.

Here $p_{in}$ is the $n$th receiver's probability of succeeding given that Alice sent state $\ket{\psi_{i}}$, $t_{1}\equiv\langle\psi_{1}|\psi_{2}\rangle$, and $t_{n}$ is the overlap of the post-measurement states received by the $n$th receiver.

III Simplifying the problem

In order to optimize the JBG problem for arbitrary priors, it is important to realize that the problem can be simplified significantly. To show that this is possible, we use induction, starting from the case of only two receivers. Using Lagrange's principle, one can reformulate the JBG problem for two receivers as follows:

F = \eta_{1}p_{1b}p_{1c}+\eta_{2}p_{2b}p_{2c}
+\lambda_{1}\left(\frac{s}{t}-\sqrt{p_{1b}\left(1-p_{2b}\right)}-\sqrt{p_{2b}\left(1-p_{1b}\right)}\right)
+\lambda_{2}\left(t-\sqrt{p_{1c}\left(1-p_{2c}\right)}-\sqrt{p_{2c}\left(1-p_{1c}\right)}\right).

Optimizing with respect to $p_{ib}$, $p_{ic}$, and $t$ gives five Lagrange conditions for the optimal solution:

\eta_{i}p_{ic}\sqrt{p_{ib}\left(1-p_{ib}\right)}=\frac{\lambda_{1}}{2}\left[\sqrt{\left(1-p_{1b}\right)\left(1-p_{2b}\right)}-\sqrt{p_{1b}p_{2b}}\right],
\eta_{i}p_{ib}\sqrt{p_{ic}\left(1-p_{ic}\right)}=\frac{\lambda_{2}}{2}\left[\sqrt{\left(1-p_{1c}\right)\left(1-p_{2c}\right)}-\sqrt{p_{1c}p_{2c}}\right],
\frac{s}{t^{2}}=\frac{\lambda_{2}}{\lambda_{1}}. \qquad (15)

Since the right-hand sides of the first two sets of equations above do not depend on $i$, the left-hand sides must also be equal, yielding

\eta_{1}p_{1c}\sqrt{p_{1b}\left(1-p_{1b}\right)} = \eta_{2}p_{2c}\sqrt{p_{2b}\left(1-p_{2b}\right)}, \qquad (16)
\eta_{1}p_{1b}\sqrt{p_{1c}\left(1-p_{1c}\right)} = \eta_{2}p_{2b}\sqrt{p_{2c}\left(1-p_{2c}\right)}. \qquad (17)

If we divide Eq. (17) by Eq. (16) and rearrange slightly, we get the relation:

\frac{\sqrt{p_{2b}\left(1-p_{1b}\right)}}{\sqrt{p_{1b}\left(1-p_{2b}\right)}}=\frac{\sqrt{p_{2c}\left(1-p_{1c}\right)}}{\sqrt{p_{1c}\left(1-p_{2c}\right)}}. \qquad (18)

Dividing Eq. (12) by Eq. (13) and using Eq. (18), we can derive that $\frac{s}{t^{2}}=\frac{\sqrt{p_{1b}\left(1-p_{2b}\right)}}{\sqrt{p_{1c}\left(1-p_{2c}\right)}}$. Equating this with $\frac{\lambda_{2}}{\lambda_{1}}$ using the last line of Eq. (15), and dividing the second line of Eq. (15) by the first to get another expression for $\frac{\lambda_{2}}{\lambda_{1}}$, we can derive the following:

\frac{\sqrt{\left(1-p_{1b}\right)\left(1-p_{2b}\right)}}{\sqrt{\left(1-p_{1c}\right)\left(1-p_{2c}\right)}}=\frac{\sqrt{\left(1-p_{1b}\right)\left(1-p_{2b}\right)}-\sqrt{p_{1b}p_{2b}}}{\sqrt{\left(1-p_{1c}\right)\left(1-p_{2c}\right)}-\sqrt{p_{1c}p_{2c}}},
\sqrt{p_{2b}\left(1-p_{2c}\right)}=\frac{\sqrt{p_{1c}\left(1-p_{1b}\right)}}{\sqrt{p_{1b}\left(1-p_{1c}\right)}}\sqrt{p_{2c}\left(1-p_{2b}\right)},
p_{2c}\left(1-p_{2b}\right)=p_{2b}\left(1-p_{2c}\right)\ \Rightarrow\ p_{2c}=p_{2b}.

To go from the second line to the last, we used Eq. (18). In the same way, we can conclude that in the optimal solution $p_{1b}=p_{1c}$, $p_{2b}=p_{2c}$, and $t=\sqrt{s}$. In conclusion, the problem can be reduced to the simpler form of maximizing

P_{ss}=\eta_{1}p_{1}^{2}+\eta_{2}p_{2}^{2}, \qquad (19)

subject to the constraint:

\sqrt{s}=\sqrt{p_{1}\left(1-p_{2}\right)}+\sqrt{p_{2}\left(1-p_{1}\right)}. \qquad (20)

In order to simplify the problem of $N$ receivers, we can again use induction. Applying the Lagrange formalism to the JBG problem with $N$ receivers, the following optimization conditions can be derived in a straightforward manner:

\eta_{1}\prod^{N}_{j\neq i}p_{1j}\sqrt{p_{1i}\left(1-p_{1i}\right)}=\frac{\lambda_{i}}{2}\left[\sqrt{\left(1-p_{1i}\right)\left(1-p_{2i}\right)}-\sqrt{p_{1i}p_{2i}}\right],
\eta_{2}\prod^{N}_{j\neq i}p_{2j}\sqrt{p_{2i}\left(1-p_{2i}\right)}=\frac{\lambda_{i}}{2}\left[\sqrt{\left(1-p_{1i}\right)\left(1-p_{2i}\right)}-\sqrt{p_{1i}p_{2i}}\right],
\frac{t_{n}}{t_{n+1}^{2}} = \frac{\lambda_{n+1}}{\lambda_{n}t_{n+2}}, \quad n\leq N-2,
\frac{t_{N-1}}{t_{N}^{2}} = \frac{\lambda_{N}}{\lambda_{N-1}}.

Focusing on the constraints for N and N-1, it can be shown that:

\eta_{1}\prod^{N-2}_{j}p_{1j}\,p_{1N}\sqrt{p_{1(N-1)}\left(1-p_{1(N-1)}\right)} = \eta_{2}\prod^{N-2}_{j}p_{2j}\,p_{2N}\sqrt{p_{2(N-1)}\left(1-p_{2(N-1)}\right)},
\eta_{1}\prod^{N-2}_{j}p_{1j}\,p_{1(N-1)}\sqrt{p_{1N}\left(1-p_{1N}\right)} = \eta_{2}\prod^{N-2}_{j}p_{2j}\,p_{2(N-1)}\sqrt{p_{2N}\left(1-p_{2N}\right)},
\frac{\sqrt{p_{1N}\left(1-p_{1(N-1)}\right)}}{\sqrt{p_{1(N-1)}\left(1-p_{1N}\right)}}=\frac{\sqrt{p_{2N}\left(1-p_{2(N-1)}\right)}}{\sqrt{p_{2(N-1)}\left(1-p_{2N}\right)}}.

This final condition can be used to show that:

\frac{t_{N-1}}{t_{N}^{2}}=\frac{\sqrt{p_{1(N-1)}\left(1-p_{2(N-1)}\right)}}{\sqrt{p_{1N}\left(1-p_{2N}\right)}}=\frac{\lambda_{N}}{\lambda_{N-1}}.

In a fashion identical to the two-receiver case, this can then be used to show that the optimal solution occurs for $p_{i(N-1)}=p_{iN}$ and $t_{N}=\sqrt{t_{N-1}}$. By substituting $t^{2}_{N}$ for $t_{N-1}$, the same procedure can be repeated down the chain to finally derive that, for the optimal solution, $t^{N}_{N}=t_{1}=s$. This result allows the JBG problem for $N$ receivers to be simplified to the following form. Maximize

P^{N}_{JBG}=\eta_{1}p_{1}^{N}+\eta_{2}p_{2}^{N}, \qquad (21)

subject to the constraint

s^{\frac{1}{N}}=\sqrt{p_{1}\left(1-p_{2}\right)}+\sqrt{p_{2}\left(1-p_{1}\right)}. \qquad (22)

We note that this reduction is equivalent to maximizing the optimal probability of success for discriminating between two states with overlap $\langle\tilde{\psi}_{1}|\tilde{\psi}_{2}\rangle=s^{\frac{1}{N}}$, a total of $N$ times in a row.
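The telescoping of the overlaps behind this reduction can be verified in a few lines (a small consistency check in Python; the sample values $s=0.5$, $N=4$ are assumptions): at the optimum every per-step constraint takes the common value $s^{1/N}$, so the chain of overlaps runs from $t_{1}=s$ down to $t_{N}=s^{1/N}$.

```python
import numpy as np

s, N = 0.5, 4          # sample overlap and chain length
c = s ** (1.0 / N)     # common value of every per-step constraint at the optimum

# build the overlap chain backwards from t_N = s**(1/N) via t_n = c * t_{n+1}
t = [c]
for _ in range(N - 1):
    t.insert(0, c * t[0])

assert abs(t[0] - s) < 1e-12          # chain starts at the physical overlap t_1 = s
assert abs(t[-1] ** N - s) < 1e-12    # equivalently, t_N**N = s
```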

IV Optimizing for Arbitrary Priors

While there is no simple analytic solution of the general JBG problem for arbitrary priors, the solution for equal priors, $\eta_{1}=\eta_{2}=\frac{1}{2}$, is known. The Lagrange constraints for equal priors can be satisfied when $p_{1}=p_{2}=\frac{1}{2}\left(1+\sqrt{1-s^{\frac{2}{N}}}\right)$, yielding the optimal solution:

P_{JBG,eq}^{N}=\left[\frac{1}{2}\left(1+\sqrt{1-s^{\frac{2}{N}}}\right)\right]^{N}. \qquad (23)

It is important to note that this analytic solution is only optimal for values of the overlap below a certain threshold. While the choice $p_{1}=p_{2}$ satisfies the constraints, it is not guaranteed to be a global maximum, only a local one. In order to understand the behavior of the full solution, we first describe the method used to obtain the numerical solution.


Figure 1: (a) Optimal joint probability of success, $P_{ss}$, and (b) optimal probability of success for the first state, $p_{1}$, for two receivers, versus the prior probability of sending the first state, $\eta_{1}$. Plots are calculated numerically for the values of the overlap $s=0.5$ (solid), $s=0.4$ (dashed), and $s=0.3$ (dotted). Surprisingly, $p_{1}$ does not go to zero when $\eta_{1}\rightarrow 0$. This feature is an artifact of the constraint $\sqrt{s}=\sqrt{p_{1}\left(1-p_{2}\right)}+\sqrt{p_{2}\left(1-p_{1}\right)}$, which, for $p_{2}=1$, requires $p_{1}=1-s$. However, $\eta_{1}p_{1}\rightarrow 0$ when $\eta_{1}\rightarrow 0$, as it should.

Figure 2: A plot of the numerically derived optimal joint probability of success for two receivers as a function of the overlap and the prior probability.

A complete solution can be derived numerically for both equal and arbitrary priors. This is shown for the case of two receivers in Figures 1 and 2. Doing so requires solving the constraint in Eq. (22) for $p_{2}$. This can most easily be done by using the substitutions $p_{i}=\cos^{2}\left(\theta_{i}\right)$ and $s^{\frac{1}{n}}=\sin\left(\phi\right)$, which rewrite the constraint as $\sin\left(\phi\right)=\sin\left(\theta_{1}\pm\theta_{2}\right)$, or $\phi=\theta_{1}\pm\theta_{2}$. Note that the two forms of the constraint come from being able to choose both positive and negative square roots of $p_{i}$. While this gives four total possible expressions, there are only two distinct solutions. This gives two possible ways to express $p_{2}$:

p_{2} = \cos^{2}\left(\phi+\theta_{1}\right)=\left(s^{\frac{1}{n}}\sqrt{1-p_{1}}-\sqrt{p_{1}}\sqrt{1-s^{\frac{2}{n}}}\right)^{2},
p_{2} = \cos^{2}\left(\phi-\theta_{1}\right)=\left(s^{\frac{1}{n}}\sqrt{1-p_{1}}+\sqrt{p_{1}}\sqrt{1-s^{\frac{2}{n}}}\right)^{2}.

As the second of these two solutions for $p_{2}$ is always the greater, it is the one to be chosen. By substituting this expression into the original joint success probability in Eq. (21) and optimizing with respect to $\theta_{1}$, one derives the following expression:

0 = \eta_{1}p_{1}^{n-1}\sqrt{p_{1}\left(1-p_{1}\right)}+\eta_{2}\left(s^{\frac{1}{n}}\sqrt{1-p_{1}}+\sqrt{p_{1}}\sqrt{1-s^{\frac{2}{n}}}\right)^{2n-1}\left(\sqrt{\left(1-p_{1}\right)\left(1-s^{\frac{2}{n}}\right)}-s^{\frac{1}{n}}\sqrt{p_{1}}\right).

While optimizing with respect to $p_{1}$ gives an equation that is discontinuous at $p_{1}=0$ and $p_{1}=1$, optimizing with respect to $\theta_{1}$ avoids this problem. By solving this equation for $p_{1}$, one can obtain local optima of the joint success probability that can then be compared to the global optimum. For a visualization of the complete solution for two and three receivers, see Figures 2 and 3.
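The larger constraint branch is straightforward to implement. The helper below (Python; the function name and the sample point $p_{1}=0.8$, $s=0.5$, $n=2$ are illustrative choices) returns $p_{2}$ as a function of $p_{1}$ and checks that the resulting pair satisfies Eq. (22):

```python
import numpy as np

def p2_of_p1(p1, s, n):
    """Larger branch p2 = cos^2(phi - theta1), with p_i = cos^2(theta_i)
    and sin(phi) = s**(1/n); a feasible pair requires theta1 <= phi,
    i.e. p1 >= 1 - s**(2/n)."""
    s1 = s ** (1.0 / n)
    return (s1 * np.sqrt(1 - p1) + np.sqrt(p1 * (1 - s1 ** 2))) ** 2

p1, s, n = 0.8, 0.5, 2                 # sample point
p2 = p2_of_p1(p1, s, n)

# the pair (p1, p2) satisfies the constraint of Eq. (22)
lhs = np.sqrt(p1 * (1 - p2)) + np.sqrt(p2 * (1 - p1))
assert abs(lhs - s ** (1.0 / n)) < 1e-12
```

Scanning $p_{1}$ over a fine grid and evaluating $\eta_{1}p_{1}^{n}+\eta_{2}\,$`p2_of_p1(p1, s, n)`$^{n}$ then yields the local optima discussed above.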

One possible concern for this numerical solution is that there might be boundary solutions to consider. The boundaries of this optimization problem correspond to ignoring one of the incoming states ($p_{i}=0$) or prioritizing one of them ($p_{i}=1$). However, the function is differentiable for all values $p_{i}\in\left[0,1\right]$, ensuring that the boundary solutions are accounted for by the interior-point solution.

As noted earlier, while there is an analytic solution for the case of equal priors, it is only valid for values of the overlap $s<s_{b}$. When we numerically calculate $s_{b}$, we find that $s_{b}\approx 0.75$ for the case of two receivers and $s_{b}\approx 0.42$ for the case of three receivers. In general, $s_{b}$ decreases as the number of receivers increases, shrinking the range of validity of the analytic solution. In Fig. 4, we compare the analytic solution, the numerically calculated optimal solution, and the boundary solution for the case of two receivers and equal priors.
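The breakdown of the symmetric solution can be observed directly by comparing Eq. (23) with a dense one-dimensional scan of the reduced problem (a Python sketch for equal priors and two receivers; the sample overlaps 0.5 and 0.9 and the grid size are assumptions). Below the threshold the scan agrees with Eq. (23); above it, an asymmetric solution does strictly better:

```python
import numpy as np

def analytic_eq(s, N):
    """Symmetric equal-prior solution, Eq. (23)."""
    return (0.5 * (1 + np.sqrt(1 - s ** (2.0 / N)))) ** N

def numeric_eq(s, N, pts=20001):
    """Grid max of (p1**N + p2**N)/2 along the larger constraint branch.
    (For small p1 the branch formula leaves the feasible region, but those
    points never dominate for these sample values.)"""
    s1 = s ** (1.0 / N)
    p1 = np.linspace(0.0, 1.0, pts)
    p2 = (s1 * np.sqrt(1 - p1) + np.sqrt(p1 * (1 - s1 ** 2))) ** 2
    return np.max(0.5 * (p1 ** N + p2 ** N))

# below the threshold s_b the symmetric solution is the global optimum ...
assert abs(numeric_eq(0.5, 2) - analytic_eq(0.5, 2)) < 1e-6
# ... above it, an asymmetric solution does strictly better
assert numeric_eq(0.9, 2) > analytic_eq(0.9, 2) + 0.05
```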


Figure 3: Plot of the numerically derived optimal joint probability of success for three receivers as a function of the overlap and the prior probability.

While the focus of this paper has been on the joint best guess strategy, there is a simpler approach that produces an analytic solution, at a cost. Each member in the chain of receivers can simply optimize their own individual probability of success ($P_{js}^{1}$) subject to the constraint given by Eq. (22). From the standard Helstrom solution, doing so gives the probabilities of success

p_{i}=\frac{1}{2}\left(1+\frac{1-2(1-\eta_{i})s^{\frac{2}{n}}}{\sqrt{1-4\eta_{1}\eta_{2}s^{\frac{2}{n}}}}\right).

This solution ensures that each individual member maximizes their own average probability of success, at the cost of reducing the probability that they all simultaneously succeed. For the case of two receivers, the distinction between the two solutions is extremely small, as can be seen in Figure 5. However, as the number of receivers increases, the cost to the joint success of optimizing individual success becomes more apparent.
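The comparison between the two strategies is easy to set up numerically. The sketch below (Python; the sample values $\eta_{1}=0.3$, $s=0.5$, $n=2$, the helper names, and the grid size are assumptions) plugs the individually optimal probabilities into the joint figure of merit and confirms that they can only do worse than the joint optimum:

```python
import numpy as np

def individual_p(eta1, s, n):
    """Helstrom success probabilities when each receiver optimizes alone."""
    eta2 = 1 - eta1
    t2 = s ** (2.0 / n)                      # squared effective overlap
    d = np.sqrt(1 - 4 * eta1 * eta2 * t2)
    p1 = 0.5 * (1 + (1 - 2 * eta2 * t2) / d)
    p2 = 0.5 * (1 + (1 - 2 * eta1 * t2) / d)
    return p1, p2

def joint_opt(eta1, s, n, pts=20001):
    """Grid max of the joint objective along the larger constraint branch."""
    s1 = s ** (1.0 / n)
    p1 = np.linspace(0.0, 1.0, pts)
    p2 = (s1 * np.sqrt(1 - p1) + np.sqrt(p1 * (1 - s1 ** 2))) ** 2
    return np.max(eta1 * p1 ** n + (1 - eta1) * p2 ** n)

eta1, s, n = 0.3, 0.5, 2                     # sample values
p1, p2 = individual_p(eta1, s, n)
joint_of_individual = eta1 * p1 ** n + (1 - eta1) * p2 ** n

# individually optimal measurements can only do worse on the joint figure of merit
assert joint_of_individual <= joint_opt(eta1, s, n) + 1e-4
```

For equal priors (and overlaps below the validity threshold of Eq. (23)) the two strategies give the same probabilities, consistent with the gap $E$ vanishing at $\eta_{1}=1/2$.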


Figure 4: A comparison of the joint probability of success as a function of the overlap for the numerically derived optimal solution (solid), analytic solution (dotted), and boundary solution (dashed) for two receivers and equal priors.


Figure 5: A plot of $E=|P_{js,eq}^{2}-P^{2}_{js,appx}|$ as a function of the prior probability $\eta_{1}$ for $s=0.5$ (solid), $s=0.4$ (dashed), and $s=0.3$ (dotted).

V Conclusion

In this paper, we contribute further analysis to the theory of sequential measurements. As a complement to the works in [11] and [16], we present a complete analysis of the JBG discrimination problem for arbitrary prior probabilities. We start by explicitly showing how Bob can design a POVM that lets him determine, with some probability, the state initially sent by Alice, while also leaving enough information in the post-measurement states to pass on to Charlie, who can then do the same. After deriving the constraints imposed by the chain of N receivers, we show that in the optimal JBG strategy each participant in the chain discriminates between states with the same effective overlap. In other words, in the optimal strategy no participant is preferred, and each succeeds and fails with the same probabilities. After deriving this critical result, we present a full analysis of both the analytic and numerical solutions of the JBG problem. Finally, we present an analytic solution that optimizes each participant's individual probability of success.

This paper provides an alternative method of sequential communication to sequential unambiguous discrimination and serves as a foundation for further research. The methods described here could help answer open questions such as optimizing sequential measurements for mixed states and implementing other discrimination protocols sequentially.

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-20-2-0097. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

References

  • [1] C. H. Bennett, “Quantum cryptography using any two nonorthogonal states,” Phys. Rev. Lett., vol. 68, pp. 3121–3124, May 1992.
  • [2] R. Han, J. A. Bergou, and G. Leuchs, “Near optimal discrimination of binary coherent signals via atom–light interaction,” New Journal of Physics, vol. 20, no. 4, pp. 043005–043019, Apr 2018.
  • [3] S. M. Barnett and S. Croke, “Quantum state discrimination,” Adv. Opt. Photon., vol. 1, no. 2, pp. 238–278, Apr 2009.
  • [4] J. A. Bergou, “Discrimination of quantum states,” Journal of Modern Optics, vol. 57, no. 3, pp. 160–180, Jan 2010.
  • [5] J. Bae and L.-C. Kwek, “Quantum state discrimination and its applications,” Journal of Physics A: Mathematical and Theoretical, vol. 48, no. 8, pp. 083001–083036, Jan 2015.
  • [6] A. Holevo, “Statistical decision theory for quantum systems,” Journal of Multivariate Analysis, vol. 3, no. 4, pp. 337–394, 1973.
  • [7] C. W. Helstrom, “Quantum detection and estimation theory,” Journal of Statistical Physics, vol. 1, pp. 231–252, Jun 1969.
  • [8] H. Yuen, R. Kennedy, and M. Lax, “Optimum testing of multiple hypotheses in quantum detection theory,” IEEE Transactions on Information Theory, vol. 21, no. 2, pp. 125–134, Mar 1975.
  • [9] S. M. Barnett and S. Croke, “On the conditions for discrimination between quantum states with minimum error,” Journal of Physics A: Mathematical and Theoretical, vol. 42, no. 6, p. 062001, Jan 2009.
  • [10] J. Bae, “Structure of minimum-error quantum state discrimination,” New Journal of Physics, vol. 15, no. 7, pp. 073037–073068, Jul 2013.
  • [11] J. Bergou, E. Feldman, and M. Hillery, “Extracting information from a qubit by multiple observers: Toward a theory of sequential state discrimination,” Phys. Rev. Lett., vol. 111, pp. 100501–100505, Sep 2013.
  • [12] C.-Q. Pang, F.-L. Zhang, L.-F. Xu, M.-L. Liang, and J.-L. Chen, “Sequential state discrimination and requirement of quantum dissonance,” Phys. Rev. A, vol. 88, pp. 052331–052336, Nov 2013.
  • [13] M. Hillery and J. Mimih, “Sequential discrimination of qudits by multiple observers,” Journal of Physics A: Mathematical and Theoretical, vol. 50, no. 43, pp. 435301–435319, Sep 2017.
  • [14] P. Rapčan, J. Calsamiglia, R. Muñoz Tapia, E. Bagan, and V. Bužek, “Scavenging quantum information: Multiple observations of quantum systems,” Phys. Rev. A, vol. 84, pp. 032326–032342, Sep 2011.
  • [15] J.-H. Zhang, F.-L. Zhang, and M.-L. Liang, “Sequential state discrimination with quantum correlation,” Quantum Information Processing, vol. 17, pp. 260–276, Aug 2018.
  • [16] D. Fields, R. Han, M. Hillery, and J. A. Bergou, “Extracting unambiguous information from a single qubit by sequential observers,” Phys. Rev. A, vol. 101, pp. 012118–012129, Jan 2020.
  • [17] M. A. Neumark, “Self-adjoint extensions of the second kind of a symmetric operator,” Izv. Akad. Nauk SSSR Ser. Mat., vol. 4, pp. 53–104, 1940.
  • [18] J. A. Bergou and M. Hillery, Introduction to the Theory of Quantum Information Processing, ser. Graduate Texts in Physics. New York, NY, USA: Springer, 2013.