
Correctly Implementing Synchronous Message Passing in the Pi-Calculus By Concurrent Haskell’s MVars

Manfred Schmidt-Schauß, Goethe-University, Frankfurt am Main, Germany, [email protected]
David Sabel, LMU, Munich, Germany, [email protected]
(This research is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant SA2908/3-1.)
Abstract

Comparison of concurrent programming languages and correctness of program transformations in concurrency are the focus of this research. As criterion we use contextual semantics adapted to concurrency, where may- as well as should-convergence are observed. We investigate the relation between the synchronous pi-calculus and a core language of Concurrent Haskell (CH). The contextual semantics is on the one hand forgiving with respect to the details of the operational semantics, and on the other hand implies strong requirements for the interplay between the processes after translation. Our result is that CH embraces the synchronous pi-calculus. Our main task is to find and prove correctness of encodings of pi-calculus channels by CH's concurrency primitives, which are MVars. They behave like (blocking) one-place buffers modelling the shared memory. The first developed translation uses an extra private MVar for every communication. We also automatically generate and check potentially correct translations that reuse the MVars, where one MVar contains the message and two additional MVars for synchronization are used to model the synchronized communication of a single channel in the pi-calculus. Our automated experimental results lead to the conjecture that one additional MVar is insufficient.

1 Introduction

Our goals are the comparison of programming languages, correctness of transformations, compilation and optimization of programs, in particular of concurrent programs. We already used the contextual semantics of concurrent (functional) programming languages to effectively verify correctness of transformations [(17), (24), (25)], also under the premise not to worsen the runtime [(31)]. We propose to test may- and should-convergence in the contextual semantics, since, in particular, it rules out transformations that transform an always successful process into a process that may run into an error, for example a deadlock. There are also other notions of program equivalence in the literature, like bisimulation-based program equivalences [(28)]; however, these tend to also take implementation-specific behavior of the operational semantics into account, whereas contextual equivalence abstracts from the details of the executions.

In [(29), (32)] we developed notions of correctness of translations w.r.t. contextual semantics, and in [(33)] we applied them in the context of concurrency, but for quite similar source and target languages. In this paper we translate a synchronous message-passing model into a shared-memory model, namely a synchronous $\pi$-calculus into a core language of Concurrent Haskell, called $\mathit{CH}$.

The contextual semantics of concurrent programming languages is a generalization of the extensionality principle of functions. The test for a program $P$ is whether $C[P]$ – i.e. $P$ plugged into a program context – successfully terminates (converges) or not, which usually means that the standard reduction sequence ends with a value. For a concurrent program $P$, we use two observations: may-convergence ($P{\downarrow}$) – at least one execution path terminates successfully – and should-convergence ($P{\Downarrow}$) – every intermediate state of a reduction sequence may-converges. (An alternative observation is must-convergence: all execution paths terminate. The advantages of equivalence notions based on may- and should-convergence are invariance under fairness restrictions, preservation of deadlock-freedom, and equivalence of busy-wait and wait-until behavior; see e.g. [(33)].) For two processes $P$ and $Q$, $P\leq_{c}Q$ holds iff for all contexts $C[\cdot]$: $C[P]{\downarrow}\implies C[Q]{\downarrow}$, and $P$ and $Q$ are contextually equivalent, $P\sim_{c}Q$, iff $P\leq_{c}Q$ and $Q\leq_{c}P$. Showing equal expressivity of two (concurrent) calculi by a translation $\tau$ requires that may- and should-convergence make sense in each calculus. Important properties are convergence-equivalence (may- and should-convergence are preserved and reflected by the translation) and adequacy (see Definition 4.4), which holds if $\tau(P)\leq_{c,\mathit{CH}}\tau(Q)$ implies $P\leq_{c,\pi}Q$ for all $\pi$-calculus processes $P,Q$. Full abstraction, i.e. $\forall P,Q:\tau(P)\leq_{c}\tau(Q)$ iff $P\leq_{c}Q$, only holds if the two calculi are more or less the same.

Source and Target Calculi.

The well-known $\pi$-calculus [(16), (15), (28)] is a minimal model for mobile and concurrent processes. Dataflow is expressed by passing messages between them via named channels, where messages are channel names. Processes and links between processes can be dynamically created and removed, which makes processes mobile. The interest in the $\pi$-calculus is not only due to the fact that it is used and extended for various applications, like reasoning about cryptographic protocols [(2)], applications in molecular biology [(22)], and distributed computing [(14), (8)]. The $\pi$-calculus also permits the study of intrinsic principles and semantics of concurrency and the inherent nondeterministic behavior of mobile and communicating processes. We investigate a variant of the $\pi$-calculus, namely the synchronous $\pi$-calculus with replication, but without sums, matching operators, or recursion. To observe termination of a process, the calculus has a constant Stop which signals successful termination.

The calculus $\mathit{CH}$, a core language of Concurrent Haskell, is a process calculus where threads evaluate expressions from a lambda calculus extended by data constructors, case-expressions, recursive let-expressions, and Haskell's seq-operator. Also monadic operations (sequencing and creating threads) are available. The shared memory is modelled by MVars (mutable variables), which are one-place buffers that can be either filled or empty. The operation takeMVar tries to empty a filled MVar and blocks if the MVar is already empty. The operation putMVar tries to fill an empty MVar and blocks if the MVar is already filled. The calculus $\mathit{CH}$ is a variant (or a subcalculus) of the calculus $\mathit{CHF}$ [(24), (25)] which extends Concurrent Haskell with futures. A technical advantage of this approach is that we can reuse studies and results on the contextual semantics of $\mathit{CHF}$ also for $\mathit{CH}$.
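The MVar primitives of $\mathit{CH}$ correspond to Control.Concurrent.MVar in GHC. The following small, self-contained Haskell program (only an illustration of the primitives, not part of the calculus) shows the blocking behavior: putMVar on a filled MVar blocks until another thread empties it.

\begin{verbatim}
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newMVar, takeMVar, putMVar)

-- An MVar is a one-place buffer: putMVar blocks on a full MVar,
-- takeMVar blocks on an empty one.
main :: IO ()
main = do
  m <- newMVar ()              -- create a filled MVar
  _ <- forkIO $ do
         threadDelay 100000    -- let the main thread block first
         takeMVar m            -- empty the MVar, releasing main
  putMVar m ()                 -- blocks until the MVar is empty
  putStrLn "putMVar succeeded after the MVar was emptied"
\end{verbatim}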

Details and Variations of the Translation.

One main issue for a correct translation from $\pi$-processes to $\mathit{CH}$-programs is to encode the synchronous communication of the $\pi$-calculus. The problem is that the MVars in $\mathit{CH}$ have an asynchronous behavior (communication has to be implemented in two steps: the sender puts the message into an MVar, which is later taken by the receiver). To implement synchronous communication, the weaker synchronisation property of MVars has to be exploited, and we must be aware of the potential interferences of the executions of other translated communications on the same channel. The task of finding such translations is reminiscent of the channel encoding used in [(21)]; however, there an asynchronous channel is implemented, while we look for synchronous communication.
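To see the problem concretely, here is a naive (hypothetical) single-MVar encoding of a channel, sketched in Haskell; it is not one of the translations studied in this paper. The sender continues as soon as the message is buffered, i.e. possibly before any receiver has taken it, so the synchronous rendezvous of the $\pi$-calculus is lost:

\begin{verbatim}
import Control.Concurrent.MVar (MVar, putMVar, takeMVar)

-- A purely asynchronous channel: one MVar holding the message.
type NaiveChan a = MVar a

send :: NaiveChan a -> a -> IO ()
send c x = putMVar c x      -- returns once the buffer is filled,
                            -- not once a receiver has taken x

receive :: NaiveChan a -> IO a
receive c = takeMVar c      -- blocks until some message is available
\end{verbatim}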

We provide a translation $\tau_0$ which uses a private MVar per channel and per communication to ensure that no other process can interfere with the interaction. A similar idea was used in [(13), (4)] for keeping channel names private in a different scenario (see [(11), (10)] for recent treatments of these encodings). We prove that the translation $\tau_0$ is correct. Since we are also interested in simpler translations, we looked for correct translations with a fixed and static number of MVars per channel in the $\pi$-calculus. Since this task is too complex and error-prone for hand-crafting, we automated it by implementing a tool to rule out incorrect translations. (The tool and some output generated by it are available via https://gitlab.com/davidsabel/refute-pi.) Thereby we fix the MVars used for every channel: a single MVar for exchanging the channel name and perhaps several additional MVars of unit type to perform checks whether the message was sent or received (we call them check-MVars; they behave like binary semaphores that additionally block signal-operations on an unlocked semaphore). The outcomes of our automated search are: a further correct translation that uses two check-MVars, where one is used as a mutex between all senders or receivers on one channel, and further correct translations using three additional MVars where the filling and emptying operations for each MVar need not come from the same sender or receiver. The experiments lead to the conjecture that there is no translation using only one check-MVar.

Results.

Our novel result is convergence-equivalence and adequacy of the open translation $\tau$ (Theorems 4.5 and 4.8), translating the $\pi$-calculus into $\mathit{CH}$. The comparison of the $\pi$-calculus with a concurrent programming language (here $\mathit{CH}$) using contextual semantics for may- and should-convergence in both calculi exhibits that the $\pi$-calculus is embeddable in $\mathit{CH}$, where we can prove that the semantic properties of interest are kept. The adaptation of the adequacy and full-abstraction notions (Definition 4.4) to open processes is a helpful technical extension of our work in [(29), (32)].

We further define a general formalism for the representation of translations with global names and analyze different classes of such translations using an automated tool. In particular, we show correctness of two particular translations in Theorems 5.9 and 5.12. The discovered correct translations look quite simple and their correctness seems intuitive. However, our experience is that searching for correct translations is quite hard, since there are apparently correct (and simple) translations that turn out to be wrong. Our automated tool helped us to rule out wrong translations and to find potentially correct ones.

Discussion of Related Work on Characterizing Encodings.

There are five criteria for valid translations resp. encodings proposed and discussed in [(12), (10)], which mainly restrict the translations w.r.t. language syntax and reduction semantics of the source and target language, called: compositionality, name invariance, operational correspondence, divergence reflection and success sensitiveness. Compositionality and name invariance restrict the syntactic form of the translated processes; operational correspondence means that the transitive closure of the reduction relation is transported by the translation, modulo the syntactic equivalence; and divergence reflection and success sensitiveness are conditions on the semantics.

In our approach, we define semantic congruence (and precongruence) relations on the source and target language. Thus the first two conditions are not part of our notion of contextual equivalence; however, they may be used as restrictions in showing non-encodability. We also omit the third condition and only use stronger variants of the fourth and fifth condition. Convergence equivalence for may- and should-convergence is our replacement of Gorla's divergence reflection and success sensitiveness. We do not define an infinite reduction sequence as an error, which has the nice consequence that synchronization could be implemented by busy-wait techniques.

Further Related Work.

Encodings of synchronous communication by asynchronous communication using a private-name mechanism are given in [(13), (4)] for (variants of) the $\pi$-calculus. Our translation $\tau_0$ similarly uses a private MVar to encode the channel-based communication, but our setting is different, since our target language is Concurrent Haskell. Encodings between $\pi$-calculi with synchronous and with asynchronous communication were, for instance, already considered in [(13), (4), (20), (19)], where encodability results are obtained for the $\pi$-calculus without sums [(13), (4)], while in [(19), (20)] the expressive power of synchronous and asynchronous communication in the $\pi$-calculus with mixed sums was compared and non-encodability is a main result. Translations of the $\pi$-calculus into programming calculi and logical systems are given in [(3)], where a translation into a graph-rewriting calculus is given and soundness and completeness w.r.t. the operational behavior is proved. The article [(34)] shows a translation and a proof that the $\pi$-calculus is exactly operationally represented. There are several works on session types which are related to the $\pi$-calculus; e.g., [orchard-yoshida:16] studies encodings from a session calculus into PCF extended by concurrency and effects and also an embedding in the other direction, mapping PCF extended by effects into a session calculus. The result is a (strong) operational correspondence between the two calculi. In [cano-et-al:2017] an embedding of a session $\pi$-calculus into ReactiveML is given and operational correspondence between the two languages is shown. Encodings of CML-events in Concurrent Haskell using MVars are published in [Russell01, Chaudhuri09]. This approach is more high-level than ours (since it considers events, while we focus on the plain $\pi$-calculus). In [Chaudhuri09] correctness of a distributed protocol for selective communication w.r.t. an excerpt of CML is shown and a correct implementation of the protocol in the $\pi$-calculus is given. The protocol is implemented in Concurrent Haskell, but no correctness of this part is shown, since [Chaudhuri09] focuses on showing that CML-events are implementable in languages with first-order message-passing, which is different from our focus (translating the $\pi$-calculus into $\mathit{CH}$).

Outline.

We introduce the source and target calculi in Sections 2 and 3, the translation using private names in Section 4, and in Section 5 we treat translations with global names. We conclude and discuss future work in Section 6. Due to space constraints most proofs are in the technical report [schmidt-schauss-sabel:frank-60:19].

2 The $\pi$-Calculus with Stop

$$\begin{array}{rcl}
P,Q\in\Pi_{\text{Stop}} &::=& \nu x.P \mid \overline{x}y.P \mid x(y).P \mid {!P} \mid P\,|\,Q \mid 0 \mid \text{Stop}\\
C\in\Pi_{\text{Stop},C} &::=& [\cdot] \mid \overline{x}(y).C \mid x(y).C \mid C\,|\,P \mid P\,|\,C \mid {!C} \mid \nu x.C\\
\mathbb{D}\in\textit{PCtxt}_{\pi} &::=& [\cdot] \mid \mathbb{D}\,|\,P \mid P\,|\,\mathbb{D} \mid \nu x.\mathbb{D}
\end{array}$$

Interaction rule (ia): $x(y).P \,|\, \overline{x}z.Q \xrightarrow{ia} P[z/y] \,|\, Q$

Figure 1: Syntax of processes $\Pi_{\text{Stop}}$, process contexts $\Pi_{\text{Stop},C}$ and reduction contexts $\textit{PCtxt}_{\pi}$, where $x,y$ are names.
Closure: If $P\equiv\mathbb{D}[P']$, $P'\xrightarrow{ia}Q'$, $\mathbb{D}[Q']\equiv Q$, and $\mathbb{D}\in\textit{PCtxt}_{\pi}$, then $P\xrightarrow{sr}Q$.
Figure 2: Reduction rule and standard reduction in $\Pi_{\text{Stop}}$

$$\begin{array}{rcl}
P &\equiv& Q, \text{ if } P=_{\alpha}Q\\
P\,|\,(Q\,|\,R) &\equiv& (P\,|\,Q)\,|\,R\\
\nu x.(P\,|\,Q) &\equiv& P\,|\,\nu x.Q, \text{ if } x\notin\mathit{FN}(P)\\
P\,|\,0 &\equiv& P\\
\nu x.0 &\equiv& 0\\
\nu x.\text{Stop} &\equiv& \text{Stop}\\
\nu x.\nu y.P &\equiv& \nu y.\nu x.P\\
P\,|\,Q &\equiv& Q\,|\,P\\
{!P} &\equiv& P\,|\,{!P}
\end{array}$$

Figure 3: Structural congruence in $\Pi_{\text{Stop}}$

We explain the synchronous $\pi$-calculus [milner-parrow-walker:92, milner-pi-calc:99, sangiorgi-walker:01] without sums, with replication, extended with a constant Stop [sabel-schmidt-schauss-pistop:2015] that signals successful termination. The $\pi$-calculus with Stop and the $\pi$-calculus without Stop but with so-called barbed convergences [sangiorgi-walker:2001] are equivalent w.r.t. contextual semantics [schmidt-schauss-sabel:frank-60:19]. Thus, adding the constant Stop is not essential, but our arguments are easier to explain with Stop.

Definition 2.1 (Calculus $\Pi_{\text{Stop}}$).

Let $\mathcal{N}$ be a countable set of (channel) names. The syntax of processes is shown in Fig. 1. Name restriction $\nu x.P$ restricts the scope of name $x$ to process $P$; $P\,|\,Q$ is the parallel composition of $P$ and $Q$; the process $\overline{x}y.P$ waits on channel $x$ to output $y$ and then becomes $P$; the process $x(y).P$ waits on channel $x$ to receive input, and after receiving the input $z$ it turns into $P[z/y]$ (where $P[z/y]$ is the substitution of all free occurrences of name $y$ by name $z$ in process $P$); the process $!P$ is the replication of process $P$, i.e. it behaves like an infinite parallel composition of process $P$ with itself; the process $0$ is the silent process; and Stop is a process constant that signals success. We sometimes write $x(y)$ instead of $x(y).0$ as well as $\overline{x}y$ instead of $\overline{x}y.0$.

Free names $\mathit{FN}(P)$, bound names $\mathit{BN}(P)$, and $\alpha$-equivalence $=_{\alpha}$ in $\Pi_{\text{Stop}}$ are as usual in the $\pi$-calculus. A process $P$ is closed if $\mathit{FN}(P)=\emptyset$. Let $\Pi_{\text{Stop}}^{c}$ be the closed processes in $\Pi_{\text{Stop}}$. Structural congruence $\equiv$ is the least congruence satisfying the laws shown in Fig. 3. Process contexts $\Pi_{\text{Stop},C}$ and reduction contexts $\textit{PCtxt}_{\pi}$ are defined in Fig. 1. Let $C[P]$ be the substitution of the hole $[\cdot]$ in $C$ by $P$. The reduction rule $\xrightarrow{ia}$ performs interaction, and standard reduction $\xrightarrow{sr}$ is its closure w.r.t. reduction contexts and $\equiv$ (see Fig. 2). Let $\xrightarrow{sr,n}$ denote $n$ $\xrightarrow{sr}$-reductions and $\xrightarrow{sr,*}$ the reflexive-transitive closure of $\xrightarrow{sr}$. A process $P\in\Pi_{\text{Stop}}$ is successful, if $P\equiv\mathbb{D}[\text{Stop}]$ for some $\mathbb{D}\in\textit{PCtxt}_{\pi}$.

Remark 2.2.

We do not include “new” laws for structural congruence on the constant Stop, like Stop | Stop equals Stop, since this would require re-developing a lot of theory known from the $\pi$-calculus without Stop. In our view, Stop is a mechanism for a notion of success that can easily be replaced by other similar notions (e.g. observing an open input or output as in barbed testing). However, it is easy to prove those equations on the semantic level (i.e. w.r.t. $\sim_{c}$ as defined below in Definition 2.5).

As an example for a reduction sequence, consider sending name $y$ over channel $x$ and then sending name $u$ over channel $y$: $(x(z).\overline{z}u.0\,|\,\overline{x}y.y(x).0)\xrightarrow{ia}(\overline{z}u.0[y/z]\,|\,y(x).0)\equiv(y(x).0\,|\,\overline{y}u.0)\xrightarrow{ia}(0\,|\,0)\equiv 0$.

For the semantics of processes, we observe whether standard reductions successfully terminate or not. Since reduction is nondeterministic, we test whether there exists a successful reduction sequence (may-convergence), and we test whether all reduction possibilities are successful (should-convergence).

Definition 2.3.

Let $P$ be a $\Pi_{\text{Stop}}$-process. We say process $P$ is may-convergent and write $P{\downarrow}$, if and only if there is a successful process $P'$ with $P\xrightarrow{sr,*}P'$. We say $P$ is should-convergent and write $P{\Downarrow}$ if and only if for all $P'$: $P\xrightarrow{sr,*}P'$ implies $P'{\downarrow}$. If $P$ is not may-convergent, then we say $P$ is must-divergent (written $P{\Uparrow}$). If $P$ is not should-convergent, then we say it is may-divergent (written $P{\uparrow}$).

Example 2.4.

The process $P:=\nu x,y.(x(z).0\mid\overline{x}y.\text{Stop})$ is may-convergent ($P{\downarrow}$) and should-convergent ($P{\Downarrow}$), since $P\xrightarrow{sr}0\mid\text{Stop}$ is the only $\xrightarrow{sr}$-sequence for $P$. The process $P':=\nu x,y.(x(z).0\mid\overline{x}y.0)$ is may- and must-divergent (i.e. $P'{\uparrow}$ and $P'{\Uparrow}$), since $P'\xrightarrow{sr}0$ is the only $\xrightarrow{sr}$-sequence for $P'$.
For $P'':=\nu x,y.(\overline{x}y.0\mid x(z).\text{Stop}\mid x(z).0)$, we have $P''\xrightarrow{sr}\nu x,y.(\text{Stop}\mid x(z).0)$ and $P''\xrightarrow{sr}\nu x,y.x(z).\text{Stop}$. The first result is successful, and the second result is not successful. Hence, for $P''$ we have $P''{\downarrow}$ and $P''{\uparrow}$.

Should-convergence implies may-convergence, and must-divergence implies may-divergence.

Definition 2.5.

For $P,Q\in\Pi_{\text{Stop}}$ and observation $\xi\in\{{\downarrow},{\Downarrow},{\uparrow},{\Uparrow}\}$, we define $P\leq_{\xi}Q$ iff $P\xi\implies Q\xi$. The $\xi$-contextual preorders $\leq_{c,\xi}$ and the $\xi$-contextual equivalences $\sim_{c,\xi}$ are defined as

$P\leq_{c,\xi}Q$ iff $\forall C\in\Pi_{\text{Stop},C}:C[P]\leq_{\xi}C[Q]$   and   $P\sim_{c,\xi}Q$ iff $P\leq_{c,\xi}Q\wedge Q\leq_{c,\xi}P$

Contextual equivalence of $\Pi_{\text{Stop}}$-processes is defined as $P\sim_{c}Q$ iff $P\sim_{c,{\downarrow}}Q\wedge P\sim_{c,{\Downarrow}}Q$.

Example 2.6.

For $Q:=\nu x,y.(\overline{x}y.0\mid x(z).\text{Stop}\mid x(z).0)$, we have $\text{Stop}\sim_{c,{\downarrow}}Q$ (which can be proved using the methods in [sabel-schmidt-schauss-pistop:2015]), but $\text{Stop}\not\sim_{c}Q$, since $\text{Stop}{\Downarrow}$ and $Q{\uparrow}$ and thus $\text{Stop}\not\sim_{c,{\Downarrow}}Q$. Note that ${\leq_{c,{\Downarrow}}}={\leq_{c}}$ holds in $\Pi_{\text{Stop}}$, since there is a context $C$ such that for all processes $P$: $C[P]{\Downarrow}\iff P{\downarrow}$ (see [sabel-schmidt-schauss-pistop:2015]). For instance, the equivalence $0\sim_{c,{\Downarrow}}Q$ does not hold, since $!0{\Uparrow}$ and $!Q{\Downarrow}$ and thus the context $C=![\cdot]$ distinguishes $0$ and $Q$ w.r.t. should-convergence.

Contextual preorder and equivalence are (pre)-congruences. Contextual preorder remains unchanged if observation is restricted to closing contexts:

Lemma 2.7.

Let $\xi\in\{{\downarrow},{\uparrow},{\Downarrow},{\Uparrow}\}$ and let $P,Q$ be $\Pi_{\text{Stop}}$-processes. Then $P\leq_{c,\xi}Q$ if, and only if, for all $C\in\Pi_{\text{Stop},C}$ such that $C[P]$ and $C[Q]$ are closed: $C[P]\leq_{\xi}C[Q]$.

3 The Process Calculus $\mathit{CH}$

$$\begin{array}{rcl}
P\in\textit{Proc}_{\mathit{CH}} &::=& (P_{1}\,|\,P_{2}) \mid {\Leftarrow}\,e \mid \nu x.P \mid x\,\textbf{m}\,e \mid x\,\textbf{m}\,{-} \mid x=e\\
e\in\textit{Expr}_{\mathit{CH}} &::=& x \mid \lambda x.e \mid (e_{1}~e_{2}) \mid c~e_{1}\ldots e_{\mathrm{ar}(c)} \mid \text{letrec}~x_{1}{=}e_{1},\ldots,x_{n}{=}e_{n}~\text{in}~e \mid m \mid \text{seq}~e_{1}~e_{2}\\
 &\mid& \text{case}_{T}~e~\text{of}~(c_{T,1}\,x_{1}\ldots x_{\mathrm{ar}(c_{T,1})}\,\texttt{->}\,e_{1})\ldots(c_{T,|T|}\,x_{1}\ldots x_{\mathrm{ar}(c_{T,|T|})}\,\texttt{->}\,e_{|T|})\\
m\in\textit{MExpr}_{\mathit{CH}} &::=& \text{return}~e \mid e \texttt{>>=} e' \mid \text{forkIO}~e \mid \text{takeMVar}~e \mid \text{newMVar}~e \mid \text{putMVar}~e~e'\\
\mathfrak{t}\in\textit{Typ}_{\mathit{CH}} &::=& \texttt{IO}~\mathfrak{t} \mid (T~\mathfrak{t}_{1}\ldots\mathfrak{t}_{n}) \mid \texttt{MVar}~\mathfrak{t} \mid \mathfrak{t}_{1}\to\mathfrak{t}_{2}
\end{array}$$

$$\begin{array}{rcl}
\mathbb{D}\in\textit{PCtxt}_{\mathit{CH}} &::=& [\cdot] \mid \mathbb{D}\,|\,P \mid P\,|\,\mathbb{D} \mid \nu x.\mathbb{D}\\
\mathbb{M}\in\textit{MCtxt}_{\mathit{CH}} &::=& [\cdot] \mid \mathbb{M} \texttt{>>=} e\\
\mathbb{E}\in\textit{ECtxt}_{\mathit{CH}} &::=& [\cdot] \mid (\mathbb{E}~e) \mid (\text{seq}~\mathbb{E}~e) \mid (\text{case}~\mathbb{E}~\text{of}~alts)\\
\mathbb{F}\in\textit{FCtxt}_{\mathit{CH}} &::=& \mathbb{E} \mid (\text{takeMVar}~\mathbb{E}) \mid (\text{putMVar}~\mathbb{E}~e)
\end{array}$$

Figure 4: Syntax of processes, expressions, types, and context classes of $\mathit{CH}$

Functional evaluation:
$$\begin{array}{ll}
\text{(cpce)} & {\Leftarrow}\,\mathbb{M}[\mathbb{F}[x]]\,|\,x{=}e \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\mathbb{F}[e]]\,|\,x{=}e\\
\text{(mkbinds)} & {\Leftarrow}\,\mathbb{M}[\mathbb{F}[\text{letrec}~x_{1}{=}e_{1},\ldots,x_{n}{=}e_{n}~\text{in}~e]] \xrightarrow{sr} \nu x_{1}\ldots x_{n}.({\Leftarrow}\,\mathbb{M}[\mathbb{F}[e]]\,|\,x_{1}{=}e_{1}\,|\,\ldots\,|\,x_{n}{=}e_{n})\\
\text{(beta)} & {\Leftarrow}\,\mathbb{M}[\mathbb{F}[((\lambda x.e_{1})~e_{2})]] \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\mathbb{F}[e_{1}[e_{2}/x]]]\\
\text{(case)} & {\Leftarrow}\,\mathbb{M}[\mathbb{F}[\text{case}_{T}~(c~e_{1}\ldots e_{n})~\text{of}~\ldots(c~y_{1}\ldots y_{n}\,\texttt{->}\,e)\ldots]] \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\mathbb{F}[e[e_{1}/y_{1},\ldots,e_{n}/y_{n}]]]\\
\text{(seq)} & {\Leftarrow}\,\mathbb{M}[\mathbb{F}[(\text{seq}~v~e)]] \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\mathbb{F}[e]] \quad\text{where } v \text{ is a functional value}
\end{array}$$

Monadic computations:
$$\begin{array}{ll}
\text{(lunit)} & {\Leftarrow}\,\mathbb{M}[\text{return}~e_{1} \texttt{>>=} e_{2}] \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[e_{2}~e_{1}]\\
\text{(tmvar)} & {\Leftarrow}\,\mathbb{M}[\text{takeMVar}~x]\,|\,x\,\textbf{m}\,e \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\text{return}~e]\,|\,x\,\textbf{m}\,{-}\\
\text{(pmvar)} & {\Leftarrow}\,\mathbb{M}[\text{putMVar}~x~e]\,|\,x\,\textbf{m}\,{-} \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\text{return}~()]\,|\,x\,\textbf{m}\,e\\
\text{(nmvar)} & {\Leftarrow}\,\mathbb{M}[\text{newMVar}~e] \xrightarrow{sr} \nu x.({\Leftarrow}\,\mathbb{M}[\text{return}~x]\,|\,x\,\textbf{m}\,e)\\
\text{(fork)} & {\Leftarrow}\,\mathbb{M}[\text{forkIO}~e] \xrightarrow{sr} {\Leftarrow}\,\mathbb{M}[\text{return}~()]\,|\,{\Leftarrow}\,e
\end{array}$$

Closure w.r.t. $\mathbb{D}$-contexts and $\equiv$: If $P_{1}\equiv\mathbb{D}[P_{1}']$, $P_{2}\equiv\mathbb{D}[P_{2}']$, and $P_{1}'\xrightarrow{sr}P_{2}'$, then $P_{1}\xrightarrow{sr}P_{2}$.
Capture avoidance: We assume capture-avoiding reduction for all reductions.

Figure 5: Standard reduction rules of $\mathit{CH}$ (call-by-name version)

The calculus $\mathit{CH}$ (a variant of the language $\mathit{CHF}$ [sabel-schmidt-schauss-PPDP:2011, sabel-schmidt-schauss-LICS:12]) models a core language of Concurrent Haskell [peyton-gordon-finne:96]. We assume a partitioned set of data constructors $c$ where each family represents a type $T$. The data constructors of type $T$ are $c_{T,1},\ldots,c_{T,|T|}$ where each $c_{T,i}$ has an arity $\mathrm{ar}(c_{T,i})\geq 0$. We assume that there is a type () with data constructor (), a type Bool with constructors True and False, a type List with constructors Nil and : (written infix), and a type Pair with a constructor (,), written $(a,b)$.

Processes $P\in\textit{Proc}_{\mathit{CH}}$ in $\mathit{CH}$ have expressions $e\in\textit{Expr}_{\mathit{CH}}$ as subterms. See Fig. 4, where $u,w,x,y,z$ are variables from an infinite set Var. Processes are formed by parallel composition “|”. The $\nu$-binder restricts the scope of a variable. A concurrent thread ${\Leftarrow}\,e$ evaluates $e$. In a process there is at most one unique distinguished thread, called the main thread, written as ${\xLongleftarrow{\text{main}}}\,e$. MVars are mutable variables which are empty or filled. A thread blocks if it wants to fill a filled MVar $x\,\textbf{m}\,e$ or empty an empty MVar $x\,\textbf{m}\,{-}$. Here $x$ is called the name of the MVar. Bindings $x=e$ model the global heap of shared expressions, where $x$ is called a binding variable. If $x$ is a name of an MVar or a binding variable, then $x$ is called an introduced variable. In $Q\,|\,\nu x.P$ the scope of $x$ is $P$. A process is well-formed, if all introduced variables are pairwise distinct and there exists at most one main thread ${\xLongleftarrow{\text{main}}}\,e$.

Expressions $\textit{Expr}_{\mathit{CH}}$ consist of functional and monadic expressions $\textit{MExpr}_{\mathit{CH}}$. Functional expressions are variables, abstractions $\lambda x.e$, applications $(e_{1}~e_{2})$, seq-expressions $(\text{seq}~e_{1}~e_{2})$, constructor applications $(c~e_{1}\ldots e_{\mathrm{ar}(c)})$, letrec-expressions $(\text{letrec}~x_{1}=e_{1},\ldots,x_{n}=e_{n}~\text{in}~e)$, and case$_T$-expressions for every type $T$. We abbreviate case-expressions as $\text{case}_{T}~e~\text{of}~alts$ where $alts$ are the case-alternatives such that there is exactly one alternative $(c_{T,i}~x_{1}\ldots x_{\mathrm{ar}(c_{T,i})}\,\texttt{->}\,e_{i})$ for every constructor $c_{T,i}$ of type $T$, where $x_{1},\ldots,x_{\mathrm{ar}(c_{T,i})}$ (occurring in the pattern $c_{T,i}~x_{1}\ldots x_{\mathrm{ar}(c_{T,i})}$) are pairwise distinct variables that become bound with scope $e_{i}$. We often omit the type index $T$ in $\text{case}_{T}$. In $\text{letrec}~x_{1}=e_{1},\ldots,x_{n}=e_{n}~\text{in}~e$ the variables $x_{1},\ldots,x_{n}$ are pairwise distinct and the bindings $x_{i}=e_{i}$ are recursive, i.e. the scope of $x_{i}$ is $e_{1},\ldots,e_{n}$ and $e$. The monadic operators newMVar, takeMVar, and putMVar are used to create, to empty, and to fill MVars; the “bind”-operator >>= implements sequential composition of IO-operations; the forkIO-operator performs thread creation; and return lifts expressions into the monad.

Monadic values are $\text{newMVar}~e$, $\text{takeMVar}~e$, $\text{putMVar}~e_{1}~e_{2}$, $\text{return}~e$, $e_{1} \texttt{>>=} e_{2}$, or $\text{forkIO}~e$. Functional values are abstractions and constructor applications. A value is a functional or a monadic value.

Abstractions, letrec-expressions, case-alternatives, and $\nu x.P$ introduce variable binders. This induces bound and free variables (denoted by $\mathit{FV}(\cdot)$), $\alpha$-renaming, and $\alpha$-equivalence $=_{\alpha}$. If $\mathit{FV}(P)=\emptyset$, then we call process $P$ closed. We assume the distinct variable convention: free variables are distinct from bound variables, and bound variables are pairwise distinct. We assume that $\alpha$-renaming is applied to obey this convention. Structural congruence $\equiv$ of $\mathit{CH}$-processes is the least congruence satisfying the laws $P_{1}\,|\,P_{2}\equiv P_{2}\,|\,P_{1}$, $(P_{1}\,|\,P_{2})\,|\,P_{3}\equiv P_{1}\,|\,(P_{2}\,|\,P_{3})$, $\nu x_{1}.\nu x_{2}.P\equiv\nu x_{2}.\nu x_{1}.P$, $P_{1}\equiv P_{2}$ if $P_{1}=_{\alpha}P_{2}$, and $(\nu x.P_{1})\,|\,P_{2}\equiv\nu x.(P_{1}\,|\,P_{2})$ if $x\notin\mathit{FV}(P_{2})$.

We assume expressions and processes to be well-typed w.r.t. a monomorphic type system: the typing rules are standard (they can be found in [schmidt-schauss-sabel:frank-60:19]). The syntax of types is in Fig. 4, where $(\texttt{IO}~\mathfrak{t})$ is the type of a monadic action with return type $\mathfrak{t}$, $(\texttt{MVar}~\mathfrak{t})$ is the type of an MVar with content type $\mathfrak{t}$, and $\mathfrak{t}_{1}\to\mathfrak{t}_{2}$ is a function type. We treat constructors like overloaded constants to use them in a polymorphic way.

We introduce a call-by-name small-step reduction for $\mathit{CH}$. This operational semantics can be shown to be equivalent to a call-by-need semantics (see [sabel-schmidt-schauss-PPDP:2011] for the calculus $\mathit{CHF}$). However, the equivalence of the reduction strategies is not important for this paper, so we do not include it.

In $\mathit{CH}$, a context is a process or an expression with a (typed) hole $[\cdot]$. We introduce several classes of contexts in Fig. 4. They are used by the reduction rules.

Definition 3.1.

The standard reduction $\xrightarrow{sr}$ is defined by the rules and the closure in Fig. 5. It is only permitted for well-formed processes which are not successful.

Functional evaluation includes $\beta$-reduction (beta), copying shared bindings into needed positions (cpce), evaluating case- and seq-expressions ((case) and (seq)), and moving letrec-bindings into the global bindings (mkbinds). For monadic computations, rule (lunit) implements the monadic evaluation. Rules (nmvar), (tmvar), and (pmvar) handle MVar creation and access. A takeMVar-operation can only be performed on a filled MVar, and a putMVar-operation needs an empty MVar. Rule (fork) spawns a new thread. A concurrent thread of the form ${\Leftarrow}\,\text{return}~e$ is terminated (where $e$ is of type ()).

Example 3.2.

The process ${\xLongleftarrow{\text{main}}}\,\text{newMVar}~() \texttt{>>=} (\lambda y.\text{forkIO}~(\text{takeMVar}~y)) \texttt{>>=} \lambda\_.\text{putMVar}~y~()$ creates a filled MVar, which is emptied by a spawned thread, and then again filled by the main thread.
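A runnable GHC version of Example 3.2 (an illustration only; the second bind is written under the scope of y, since otherwise y would be out of scope in plain Haskell) could look as follows:

\begin{verbatim}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newMVar, takeMVar, putMVar)

-- Create a filled MVar, spawn a thread that empties it,
-- then fill it again from the main thread.
main :: IO ()
main = newMVar () >>= \y -> forkIO (takeMVar y) >> putMVar y ()
\end{verbatim}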

We say that a $\mathit{CH}$-process $P$ is successful if $P\equiv\nu x_{1}\ldots x_{n}.({\xLongleftarrow{\text{main}}}\,\text{return}~e\,|\,P')$ and if $P$ is well-formed. This captures Haskell's behavior that termination of the main-thread terminates all threads.
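GHC shows exactly this behavior: when the main thread finishes, the whole program stops and all remaining threads are discarded, as in this small illustrative example:

\begin{verbatim}
import Control.Concurrent (forkIO, threadDelay)

main :: IO ()
main = do
  -- The forked thread sleeps for ten seconds before printing,
  -- but main returns immediately, so the message never appears.
  _ <- forkIO (threadDelay 10000000 >> putStrLn "never printed")
  putStrLn "main returns, all other threads are discarded"
\end{verbatim}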

Definition 3.3.

Let $P$ be a $\mathit{CH}$-process. Then $P$ may-converges (denoted as $P{\downarrow}$), iff $P$ is well-formed and $\exists P':P\xrightarrow{sr,*}P'$ such that $P'$ is successful. If $P{\downarrow}$ does not hold, then $P$ must-diverges and we write $P{\Uparrow}$. Process $P$ should-converges (written as $P{\Downarrow}$), iff $P$ is well-formed and $\forall P':P\xrightarrow{sr,*}P'\implies P'{\downarrow}$. If $P$ is not should-convergent, then we say $P$ may-diverges, written as $P{\uparrow}$.

Definition 3.4.

Contextual approximation $\leq_{c}$ and equivalence $\sim_{c}$ on $\mathit{CH}$-processes are defined as ${\leq_{c}}:={\leq_{c,{\downarrow}}}\cap{\leq_{c,{\Downarrow}}}$ and ${\sim_{c}}:={\leq_{c}}\cap{\geq_{c}}$, where $P_{1}\leq_{c,{\downarrow}}P_{2}$ iff $\forall\mathbb{D}\in\textit{PCtxt}_{\mathit{CH}}:\mathbb{D}[P_{1}]{\downarrow}\implies\mathbb{D}[P_{2}]{\downarrow}$ and $P_{1}\leq_{c,{\Downarrow}}P_{2}$ iff $\forall\mathbb{D}\in\textit{PCtxt}_{\mathit{CH}}:\mathbb{D}[P_{1}]{\Downarrow}\implies\mathbb{D}[P_{2}]{\Downarrow}$. For $\mathit{CH}$-expressions, let $e_{1}\leq_{c}e_{2}$ iff for all process-contexts $C$ with a hole at expression position: $C[e_{1}]\leq_{c}C[e_{2}]$, and $e_{1}\sim_{c}e_{2}$ iff $e_{1}\leq_{c}e_{2}\wedge e_{2}\leq_{c}e_{1}$.

As an example, we consider the processes

$$\begin{array}{lcl}
P_{1} &:=& \nu m.({\xLongleftarrow{\text{main}}}\,\text{takeMVar}~m \,|\, {\Leftarrow}\,\text{takeMVar}~m \,|\, m\,\textbf{m}\,())\\
P_{2} &:=& {\xLongleftarrow{\text{main}}}\,\text{return}~()\\
P_{3} &:=& {\xLongleftarrow{\text{main}}}\,\text{letrec}~x=x~\text{in}~x
\end{array}$$

Process $P_{1}$ is may-convergent and may-divergent (and thus not should-convergent), since either the main-thread succeeds in emptying the MVar $m$, or (if the other thread empties the MVar $m$) the main-thread is blocked forever. The process $P_{2}$ is successful. The process $P_{3}$ is must-divergent. The equivalence $P_{1}\sim_{c,{\downarrow}}P_{2}$ holds, but $P_{1}\not\sim_{c}P_{2}$, since $P_{2}$ is should-convergent and thus $P_{1}\not\sim_{c,{\Downarrow}}P_{2}$. As a further example, it is easy to verify that $P_{1}\sim_{c,{\Downarrow}}P_{3}$ holds, since both processes are not should-convergent and a surrounding context cannot change this. However, $P_{1}\not\sim_{c,{\downarrow}}P_{3}$, since $P_{3}{\Uparrow}$.
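A GHC approximation of $P_{1}$ (an illustrative sketch, not the formal calculus) makes this nondeterminism tangible: two threads race for the same filled MVar, and if the forked thread wins, the main thread stays blocked (GHC's runtime then typically aborts with a "thread blocked indefinitely in an MVar operation" error), mirroring that $P_{1}$ is may-convergent but not should-convergent.

\begin{verbatim}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newMVar, takeMVar)

main :: IO ()
main = do
  m <- newMVar ()          -- the filled MVar m of P1
  _ <- forkIO (takeMVar m) -- the concurrent thread of P1
  takeMVar m               -- the main thread of P1; blocks forever
                           -- if the forked thread emptied m first
\end{verbatim}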

Contextual approximation and equivalence are (pre)-congruences. The following equivalence will help to prove properties of our translation.

Lemma 3.5.

The relations in Definition 3.4 are unchanged, if we add closedness: for $\xi\in\{{\downarrow},{\Downarrow}\}$, let $P_{1}\leq_{c,\xi}P_{2}$ iff for all $\mathbb{D}\in\textit{PCtxt}_{\mathit{CH}}$ such that $\mathbb{D}[P_{1}],\mathbb{D}[P_{2}]$ are closed: $\mathbb{D}[P_{1}]\xi\implies\mathbb{D}[P_{2}]\xi$.

4 The Translation $\tau_{0}$ with Private MVars

$$\begin{array}{lcl}
\tau_{0}(P) &=& {\xLongleftarrow{\text{main}}}\,\mathbf{do}~\{\,\mathit{stop}\leftarrow\text{newMVar}~();~\text{forkIO}~\tau(P);~\text{putMVar}~\mathit{stop}~()\,\}\\
\tau(\overline{x}y.P) &=& \mathbf{do}~\{\,checkx\leftarrow\text{newMVar}~();~\text{putMVar}~(\mathit{unchan}~x)~(y,checkx);~\text{putMVar}~checkx~();~\tau(P)\,\}\\
\tau(x(y).P) &=& \mathbf{do}~\{\,(y,checkx)\leftarrow\text{takeMVar}~(\mathit{unchan}~x);~\text{takeMVar}~checkx;~\tau(P)\,\}\\
\tau(P\,|\,Q) &=& \mathbf{do}~\{\,\text{forkIO}~\tau(Q);~\tau(P)\,\}\\
\tau(\nu x.P) &=& \mathbf{do}~\{\,\mathit{chanx}\leftarrow\text{newEmptyMVar};~\text{letrec}~x=\text{Chan}~\mathit{chanx}~\text{in}~\tau(P)\,\}\\
\tau(0) &=& \text{return}~()\\
\tau(\text{Stop}) &=& \text{takeMVar}~\mathit{stop}\\
\tau(!P) &=& \text{letrec}~f=\mathbf{do}~\{\,\text{forkIO}~\tau(P);~f\,\}~\text{in}~f
\end{array}$$

Figure 6: Translations $\tau_{0}$ and $\tau$

We present a translation $\tau_{0}$ that encodes $\Pi_{\text{Stop}}$-processes as $\mathit{CH}$-programs. It establishes correct synchronous communication by using a private MVar, which is created by the sender and whose name is sent to the receiver. The receiver uses it to acknowledge that the message was received. Since only the sender and the receiver know this MVar, no other thread can interfere with the communication. The approach has similarities with Boudol's translation [boudol:1992] from the $\pi$-calculus into an asynchronous one, where a private channel name of the $\pi$-calculus was used to guarantee safe communication between sender and receiver.

For translating π\pi-calculus channels into 𝐶𝐻{\mathit{CH}}, we use a recursive data type Channel (with constructor Chan), which is defined in Haskell-syntax as

data Channel = Chan (MVar (Channel, MVar ()))

We abbreviate $(\text{case}_{\text{Chan}}~e~\text{of}~(\text{Chan}~m\,\texttt{->}\,m))$ as $(\mathit{unchan}~e)$.

We use $a \texttt{>>} b$ for $a \texttt{>>=} (\lambda\_.b)$ and also use Haskell's do-notation with the following meaning:

$$\begin{array}{ll}
\mathbf{do}~\{x\leftarrow e_{1};~e_{2}\} &= e_{1} \texttt{>>=} \lambda x.(\mathbf{do}~\{e_{2}\})\\
\mathbf{do}~\{(x,y)\leftarrow e_{1};~e_{2}\} &= e_{1} \texttt{>>=} \lambda z.\text{case}_{\tt Pair}~z~\text{of}~(x,y)\,\texttt{->}\,(\mathbf{do}~\{e_{2}\})\\
\mathbf{do}~\{e_{1};~e_{2}\} &= e_{1} \texttt{>>} (\mathbf{do}~\{e_{2}\})\\
\mathbf{do}~\{e\} &= e
\end{array}$$

As a further abbreviation, we write $y\leftarrow\text{newEmptyMVar}$ inside a $\mathbf{do}$-block to abbreviate the sequence $y\leftarrow\text{newMVar}~\bot;~\text{takeMVar}~y$, where $\bot$ is a must-divergent expression. Our translation uses one MVar per channel, which contains a pair consisting of the (translated) channel name that is sent as the message and a further MVar used for the synchronization, which is private, i.e. only the sender and the receiver know it. Privacy is established by the sender: it creates a new MVar for every send operation. Message $y$ is sent over channel $x$ by sending a pair $(y,check)$ where $check$ is an MVar containing (). The receiver waits for a message $(y,check)$ from the sender. After sending the message, the sender waits until $check$ is emptied, and the receiver acknowledges by emptying the MVar $check$. (A variant of the translation would be to change the roles for the acknowledgement such that an empty MVar is created, which is filled by the receiver and emptied by the sender. The reasoning on the correctness of the translation is very similar to the one presented here.)
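The sender and receiver sides of the inner translation can be sketched in Haskell as two helper functions rather than the inlined do-blocks of Fig. 6; the names sendPi and receivePi and the small main below are our own illustration of the protocol just described (check-MVar created filled by the sender, emptied by the receiver as acknowledgement):

\begin{verbatim}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newMVar, newEmptyMVar,
                                putMVar, takeMVar)

data Channel = Chan (MVar (Channel, MVar ()))

unchan :: Channel -> MVar (Channel, MVar ())
unchan (Chan m) = m

-- Sender side: offer the message together with a private, filled
-- check-MVar; the second putMVar blocks until the receiver has
-- acknowledged by emptying the check-MVar.
sendPi :: Channel -> Channel -> IO ()
sendPi x y = do
  check <- newMVar ()
  putMVar (unchan x) (y, check)
  putMVar check ()

-- Receiver side: take the message and acknowledge its receipt.
receivePi :: Channel -> IO Channel
receivePi x = do
  (y, check) <- takeMVar (unchan x)
  takeMVar check
  return y

-- Tiny usage example: one sender and one receiver on channel x.
main :: IO ()
main = do
  x <- Chan <$> newEmptyMVar
  y <- Chan <$> newEmptyMVar
  _ <- forkIO (sendPi x y)
  _ <- receivePi x
  putStrLn "rendezvous completed"
\end{verbatim}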

Definition 4.1.

We define the translation $\tau_{0}$ and its inner translation $\tau$ from the $\Pi_{\text{Stop}}$-calculus into the $\mathit{CH}$-calculus in Fig. 6. For contexts, the translations are the same, where the context hole is treated like a constant and translated as $\tau([\cdot])=[\cdot]$.

The translation $\tau_{0}$ generates a main-thread and an MVar $\mathit{stop}$. The main thread is then waiting for the MVar $\mathit{stop}$ to be emptied. The inner translation $\tau$ translates the constructs and constants of the $\Pi_{\text{Stop}}$-calculus into $\mathit{CH}$-expressions. Except for the main-thread (and using the keyword let instead of letrec), the translation $\tau_{0}$ generates a valid Concurrent Haskell program, i.e. if we write $\tau_{0}(P)={\xLongleftarrow{\text{main}}}\,e$ as $\texttt{main}=e$, we can execute the translation in the Haskell interpreter.
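For instance, hand-applying $\tau_{0}$ to the should-convergent process $\nu x.(\overline{x}x.0\,|\,x(y).\text{Stop})$ and writing the main thread as main = e gives the following runnable program (our own worked instance; the letrec of Fig. 6 becomes an ordinary let, since x is not used recursively, and newEmptyMVar is GHC's primitive corresponding to the abbreviation of the text):

\begin{verbatim}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newMVar, newEmptyMVar,
                                putMVar, takeMVar)

data Channel = Chan (MVar (Channel, MVar ()))

unchan :: Channel -> MVar (Channel, MVar ())
unchan (Chan m) = m

-- tau0 applied to  nu x.( x!x.0 | x(y).Stop ), spelled out by hand.
main :: IO ()
main = do
  stop <- newMVar ()                  -- MVar stop of tau0
  _ <- forkIO $ do                    -- tau(nu x.(...))
         chanx <- newEmptyMVar
         let x = Chan chanx
         _ <- forkIO $ do             -- tau(x(y).Stop)
                (_y, checkx) <- takeMVar (unchan x)
                takeMVar checkx
                takeMVar stop         -- tau(Stop)
         checkx <- newMVar ()         -- tau(x!x.0)
         putMVar (unchan x) (x, checkx)
         putMVar checkx ()
         return ()
  putMVar stop ()                     -- succeeds once stop is emptied
\end{verbatim}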

Example 4.2.

We consider the $\Pi_{\text{Stop}}$-process $P:=\nu x,y_{1},y_{2},z.(x(y_{1}).0\,|\,x(y_{2}).\text{Stop}\,|\,\overline{x}z.0)$, which is may-convergent and may-divergent: depending on which receiver communicates with the sender, the result is the successful process $\nu x,y_{1}.(x(y_{1}).0\,|\,\text{Stop})$ or the must-divergent process $\nu x,y_{2}.(x(y_{2}).\text{Stop})$. The $\mathit{CH}$-process $\tau_{0}(P)$ reduces after several steps to the process

$$\begin{array}{l}
\nu stop,chanx,chany_{1},chany_{2},chanz,checkx,x,y_{1},y_{2},z.(\\
\quad chanx\,\textbf{m}\,(z,checkx) \,|\, chany_{1}\,\textbf{m}\,{-} \,|\, chany_{2}\,\textbf{m}\,{-} \,|\, chanz\,\textbf{m}\,{-} \,|\, checkx\,\textbf{m}\,() \,|\, \mathit{stop}\,\textbf{m}\,()\\
\quad |\; x{=}\text{Chan}~chanx \,|\, z{=}\text{Chan}~chanz \,|\, y_{1}{=}\text{Chan}~chany_{1} \,|\, y_{2}{=}\text{Chan}~chany_{2} \,|\, {\xLongleftarrow{\text{main}}}\,\text{putMVar}~\mathit{stop}~()\\
\quad |\; {\Leftarrow}\,\mathbf{do}~\{\,\text{putMVar}~checkx~();~\text{return}~()\,\}\\
\quad |\; {\Leftarrow}\,\mathbf{do}~\{\,(y_{1},checkx)\leftarrow\text{takeMVar}~chanx;~\text{takeMVar}~checkx;~\text{return}~()\,\}\\
\quad |\; {\Leftarrow}\,\mathbf{do}~\{\,(y_{2},checkx)\leftarrow\text{takeMVar}~chanx;~\text{takeMVar}~checkx;~\text{takeMVar}~\mathit{stop}\,\}~)
\end{array}$$

Now the first thread (which is the translation of the sender $\overline{x}z.0$) is blocked, since it tries to fill the full MVar $checkx$. The second thread (the encoding of $x(y_{1}).0$) and the third thread (the encoding of $x(y_{2}).\text{Stop}$) race for emptying the MVar $chanx$. If the second thread wins, then it will empty the MVar $checkx$, which is then filled again by the first thread, and all other threads are blocked forever. If the third thread wins, then it will empty the MVar $checkx$, which is then filled again by the first thread, and afterwards the third thread will empty the MVar $\mathit{stop}$. This allows the main-thread to fill it, resulting in a successful process.

For the following definition of $\tau$ being compositional, adequate, or fully abstract, we adopt the view that $\tau$ is a translation from $\Pi_{\text{Stop}}$ into the $\mathit{CH}$-language with a special initial evaluation context $C_{\mathit{out}}^{\tau}$.

Definition 4.3.

Let $C_{\mathit{out}}^{\tau}:=\nu\mathit{stop}.{\xLongleftarrow{\text{main}}}\,\mathbf{do}~\{\,\mathit{stop}\leftarrow\text{newMVar}~();~\text{forkIO}~[\cdot];~\text{putMVar}~\mathit{stop}~()\,\}$. Variants ${\downarrow}_{0},{\Downarrow}_{0}$ of may- and should-convergence of expressions $e$ within the context $C_{\mathit{out}}^{\tau}$ in $\mathit{CH}$ are defined as $e{\downarrow}_{0}$ iff $C_{\mathit{out}}^{\tau}[e]{\downarrow}$ and $e{\Downarrow}_{0}$ iff $C_{\mathit{out}}^{\tau}[e]{\Downarrow}$. The relation $\sim_{c,\tau_{0}}$ is defined by $\sim_{c,\tau_{0}}:=\leq_{c,\tau_{0}}\cap\geq_{c,\tau_{0}}$, where $e_{1}\leq_{c,\tau_{0}}e_{2}$ iff $\forall C$: if $\mathit{FV}(C[e_{1}],C[e_{2}])\subseteq\{\mathit{stop}\}$, then $C[e_{1}]{\downarrow}_{0}\implies C[e_{2}]{\downarrow}_{0}$ and $C[e_{1}]{\Downarrow}_{0}\implies C[e_{2}]{\Downarrow}_{0}$.

Since $\leq_{c,\mathit{CH}}$ is a subset of $\leq_{c,\tau_{0}}$, we often can use the more general relations for reasoning.

Definition 4.4.

Let $\Pi_{\text{Stop},C}$ be the contexts of $\Pi_{\text{Stop}}$. We define the following properties for $\tau_{0}$ and $\tau$ (see [schmidtschauss-sabel-niehren-schwing-tcs:15] for a general framework of properties of translations under observational semantics). For open processes $P,P'$, we say that the translation $\tau$ is

convergence-equivalent

iff for all $P\in\Pi_{\text{Stop}}$: $P{\downarrow}\iff\tau(P){\downarrow}_{0}$ and $P{\Downarrow}\iff\tau(P){\Downarrow}_{0}$,

compositional up to $\{{\downarrow}_{0},{\Downarrow}_{0}\}$

iff for all $P\in\Pi_{\text{Stop}}$, all $C\in\Pi_{\text{Stop},C}$, and all $\xi\in\{{\downarrow}_{0},{\Downarrow}_{0}\}$: if $\mathit{FV}(C[P])\subseteq\{\mathit{stop}\}$, then $\tau(C[P])\xi\iff\tau(C)[\tau(P)]\xi$,

adequate

iff for all processes $P,P'\in\Pi_{\text{Stop}}$: $\tau(P)\leq_{c,\tau_{0}}\tau(P')\implies P\leq_{c}P'$, and

fully abstract

iff for all processes $P,P'\in\Pi_{\text{Stop}}$: $P\leq_{c}P'\iff\tau(P)\leq_{c,\tau_{0}}\tau(P')$.

Convergence-equivalence of the translation $\tau_{0}$ for may- and should-convergence holds. For readability the proof is omitted; it is given in the technical report [schmidt-schauss-sabel:frank-60:19], where we show:

Theorem 4.5.

Let $P\in\Pi_{\text{Stop}}$ be closed. Then $\tau_{0}$ is convergence-equivalent for ${\downarrow}$ and ${\Downarrow}$, i.e. $P{\downarrow}$ is equivalent to $\tau_{0}(P){\downarrow}$, and $P{\Downarrow}$ is equivalent to $\tau_{0}(P){\Downarrow}$. This also shows convergence-equivalence of $\tau$ w.r.t. ${\downarrow}_{0},{\Downarrow}_{0}$, i.e. $P{\downarrow}\iff\tau(P){\downarrow}_{0}$ and $P{\Downarrow}\iff\tau(P){\Downarrow}_{0}$.

We show that the translation is adequate (see Theorem 4.8 below). The interpretation of this result is that the $\pi$-calculus with its concurrent semantics is semantically represented within $\mathit{CH}$. This result is on a more abstract level, since it is based on the property of whether the programs (in all contexts) produce values, or may run into failure or get stuck. Since the $\pi$-calculus does not have a notion of values, the translated processes also cannot be compared w.r.t. values other than a single form of value.

The translation $\tau_{0}$ is not fully abstract (see Theorem 4.9 below), which is rather natural, since it only means that the $\pi$-calculus is mapped onto a subset of the $\mathit{CH}$-expressions and that this is a proper subset w.r.t. the semantics. For proving both theorems, we first use a simple form of a context lemma:

Lemma 4.6.

Let $e,e'$ be $\mathit{CH}$-expressions, where the only free variable in $e,e'$ is $\mathit{stop}$.

Then $C_{\mathit{out}}^{\tau}[e] \leq_c C_{\mathit{out}}^{\tau}[e']$ holds if and only if $C_{\mathit{out}}^{\tau}[e]{\downarrow} \implies C_{\mathit{out}}^{\tau}[e']{\downarrow}$ and $C_{\mathit{out}}^{\tau}[e]{\Downarrow} \implies C_{\mathit{out}}^{\tau}[e']{\Downarrow}$.

Proposition 4.7.

The translation $\tau$ is compositional up to $\{{\downarrow}_0,{\Downarrow}_0\}$.

We show that the translation $\tau$ transports $\Pi_{\text{Stop}}$-processes into $\mathit{CH}$ such that adequacy holds. Thus the translated processes also correctly mimic the behavior of the original $\Pi_{\text{Stop}}$-processes when plugged into contexts. If the translated open processes cannot be distinguished by $\leq_{c,\tau_0}$, i.e. if no test detects a difference w.r.t. may- and should-convergence, then the original processes are equivalent in the $\pi$-calculus. However, this translation of open processes is not fully abstract, which means that there are $\mathit{CH}$-contexts (not in the image of the translation) that can see and exploit too many details of the translation.

Theorem 4.8.

The translation $\tau$ is adequate.

Proof.

We prove adequacy for the preorder $\leq_{c,\tau_0}$; for $\sim_{c,\tau_0}$ and $\sim_c$ the claim follows by symmetry. Let $P,P'$ be $\Pi_{\text{Stop}}$-processes such that $\tau(P) \leq_{c,\tau_0} \tau(P')$. We show that $P \leq_c P'$. We use Lemma 3.5 to restrict considerations to closed $C[P], C[P']$ below. Let $C$ be a context in $\Pi_{\text{Stop}}$ such that $C[P], C[P']$ are closed and $C[P]{\downarrow}$. Then $\tau_0(C[P]) = C_{\mathit{out}}^{\tau}[\tau(C[P])]$. Closed convergence-equivalence implies $C_{\mathit{out}}^{\tau}[\tau(C[P])]{\downarrow}$. By Proposition 4.7 we have $C_{\mathit{out}}^{\tau}[\tau(C)[\tau(P)]]{\downarrow}$. Now $\tau(P) \leq_{c,\tau_0} \tau(P')$ implies $C_{\mathit{out}}^{\tau}[\tau(C)[\tau(P')]]{\downarrow}$, which is the same as $C_{\mathit{out}}^{\tau}[\tau(C[P'])]{\downarrow}$ using Proposition 4.7. Closed convergence-equivalence implies $C[P']{\downarrow}$. The same arguments hold for ${\Downarrow}$ instead of ${\downarrow}$. In summary, we obtain $P \leq_c P'$. ∎
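In compact form, the argument for ${\downarrow}$ reads
$$C[P]{\downarrow} \iff C_{\mathit{out}}^{\tau}[\tau(C[P])]{\downarrow} \iff C_{\mathit{out}}^{\tau}[\tau(C)[\tau(P)]]{\downarrow} \implies C_{\mathit{out}}^{\tau}[\tau(C)[\tau(P')]]{\downarrow} \iff C_{\mathit{out}}^{\tau}[\tau(C[P'])]{\downarrow} \iff C[P']{\downarrow},$$
where the steps are justified, in this order, by closed convergence-equivalence, Proposition 4.7, the assumption $\tau(P) \leq_{c,\tau_0} \tau(P')$, Proposition 4.7, and again closed convergence-equivalence; the argument for ${\Downarrow}$ is identical.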

Theorem 4.9.

The translation $\tau$ is not fully abstract, but it is fully abstract on closed processes, i.e. for closed processes $P_1, P_2 \in \Pi_{\text{Stop}}$, we have $P_1 \leq_c P_2 \iff \tau(P_1) \leq_{c,\tau_0} \tau(P_2)$.

Proof.

The first part holds, since a translated open process can be closed by a context that does not initialize the $\nu$-bound MVars. For $P = \overline{x}(y).\text{Stop} \mid x(z).\text{Stop}$, we have $P \sim_c \text{Stop}$ but $\tau(P) \not\sim_{c,\tau_0} \tau(\text{Stop})$: let $\mathbb{D}$ be a context that does not initialize the MVars for $x$ (as the translation does). Then $\mathbb{D}[\tau(P)]{\Uparrow}_0$, but $\mathbb{D}[\tau(\text{Stop})]{\Downarrow}_0$. Restricted to closed processes, full abstraction holds: $P_1 \leq_c P_2 \implies \tau(P_1) \leq_{c,\tau_0} \tau(P_2)$ follows from Lemma 4.6, since $\tau_0$ produces closed processes in the context $C_{\mathit{out}}^{\tau}$. Theorem 4.8 implies the other direction. ∎

5 Translations with Global MVars

In this section we investigate translations that do not use private MVars, but a fixed number of global MVars. We first motivate this investigation. The translation $\tau$ is quite complex, so we want to find out whether there are simpler translations. A further reason is that $\tau$ is not optimal: it generates one MVar per communication, which can be garbage-collected after the communication, but allocation and garbage collection require resources, and thus the translation $\tau$ may be inefficient in practice.

To systematically search for small global translations, we implemented an automated tool. It enumerates translations with global MVars (abstracting from many other aspects of the translation) and tries to refute their correctness. As it turns out, most of the small translations are refuted by the tool. Analyzing the correctness of the remaining translations can then be done by hand.

We only consider the question of how to encode the synchronous message passing of the $\pi$-calculus; the other aspects (encoding parallel composition, replication and the Stop-constant) are not discussed, and we assume that they are encoded as before (as the translation $\tau$ does). We also keep the main idea of translating a $\pi$-calculus channel into $\mathit{CH}$ by representing it as an object of a user-defined data type Channel, which consists of an MVar for transferring the message (which again is a Channel) and additional MVars for implementing a correct synchronization mechanism. For the translation $\tau$, we used a private MVar (created by the sender and transferred together with the message). Now we investigate translations where this mechanism is replaced by one or several public MVars, which are created once together with the channel object. To restrict the search space, only the synchronization mechanism of MVars (emptying and filling them) is used; we forbid transferring specific data (like numbers etc.). Hence, we restrict these MVars (which we call check-MVars) to be of type MVar (). Such MVars are comparable to binary semaphores, where filling and emptying correspond to the operations signal and wait. In summary, we analyze translations of $\pi$-calculus channels into a $\mathit{CH}$-data type Channel, defined in Haskell-syntax as

data Channel = Chan (MVar Channel) (MVar ()) ... (MVar ())

A $\pi$-calculus channel $x$ is represented as a $\mathit{CH}$-binding $x = \texttt{Chan}~\mathit{content}~\mathit{check}_1~\ldots~\mathit{check}_n$, where $\mathit{content}, \mathit{check}_1, \ldots, \mathit{check}_n$ are appropriately initialized (i.e. empty) MVars. The MVars are public (or global), since all processes that know $x$ have access to the components of the channel. After fixing this representation of a $\pi$-channel in $\mathit{CH}$, the task is to translate the input- and output-actions $x(y)$ and $\overline{x}z$ into $\mathit{CH}$-programs such that the interaction reduction is performed correctly and synchronously. We call the translation of $x(y)$ the receiver (program) and the translation of $\overline{x}z$ the sender (program). As a simplification, we restrict the sender and the receiver to the following operations:

$\mathtt{putS}$:

The sender puts its message into the content-MVar of the channel. It represents the $\mathit{CH}$-expression $\text{case}_{\texttt{Channel}}~x~\text{of}~(\text{Chan}~c~a_1 \ldots a_n~\texttt{->}~\text{putMVar}~c~z \mathrel{\texttt{>>}} e)$, where $e$ is the remaining program of the sender. The operation occurs exactly once in the sender program. We write it as $\mathtt{putS}_x~z$, or as $\mathtt{putS}$ if $x$ and $z$ are clear.

$\mathtt{takeS}$:

The receiver takes the message from the content-MVar of channel $x$ and replaces the name $y$ by the received name in the subsequent program. The operation occurs exactly once in the receiver program. We write it as $\mathtt{takeS}_x~y$, or as $\mathtt{takeS}$ if $x$ and $y$ are clear. It represents the $\mathit{CH}$-expression $\text{case}_{\texttt{Channel}}~x~\text{of}~(\text{Chan}~c~a_1 \ldots a_n~\texttt{->}~\text{takeMVar}~c \mathrel{\texttt{>>=}} \lambda y.e)$, where $e$ is the remaining program of the receiver. In $\mathbf{do}$-notation, we write $\mathbf{do}~\{y \leftarrow \mathtt{takeS}_x; e\}$ to abbreviate this expression.

$\mathtt{putC}$ and $\mathtt{takeC}$:

The sender and the receiver may synchronize on a check-MVar $\mathit{check}_i$ by putting $()$ into it or by emptying it. These operations are written as $\mathtt{putC}_x^i$ and $\mathtt{takeC}_x^i$, or as $\mathtt{putC}^i, \mathtt{takeC}^i$ if the name $x$ is clear; we write $\mathtt{putC}$ and $\mathtt{takeC}$ if there is only one check-MVar. Let $e$ be the remaining program of the sender or receiver. Then $\mathtt{putC}_x^i$ represents the $\mathit{CH}$-expression $\text{case}_{\texttt{Channel}}~x~\text{of}~(\text{Chan}~c~a_1 \ldots a_n~\texttt{->}~\text{putMVar}~a_i~() \mathrel{\texttt{>>}} e)$, and $\mathtt{takeC}_x^i$ represents $\text{case}_{\texttt{Channel}}~x~\text{of}~(\text{Chan}~c~a_1 \ldots a_n~\texttt{->}~\text{takeMVar}~a_i \mathrel{\texttt{>>}} e)$. (A Haskell sketch of these four operations for $n = 1$ is given below.)
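The following is a minimal Haskell sketch of this representation for the case of a single check-MVar ($n = 1$); the top-level names putS, takeS, putC and takeC mirror the abstract operations and are our own notation, not part of any library or of the formal translation.

  import Control.Concurrent.MVar

  -- Channel with one content-MVar and one check-MVar (n = 1).
  data Channel = Chan (MVar Channel) (MVar ())

  putS :: Channel -> Channel -> IO ()
  putS (Chan c _) z = putMVar c z       -- the sender deposits its message

  takeS :: Channel -> IO Channel
  takeS (Chan c _) = takeMVar c         -- the receiver fetches the message

  putC, takeC :: Channel -> IO ()
  putC  (Chan _ a) = putMVar a ()       -- fill the check-MVar (signal)
  takeC (Chan _ a) = takeMVar a         -- empty the check-MVar (wait)

A send- or receive-program is then a do-sequence of these actions in the IO monad.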

We restrict our search to translations where the sender and the receiver programs are sequences of the above operations, assuming that they are independent of the channel name $x$. With this restriction, we can abstractly write the translation of the sender and the receiver as a pair of sequences in which only $\mathtt{putS}$-, $\mathtt{takeS}$-, $\mathtt{putC}^i$- and $\mathtt{takeC}^i$-operations occur. We make some more restrictions:

Definition 5.1.

Let $n > 0$ be the number of check-MVars. A standard global synchronized-to-buffer translation (or gstb-translation) is a pair $(T_{\mathit{send}}, T_{\mathit{receive}})$ of a send-sequence $T_{\mathit{send}}$ and a receive-sequence $T_{\mathit{receive}}$ consisting of $\mathtt{putS}$-, $\mathtt{takeS}$-, $\mathtt{putC}^i$- and $\mathtt{takeC}^i$-operations, where the send-sequence contains $\mathtt{putS}$ once, the receive-sequence contains $\mathtt{takeS}$ once, and for every $\mathtt{putC}^i$-action in $(T_{\mathit{send}}, T_{\mathit{receive}})$ there is also a $\mathtt{takeC}^i$-action in $(T_{\mathit{send}}, T_{\mathit{receive}})$. W.l.o.g. we assume that in the send-sequence the indices $i$ are ascending, i.e. if $\mathtt{putC}^i$ or $\mathtt{takeC}^i$ occurs before $\mathtt{putC}^j$ or $\mathtt{takeC}^j$, then $i < j$ holds.

We often say translation instead of gstb-translation, if this is clear from the context.

Definition 5.2.

Let $T = (T_{\mathit{send}}, T_{\mathit{receive}})$ be a gstb-translation. We write $T_{\mathit{send}}^{x,y}$ for the program $T_{\mathit{send}}$ instantiated for $\overline{x}y$, i.e. $\mathtt{putS}$ is $\mathtt{putS}_x~y$ and all other operations are indexed with $x$. We write $T_{\mathit{receive}}^{x,y}$ for the program $T_{\mathit{receive}}$ instantiated for $x(y)$, i.e. $\mathtt{takeS}$ is $\mathtt{takeS}_x~y$ and all other operations are indexed with $x$.

$$\begin{array}{@{}lcl@{}}
\phi_{0,T}(P) &=& \xLongleftarrow{\text{main}} \mathbf{do}~\{\mathit{stop} \leftarrow \text{newMVar}~(); \text{forkIO}~\phi_T(P); \text{putMVar}~\mathit{stop}~()\}\\
\phi_T(\overline{x}y.P) &=& \mathbf{do}~\{T_{\mathit{send}}^{x,y}; \phi_T(P)\}\\
\phi_T(x(y).P) &=& \mathbf{do}~\{T_{\mathit{receive}}^{x,y}; \phi_T(P)\}\\
\phi_T(P \mid Q) &=& \mathbf{do}~\{\text{forkIO}~\phi_T(Q); \phi_T(P)\}\\
\phi_T(\nu x.P) &=& \mathbf{do}~\{\mathit{contx} \leftarrow \text{newEmptyMVar}; \mathit{checkx}_1 \leftarrow \text{newEmptyMVar}; \ldots; \mathit{checkx}_n \leftarrow \text{newEmptyMVar};\\
&& \phantom{\mathbf{do}~\{}\text{letrec}~x = \text{Chan}~\mathit{contx}~\mathit{checkx}_1 \ldots \mathit{checkx}_n~\text{in}~\phi_T(P)\}\\
\phi_T(0) &=& \text{return}~()\\
\phi_T(\text{Stop}) &=& \text{takeMVar}~\mathit{stop}\\
\phi_T(!P) &=& \text{letrec}~f = \mathbf{do}~\{\text{forkIO}~\phi_T(P); f\}~\text{in}~f
\end{array}$$

Figure 7: Induced translations $\phi_T$ and $\phi_{0,T}$, where $T = (T_{\mathit{send}}, T_{\mathit{receive}})$ uses $n$ check-MVars

The induced translations $\phi_{0,T}$ and $\phi_T$ of $(T_{\mathit{send}}, T_{\mathit{receive}})$ are defined in Fig. 7.
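To illustrate how the clauses of Fig. 7 interact, consider (as our own example) a translation $T$ with one check-MVar and the closed process $P = \nu x.\nu y.(\overline{x}y.0 \mid x(z).\text{Stop})$. Unfolding the clauses for $\nu$, parallel composition, output, input, $0$ and Stop yields
$$\begin{array}{lcl}
\phi_T(P) &=& \mathbf{do}~\{\mathit{contx} \leftarrow \text{newEmptyMVar}; \mathit{checkx}_1 \leftarrow \text{newEmptyMVar};\\
&& \phantom{\mathbf{do}~\{}\text{letrec}~x = \text{Chan}~\mathit{contx}~\mathit{checkx}_1~\text{in}\\
&& \phantom{\mathbf{do}~\{}\mathbf{do}~\{\mathit{conty} \leftarrow \text{newEmptyMVar}; \mathit{checky}_1 \leftarrow \text{newEmptyMVar};\\
&& \phantom{\mathbf{do}~\{\mathbf{do}~\{}\text{letrec}~y = \text{Chan}~\mathit{conty}~\mathit{checky}_1~\text{in}\\
&& \phantom{\mathbf{do}~\{\mathbf{do}~\{}\mathbf{do}~\{\text{forkIO}~(\mathbf{do}~\{T_{\mathit{receive}}^{x,z}; \text{takeMVar}~\mathit{stop}\}); \mathbf{do}~\{T_{\mathit{send}}^{x,y}; \text{return}~()\}\}\}\}
\end{array}$$
and $\phi_{0,T}(P)$ additionally wraps this program into the main thread of Fig. 7, which creates the MVar $\mathit{stop}$, forks $\phi_T(P)$, and ends with $\text{putMVar}~\mathit{stop}~()$.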

The induced translations are defined similarly to the translations $\tau_0$ and $\tau$; the difference is the representation of the channel. The translation of $\nu x$, $x(y)$, and $\overline{x}y$ differs, but the other cases remain the same. Since $\phi_{0,T}(P) = C_{\mathit{out}}^{\tau}[\phi_T(P)]$, the same arguments as for Theorem 4.8 apply, and we have:

Proposition 5.3.

If $\phi_T$ is closed convergence-equivalent, then $\phi_T$ is adequate.

An execution of a translation $(T_{\mathit{send}}, T_{\mathit{receive}})$ for a name $x$ is a simulation of the abstract program, i.e. of a program that starts with empty MVars $x, x_1, \ldots, x_n$ and is an interleaved sequence of actions from the send- and receive-sequences $T_{\mathit{send}}$ and $T_{\mathit{receive}}$, respectively.

To speak about translations we introduce further classifications: We say that a translation allows multiple uses if a check-MVar is used more than once, i.e. the sender or the receiver may contain $\mathtt{takeC}^i$ or $\mathtt{putC}^i$ more than once for the same $i$. A translation has the interprocess check restriction if, for every $i$, $\mathtt{takeC}^i$ and $\mathtt{putC}^i$ do not both occur in $T_{\mathit{send}}$ and do not both occur in $T_{\mathit{receive}}$.

Definition 5.4.

A translation $T = (T_{\mathit{send}}, T_{\mathit{receive}})$ according to Definition 5.1 is

  • executable if there is a deadlock-free execution of $T$;

  • communicating if $T_{\mathit{send}}$ contains at least one $\mathtt{takeC}^i$-action;

  • overlap-free if, for a fixed name $x$ and starting with empty MVars, every interleaved (concurrent) execution of $(T_{\mathit{send}}, T_{\mathit{receive}})$ cannot be disturbed by starting another execution of $T_{\mathit{send}}$ and/or $T_{\mathit{receive}}$. More formally, let $((s_1;\ldots;s_i);(r_1;\ldots;r_j))$ and $((s'_1;\ldots;s'_i);(r'_1;\ldots;r'_j))$ be two copies of $(T_{\mathit{send}}, T_{\mathit{receive}})$ for a fixed name $x$. We call a command $a_k$ from one of the four sequences an $a$-action, for $a \in \{s,s',r,r'\}$. The translation $T$ is overlap-free if every execution of the four sequences can be split into a prefix and a suffix (called parts in the following) such that one of the following properties holds:

    1. One part contains only $s$- and $r$-actions, and the other part contains only $s'$- and $r'$-actions.

    2. One part contains only $s$- and $r'$-actions, and the other part contains only $s'$- and $r$-actions.

We implemented a tool to enumerate translations and to test whether each translation preserves and reflects may- and should-convergence for a (given) finite set of processes. Hence, our tool can refute the correctness of translations, but it can also output (usually small) sets of translations that are not refuted and thus are promising candidates for correct translations. The above-mentioned parallel execution of $T_{\mathit{send}}$ and $T_{\mathit{receive}}$ alone is not sufficient to refute most of the translations, since it corresponds to the evaluation of the $\pi$-program $\nu x.(x(y) \mid \overline{x}z)$ (which is must-divergent). Thus, we apply the translation to a subset of $\pi$-processes which we view as critical and for which we can automatically decide may- and should-convergence (before and after the translation). We consider only $\pi$-programs of the form $\nu x_1,\ldots,x_n.P$ where $P$ contains only $0$, Stop, parallel composition, and input- and output-prefixes. These programs are replication-free with all $\nu$-binders at the top, and hence they terminate. In the following we omit the $\nu$-binders, but always mean them to be present. We also implemented techniques to generate all such programs up to a given bound on the program size.

Our simulation tool (available via https://gitlab.com/davidsabel/refute-pi) can execute all possible evaluations of those $\pi$-processes and, since all evaluation paths are finite, it can check may- and should-convergence of the $\pi$-program. For the translated program, we do not generate a full $\mathit{CH}$-program, but a sequence of sequences of $\mathtt{takeS}_x$-, $\mathtt{putS}_x~z$-, $\mathtt{takeC}_x^i$-, $\mathtt{putC}_x^i$- and Stop-operations, obtained by applying the translation to all action prefixes in the $\pi$-program and by encoding Stop as Stop and $0$ as the empty sequence. We get a sequence of sequences, since there are several threads and each thread is represented by one sequence. For executing the translated program, we simulate the global store (of MVars) and execute all possible interleavings, where we check may- and should-convergence by testing whether Stop eventually occurs at the beginning of a sequence. This simulates the behavior of the real $\mathit{CH}$-program in a controllable manner.

Given the encoding of the sender- and receiver-programs and a $\pi$-calculus process $P$, we

  1. translate $P$ with the encodings into a sequence of sequences consisting of $\mathtt{takeS}_x$-, $\mathtt{putS}_x~z$-, $\mathtt{takeC}_x^i$-, $\mathtt{putC}_x^i$- and Stop-operations;

  2. simulate the execution on all interleavings;

  3. test may- and should-convergence of the original $\pi$-program $P$ as well as of the encoded program (w.r.t. the simulation);

  4. compare the convergence behavior before and after the translation. If there is a difference, then $P$ is a counter-example for the correctness of the encoding.

Example 5.5.

Let us consider the gstb-translation $(T_{\mathit{send}}, T_{\mathit{receive}}) = ([\mathtt{takeC}, \mathtt{putS}], [\mathtt{putC}, \mathtt{takeS}])$ and the $\pi$-process $P = \nu x,y,z,w.(\overline{x}y.x(z).\text{Stop} \mid x(w).0)$. Our tool recognizes that $P{\uparrow}$ and $P{\Uparrow}$ hold, since $P$ reduces to the must-divergent process $\nu x,z.(x(z).\text{Stop})$ and there are no other reduction possibilities.

Applying $(T_{\mathit{send}}, T_{\mathit{receive}})$ to $P$ yields the abstract program

$q := [[\mathtt{takeC}_x, \mathtt{putS}_x~y, \mathtt{putC}_x, \mathtt{takeS}_x~z, \text{Stop}], [\mathtt{putC}_x, \mathtt{takeS}_x~w]]$.

For $q$, our tool inspects all executions. Among them is the sequence

$\mathtt{putC}_x; \mathtt{takeC}_x; \mathtt{putS}_x~y; \mathtt{putC}_x; \mathtt{takeS}_x~z; \text{Stop}$

which can be executed and ends in Stop. Thus $q$ is may-convergent, and hence the process $P$ is a counter-example that refutes the correctness of the translation.
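To make the simulation concrete, here is a minimal Haskell sketch of the interleaving check described above; it handles a single channel only, and the data type Action as well as the names step, mayConv and shouldConv are our own illustration, not taken from the actual tool.

  import qualified Data.Map as M

  -- Abstract actions over a single channel with check-MVars 1..n.
  data Action = PutS | TakeS | PutC Int | TakeC Int | Stop
    deriving (Eq, Show)

  type Thread = [Action]
  type Store  = M.Map String Bool            -- True = full, False = empty
  type State  = (Store, [Thread])

  -- Execute the first action of a thread if it is enabled; Nothing if the
  -- thread is finished, blocked, or sits at Stop (Stop is only observed).
  step :: Store -> Thread -> Maybe (Store, Thread)
  step _  []         = Nothing
  step st (a : rest) = case a of
      Stop    -> Nothing
      PutS    -> put   "content"
      TakeS   -> take' "content"
      PutC i  -> put   ("check" ++ show i)
      TakeC i -> take' ("check" ++ show i)
    where
      full k  = M.findWithDefault False k st
      put k   | full k    = Nothing                         -- putMVar blocks on a full MVar
              | otherwise = Just (M.insert k True st, rest)
      take' k | full k    = Just (M.insert k False st, rest)
              | otherwise = Nothing                         -- takeMVar blocks on an empty MVar

  -- A state is successful if some thread has Stop as its next action.
  successful :: State -> Bool
  successful (_, ts) = any isStop ts
    where isStop (Stop : _) = True
          isStop _          = False

  -- All states reachable by executing one enabled action of one thread.
  successors :: State -> [State]
  successors (st, ts) =
    [ (st', take i ts ++ t' : drop (i + 1) ts)
    | (i, t) <- zip [0 ..] ts, Just (st', t') <- [step st t] ]

  -- May-convergence: some reachable state is successful.  Should-convergence:
  -- every reachable state may-converges.  Both searches terminate here, since
  -- the considered processes are replication-free and all paths are finite.
  mayConv, shouldConv :: State -> Bool
  mayConv    s = successful s || any mayConv (successors s)
  shouldConv s = mayConv s && all shouldConv (successors s)

  -- The abstract program q of Example 5.5 (one check-MVar):
  exampleQ :: State
  exampleQ = (M.empty, [ [TakeC 1, PutS, PutC 1, TakeS, Stop]
                       , [PutC 1, TakeS] ])

For exampleQ, mayConv returns True, while the source $\pi$-process of Example 5.5 is neither may- nor should-convergent; this mirrors how the tool refutes the translation.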

The case of no check-MVar leads to a single possible translation $([\mathtt{putS}],[\mathtt{takeS}])$, which means that $\overline{x}z$ is translated into $\mathtt{putS}_x~z$ and $x(y)$ is translated into $\mathtt{takeS}_x~y$. This translation is not correct: for instance, the $\pi$-process $\overline{x}z.x(y).\text{Stop}$ is neither may- nor should-convergent, but its translation (written as an abstract program) is $[[\mathtt{putS}_x~z, \mathtt{takeS}_x~y, \text{Stop}]]$, i.e. it consists of one process that is may- and should-convergent (since $\mathtt{putS}_x~z; \mathtt{takeS}_x~y; \text{Stop}$ is the only evaluation sequence and its execution ends in Stop). Note that the translation into $\mathit{CH}$ generates two threads: the main thread, which blocks on $\text{putMVar}~\mathit{stop}~()$ until the MVar $\mathit{stop}$ is emptied by a Stop-operation, and a concurrent thread that performs the above operations.

5.1 Translations with Interprocess Check Restriction

We consider translations with the interprocess check restriction (each pair $\mathtt{takeC}^i$, $\mathtt{putC}^i$ must be distributed between the sender and the receiver). There are $n! \cdot 2^n \cdot (n+1)^2$ different translations for $n$ check-MVars; for $n = 1, 2, 3$ this yields $8$, $72$, and $768$ translations, respectively.

Translation (sender, receiver) | Counter-example ($\pi$-process) | ${\downarrow}$ before | ${\Downarrow}$ before | ${\downarrow}$ after | ${\Downarrow}$ after
$([\mathtt{putC},\mathtt{putS}],[\mathtt{takeC},\mathtt{takeS}])$ | $\overline{x}y.x(y).\text{Stop}$ | N | N | Y | Y
$([\mathtt{putC},\mathtt{putS}],[\mathtt{takeS},\mathtt{takeC}])$ | $\overline{x}y.x(y).\text{Stop}$ | N | N | Y | Y
$([\mathtt{putS},\mathtt{putC}],[\mathtt{takeC},\mathtt{takeS}])$ | $\overline{x}y.x(y).\text{Stop}$ | N | N | Y | Y
$([\mathtt{putS},\mathtt{putC}],[\mathtt{takeS},\mathtt{takeC}])$ | $\overline{x}y.x(y).\text{Stop}$ | N | N | Y | Y
$([\mathtt{takeC},\mathtt{putS}],[\mathtt{putC},\mathtt{takeS}])$ | $\overline{x}y.x(z).\text{Stop} \mid x(w)$ | N | N | Y | N
$([\mathtt{takeC},\mathtt{putS}],[\mathtt{takeS},\mathtt{putC}])$ | $\overline{x}y.\text{Stop} \mid x(y)$ | Y | Y | N | N
$([\mathtt{putS},\mathtt{takeC}],[\mathtt{putC},\mathtt{takeS}])$ | $\overline{x}y.x(z).\text{Stop} \mid x(w)$ | N | N | Y | N
$([\mathtt{putS},\mathtt{takeC}],[\mathtt{takeS},\mathtt{putC}])$ | $\overline{x}z.\overline{z}a.\text{Stop} \mid \overline{x}w.\overline{w}a.\text{Stop} \mid x(y).y(u)$ | Y | Y | Y | N

Table 1: Translations using one check-MVar and with the interprocess check restriction

For a single check-MVar, all 8 translations are rejected by our tool; Table 1 shows the translations and the obtained counter-examples. For 2 check-MVars, our tool refutes all 72 translations. Compared to Table 1, two further $\pi$-programs are used as counter-examples. However, already the two programs $\overline{x}y.\text{Stop} \mid x(y)$ and $\overline{x}y.x(z).\overline{z}q \mid x(z) \mid x(z) \mid \overline{x}z \mid y(u).\text{Stop}$ suffice to refute all 72 translations.

Theorem 5.6.

There is no valid gstb-translation with the interprocess check restriction for less than three check-MVars.

A reason for the failure of translations with less than three check-MVars may be:

Theorem 5.7.

There is no executable, communicating, and overlap-free gstb-translation with the interprocess check restriction for $n < 3$.

Proof.

For $n = 1$, we check the translations in Table 1. The first four are non-communicating. For the translation $([\mathtt{takeC},\mathtt{putS}],[\mathtt{takeS},\mathtt{putC}])$ a deadlock occurs. For $([\mathtt{takeC},\mathtt{putS}],[\mathtt{putC},\mathtt{takeS}])$, after $\mathtt{putC}$, $\mathtt{takeC}$ we can execute $\mathtt{putC}$ again. For $([\mathtt{putS},\mathtt{takeC}],[\mathtt{takeS},\mathtt{putC}])$, after executing $\mathtt{putS}$, $\mathtt{takeS}$ we can execute $\mathtt{putS}$ again. For $([\mathtt{putS},\mathtt{takeC}],[\mathtt{putC},\mathtt{takeS}])$, after $\mathtt{putC}$, $\mathtt{putS}$, $\mathtt{takeC}$ we can execute $\mathtt{putC}$ again. For $n = 2$, the simulator finds no executable, communicating, and overlap-free translation: 18 translations are non-communicating, 21 lead to a deadlock, and 33 may lead to an overlap. ∎

For 3 check-MVars, our tool rejects 762 out of 768 translations (using the same counter-examples as for 2 check-MVars), and the following 6 translations remain:

$\mathfrak{T}_1 = ([\mathtt{putS},\mathtt{putC}^1,\mathtt{takeC}^2,\mathtt{putC}^3],[\mathtt{takeC}^1,\mathtt{putC}^2,\mathtt{takeC}^3,\mathtt{takeS}])$
$\mathfrak{T}_2 = ([\mathtt{takeC}^1,\mathtt{putS},\mathtt{takeC}^2,\mathtt{takeC}^3],[\mathtt{putC}^3,\mathtt{putC}^1,\mathtt{takeS},\mathtt{putC}^2])$
$\mathfrak{T}_3 = ([\mathtt{putC}^1,\mathtt{putS},\mathtt{takeC}^2,\mathtt{putC}^3],[\mathtt{takeS},\mathtt{putC}^2,\mathtt{takeC}^3,\mathtt{takeC}^1])$
$\mathfrak{T}_4 = ([\mathtt{putC}^1,\mathtt{putC}^2,\mathtt{takeC}^3,\mathtt{putS}],[\mathtt{takeC}^2,\mathtt{putC}^3,\mathtt{takeS},\mathtt{takeC}^1])$
$\mathfrak{T}_5 = ([\mathtt{takeC}^1,\mathtt{putS},\mathtt{takeC}^2,\mathtt{takeC}^3],[\mathtt{putC}^1,\mathtt{putC}^2,\mathtt{takeS},\mathtt{putC}^3])$
$\mathfrak{T}_6 = ([\mathtt{putC}^1,\mathtt{takeC}^2,\mathtt{putS},\mathtt{takeC}^3],[\mathtt{takeC}^1,\mathtt{putC}^2,\mathtt{takeS},\mathtt{putC}^3])$
Proposition 5.8.

The translations $\mathfrak{T}_1, \mathfrak{T}_2, \mathfrak{T}_3$, and $\mathfrak{T}_4$ are executable, communicating, and overlap-free, whereas the translations $\mathfrak{T}_5$ and $\mathfrak{T}_6$ are executable and communicating, but overlapping.

Proof.

We only consider overlaps. For $\mathfrak{T}_1$ to $\mathfrak{T}_4$, the next send or receive can only start after all 8 actions are finished. For $\mathfrak{T}_5$ and $\mathfrak{T}_6$, after executing $\mathtt{putC}^1$, $\mathtt{takeC}^1$, we can again execute $\mathtt{putC}^1$. ∎

In [(30)] we argue that the induced translation $\phi_{\mathfrak{T}_1}$ leaves may- and should-convergence invariant. The main aid in the reasoning is that, by Proposition 5.8, there is no unintended interleaving of send- and receive-sequences. Applying Proposition 5.3 then shows:

Theorem 5.9.

Translation $\phi_{\mathfrak{T}_1}$ is adequate.
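For concreteness, the send- and receive-programs of $\mathfrak{T}_1$ can be rendered in Haskell as follows; this is our own sketch, written directly on the content-MVar $c$ and the check-MVars $a_1, a_2, a_3$ rather than via the Channel data type.

  import Control.Concurrent.MVar

  -- T1 = ([putS, putC^1, takeC^2, putC^3], [takeC^1, putC^2, takeC^3, takeS])
  send1 :: MVar a -> MVar () -> MVar () -> MVar () -> a -> IO ()
  send1 c a1 a2 a3 z = do
    putMVar c z       -- putS   : deposit the message
    putMVar a1 ()     -- putC^1 : signal the receiver
    takeMVar a2       -- takeC^2: wait for the receiver's answer
    putMVar a3 ()     -- putC^3 : complete the handshake

  receive1 :: MVar a -> MVar () -> MVar () -> MVar () -> IO a
  receive1 c a1 a2 a3 = do
    takeMVar a1       -- takeC^1: wait for the sender's signal
    putMVar a2 ()     -- putC^2 : answer the sender
    takeMVar a3       -- takeC^3: wait for the handshake to complete
    takeMVar c        -- takeS  : fetch the message

No further send or receive on the same channel can start before all eight actions have been performed, which reflects the overlap-freedom stated in Proposition 5.8.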

For 4 check-MVars, our tool refutes 9266 of the 9600 translations, and 334 candidates for correct translations remain.

5.2 Dropping the Interprocess Check Restriction

We now consider gstb-translations without the interprocess check restriction, i.e. $\mathtt{putC}^i$ and $\mathtt{takeC}^i$ may both occur in the sender program (or both in the receiver program, resp.). If we allow one check-MVar without reuse, then there are 20 candidate translations. All are refuted by our simulation. Allowing reuse of the single check-MVar does not seem to help in constructing a correct translation: we simulated this for up to 6 uses, leading to 420 candidates for a correct translation, and our simulation refutes all of them.

Conjecture 5.10.

We conjecture that there is no correct translation for one check-MVar where re-uses are permitted and the interprocess check restriction is dropped, i.e. $T_{\mathit{send}}$ is a word over $\{\mathtt{putS}, \mathtt{putC}, \mathtt{takeC}\}$ and $T_{\mathit{receive}}$ is a word over $\{\mathtt{takeS}, \mathtt{putC}, \mathtt{takeC}\}$, where $\mathtt{putS}$ and $\mathtt{takeS}$ occur exactly once. (In the meantime, we have obtained a proof, which is not yet published.)

For two check-MVars, a single use of each, and without the interprocess check restriction, there are 420 translations. Our tool refutes all except two: $\mathfrak{T}_7 = ([\mathtt{putC}^1,\mathtt{putS},\mathtt{takeC}^2,\mathtt{takeC}^1],[\mathtt{takeS},\mathtt{putC}^2])$ and $\mathfrak{T}_8 = ([\mathtt{takeC}^1,\mathtt{putS}],[\mathtt{putC}^2,\mathtt{putC}^1,\mathtt{takeS},\mathtt{takeC}^2])$. In $\mathfrak{T}_7$, the check-MVar that the sender fills first and empties last (check-MVar 1) acts as a mutex for the senders, ensuring that only one sender can operate at a time; $\mathfrak{T}_8$ uses check-MVar 2 in the same way on the receiver side.
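A Haskell rendering of $\mathfrak{T}_7$ makes this mutex reading explicit; again this is our own sketch, written directly on the content-MVar $c$ and the check-MVars $a_1$ (senders' mutex) and $a_2$ (acknowledgement).

  import Control.Concurrent.MVar

  -- T7 = ([putC^1, putS, takeC^2, takeC^1], [takeS, putC^2])
  send7 :: MVar a -> MVar () -> MVar () -> a -> IO ()
  send7 c a1 a2 z = do
    putMVar a1 ()     -- putC^1 : acquire the senders' mutex (blocks while a1 is full)
    putMVar c z       -- putS   : deliver the message
    takeMVar a2       -- takeC^2: wait for the receiver's acknowledgement
    takeMVar a1       -- takeC^1: release the senders' mutex

  receive7 :: MVar a -> MVar () -> IO a
  receive7 c a2 = do
    y <- takeMVar c   -- takeS  : fetch the message
    putMVar a2 ()     -- putC^2 : acknowledge receipt, unblocking the sender
    return y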

Proposition 5.11.

The translations $\mathfrak{T}_7$ and $\mathfrak{T}_8$ are executable, communicating, and overlap-free.

Proof.

The translations are executable and communicating. For $\mathfrak{T}_7$, the actions $\mathtt{putC}^1$, $\mathtt{putS}$ and $\mathtt{takeS}$ are performed in this order. An additional sender cannot execute its first command before the original sender performs $\mathtt{takeC}^1$, and this in turn is only possible after the receiver program has finished. An additional receiver can only be executed after a new $\mathtt{putS}$ is performed, which cannot be done by the current sender and receivers. For $\mathfrak{T}_8$, the actions $\mathtt{putC}^2$, $\mathtt{putC}^1$ and $\mathtt{takeC}^1$ are performed in this order. An additional receiver can only start after $\mathtt{takeC}^2$ was executed by the original receiver, which can only happen after the original sender and receiver programs are fully evaluated. An additional sender can only start after $\mathtt{putC}^1$ has been executed again, but the current sender and receiver cannot execute this command. ∎

The induced translation $\phi_{\mathfrak{T}_7}$ is (closed) convergence-equivalent [(30)]. With Proposition 5.3 this shows:

Theorem 5.12.

Translation $\phi_{\mathfrak{T}_7}$ is adequate.

We are convinced that the same holds for $\mathfrak{T}_8$. We conclude with the statistics of our search for translations without the interprocess check restriction: for 3 check-MVars, there are 10080 translations and 9992 are refuted, i.e. 98 are potentially correct. One of them is $([\mathtt{putC}^1,\mathtt{putS},\mathtt{takeC}^2,\mathtt{takeC}^1],[\mathtt{putC}^3,\mathtt{takeS},\mathtt{putC}^2,\mathtt{takeC}^3])$, which is quite intuitive: check-MVar 1 is used as a mutex for all senders on the same channel, check-MVar 3 as a mutex for all receivers, and check-MVar 2 to send an acknowledgement. For 4 check-MVars, there are 277200 translations and 273210 are refuted, i.e. 3990 are potentially correct.

6 Discussion and Conclusion

We investigated translating the $\pi$-calculus into $\mathit{CH}$ and showed correctness and adequacy of a translation $\tau_0$ that uses a private MVar for every translated communication. For translations with global MVars, we started an investigation into exhibiting (potentially) correct translations. We identified several minimal potentially correct translations and characterized all incorrect "small" translations. For two particular global translations, we have shown that they are convergence-equivalent and proved their adequacy on open processes. The exact forms of these translations were found by our tool, which searches for translations and tries to refute their correctness. The tool showed that there is no correct gstb-translation with the interprocess check restriction for fewer than 3 check-MVars. We may also consider extended variants of the $\pi$-calculus. We are convinced that adding recursion and sums can easily be built into the translation, while it might be challenging to encode mixed sums or (name-)matching operators. For name-matching operators, our current translation would require testing ordinary bindings in $\mathit{CH}$ for equality, which is not available in core-Haskell. Solutions may either use an adapted translation or a target language that supports observable sharing [(7), (9)]. The translation of mixed sums into $\mathit{CH}$ appears to require more complex translations, where the send- and receive-parts are not linear lists of actions.

Acknowledgments

We thank the anonymous reviewers for their valuable comments. In particular, we thank an anonymous reviewer for advice on improving the construction of translations, and for providing the counter-example in the last row of Table 1.

References

  • (2) Martín Abadi & Andrew D. Gordon (1997): A Calculus for Cryptographic Protocols: The Spi Calculus. In: CCS 1997, ACM, pp. 36–47, 10.1145/266420.266432.
  • (3) Richard Banach, J. Balázs & George A. Papadopoulos (1995): A Translation of the Pi-Calculus Into MONSTR. J.UCS 1(6), pp. 339–398, 10.3217/jucs-001-06-0339.
  • (4) Gérard Boudol (1992): Asynchrony and the Pi-calculus. Technical Report Research Report RR-1702,inria-00076939, INRIA, France. Available at https://hal.inria.fr/inria-00076939.
  • (5) Mauricio Cano, Jaime Arias & Jorge A. Pérez (2017): Session-Based Concurrency, Reactively. In: FORTE 2017, LNCS 10321, Springer, pp. 74–91, 10.1007/978-3-319-60225-7_6.
  • (6) Avik Chaudhuri (2009): A concurrent ML library in concurrent Haskell. In: ICFP 2009, ACM, pp. 269–280, 10.1145/1596550.1596589.
  • (7) Koen Claessen & David Sands (1999): Observable Sharing for Functional Circuit Description. In: ASIAN 1999, LNCS 1742, Springer, pp. 62–73, 10.1007/3-540-46674-6_7.
  • (8) Cédric Fournet & Georges Gonthier (2002): The Join Calculus: A Language for Distributed Mobile Programming. In: APPSEM 2000, LNCS 2395, Springer, pp. 268–332, 10.1007/3-540-45699-6_6.
  • (9) Andy Gill (2009): Type-safe observable sharing in Haskell. In: Haskell 2009, ACM, pp. 117–128, 10.1145/1596638.1596653.
  • (10) Rob van Glabbeek, Ursula Goltz, Christopher Lippert & Stephan Mennicke (2019): Stronger Validity Criteria for Encoding Synchrony. In: The Art of Modelling Computational Systems: A Journey from Logic and Concurrency to Security and Privacy - Essays Dedicated to Catuscia Palamidessi on the Occasion of Her 60th Birthday, LNCS 11760, Springer, pp. 182–205, 10.1007/978-3-030-31175-9_11.
  • (11) Rob J. van Glabbeek (2018): On the validity of encodings of the synchronous in the asynchronous $\pi$-calculus. Inf. Process. Lett. 137, pp. 17–25, 10.1016/j.ipl.2018.04.015.
  • (12) Daniele Gorla (2010): Towards a unified approach to encodability and separation results for process calculi. Inf. Comput. 208(9), pp. 1031–1053, 10.1016/j.ic.2010.05.002.
  • (13) Kohei Honda & Mario Tokoro (1991): An Object Calculus for Asynchronous Communication. In: ECOOP 1991, Springer-Verlag, pp. 133–147, 10.1007/BFb0057019.
  • (14) Cosimo Laneve (1996): On testing equivalence: May and Must Testing in the Join-Calculus. Technical Report UBLCS 96-04, University of Bologna. Available at https://www.cs.unibo.it/~laneve/papers/laneve96may.pdf.
  • (15) Robin Milner (1999): Communicating and mobile systems - the Pi-calculus. Cambridge University Press.
  • (16) Robin Milner, Joachim Parrow & David Walker (1992): A Calculus of Mobile Processes, I & II. Inform. and Comput. 100(1), pp. 1–77, 10.1016/0890-5401(92)90008-4.
  • (17) Joachim Niehren, David Sabel, Manfred Schmidt-Schauß & Jan Schwinghammer (2007): Observational Semantics for a Concurrent Lambda Calculus with Reference Cells and Futures. Electron. Notes Theor. Comput. Sci. 173, pp. 313–337, 10.1016/j.entcs.2007.02.041.
  • (18) Dominic A. Orchard & Nobuko Yoshida (2016): Effects as sessions, sessions as effects. In: POPL 2016, ACM, pp. 568–581, 10.1145/2837614.2837634.
  • (19) Catuscia Palamidessi (1997): Comparing the Expressive Power of the Synchronous and the Asynchronous pi-calculus. In: POPL 1997, ACM Press, pp. 256–265, 10.1145/263699.263731.
  • (20) Catuscia Palamidessi (2003): Comparing The Expressive Power Of The Synchronous And Asynchronous Pi-Calculi. Math. Structures Comput. Sci. 13(5), pp. 685–719, 10.1017/S0960129503004043.
  • (21) Simon L. Peyton Jones, Andrew Gordon & Sigbjorn Finne (1996): Concurrent Haskell. In: POPL 1996, ACM, pp. 295–308, 10.1145/237721.237794.
  • (22) Corrado Priami (1995): Stochastic pi-Calculus. Comput. J. 38(7), pp. 578–589, 10.1093/comjnl/38.7.578.
  • (23) George Russell (2001): Events in Haskell, and How to Implement Them. In: ICFP 2001, ACM, pp. 157–168, 10.1145/507635.507655.
  • (24) David Sabel & Manfred Schmidt-Schauß (2011): A contextual semantics for Concurrent Haskell with futures. In: PPDP 2011, ACM, pp. 101–112, 10.1145/2003476.2003492.
  • (25) David Sabel & Manfred Schmidt-Schauß (2012): Conservative Concurrency in Haskell. In: LICS 2012, IEEE, pp. 561–570, 10.1109/LICS.2012.66.
  • (26) David Sabel & Manfred Schmidt-Schauß (2015): Observing Success in the Pi-Calculus. In: WPTE 2015, OASICS 46, pp. 31–46, 10.4230/OASIcs.WPTE.2015.31.
  • (27) Davide Sangiorgi & David Walker (2001): On Barbed Equivalences in pi-Calculus. In: CONCUR 2001, LNCS 2154, Springer, pp. 292–304, 10.1007/3-540-44685-0_20.
  • (28) Davide Sangiorgi & David Walker (2001): The $\pi$-calculus: a theory of mobile processes. Cambridge University Press.
  • (29) Manfred Schmidt-Schauß, Joachim Niehren, Jan Schwinghammer & David Sabel (2008): Adequacy of Compositional Translations for Observational Semantics. In: IFIP TCS 2008, IFIP 273, Springer, pp. 521–535, 10.1007/978-0-387-09680-3_35.
  • (30) Manfred Schmidt-Schauß & David Sabel (2020): Embedding the Pi-Calculus into a Concurrent Functional Programming Language. Frank report 60, Institut für Informatik. Fachbereich Informatik und Mathematik. J. W. Goethe-Universität Frankfurt am Main. Available at http://www.ki.informatik.uni-frankfurt.de/papers/frank/frank-60v5.pdf.
  • (31) Manfred Schmidt-Schauß, David Sabel & Nils Dallmeyer (2018): Sequential and Parallel Improvements in a Concurrent Functional Programming Language. In: PPDP 2018, ACM, pp. 20:1–20:13, 10.1145/3236950.3236952.
  • (32) Manfred Schmidt-Schauß, David Sabel, Joachim Niehren & Jan Schwinghammer (2015): Observational program calculi and the correctness of translations. Theor. Comput. Sci. 577, pp. 98–124, 10.1016/j.tcs.2015.02.027.
  • (33) Jan Schwinghammer, David Sabel, Manfred Schmidt-Schauß & Joachim Niehren (2009): Correctly translating concurrency primitives. In: ML 2009, ACM, pp. 27–38, 10.1145/1596627.1596633.
  • (34) Ping Yang, C. R. Ramakrishnan & Scott A. Smolka (2004): A logical encoding of the pi-calculus: model checking mobile processes using tabled resolution. STTT 6(1), pp. 38–66, 10.1007/s10009-003-0136-3.