
Deriving Distributive Laws for Graded Linear Types

Jack Hughes    Michael Vollmer    Dominic Orchard
School of Computing, University of Kent
Abstract

The recent notion of graded modal types provides a framework for extending type theories with fine-grained data-flow reasoning. The Granule language explores this idea in the context of linear types. In this practical setting, we observe that the presence of graded modal types can introduce an additional impediment when programming: when composing programs, it is often necessary to ‘distribute’ data types over graded modalities, and vice versa. In this paper, we show how to automatically derive these distributive laws as combinators for programming. We discuss the implementation and use of this automated deriving procedure in Granule, providing easy access to these distributive combinators. This work is also applicable to Linear Haskell (which retrofits Haskell with linear types via grading) and we apply our technique there to provide the same automatically derived combinators. Along the way, we discuss interesting considerations for pattern matching analysis via graded linear types. Lastly, we show how other useful structural combinators can also be automatically derived.

1 Introduction

Linear type systems capture and enforce the idea that some data travels a linear dataflow path through a program, being consumed exactly once. This is enforced by disallowing the structural rules of weakening and contraction on variables carrying such values. Non-linear dataflow is then typically captured by a modality !A characterising values/variables on which all structural rules are permitted [13]. This binary characterisation (linear versus non-linear) is made more fine-grained in Bounded Linear Logic (BLL) via a family of modalities !_r A where r is a polynomial term over ℕ capturing the maximum number of times, r, that a value A can be used [14]. The proof rules of BLL track upper bounds on non-linear use via these indices. Various works have generalised the indices of BLL’s modalities to arbitrary semirings [7, 12, 24] (often referred to as coeffect systems). These generalised systems provide a unified way to capture various program properties relating to dataflow via graded necessity modalities □_r A where r is drawn from a semiring. The functional programming language Granule (https://granule-project.github.io/) provides a vehicle for exploring graded modal types (both graded necessity and graded possibility, which can model side effects [20]) alongside linear and indexed types (GADTs) [23]. There are various similar systems in the recent literature: Abel and Bernardy give an alternate graded modal calculus [2] and Linear Haskell retrofits linearity onto the Haskell type system [4] via a graded type system akin to the coeffect calculus of Petricek et al. [24]. The popularity of graded types is growing and their practical implications are being explored in research languages (e.g. Granule) and popular functional languages (e.g. Haskell [4] and Idris 2 [5]).

When programming with graded modal types, we have observed there is often a need to ‘distribute’ a graded modality over a type, and vice versa, in order to compose programs. That is, we may find ourselves in possession of a □_r(F α) value (for some parametric data type F) which needs to be passed to a pre-existing function (of our own codebase or a library) which requires an F(□_r α) value, or perhaps vice versa. A distributive law (in the categorical sense, e.g., [28]) provides a conversion from one to the other. In this paper, we present a procedure to automatically synthesise these distributive operators, applying a generic programming methodology [15] to compute these operations given the base type (e.g., F α in the above description). This serves to ease the use of graded modal types in practice, removing boilerplate code by automatically generating these ‘interfacing functions’ on-demand, for user-defined data types as well as built-in types.

Throughout, we refer to distributive laws of the form □_r(F α) → F(□_r α) as push operations (as they ‘push’ the graded modality inside the type constructor F), and dually F(□_r α) → □_r(F α) as pull operations (as they ‘pull’ the graded modality outside the type constructor F).
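The shape of these two operations can be sketched in plain (non-linear) Haskell with a toy box that carries its grade as a value-level natural number. The names `Box`, `pushPair`, and `pullPair` are our own illustrative choices: real graded systems track the grade r in the type, so this only shows the direction of data movement.

```haskell
-- A toy graded modality: a usage bound paired with a value. This ignores
-- linearity and type-level grading; it only illustrates the data movement.
data Box a = Box Int a deriving (Eq, Show)

-- push: distribute the box over a pair, copying the grade to each component.
pushPair :: Box (a, b) -> (Box a, Box b)
pushPair (Box r (x, y)) = (Box r x, Box r y)

-- pull: gather boxed components into one boxed pair; the resulting grade is
-- a meet of the component grades (for exact usage bounds: the minimum).
pullPair :: (Box a, Box b) -> Box (a, b)
pullPair (Box n x, Box m y) = Box (min n m) (x, y)
```

Note that push duplicates one grade into each component, while pull must reconcile two possibly different grades, which is why pull generally requires a greatest lower bound on grades.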

The primary contributions of this paper are then:

  • an overview of the application of distributive laws in the context of graded modal types;

  • an automatic procedure for calculating distributive laws from types and a formal analysis of their properties;

  • a realisation of this approach in both Granule (embedded into the compiler) and Linear Haskell (expressed within the language itself, leveraging Haskell’s advanced features);

  • and derivations of related combinators for structural use of values in a graded linear setting.

Along the way, we also analyse choices made around the typed analysis of pattern matching in various recent works on graded modal types. Through the comparison between Granule and Linear Haskell, we also highlight ways in which Linear Haskell could be made more general and flexible in the future.

Section 2 defines a core calculus GrMiniP with linear and graded modal types which provides an idealised, simply-typed subset of Granule with which we develop the core contribution. Section 3 gives the procedures for deriving push and pull operators for the core calculus, and verifies that these are distributive laws of endofunctors over the □_r graded comonadic modality. Section 4 describes the details of how these procedures are realised in the Granule language. Section 5 relates this work to Linear Haskell, and demonstrates how the push and pull combinators for user-defined data types may be automatically generated at compile-time using Template Haskell. Section 6 compares the recent literature with regard to the typed (graded) analysis of pattern matching, which is germane to the typing and derivation of our distributive laws. Section 7 covers how other structural combinators for Granule and Linear Haskell may be derived. Finally, Section 8 discusses further related and future work.

We start with an extended motivating example typifying the kind of software engineering impedance problem that distributive laws solve. We use Granule since it is the main vehicle for the developments here, and introduce some of the key concepts of graded modal types (in a linear context) along the way.

1.1 Motivating Example

Consider the situation of projecting the first element of a pair. In Granule, this first-projection is defined and typed as the following polymorphic function (whose syntax is reminiscent of Haskell or ML):

fst : ∀ {a b : Type} . (a, b [0]) → a
fst (x, [y]) = x

Linearity is the default, so this represents a linear function applied to linear values. (Granule uses → syntax rather than ⊸ for linear functions, for the sake of familiarity with standard functional languages.) However, the second component of the pair has a graded modal type, written b [0], which means that we can use the value “inside” the graded modality 0 times, by first ‘unboxing’ this capability via the pattern match [y], which allows weakening to be applied in the body to discard y of type b. In the calculus of Section 2, we denote ‘b [0]’ as the type □_0 b (Granule’s graded modalities are written postfix, with the ‘grade’ inside the box).

The type for fst is however somewhat restrictive: what if we are trying to use such a function with a value (call it myPair) whose type is not of the form (a, b [0]) but rather (a, b) [r] for some grading term r which permits weakening? Such a situation readily arises when we are composing functional code, say between libraries or between a library and user code. In this situation, fst myPair is ill-typed. Instead, we could define a different first projection function for use with myPair : (a,b) [r] as:

fst : ∀ {a b : Type, s : Semiring, r : s} . {0 ≤ r} ⇒ (a, b) [r] → a
fst [(x, y)] = x

This implementation uses various language features of Granule to make it as general as possible. Firstly, the function is polymorphic in the grade r and in the semiring s of which r is an element. Next, a refinement constraint 0 ≤ r specifies that, by the preordering ≤ associated with the semiring s, 0 is approximated by r (essentially, that r permits weakening). The rest of the type and implementation looks more familiar for computing a first projection, but now the graded modality is over the entire pair.

From a software engineering perspective, it is cumbersome to create alternate versions of generic combinators every time we are in a slightly different situation with regard to the position of a graded modality. Fortunately, this is an example to which a general distributive law can be applied. In this case, we could define the following distributive law of graded modalities over products, call it pushPair:

pushPair : ∀ {a b : Type, s : Semiring, r : s} . (a, b) [r] → (a [r], b [r])
pushPair [(x, y)] = ([x], [y])

This ‘pushes’ the graded modality r into the pair (via pattern matching on the modality and the pair inside it, and then reintroducing the modality on the right hand side via [x] and [y]), distributing the graded modality to each component. Given this combinator, we can now apply fst (pushPair myPair) to yield a value of type a [r], on which we can then apply the Granule standard library function extract, defined:

extract : ∀ {a : Type, s : Semiring, r : s} . {(1 : s) ≤ r} ⇒ a [r] → a
extract [x] = x

to get the original a value we desired:

extract (fst (pushPair myPair)) : a

The pushPair function could be provided by the standard library, and thus we have not had to write any specialised combinators ourselves: we have solved the problem by composing off-the-shelf combinators.

Now imagine we have introduced some custom data type List on which we have a map function:

data List a = Cons a (List a) | Nil
map : ∀ {a b : Type} . (a → b) [0..∞] → List a → List b
map [f] Nil = Nil;
map [f] (Cons x xs) = Cons (f x) (map [f] xs)

Note that, via a graded modality, the type of map specifies that the parameter function, of type a → b, is non-linear, used between 0 and ∞ times. Imagine now we have a value myPairList : (List (a, b)) [r] and we want to map first projection over it. But fst expects (a, b [0]) and even with pushPair we require (a, b) [r]. We need another distributive law, this time of the graded modality over the List data type. Since List is user-defined, we now have to roll our own pushList operation, and so we are back to having to make specialised combinators for our data types.
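For reference, here is what a hand-written pushList amounts to, as a plain-Haskell sketch with a toy value-level box (ignoring linearity; the encoding is ours, not the paper's implementation). This is exactly the kind of boilerplate that the deriving procedure removes:

```haskell
-- A toy value-level graded box, standing in for the type-level modality.
data Box a = Box Int a deriving (Eq, Show)

data List a = Cons a (List a) | Nil deriving (Eq, Show)

-- Distribute the box over each List constructor, copying the grade to the
-- head and recursing into the (re-boxed) tail.
pushList :: Box (List a) -> List (Box a)
pushList (Box _ Nil)         = Nil
pushList (Box r (Cons x xs)) = Cons (Box r x) (pushList (Box r xs))
```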

The crux of this paper is that such distributive laws can be automatically calculated given the definition of a type. With our Granule implementation of this approach (Section 4), we can then solve this combination problem via the following composition of combinators:

map (extract . fst . push @(,)) (push @List myPairList) : List a

where the push operations are written with their base type via @ (a type application) and whose definitions and types are automatically generated during type checking. Thus the push operation is a data-type generic function [15]. This generic function is defined inductively over the structure of types, thus a programmer can introduce a new user-defined algebraic data type and have the implementation of the generic distributive law derived automatically. This reduces both the initial and future effort (e.g., if an ADT definition changes or new ADTs are introduced).

Dual to the above, there are situations where a programmer may wish to pull a graded modality out of a structure. This is possible with a dual distributive law, which could be written by hand as:

pullPair : ∀ {a b : Type, s : Semiring, m n : s} . (a [n], b [m]) → (a, b) [n /\ m]
pullPair ([x], [y]) = [(x, y)]

Note that the resulting grade is the greatest lower bound (meet) of n and m, if it exists according to the preorder of the semiring s (that is, ⊓ is a partial operation). This allows some flexibility in the use of the pull operation when grades differ between components but have a greatest lower bound which can be ‘pulled out’. Our approach also allows such operations to be generically derived.

2 Core Calculus, GrMiniP

We define here a simplified monomorphic subset of Granule, which we call the GrMiniP calculus. (GrMiniP is akin to the GrMini calculus given by Orchard et al. [23], but we include the notions of data type constructors, eliminations, and pattern matching since these are important to the development here.) This calculus extends the linear λ-calculus with a semiring graded necessity modality [23], where for a preordered semiring (ℛ, ∗, 1, +, 0, ⊑) (that is, a semiring with a preorder ⊑ with respect to which + and ∗ are monotonic) there is a family of types {□_r A}_{r ∈ ℛ}. Granule allows multiple graded modalities (and thus multiple grading semirings) to be used simultaneously within a program, but we focus here on just one graded modality, parameterising the language. We also include notions of data constructors and their elimination via case expressions as a way to unify the handling of regular type constructors.

The syntax of GrMiniP terms and types is given by:

t ::= x | t₁ t₂ | λx.t | [t] | C t₀ … tₙ | case t of p₁ ↦ t₁; ..; pₙ ↦ tₙ | letrec x = t₁ in t₂      (terms)

p ::= x | _ | [p] | C p₀ … pₙ      (patterns)

A, B ::= A ⊸ B | α | A ⊗ B | A ⊕ B | 𝟏 | □_r A | μX.A | X      (types)

C ::= unit | inl | inr | (,)      (data constructors)

Terms of GrMiniP consist of those of the linear λ-calculus, plus a promotion construct [t] for introducing a graded modality, data constructor introduction C t₀ … tₙ and elimination (case), as well as recursive let bindings (letrec). We will give more insight into the syntax via the typing rules. Types comprise linear function types, type variables (α), multiplicative products, additive sums, unit types, graded modal types, recursive types, and recursion variables (X) (which must be guarded by a μ binding). Type variables are used in our derivation procedure but are treated here as constants by the core calculus, without any rules relating to their binding or unification.

2.1 Typing

Typing is via judgments of the form Γ ⊢ t : A, assigning a type A to a term t under the context Γ. Typing contexts Γ contain linear or graded assumptions, given by the grammar:

Γ ::= ∅ | Γ, x : A | Γ, y : [A]_r

where x is a linear assumption and y is an assumption graded by r drawn from the preordered semiring (called a graded assumption). This delineation of linear and graded assumptions avoids issues with substitution under a promotion, following the technique of Terui’s discharged assumptions [29]. Linear assumptions (including those of a graded modal type □_r A) must therefore be used exactly once, whilst graded assumptions may be used non-linearly, subject to the constraint of r.

(var)
  ──────────────
  x : A ⊢ x : A

(abs)
  Γ, x : A ⊢ t : B
  ──────────────────
  Γ ⊢ λx.t : A ⊸ B

(app)
  Γ₁ ⊢ t₁ : A ⊸ B    Γ₂ ⊢ t₂ : A
  ────────────────────────────────
  Γ₁ + Γ₂ ⊢ t₁ t₂ : B

(der)
  Γ, x : A ⊢ t : B
  ─────────────────────
  Γ, x : [A]_1 ⊢ t : B

(weak)
  Γ ⊢ t : A
  ──────────────────
  Γ, [Δ]_0 ⊢ t : A

(pr)
  [Γ] ⊢ t : A
  ───────────────────────
  r ∗ [Γ] ⊢ [t] : □_r A
(approx)
  Γ, x : [A]_r, Γ′ ⊢ t : B    r ⊑ s
  ───────────────────────────────────
  Γ, x : [A]_s, Γ′ ⊢ t : B

(letrec)
  Γ, x : A ⊢ t₁ : A    Γ′, x : A ⊢ t₂ : B
  ─────────────────────────────────────────
  Γ + Γ′ ⊢ letrec x = t₁ in t₂ : B

(con)
  (C : B₁ ⊸ … ⊸ Bₙ ⊸ A) ∈ D
  ─────────────────────────────
  ∅ ⊢ C : B₁ ⊸ … ⊸ Bₙ ⊸ A

(case)
  Γ ⊢ t : A    · ⊢ pᵢ : A ▷ Δᵢ    Γ′, Δᵢ ⊢ tᵢ : B
  ──────────────────────────────────────────────────
  Γ + Γ′ ⊢ case t of p₁ ↦ t₁; ..; pₙ ↦ tₙ : B
Figure 1: Typing rules for GrMiniP

Figure 1 gives the typing rules. We briefly explain each rule in turn.

The var, abs and app rules follow the standard rules for the linear λ-calculus, modulo the context addition operation Γ + Γ′, which is used whenever the contexts of two subterms need to be combined. Context addition acts as union on disjoint linear and graded assumptions but is defined via semiring addition when a graded assumption appears in both contexts:

(Γ, x : [A]_r) + (Γ′, x : [A]_s) = (Γ + Γ′), x : [A]_{r+s}

Context addition + is undefined on contexts whose linear assumptions are not disjoint, enforcing the lack of contraction on linear assumptions.

The next rules relate to grading. Structural weakening is provided by (weak) for assumptions graded by 0 and ‘dereliction’ (der) converts a linear assumption to a graded assumption, graded by 1. Graded modalities are introduced by ‘promotion’ (pr), scaling graded assumptions in Γ\Gamma via semiring multiplication defined on contexts containing only graded assumptions as:

r ∗ ∅ = ∅        r ∗ (Γ, x : [A]_s) = (r ∗ Γ), x : [A]_{r∗s}

In the (pr) rule, [Γ] denotes a context containing only graded assumptions. The approx rule captures grade approximation, allowing a grade r in an assumption to be converted to another grade s, on the condition that r is approximated by s via the relation ⊑ provided by the semiring’s preorder.
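As a concrete sketch, both context operations can be transcribed over contexts represented as finite maps, with grades drawn from the ℕ semiring and the types of assumptions elided (this representation and the names are ours, not part of the calculus):

```haskell
import qualified Data.Map as Map

-- An assumption is linear (x : A) or graded (x : [A]_r); types elided,
-- grades drawn from the natural-numbers semiring.
data Assumption = Linear | Graded Int deriving (Eq, Show)
type Ctxt = Map.Map String Assumption

-- Context addition: union on disjoint assumptions; semiring addition when a
-- graded assumption occurs in both; undefined (Nothing) when a linear
-- assumption occurs in both, ruling out contraction.
addCtxt :: Ctxt -> Ctxt -> Maybe Ctxt
addCtxt g1 g2 = sequence (Map.unionWith combine (fmap Just g1) (fmap Just g2))
  where
    combine (Just (Graded r)) (Just (Graded s)) = Just (Graded (r + s))
    combine _ _ = Nothing  -- a linear assumption occurs in both contexts

-- Scalar multiplication for promotion (pr): defined only on contexts [Γ]
-- containing graded assumptions alone.
scaleCtxt :: Int -> Ctxt -> Maybe Ctxt
scaleCtxt r = traverse scale
  where
    scale (Graded s) = Just (Graded (r * s))
    scale Linear     = Nothing  -- promotion needs an all-graded context
```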

The letrec rule provides recursive bindings in the standard way.

Data constructors with zero or more arguments are introduced via the con rule. Here, the constructors that concern us are units, products, and coproducts (sums), given by D, a global set of data constructors with their types, defined:

D = {unit : 𝟏} ∪ {(,) : A ⊸ B ⊸ A ⊗ B | ∀A, B} ∪ {inl : A ⊸ A ⊕ B | ∀A, B} ∪ {inr : B ⊸ A ⊕ B | ∀A, B}

Constructors are eliminated by pattern matching via the case rule. Patterns p are typed by judgments of the form ?r ⊢ p : A ▷ Δ, meaning that a pattern p has type A and produces a context of typed binders Δ (used, e.g., in the typing of the case branches). The information to the left of the turnstile denotes optional grade information arising from being inside an unboxing pattern, and is syntactically defined as either:

?r ::= · | r      (enclosing grade)

where · means the present pattern is not nested inside an unboxing pattern, and r means that the present pattern is nested inside an unboxing pattern for a graded modality with grade r.

(Pvar)
  ──────────────────
  · ⊢ x : A ▷ x : A

(Pcon)
  · ⊢ pᵢ : Bᵢ ▷ Γᵢ
  ──────────────────────────────────
  · ⊢ C p₁ .. pₙ : A ▷ Γ₁, .., Γₙ

(Pbox)
  r ⊢ p : A ▷ Γ
  ──────────────────────
  · ⊢ [p] : □_r A ▷ Γ

([Pvar])
  ────────────────────────
  r ⊢ x : A ▷ x : [A]_r

([Pcon])
  r ⊢ pᵢ : Bᵢ ▷ Γᵢ    |A| > 1 ⇒ 1 ⊑ r
  ──────────────────────────────────────
  r ⊢ C p₁ .. pₙ : A ▷ Γ₁, .., Γₙ

([Pwild])
  0 ⊑ r
  ───────────────
  r ⊢ _ : A ▷ ∅
Figure 2: Pattern typing rules for GrMiniP

The rules of pattern typing are given in Figure 2. The rule (Pbox) provides graded modal elimination (an ‘unboxing’ pattern), propagating grade information into the typing of the sub-pattern. Thus case t of [p] ↦ t′ can be used to eliminate a graded modal value. Variable patterns are typed via two rules depending on whether the variable occurs inside an unboxing pattern ([Pvar]) or not (Pvar), with the [Pvar] rule producing a binding carrying the grade r of the enclosing box. As with variable patterns, constructor patterns are split between rules for patterns which either occur inside an unboxing pattern or not. In the former case, the grade information is propagated to the subpattern(s), with the additional constraint that if there is more than one data constructor for the type A (written |A| > 1), then the grade r must approximate 1 (written 1 ⊑ r), as pattern matching incurs a usage to inspect the constructor (discussed further in Section 6). The operation |A| counts the number of data constructors of a type:

|𝟏| = 1    |A ⊸ B| = 1    |□_r A| = |A|    |A ⊕ B| = 2(|A| + |B|)    |A ⊗ B| = |A| |B|    |μX.A| = |A[μX.A/X]|

and |X| is undefined (or effectively 0) since we do not allow unguarded recursion variables in types. A type A must therefore involve a sum type for |A| > 1.
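In practice the [Pcon] side condition only ever asks whether |A| > 1, and reading the counting equations as a least fixed point (with recursion variables contributing nothing), this holds exactly when a sum type is reachable in A without passing under a function arrow. A plain-Haskell sketch of this decidable check, with our own encoding of GrMiniP types:

```haskell
-- GrMiniP types: unit, functions, box (grade elided), sums, products,
-- recursive types, and recursion variables.
data Ty = TyUnit | TyFun Ty Ty | TyBox Ty | TySum Ty Ty | TyProd Ty Ty
        | TyMu String Ty | TyVar String
  deriving (Eq, Show)

-- Decides |A| > 1: |A -o B| = 1 stops the search, sums always exceed 1,
-- products multiply the counts of their components, and unfolding a mu-type
-- cannot change the answer, so we recurse straight into its body.
moreThanOneCon :: Ty -> Bool
moreThanOneCon TyUnit       = False
moreThanOneCon (TyFun _ _)  = False
moreThanOneCon (TyVar _)    = False  -- |X| undefined (effectively 0)
moreThanOneCon (TyBox a)    = moreThanOneCon a
moreThanOneCon (TySum _ _)  = True
moreThanOneCon (TyProd a b) = moreThanOneCon a || moreThanOneCon b
moreThanOneCon (TyMu _ a)   = moreThanOneCon a
```

For example, on the encoding of List A as μX. 𝟏 ⊕ (A ⊗ X), the check succeeds, so matching a list constructor under a box requires 1 ⊑ r.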

Since a wildcard pattern _ discards a value, it is only allowed inside an unboxing pattern where the enclosing grade permits weakening, captured via the premise 0 ⊑ r in rule [Pwild].
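The variable, wildcard, and box cases of the pattern typing judgment ?r ⊢ p : A ▷ Δ can be sketched as follows, using the exact-usage ℕ semiring, whose preorder is equality (so 0 ⊑ r forces r = 0). The encoding and names are ours:

```haskell
-- Patterns and (a fragment of) types; the box type carries its grade.
data Pat = PVar String | PWild | PBox Pat deriving Show
data Ty  = TyUnit | TyBoxTy Int Ty deriving (Eq, Show)

-- Assumptions produced by pattern typing: linear (x : A) or graded (x : [A]_r).
data Assumption = Linear Ty | Graded Int Ty deriving (Eq, Show)

-- typePat g p a implements  ?r |- p : A ▷ Δ,  where g = Nothing means we are
-- not under an unboxing pattern and g = Just r gives the enclosing grade.
typePat :: Maybe Int -> Pat -> Ty -> Maybe [(String, Assumption)]
typePat Nothing  (PVar x) a             = Just [(x, Linear a)]    -- (Pvar)
typePat (Just r) (PVar x) a             = Just [(x, Graded r a)]  -- ([Pvar])
typePat (Just r) PWild    _ | r == 0    = Just []                 -- ([Pwild]): 0 ⊑ r
typePat Nothing  (PBox p) (TyBoxTy r a) = typePat (Just r) p a    -- (Pbox)
typePat _        _        _             = Nothing                 -- ill-typed
```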

We now provide two examples of graded modalities. In addition to those shown here, Granule provides various other graded modalities, including grading by semirings of security-level lattices and “sensitivities” akin to DFuzz [10].

Example 2.1.

The natural numbers semiring (ℕ, ∗, 1, +, 0, ≡) provides exact usage, where the preorder is equality ≡. Graded modalities in this semiring count the number of times a value is used. As a simple example, we can define a function which copies its input value to produce a pair of values:

copy : □₂A ⊸ (A ⊗ A)
copy = λy. case y of [x] → (x, x)

The capability to use a value of type A twice is captured by the graded modality. In the body of the abstraction, the variable y : □₂A must be used linearly; y is used linearly in the case which eliminates the graded modality to yield a graded assumption x : [A]₂ which can be used twice to form the resulting pair.
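The behaviour of copy under exact-usage grading can be simulated by carrying the grade dynamically. This is purely our own illustration (the Box wrapper and the runtime grade check are ours); in Granule the grade 2 is enforced statically and has no runtime content.

```haskell
-- Simulation of the exact-usage ℕ-semiring: 'Box n x' stands for a value
-- x usable exactly n times. In Granule this is a compile-time check; here
-- the grade travels with the value.
data Box a = Box Int a deriving (Show, Eq)

-- copy : □2 A ⊸ (A ⊗ A): unboxing a grade-2 value permits two uses.
copy :: Box a -> Maybe (a, a)
copy (Box 2 x) = Just (x, x)  -- exactly two uses of x
copy _         = Nothing      -- any other grade would be a type error
```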

Example 2.2.

BLL-style grading with upper bounds on usage can be obtained by replacing the equality relation in the ℕ-semiring with a less-than-or-equal ordering, giving a notion of approximation. This is useful for programs that use variables differently in control-flow branches, e.g., when eliminating a sum type. This notion is further refined by the semiring of natural number intervals, which captures lower and upper bounds on usage. Intervals are pairs ℕ × ℕ written r..s, where r and s represent the lower and upper bounds respectively, with r ≤ s, 0 = 0..0 and 1 = 1..1. Ordering is given by (r..s) ≤ (r′..s′) = r′ ≤ r ∧ s ≤ s′. Using this, we can define the following function to eliminate a coproduct via a pair of functions, each of which is used either 0 or 1 times:

⊕ₑ : □₀..₁(A ⊸ C) ⊸ □₀..₁(B ⊸ C) ⊸ (A ⊕ B) ⊸ C
⊕ₑ = λf′. λg′. λz. case f′ of [f] → case g′ of [g] → case z of inl x → f x; inr y → g y
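The interval semiring just described is easy to transcribe. The following Haskell sketch (all names are ours) gives its 0 and 1, addition, multiplication, and the approximation ordering:

```haskell
-- Natural-number intervals r..s (with r <= s) as a usage semiring.
data Iv = Int :.. Int deriving (Show, Eq)

zeroIv, oneIv :: Iv
zeroIv = 0 :.. 0   -- 0 = 0..0
oneIv  = 1 :.. 1   -- 1 = 1..1

addIv, mulIv :: Iv -> Iv -> Iv   -- pointwise on both bounds
addIv (r :.. s) (r' :.. s') = (r + r') :.. (s + s')
mulIv (r :.. s) (r' :.. s') = (r * r') :.. (s * s')

-- (r..s) <= (r'..s')  iff  r' <= r and s <= s': wider intervals approximate
leqIv :: Iv -> Iv -> Bool
leqIv (r :.. s) (r' :.. s') = r' <= r && s <= s'
```

Note that the order is contravariant in the lower bound: the interval 0..1 approximates 1..1, matching the grades used by ⊕ₑ above.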

2.2 Equational Theory

Figure 3 defines an equational theory for GrMiniP. The equational theory is typed, but for brevity we elide the typing for most cases since it follows exactly from the structure of the terms. The fully typed rules are provided in the appendix [18]. For those rules which are type restricted, we include the full typed-equality judgment here. The typability of these equations relies on previous work on the Granule language which proves that pattern matching and substitution are well typed [23].

(λx.t₂) t₁ ≡ t₂[t₁/x]  (β)
λx.t x ≡ t, where x # t  (η)

letrec x = t₁ in t₂ ≡ t₂[letrec x = t₁ in t₁ / x]  (β_letrec)
f (letrec x = t₁ in t₂) ≡ letrec x = t₁ in (f t₂)  (LetRecDistrib)

case t of pᵢ → tᵢ ≡ (t ▷ pⱼ)tⱼ, where minimal(j)  (β_case)
case t₁ of pᵢ → t₂[pᵢ/z] ≡ t₂[t₁/z]  (η_case)
case (case t of pᵢ → tᵢ) of p′ᵢ → t′ᵢ ≡ case t of pᵢ → (case tᵢ of p′ᵢ → t′ᵢ)  (CaseAssoc)
f (case t of pᵢ → tᵢ) ≡ case t of pᵢ → (f tᵢ)  (CaseDistrib)
case [case t of pᵢ → tᵢ] of [p′ᵢ] → t′ᵢ ≡ case [t] of [pᵢ] → case [tᵢ] of [p′ᵢ] → t′ᵢ, where lin(pᵢ)  ([CaseAssoc])

Γ ⊢ t : □_r A    r ⊢ pᵢ : A ▷ Δᵢ    Δᵢ ⊢ pᵢ : A    1 ⊑ r
─────────────────────────────────────────────────────────  (CaseGen)
Γ ⊢ (case t of [pᵢ] → pᵢ) ≡ (case t of [x] → x) : A
Figure 3: Equational theory for GrMiniP

The β and η rules are the standard rules of the λ-calculus, where # is a freshness predicate, denoting that the variable x does not appear inside the term t.

For recursive letrec bindings, the β_letrec rule substitutes any occurrence of the bound variable x in t₂ with letrec x = t₁ in t₁, ensuring that recursive uses of x inside t₁ can be substituted with t₁ through subsequent β_letrec reduction. The LetRecDistrib rule allows functions to distribute over letrec expressions, stating that applying a function f to an entire letrec expression is equivalent to applying f to just the body term t₂.

Term elimination is via case, requiring rules for both β- and η-equality on case expressions, as well as rules for associativity and distributivity. In β_case, a term t is matched against a pattern pⱼ in the context of the term tⱼ through the partial function (t ▷ pⱼ)tⱼ = t′, which may substitute terms bound in pⱼ into tⱼ to yield t′ if the match is successful. This partial function is defined inductively:

(t ▷ _)t′ = t′  (▷_)        (t ▷ x)t′ = [t/x]t′  (▷var)

(t ▷ p)t′ = t″                     (tᵢ ▷ pᵢ)t′ᵢ = t′ᵢ₊₁
──────────────────  (▷□)        ─────────────────────────────────  (▷C)
([t] ▷ [p])t′ = t″               (C t₁ … tₙ ▷ C p₁ … pₙ)t′₁ = t′ₙ₊₁

As a side condition on the β_case rule, we require that j be minimal, i.e., pⱼ is the first pattern in p₁ … pₙ for which (t ▷ pⱼ)tⱼ = t′ is defined. Rule η_case states that if all branches of the case expression share a common term t₂, differing between branches only in the occurrences of terms that match the pattern used, then we can substitute t₁ for the pattern inside t₂.
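The partial matching function can be sketched over a tiny term language using Maybe for partiality. The syntax and all names below are our own, not GrMiniP's metatheory; capture-avoidance in substitution is ignored for brevity.

```haskell
-- (t ▷ p)t' as a Maybe-valued matcher over a small term language.
data Tm  = Var String | Con String [Tm] | BoxTm Tm deriving (Show, Eq)
data Pat = PWild | PVar String | PBox Pat | PCon String [Pat]

-- substitute t for free occurrences of variable x (capture ignored)
subst :: String -> Tm -> Tm -> Tm
subst x t (Var y) | x == y = t
subst x t (Con c ts)       = Con c (map (subst x t) ts)
subst x t (BoxTm u)        = BoxTm (subst x t u)
subst _ _ u                = u

match :: Tm -> Pat -> Tm -> Maybe Tm
match _ PWild    t' = Just t'                      -- rule ▷_
match t (PVar x) t' = Just (subst x t t')          -- rule ▷var
match (BoxTm t) (PBox p) t' = match t p t'         -- rule ▷□
match (Con c ts) (PCon c' ps) t'                   -- rule ▷C: thread t' left-to-right
  | c == c' && length ts == length ps
  = foldl (\acc (ti, pi) -> acc >>= match ti pi) (Just t') (zip ts ps)
match _ _ _ = Nothing
```

The ▷C clause threads the result through each sub-match, mirroring the (tᵢ ▷ pᵢ)t′ᵢ = t′ᵢ₊₁ premise.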

Associativity of case expressions is provided by the CaseAssoc rule. This rule allows us to restructure nested case expressions such that the output terms tᵢ of the inner case may be matched against the patterns of the outer case, to achieve the same resulting output terms t′ᵢ. The [CaseAssoc] rule provides a graded alternative to CaseAssoc, where the nested case expression is graded, provided that the patterns p′ᵢ of the outer case expression are also graded. Notably, this rule holds only when the patterns of the inner case expression are linear (i.e., variable or constant), so that there are no nested box patterns, captured via the lin(pᵢ) predicate. As with letrec, distributivity of functions over a case expression is given by CaseDistrib.

Lastly, generalisation of an arbitrary boxed pattern to a variable is permitted through the CaseGen rule. Here, a boxed pattern [pᵢ] and the output term of the case may be converted to a variable if the output term is exactly the pattern inside the box. The term t being matched must therefore have a grade r approximating 1, as witnessed by the premise 1 ⊑ r in the typing derivation.

3 Automatically Deriving push and pull

Now that we have established the core theory, we describe the algorithmic calculation of distributive laws in GrMiniP. Note that whilst GrMiniP is simply typed (monomorphic), it includes type variables (ranged over by α) to enable the distributive laws to be derived for parametric types. In the implementation, these are really polymorphic type variables, but the derivation procedure need only treat them as an additional syntactic type construct.

Notation

Let F : Typeⁿ → Type be an n-ary type constructor (i.e., a constructor which takes n type arguments), whose free type variables provide the n parameter types. We write F ᾱᵢ for the application of F to type variables αᵢ for all 1 ≤ i ≤ n.

Push

We automatically calculate push for F applied to n type variables ᾱᵢ as the operation:

⟦F ᾱᵢ⟧_push : □_r(F ᾱᵢ) ⊸ F(□_r ᾱᵢ)

where we require 1 ⊑ r if |F ᾱᵢ| > 1 due to the [Pcon] rule (e.g., if F contains a sum).

For types A closed with respect to recursion variables, let ⟦A⟧_push = λz. ⟦A⟧^∅_push z, given by an intermediate interpretation ⟦A⟧^Σ_push where Σ is a context of push combinators for the recursive type variables:

⟦1⟧^Σ_push z = case z of [unit] → unit
⟦α⟧^Σ_push z = z
⟦X⟧^Σ_push z = Σ(X) z
⟦A ⊕ B⟧^Σ_push z = case z of [inl x] → inl (⟦A⟧^Σ_push [x]); [inr y] → inr (⟦B⟧^Σ_push [y])
⟦A ⊗ B⟧^Σ_push z = case z of [(x, y)] → (⟦A⟧^Σ_push [x], ⟦B⟧^Σ_push [y])
⟦A ⊸ B⟧^Σ_push z = λy. case z of [f] → case ⟦A⟧^Σ_pull y of [u] → ⟦B⟧^Σ_push [(f u)]
⟦μX.A⟧^Σ_push z = letrec f = ⟦A⟧^{Σ, X ↦ f : □_r(μX.A) ⊸ (μX.A)[□_r αᵢ/αᵢ]}_push in f z

In the case of push on a value of type 1, we pattern match on the value, eliminating the graded modality via the unboxing pattern and returning the unit value. For type variables, push is simply the identity on the value, while for recursion variables we look up X's binding in Σ and apply it to the value. For sum and product types, push works by pattern matching on the type's constructor(s) and then inductively applying push to the boxed arguments, re-applying the results to the constructor(s). Unlike pull below, the push operation can be derived for function types, with a contravariant use of pull. For recursive types, we inductively apply push to the value with a fresh recursion variable bound in Σ, representing a recursive application of push.
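The instances this procedure generates for pairs, sums, and lists look as follows when transcribed into a Haskell simulation of our own: □_r is modelled by a plain Box wrapper, since grades are tracked statically in Granule and have no runtime content.

```haskell
-- □r simulated by a wrapper with no runtime grade information.
newtype Box a = Box a deriving (Show, Eq)

-- push at A ⊗ B: unbox the pair, re-box each component.
pushPair :: Box (a, b) -> (Box a, Box b)
pushPair (Box (x, y)) = (Box x, Box y)

-- push at A ⊕ B: inspecting the constructor is what requires 1 ⊑ r.
pushSum :: Box (Either a b) -> Either (Box a) (Box b)
pushSum (Box (Left x))  = Left (Box x)
pushSum (Box (Right y)) = Right (Box y)

-- push at the recursive type μX. 1 ⊕ (α ⊗ X), i.e. lists: the letrec
-- in the derivation becomes ordinary recursion.
pushList :: Box [a] -> [Box a]
pushList (Box [])       = []
pushList (Box (x : xs)) = Box x : pushList (Box xs)
```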

There is no derivation of a distributive law for types which are themselves graded modalities (see further work discussion in Section 8).

The appendix [18] proves that A𝗉𝗎𝗌𝗁\llbracket{A}\rrbracket_{\mathsf{push}} is type sound, i.e., its derivations are well-typed.

Pull

We automatically calculate pull for F applied to n type variables ᾱᵢ as the operation:

⟦F ᾱᵢ⟧_pull : F(□_{rᵢ} ᾱᵢ) ⊸ □_{⋀ⁿᵢ₌₁ rᵢ}(F ᾱᵢ)

Type constructor F here is applied to n arguments, each of the form □_{rᵢ}αᵢ, i.e., each with a different grading, of which the greatest-lower bound ⋀ⁿᵢ₌₁ rᵢ is the resulting grade (see pullPair from Section 1.1). The greatest-lower bound ∧ is a partial operation which can be defined in terms of the semiring's preorder: r ∧ s = t if t ⊑ r, t ⊑ s, and there exists no other t′ where t′ ⊑ r, t′ ⊑ s, and t ⊑ t′.

For types A closed with respect to recursion variables, let ⟦A⟧_pull = λz. ⟦A⟧^∅_pull z where:

⟦1⟧^Σ_pull z = case z of unit → [unit]
⟦α⟧^Σ_pull z = z
⟦X⟧^Σ_pull z = Σ(X) z
⟦A ⊕ B⟧^Σ_pull z = case z of inl x → case ⟦A⟧^Σ_pull x of [u] → [inl u]; inr y → case ⟦B⟧^Σ_pull y of [v] → [inr v]
⟦A ⊗ B⟧^Σ_pull z = case z of (x, y) → case (⟦A⟧^Σ_pull x, ⟦B⟧^Σ_pull y) of ([u], [v]) → [(u, v)]
⟦μX.A⟧^Σ_pull z = letrec f = ⟦A⟧^{Σ, X ↦ f : (μX.A)[□_{rᵢ}αᵢ/αᵢ] ⊸ □_{⋀ⁿᵢ₌₁rᵢ}(μX.A)}_pull in f z

Just like push, we cannot apply pull to graded modalities themselves. Unlike push, we cannot apply pull to function types. That is, we cannot derive a distributive law of the form (□_rA ⊸ □_rB) ⊸ □_r(A ⊸ B), since introducing the concluding □_r would require the incoming function (□_rA ⊸ □_rB) to itself be inside □_r due to the promotion rule (pr), which does not match the type scheme for pull.

The rest of the derivation above is similar but dual to that of push.
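The pull derivation can likewise be sketched in Haskell. In this sketch (all names are ours) the Box wrapper carries its grade as data so that the greatest-lower bound on grades is visible; in the upper-bounded ℕ semiring the meet of two grades is simply min.

```haskell
-- □r simulated with an explicit grade so that ∧ shows up in pull.
data Box a = Box Int a deriving (Show, Eq)

-- pull at A ⊗ B: the result grade is the meet r ∧ s of the components.
pullPair :: (Box a, Box b) -> Box (a, b)
pullPair (Box r x, Box s y) = Box (min r s) (x, y)

-- pull at A ⊕ B: only one component is present, so its grade survives.
pullSum :: Either (Box a) (Box b) -> Box (Either a b)
pullSum (Left  (Box r x)) = Box r (Left x)
pullSum (Right (Box s y)) = Box s (Right y)
```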

The appendix [18] proves that A𝗉𝗎𝗅𝗅\llbracket{A}\rrbracket_{\mathsf{pull}} is type sound, i.e., its derivations are well-typed.

Example 3.1.

To illustrate the above procedures, the derivation of ⟦(α ⊗ α) ⊸ β⟧_push is:

λz. ⟦(α⊗α) ⊸ β⟧^∅_push z : □_r((α⊗α) ⊸ β) ⊸ ((□_rα ⊗ □_rα) ⊸ □_rβ)
= λz. λy. case z of [f] → case ⟦α⊗α⟧^∅_pull y of [u] → ⟦β⟧^∅_push [(f u)]
= λz. λy. case z of [f] → case (case y of (x′, y′) →
        case (⟦α⟧^∅_pull x′, ⟦α⟧^∅_pull y′) of ([u], [v]) → [(u, v)]) of [u] → ⟦β⟧^∅_push [(f u)]
= λz. λy. case z of [f] → case (case y of (x′, y′) → case (x′, y′) of ([u], [v]) → [(u, v)]) of [u] → [(f u)]
Remark 1.

One might ponder whether linear logic's exponential !A [13] is modelled by the graded necessity modality over ℕ∞ intervals, i.e., with !A ≜ □₀..∞A. This is a reasonable assumption, but □₀..∞A has a slightly different meaning to !A, exposed here: whilst ⟦A ⊗ B⟧_push : □₀..∞(A ⊗ B) ⊸ (□₀..∞A ⊗ □₀..∞B) is derivable in GrMiniP, linear logic does not permit !(A ⊗ B) ⊸ (!A ⊗ !B). Models of ! provide only a monoidal functor structure, which gives pull for ⊗ but not push [3]. This structure can be recovered in Granule through the introduction of a partial type-level operation which selectively disallows push for ⊗ in semirings which model the ! modality of linear logic [16] (the work in [16] arose as a result of the writing of this paper).

The algorithmic definitions of ‘push’ and ‘pull’ can be leveraged in a programming context to automatically yield these combinators for practical purposes. We discuss how this is implemented inside the Granule compiler in Section 4 and two techniques for applying it to Linear Haskell in Section 5. Before that, we study the algebraic behaviour of the derived distributive laws.

3.1 Properties

We consider here the properties of these derived operations. Prima facie, the above push and pull operations are simply distributive laws between two (parametric) type constructors F and □_r, the latter being the graded modality. However, both F and □_r have additional structure. If the mathematical terminology of ‘distributive laws’ is warranted, then such additional structure should be preserved by push and pull (e.g., as in how a distributive law between a monad and a comonad must preserve the behaviour of the monad and comonad operations after applying the distributive law [26]); we explain here the relevant additional structure and verify the distributive law properties.

Firstly, we note that these distributive laws are mutually inverse:

Proposition 3.1 (Pull is right inverse to push).

For all n-ary types F which do not contain function types, for type variables (αᵢ)_{i∈[1..n]} and for all grades r ∈ ℛ where 1 ⊑ r if |F ᾱᵢ| > 1:

⟦F ᾱᵢ⟧_pull ∘ ⟦F ᾱᵢ⟧_push = id : □_r(F ᾱᵢ) ⊸ □_r(F ᾱᵢ)
Proposition 3.2 (Pull is left inverse to push).

For all n-ary types F which do not contain function types, for type variables (αᵢ)_{i∈[1..n]} and for all grades r ∈ ℛ where 1 ⊑ r if |F ᾱᵢ| > 1:

⟦F ᾱᵢ⟧_push ∘ ⟦F ᾱᵢ⟧_pull = id : F(□_r ᾱᵢ) ⊸ F(□_r ᾱᵢ)

The appendix [18] gives the proofs, leveraging the equational theory of GrMiniP.
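These round-trip properties can be spot-checked concretely. The following self-contained Haskell sketch (our own code, with □_r modelled by a wrapper with no runtime content, so the check exercises only the data movement) checks both directions for F = (α ⊗ β):

```haskell
-- Grade-erased simulation of □r for round-trip checking.
newtype Box a = Box a deriving (Show, Eq)

push :: Box (a, b) -> (Box a, Box b)
push (Box (x, y)) = (Box x, Box y)

pull :: (Box a, Box b) -> Box (a, b)
pull (Box x, Box y) = Box (x, y)

-- Proposition 3.1: pull ∘ push = id on □r(A ⊗ B)
rightInv :: (Eq a, Eq b) => Box (a, b) -> Bool
rightInv z = pull (push z) == z

-- Proposition 3.2: push ∘ pull = id on (□r A ⊗ □r B)
leftInv :: (Eq a, Eq b) => (Box a, Box b) -> Bool
leftInv z = push (pull z) == z
```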

Applying a mathematical perspective, □_r is also an endofunctor, with its object mapping provided by the type constructor itself and its morphism mapping defined as follows:

Definition 3.1 (r\Box_{r} functor).

Given a function f : α ⊸ β (a closed function term), then □_r f : □_r α ⊸ □_r β is the morphism mapping of the endofunctor □_r, defined:

□_r f = λx. case x of [y] → [f y]

For types Fα we can also automatically derive the morphism mapping of a covariant functor, which we write as ⟦Fα⟧_fmap, whose definition is standard (e.g., as applied in Haskell [22]) and given in the appendix [18]. Distributive laws between endofunctors should be natural transformations, which is indeed the case for our derivations:

Proposition 3.3 (Naturality of push).

For all unary type constructors F such that ⟦Fα⟧_push is defined, and given a closed function term f : α ⊸ β, then: ⟦F⟧_fmap(□_r f) ∘ ⟦Fα⟧_push = ⟦Fβ⟧_push ∘ □_r(⟦F⟧_fmap f).

Proposition 3.4 (Naturality of pull).

For all unary type constructors F such that ⟦Fα⟧_pull is defined, and given a closed function term f : α ⊸ β, then: □_r(⟦F⟧_fmap f) ∘ ⟦Fα⟧_pull = ⟦Fβ⟧_pull ∘ ⟦F⟧_fmap(□_r f).

The appendix [18] gives the proofs. Note that the naturality results here are for the cases of unary type constructors F that are covariant functors, written with a single parameter α. This easily generalises to n-ary type constructors.

Not only is □_r an endofunctor but it also has the structure of a graded comonad [11, 21, 24, 25].

Definition 3.2 (Graded comonadic operations).

The GrMiniP calculus (and Granule) permits the derivation of graded comonadic operations [23] for the semiring graded necessity □_r, defined:

ε_A : □₁A ⊸ A = λx. case x of [z] → z
δ_{r,s,A} : □_{r∗s}A ⊸ □_r□_sA = λx. case x of [z] → [[z]]
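In a Haskell sketch of our own where the Box wrapper carries its grade, ε is only defined at grade 1 and δ splits a grade multiplicatively, so both appear partial here even though in Granule they are total, grade-indexed functions:

```haskell
-- □r simulated with an explicit natural-number grade.
data Box a = Box Int a deriving (Show, Eq)

-- ε : □1 A ⊸ A
counit :: Box a -> Maybe a
counit (Box 1 x) = Just x
counit _         = Nothing

-- δ : □(r*s) A ⊸ □r (□s A), for a chosen factorisation r, s of the grade
comult :: Int -> Int -> Box a -> Maybe (Box (Box a))
comult r s (Box g x)
  | g == r * s = Just (Box r (Box s x))
  | otherwise  = Nothing
```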

The derived distributive laws preserve these graded comonadic operations, i.e., the distributive laws are well-behaved with respect to the graded comonadic structure of □_r, captured by the following properties:

Proposition 3.5 (Push preserves graded comonads).

For all F such that ⟦F ᾱᵢ⟧_push is defined and F does not contain ⊸ (to avoid issues of contravariance in F), push commutes with the graded comonadic operations ε and δ.

Proposition 3.6 (Pull preserves graded comonads).

For all F such that ⟦F ᾱᵢ⟧_pull is defined, pull commutes with the graded comonadic operations ε and δ.

The appendix [18] gives the proofs.

4 Implementation in Granule

The Granule type checker implements the algorithmic derivation of push and pull distributive laws as covered in the previous section. Whilst the syntax of GrMiniP types had unit, sum, and product types as primitives, in Granule these are provided by a more general notion of type constructor which can be extended by user-defined, generalized algebraic data types (GADTs). The procedure outlined in Section 3 is therefore generalised slightly so that it can be applied to any data type: the case for ABA\oplus B is generalised to types with an arbitrary number of data constructors.

Our deriving mechanism is exposed to programmers via explicit (visible) type application (akin to that provided in GHC Haskell [9]) on reserved names push and pull. Written push @T or pull @T, this signals to the compiler that we wish to derive the corresponding distributive laws at the type T. For example, for the List : Type \rightarrow Type data type from the standard library, we can write the expression push @List which the type checker recognises as a function of type:

push @List : ∀ {a : Type, s : Semiring, r : s} . {1 ≤ r} ⇒ (List a) [r] → List (a [r])

Note this function is not only polymorphic in the grade, but polymorphic in the semiring itself. Granule identifies different graded modalities by their semirings, and thus this operation is polymorphic in the graded modality. When the type checker encounters such a type application, it triggers the derivation procedure of Section 3, which also calculates the type. The result is then stored in the state of the frontend to be passed to the interpreter (or compiler) after type checking. The derived operations are memoized so that they need not be re-calculated if a particular distributive law is required more than once. Otherwise, the implementation largely follows Section 3 without surprises, apart from some additional machinery for specialising the types of data constructors coming from (generalized) algebraic data types.

Examples

Section 1 motivated the crux of this paper with a concrete example, which we can replay here in concrete Granule, using its type application technique for triggering the automatic derivation of the distributive laws. Previously, we defined pushPair by hand which can now be replaced with:

push @(,) : ∀ {a b : Type, s : Semiring, r : s} . (a, b) [r] → (a [r], b [r])

Note that in Granule (,) is an infix type constructor for products as well as terms. We could then replace the previous definition of fst from Section 1 with:

fst : ∀ {a b : Type, s : Semiring, r : s} . {0 ≤ r, 1 ≤ r} ⇒ (a, b) [r] → a
fst xy = let ([x], [y]) = push @(,) xy in x

The point of the example, however, is that we need not even define this intermediate combinator, but can instead write the following wherever we need to compute the first projection of myPair : (a, b) [r]:

extract (fst (push @(,) myPair)) : a

We already saw that we can then generalise this by applying this first projection inside of the list
myPairList : (List (a, b))[r] directly, using push @List.

In a slightly more elaborate example, we can use the pull combinator for pairs to implement a function that duplicates a pair (given that both elements can be consumed twice):

copyPair : ∀ {a b : Type} . (a [0..2], b [2..4]) → ((a, b), (a, b))
copyPair x = copy (pull @(,) x) -- where copy : a [2] → (a, a)

Note that pull here computes the greatest lower bound of the intervals 0..2 and 2..4, which is 2..2, i.e., we obtain a pair of a and b values which can each be used exactly twice, which is what copy requires.
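To make the grade arithmetic concrete, the following Python sketch models interval grades and the pull combinator for pairs. The names Interval, Box and pull_pair are illustrative inventions for this model; this is not Granule or its implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: int  # lower bound on uses
    hi: int  # upper bound on uses

    def meet(self, other):
        # Greatest lower bound in the interval ordering: the most precise
        # interval approximated by both arguments.
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

@dataclass(frozen=True)
class Box:
    grade: Interval
    value: object

def pull_pair(pair):
    # Models pull @(,) : (a [r], b [s]) -> (a, b) [r `meet` s]
    bx, by = pair
    return Box(bx.grade.meet(by.grade), (bx.value, by.value))

# The copyPair example: grades 0..2 and 2..4 combine to exactly 2..2,
# which is what copy : a [2] -> (a, a) requires.
boxed = pull_pair((Box(Interval(0, 2), "a"), Box(Interval(2, 4), "b")))
assert boxed.grade == Interval(2, 2)
```

Under this model, the meet of 0..2 and 2..4 is exactly 2..2, mirroring the constraint solving that the Granule type checker performs.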

As another example, interacting with Granule's indexed types (GADTs), consider the simple programming task of taking the head of a sized list (vector) and duplicating it into a pair. The head operation is typed head : ∀ {a : Type, n : Nat} . (Vec (n + 1) a) [0..1] → a, which has a graded modal input with grade 0..1, meaning the input vector is used 0 or 1 times: the head element is used once (linearly) for the return, but the tail is discarded.

This head element can then be copied if it has this capability via a graded modality, e.g., a value of type (Vec (n + 1) (a [2]))[0..1] permits:

copyHead : ∀ {a : Type, n : Nat} . (Vec (n + 1) (a [2])) [0..1] → (a, a)
copyHead xs = let [y] = head xs in (y, y) -- [y] unboxes (a [2]) to y : a, usable twice

Here we “unbox” the graded modal value of type a [2] to get a non-linear variable y which we can use precisely twice. However, what if we are in a programming context where we have a value of type Vec (n + 1) a with no graded modality on the element type a? We can employ two idioms here: (i) take a value of type (Vec (n + 1) a) [0..2] and split its modality in two, giving (Vec (n + 1) a) [2] [0..1]; (ii) then use push on the inner graded modality [2] to get (Vec (n + 1) (a [2])) [0..1].

Using push @Vec we can thus write the following to duplicate the head element of a vector:

copyHead : ∀ {a : Type, n : Nat} . (Vec (n + 1) a) [0..2] → (a, a)
copyHead = copy . head . boxmap [push @Vec] . disject

which employs the derived distributive law and combinators from the standard library, of the following types:

boxmap : ∀ {a b : Type, s : Semiring, r : s} . (a → b) [r] → a [r] → b [r]
disject : ∀ {a : Type, s : Semiring, n m : s} . a [m * n] → (a [n]) [m]
push @Vec : ∀ {a : Type, n : Nat, s : Semiring, r : s} . (Vec n a) [r] → Vec n (a [r])
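As a sanity check of how these combinators compose, the following Python sketch simulates the copyHead pipeline on graded boxes, simplifying the interval grade 0..2 to the natural-number grade 2 = 1 * 2, and passing the mapped function unboxed. All names here are illustrative; this is a model of the grading discipline, not of Granule itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    grade: object
    value: object

def disject(b, m, n):
    # Models disject : a [m * n] -> (a [n]) [m], splitting a composite grade
    assert b.grade == m * n
    return Box(m, Box(n, b.value))

def boxmap(f, b):
    # Models boxmap : (a -> b) [r] -> a [r] -> b [r] (f taken unboxed here)
    return Box(b.grade, f(b.value))

def push_vec(b):
    # Models push @Vec : (Vec n a) [r] -> Vec n (a [r])
    return [Box(b.grade, x) for x in b.value]

def head(b):
    # Models head : (Vec (n+1) a) [0..1] -> a, using only the head element
    return b.value[0]

def copy(b):
    # Models copy : a [2] -> (a, a)
    assert b.grade == 2
    return (b.value, b.value)

def copyHead(b):
    # copy . head . boxmap push_vec . disject
    return copy(head(boxmap(push_vec, disject(b, 1, 2))))

assert copyHead(Box(2, [10, 20, 30])) == (10, 10)
```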

5 Application to Linear Haskell

While Granule has been pushing the state-of-the-art in graded modal types, similar features have been added to more mainstream languages. Haskell has recently added support for linear types via an underlying graded system, which enables linear types as a smooth extension to GHC's current type system [4] (released as part of GHC 9.0.1 in February 2021: https://www.haskell.org/ghc/download_ghc_9_0_1.html). Functions may be linear with respect to their input (meaning the function will consume its argument exactly once if the result is consumed exactly once), but they can also consume their argument r times for some multiplicity r via explicit ‘multiplicity polymorphism’. Unlike Granule, Linear Haskell limits this multiplicity to either one or many (the paper speculates about extending this with zero); full natural numbers or other semirings are not supported.

In Linear Haskell, the function type (a %r → b) can be read as “a function from a to b which uses a according to r”, where r is either 1 (also written One) or ω (written Many). For example, the following defines and types the linear and non-linear functions swap and copy in Linear Haskell:

{-# LANGUAGE LinearTypes #-}
import GHC.Types

swap :: (a %1 -> b %1 -> c) %1 -> (b %1 -> a %1 -> c)
swap f x y = f y x

copy :: a %'Many -> (a, a)
copy x = (x, x)

Assigning the type a %1 -> (a, a) to copy would result in a type error due to a mismatch in multiplicity.

The approach of Linear Haskell (as formally defined by Bernardy et al. [4]) resembles earlier coeffect systems [12, 24] and more recent work on graded systems which have appeared since [2, 8]. In these approaches there is no underlying linear type system (as there is in Granule) but grading is instead pervasive with function arrows carrying a grade describing the use of their parameter. Nevertheless, recent systems also provide a graded modality as this enables more fine-grained usage information to be ascribed to compound data. For example, without graded modalities it cannot be explained that the first projection on a pair uses the first component once and its second component not at all (instead a single grade would have to be assigned to the entire pair).

We can define a graded modality in Linear Haskell via the following Box data type that is parameterized over the multiplicity r and the value type a, defined as:

data Box r a where { Box :: a %r -> Box r a }

A Box type is necessary to make explicit the notion that a value may be consumed a certain number of times, where ordinarily Linear Haskell is concerned only with whether individual functions consume their arguments linearly or not. Thus, distributive laws of the form discussed in this paper then become useful in practice when working with Linear Haskell.

The pushPair and pullPair functions from Section 1 can then be implemented in Linear Haskell with the help of this Box type:

pushPair :: Box r (a, b) %1 -> (Box r a, Box r b)
pushPair (Box (x, y)) = (Box x, Box y)

pullPair :: (Box r a, Box r b) %1 -> Box r (a, b)
pullPair (Box x, Box y) = Box (x, y)

Interestingly, pushPair could also be implemented as a function of type (a, b) %r -> (Box r a, Box r b), and in general we can formulate push combinators typed (f a) %r -> f (Box r a), i.e., consuming the input r times and returning a box with multiplicity r, but we stick with the above formulation for consistency.

While more sophisticated methods are outlined in this paper for automatically synthesizing these functions in the context of Granule, in the context of Linear Haskell (which has a simpler notion of grading) distributive laws over unary and binary type constructors can be captured with type classes:

class Pushable f where
  push :: Box r (f a) %1 -> f (Box r a)

class Pushable2 f where
  push2 :: Box r (f a b) %1 -> f (Box r a) (Box r b)

class Pullable f where
  pull :: f (Box r a) %1 -> Box r (f a)

class Pullable2 f where
  pull2 :: f (Box r a) (Box r b) %1 -> Box r (f a b)

Separate classes here are defined for unary and binary cases, as working generically over both is tricky in Haskell. Implementing an instance of Pushable2 for pairs is then:

instance Pushable2 (,) where
  push2 (Box (x, y)) = (Box x, Box y)

This implementation follows the procedure of Section 3. A pair type $A \otimes B$ is handled by pattern matching on (Box (x, y)) and boxing both fields of the pair: (Box x, Box y).

A Haskell programmer may define instances of these type classes for their own data types, but as discussed in Section 1, this is tedious from a software engineering perspective. Template Haskell is a meta-programming system for Haskell that allows compile-time generation of code, and one of the use cases for Template Haskell is generating boilerplate code [27]. The instances of Pushable and Pullable for algebraic data types are relatively straightforward, so we implemented procedures that will automatically generate these type class instances for arbitrary user-defined types (though types with so-called “phantom” parameters are currently not supported).

For example, if a programmer wanted to define a Pushable instance for the data type of a linear List a, they would write:

data List a where
  Cons :: a %1 -> List a %1 -> List a
  Nil :: List a

derivePushable ''List

Here, derivePushable is a Template Haskell procedure (available online: https://github.com/granule-project/deriving-distributed-linear-haskell) that takes the name of a user-defined data type and emits a top-level declaration, following the strategy outlined in Section 3. For the List a data type above, we can walk through the type as derivePushable would. Because List a has two constructors, a case statement is necessary (our code generator will use a ‘case lambda’). This case will have branches for each of the Cons and Nil constructors—in the body of the former it must box the first field (of type a) and recursively apply push to the second field (of type List a) after boxing it, and in the body of the latter it must simply return Nil. The full code generated by derivePushable ''List is given below.

derivePushable ''List
  ⟹
instance Pushable List where
  push = \case
    Box (Cons match_a4BD match_a4BE)
      -> Cons (Box match_a4BD) (push (Box match_a4BE))
    Box Nil -> Nil
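The underlying deriving strategy can also be sketched independently of Template Haskell: given a description of a data type's constructors, push is built by structural recursion, boxing parameter positions and recursing on recursive positions. The following Python model uses an illustrative encoding of constructors as tagged tuples; it mirrors the shape of the generated instance above but is not the actual generator.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    grade: object
    value: object

# List a:  Cons :: a %1 -> List a %1 -> List a ;  Nil :: List a
# Field kinds: 'param' = the element type a, 'rec' = a recursive occurrence.
LIST = {"Cons": ["param", "rec"], "Nil": []}

def derive_push(adt):
    def push(b):
        tag, *fields = b.value                     # 'case' over constructors
        out = []
        for field, kind in zip(fields, adt[tag]):
            if kind == "param":
                out.append(Box(b.grade, field))    # box the element
            else:
                out.append(push(Box(b.grade, field)))  # recurse on List a
        return (tag, *out)
    return push

push_list = derive_push(LIST)
xs = ("Cons", 1, ("Cons", 2, ("Nil",)))
assert push_list(Box("r", xs)) == \
    ("Cons", Box("r", 1), ("Cons", Box("r", 2), ("Nil",)))
```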

Later, in Section 7, we will discuss other combinators and type classes that are useful in Linear Haskell.

The applicability of our proposed deriving method to Haskell shows that it is useful beyond the context of the Granule project. Despite Haskell not having a formal semantics, we believe the same equational reasoning can be applied to show that the properties in Section 3.1 also hold for these distributive laws in Linear Haskell. As more programming languages adopt type system features similar to Granule and Linear Haskell, we expect that deriving/synthesizing or otherwise generically implementing combinators derived from distributive laws will be increasingly useful.

6 Typed-analysis of Consumption in Pattern Matching

This paper’s study of distributive laws provides an opportunity to consider design decisions for the typed analysis of pattern matching, since the operations of Section 3 are derived by pattern matching in concert with grading. We compare here the choices made surrounding the typing of pattern matching in four works: (1) Granule and its core calculus [23], (2) the graded modal calculus $\Lambda^p$ of Abel and Bernardy [2], (3) the dependent graded system GraD of Choudhury et al. [8], and (4) Linear Haskell [4].

Granule

Pattern matching against a graded modality $\Box_r A$ (with pattern $[p]$) in Granule is provided by the Pbox rule (Figure 2), which triggers typing the pattern $p$ ‘under’ a grade $r$ at type $A$. This was denoted via the optional grade information $r \vdash p : A \rhd \Gamma$, which then pushes grading down onto the variables bound within $p$. Furthermore, it is only under such a pattern that wildcard patterns are allowed ([Pwild]), requiring $0 \sqsubseteq r$, i.e., $r$ can approximate $0$ (where $0$ denotes weakening). None of the other systems considered here have such a facility for weakening via pattern matching.

For a type $A$ with more than one constructor ($|A| > 1$), pattern matching its constructors underneath an $r$-graded box requires $1 \sqsubseteq r$. For example, eliminating sums inside an $r$-graded box $\Box_{r}(A \oplus B)$ requires $1 \sqsubseteq r$, as distinguishing $\mathsf{inl}$ from $\mathsf{inr}$ constitutes a consumption which reveals information (i.e., pattern matching on the ‘tag’ of the data constructors). By contrast, a type with only one constructor cannot convey any information via its constructor, and so matching on it is not counted as a consumption: eliminating $\Box_{r}(A \otimes B)$ places no requirements on $r$. The idea that unary data types do not incur consumption (since no information is conveyed by their constructor) is a refinement here to the original Granule paper of Orchard et al. [23], in which [Pcon] had only the premise $1 \sqsubseteq r$ rather than $|A| > 1 \implies 1 \sqsubseteq r$ (although the implementation already reflected this idea).
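These consumption constraints can be summarised operationally. The following Python sketch (an illustrative model, not Granule's implementation) checks whether a match under an $r$-graded box is permitted, using interval grades lo..hi where $r$ approximates a natural number $k$ precisely when lo ≤ k ≤ hi:

```python
def approximates(r, k):
    # interval grade r = (lo, hi) approximates usage k when lo <= k <= hi
    lo, hi = r
    return lo <= k <= hi

def match_allowed(num_constructors, r, wildcard=False):
    if wildcard:                  # [Pwild]: weakening, requires 0 ⊑ r
        return approximates(r, 0)
    if num_constructors > 1:      # |A| > 1: inspection counts as consumption
        return approximates(r, 1)
    return True                   # unary constructors: no requirement on r

assert match_allowed(2, (0, 1))                     # sums under [0..1]: ok
assert not match_allowed(2, (0, 0))                 # no consumption under [0..0]
assert match_allowed(1, (0, 0))                     # pairs: no constraint on r
assert not match_allowed(1, (1, 1), wildcard=True)  # no weakening under [1..1]
```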

The $\Lambda^p$ calculus

Abel and Bernardy’s unified modal system $\Lambda^p$ is akin to Granule, but with pervasive grading (rather than base linearity), akin to the coeffect calculus [24] and Linear Haskell [4] (discussed in Section 5). Similarly to the situation in Granule, $\Lambda^p$ also places a grading requirement when pattern matching on a sum type, given by the following typing rule in their syntax [2, Fig 1, p.4]:

\dfrac{\gamma\Gamma \vdash t : A_1 + A_2 \qquad \delta\Gamma,\, x_i :^{q} A_i \vdash u_i : C \qquad q \leq 1}{(q\gamma + \delta)\Gamma \vdash \mathsf{case}^{q}\ t\ \mathsf{of}\ \{\mathsf{inj}_1\, x_1 \mapsto u_1;\ \mathsf{inj}_2\, x_2 \mapsto u_2\} : C}\ \textsc{$+$-elim}

The key aspects here are that the variables $x_i$ bound in the case are used with grade $q$, as denoted by the graded assumption $x_i :^q A_i$ in the context typing $u_i$, and that $q \leq 1$, which is exactly our constraint $1 \sqsubseteq r$ (their ordering just runs in the opposite direction to ours). For the elimination of pair and unit types in $\Lambda^p$ there is no such constraint, matching our idea that arity affects usage, captured in Granule by $|A| > 1 \implies 1 \sqsubseteq r$. Their typed analysis of patterns is motivated by their parametricity theorems.

GraD

The dependent graded type system GraD of Choudhury et al. also considers the question of how to type pattern matching on sum types, with a rule in their system [8, p.8] which closely resembles the $+$-elim rule of $\Lambda^p$:

\dfrac{\Delta;\Gamma_1 \vdash a : A_1 \oplus A_2 \qquad \Delta;\Gamma_2 \vdash b_1 : A_1 \xrightarrow{q} B \qquad \Delta;\Gamma_2 \vdash b_2 : A_2 \xrightarrow{q} B \qquad 1 \leq q}{\Delta;\, q\cdot\Gamma_1 + \Gamma_2 \vdash \mathbf{case}_q\ a\ \mathbf{of}\ b_1;\ b_2 : B}\ \textsc{STcase}

The direction of the preordering in GraD is the same as in Granule but, modulo this ordering and some slight restructuring, the case rule captures the same idea as $\Lambda^p$: “both branches of the base analysis must use the scrutinee at least once, as indicated by the $1 \leq q$ constraint” [8, p.8]. Choudhury et al. also provide a heap-based semantics which serves to connect the meaning of grades with a concrete operational model of usage, which then motivates the grading on sum elimination here. In the simply-typed version of GraD, matching on the components of a product requires that each component is consumed linearly.

Linear Haskell

The paper on Linear Haskell by Bernardy et al. [4] has a case expression for eliminating arbitrary data constructors, with grading similar to the rules seen above. Initially, this rule is for the setting of a semiring over $\mathcal{R} = \{1, \omega\}$ (described in Section 5) and places no requirements on the grading to represent the notion of inspection, consumption, or usage due to matching on (multiple) constructors. This is reflected in the current implementation, where we can define the following sum elimination:

match :: (Either a b) %r -> (a %1 -> c) -> (b %1 -> c) -> c
match (Left x) f _ = f x
match (Right x) _ g = g x

However, when later considering the generalisation to other semirings, they state that “typing rules are mostly unchanged with the caveat that $\mathsf{case}_\pi$ must exclude $\pi = 0$” [4, §7.2, p.25], where $\pi$ is the grade of the case guard. This is a more coarse-grained restriction than in the other three systems, excluding even the possibility of Granule's weakening wildcard pattern, which requires $0 \leq \pi$. Currently, such a pattern must be marked as Many in Linear Haskell (i.e., it cannot express that the first projection on a pair does not use the pair's second component). Furthermore, the condition $\pi \neq 0$ does not require that $\pi$ actually represents a consumption, unlike the approaches of the other three systems. The argument put forward by Abel and Bernardy for their restriction marking a consumption ($q \leq 1$) for the sake of parametricity is a compelling one, and the concrete model of Choudhury et al. gives further confidence that this restriction captures an operational model well. Thus, there is scope for cross-fertilisation between the works discussed here and Linear Haskell, towards a future where grading is a key tool in the typed functional programmer's toolbox.

7 Deriving Other Useful Structural Combinators

So far we have motivated the use of distributive laws, and demonstrated that they are useful in practice when programming in languages with linear and graded modal types. The same methodology we have been discussing can also be used to derive other useful generic combinators for programming with linear and graded modal types. In this section, we consider two structural combinators, drop and copyShape, in Granule as well as related type classes for dropping, copying, and moving resources in Linear Haskell.

7.1 A Combinator for Weakening (“drop”)

The built-in type constants of Granule can be split into those which permit structural weakening, $C^{\mathsf{w}}$, such as Int, Char, and String, and those which do not, $C^{\mathsf{l}}$, such as Handle (file handles) and Chan (concurrent channels). Those that permit weakening contain non-abstract values that can in theory be systematically inspected in order to consume them. Granule provides a built-in implementation of drop for $C^{\mathsf{w}}$ types, which is then used by the following derivation procedure to derive weakening on compound types:

$\llbracket A \rrbracket_{\mathsf{drop}} : A \multimap 1$

for closed types $A$, defined $\llbracket A \rrbracket_{\mathsf{drop}} = \lambda z.\llbracket A \rrbracket_{\mathsf{drop}}^{\emptyset}\, z$ via an intermediate derivation $\llbracket A \rrbracket_{\mathsf{drop}}^{\Sigma}$:

$\llbracket C^{\mathsf{w}} \rrbracket_{\mathsf{drop}}^{\Sigma}\, z = \textsf{drop}\ z$
$\llbracket 1 \rrbracket_{\mathsf{drop}}^{\Sigma}\, z = \mathbf{case}\ z\ \mathbf{of}\ \mathsf{unit} \rightarrow \mathsf{unit}$
$\llbracket X \rrbracket_{\mathsf{drop}}^{\Sigma}\, z = \Sigma(X)\, z$
$\llbracket A \oplus B \rrbracket_{\mathsf{drop}}^{\Sigma}\, z = \mathbf{case}\ z\ \mathbf{of}\ \mathsf{inl}\ x \rightarrow \llbracket A \rrbracket_{\mathsf{drop}}^{\Sigma}(x);\ \mathsf{inr}\ y \rightarrow \llbracket B \rrbracket_{\mathsf{drop}}^{\Sigma}(y)$
$\llbracket A \otimes B \rrbracket_{\mathsf{drop}}^{\Sigma}\, z = \mathbf{case}\ z\ \mathbf{of}\ (x, y) \rightarrow \mathbf{case}\ \llbracket A \rrbracket_{\mathsf{drop}}^{\Sigma}(x)\ \mathbf{of}\ \mathsf{unit} \rightarrow \mathbf{case}\ \llbracket B \rrbracket_{\mathsf{drop}}^{\Sigma}(y)\ \mathbf{of}\ \mathsf{unit} \rightarrow \mathsf{unit}$
$\llbracket \mu X.A \rrbracket_{\mathsf{drop}}^{\Sigma}\, z = \mathbf{letrec}\ f = \llbracket A \rrbracket_{\mathsf{drop}}^{\Sigma, X \mapsto f : A \multimap 1}\ \mathbf{in}\ f\ z$
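To illustrate how this derivation unfolds, the following Python sketch interprets the equations above over a small encoding of the type syntax, with tagged tuples for both types and values. This is an illustrative model, not Granule's implementation.

```python
def derive_drop(ty, sigma=None):
    sigma = sigma or {}          # Sigma: recursion variables -> drop functions
    tag = ty[0]
    if tag == "Cw":              # built-in droppable constants
        return lambda z: ()
    if tag == "unit":            # case z of unit -> unit
        return lambda z: ()
    if tag == "var":             # X: look up its drop in Sigma
        return lambda z: sigma[ty[1]](z)
    if tag == "sum":             # case z of inl x -> ...; inr y -> ...
        l = derive_drop(ty[1], sigma)
        r = derive_drop(ty[2], sigma)
        return lambda z: l(z[1]) if z[0] == "inl" else r(z[1])
    if tag == "prod":            # drop both components, result is unit
        l = derive_drop(ty[1], sigma)
        r = derive_drop(ty[2], sigma)
        def both(z):
            l(z[0]); r(z[1])
            return ()
        return both
    if tag == "mu":              # letrec f = [A]^{Sigma, X -> f} in f z
        def f(z):
            return derive_drop(ty[2], {**sigma, ty[1]: f})(z)
        return f

# List Int, encoded as mu X . 1 + (Int * X)
LIST_INT = ("mu", "X", ("sum", ("unit",), ("prod", ("Cw",), ("var", "X"))))
assert derive_drop(LIST_INT)(("inr", (42, ("inr", (7, ("inl", ())))))) == ()
```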

Note that we cannot use this procedure in a polymorphic context (over type variables $\alpha$), since type polymorphism ranges over all types, including those which cannot be dropped, like $C^{\mathsf{l}}$.

7.2 A Combinator for Copying “shape”

The “shape” of values of a parametric data type $\mathsf{F}$ can be determined by a function $\mathit{shape} : \mathsf{F}A \rightarrow \mathsf{F}1$, usually derived when $\mathsf{F}$ is a functor by mapping with $A \rightarrow 1$ (dropping elements) [19]. This provides a way of capturing the size, shape, and form of a data structure. Often when programming with data structures which must be used linearly, we may wish to reason about properties of the data structure (such as its length or “shape”) but may not be able to drop the contained values. Instead, we wish to extract the shape without consuming the original data structure itself.

This can be accomplished with a function which copies the data structure exactly, returning this duplicate along with a data structure of the same shape, but with the terminal nodes replaced with values of the unit type $1$ (the ‘spine’). For example, consider a pair of integers: (1, 2). Applying copyShape to this pair yields (((), ()), (1, 2)). The original input pair is duplicated and returned on the right of the pair, while the left value contains a pair with the same structure as the input, but with values replaced by (). This allows us to use the left value of the resulting pair to reason about the structure of the input (e.g., its depth or size) while preserving the original input, and is particularly useful for deriving size and length combinators for collection-like data structures.

As with “drop”, we can derive such a function automatically:

$\llbracket \mathsf{F}\alpha \rrbracket_{\mathsf{copyShape}} : \mathsf{F}\alpha \multimap \mathsf{F}1 \otimes \mathsf{F}\alpha$

defined by $\llbracket A \rrbracket_{\mathsf{copyShape}} = \lambda z.\llbracket A \rrbracket_{\mathsf{copyShape}}^{\emptyset}\, z$ via an intermediate derivation $\llbracket A \rrbracket_{\mathsf{copyShape}}^{\Sigma}$:

$\llbracket C^{\mathsf{w}} \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = (\mathsf{unit},\ z)$
$\llbracket 1 \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = \mathbf{case}\ z\ \mathbf{of}\ \mathsf{unit} \rightarrow (\mathsf{unit}, \mathsf{unit})$
$\llbracket \alpha \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = (\mathsf{unit},\ z)$
$\llbracket X \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = \Sigma(X)\, z$
$\llbracket A \oplus B \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = \mathbf{case}\ z\ \mathbf{of}\ \mathsf{inl}\ x \rightarrow \mathbf{case}\ \llbracket A \rrbracket_{\mathsf{copyShape}}^{\Sigma}(x)\ \mathbf{of}\ (s, x') \rightarrow (\mathsf{inl}\ s, \mathsf{inl}\ x');\ \mathsf{inr}\ y \rightarrow \mathbf{case}\ \llbracket B \rrbracket_{\mathsf{copyShape}}^{\Sigma}(y)\ \mathbf{of}\ (s, y') \rightarrow (\mathsf{inr}\ s, \mathsf{inr}\ y')$
$\llbracket A \otimes B \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = \mathbf{case}\ z\ \mathbf{of}\ (x, y) \rightarrow \mathbf{case}\ \llbracket A \rrbracket_{\mathsf{copyShape}}^{\Sigma}(x)\ \mathbf{of}\ (s, x') \rightarrow \mathbf{case}\ \llbracket B \rrbracket_{\mathsf{copyShape}}^{\Sigma}(y)\ \mathbf{of}\ (s', y') \rightarrow ((s, s'), (x', y'))$
$\llbracket \mu X.A \rrbracket_{\mathsf{copyShape}}^{\Sigma}\, z = \mathbf{letrec}\ f = \llbracket A \rrbracket_{\mathsf{copyShape}}^{\Sigma, X \mapsto f : A \multimap 1 \otimes A}\ \mathbf{in}\ f\ z$
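The copyShape derivation can be modelled in the same illustrative style as before, over a tagged-tuple encoding of types and values (again a sketch, not Granule's implementation); the pair-of-integers example from above is reproduced at the end:

```python
def derive_copy_shape(ty, sigma=None):
    sigma = sigma or {}
    tag = ty[0]
    if tag in ("Cw", "alpha"):       # shape is unit, value is kept
        return lambda z: ((), z)
    if tag == "unit":
        return lambda z: ((), ())
    if tag == "var":                 # X: look up in Sigma
        return lambda z: sigma[ty[1]](z)
    if tag == "sum":
        l = derive_copy_shape(ty[1], sigma)
        r = derive_copy_shape(ty[2], sigma)
        def go(z):
            side, v = z
            s, v2 = (l if side == "inl" else r)(v)
            return ((side, s), (side, v2))   # rebuild the constructor twice
        return go
    if tag == "prod":
        l = derive_copy_shape(ty[1], sigma)
        r = derive_copy_shape(ty[2], sigma)
        def go(z):
            s, x2 = l(z[0])
            s2, y2 = r(z[1])
            return ((s, s2), (x2, y2))
        return go
    if tag == "mu":                  # letrec over the recursion variable
        def f(z):
            return derive_copy_shape(ty[2], {**sigma, ty[1]: f})(z)
        return f

# The example from the text: copyShape (1, 2) = (((), ()), (1, 2))
cs_pair = derive_copy_shape(("prod", ("Cw",), ("Cw",)))
assert cs_pair((1, 2)) == (((), ()), (1, 2))
```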

The implementation recursively follows the structure of the type, replicating the constructors, reaching the crucial case where a polymorphically typed $z : \alpha$ is mapped to $(\mathsf{unit}, z)$ in the third equation.

Granule implements both of these derived combinators in a similar way to push/pull, providing copyShape and drop which can be derived for a type T via type application, e.g., drop @T : T → () if it can be derived. Otherwise, the type checker produces an error explaining why drop is not derivable at type T.

7.3 Other Combinators in Linear Haskell

In Section 5, we demonstrated that the push and pull combinators derived from distributive laws are useful in Haskell with its linear types extension, and that they can be automatically derived using compile-time metaprogramming with Template Haskell.

To the best of our knowledge, nothing comparable to the Pushable and Pullable type classes proposed here has been previously discussed in the literature on Linear Haskell. However, several other type classes have been proposed for inclusion in the standard library to deal with common use cases when programming with linear function types (see the linear-base library: https://github.com/tweag/linear-base). One of these classes, Consumable, roughly corresponds to the drop combinator above, while the other two, Dupable and Movable, are for when a programmer wants to allow a data type to be duplicated or moved in linear code.

class Consumable a where
  consume :: a %1 -> ()

class Consumable a => Dupable a where
  dup2 :: a %1 -> (a, a)

class Dupable a => Movable a where
  move :: a %1 -> Ur a

The consume function is a linear function from a value to unit, whereas dup2 is a linear function from a value to a pair of that same value. The move linear function maps a to Ur a, where Ur is the “unrestricted” modality. Thus, move can be used to implement both consume and dup2:

case move x of {Ur _ -> ()}      -- consume x
case move x of {Ur x -> x}       -- x
case move x of {Ur x -> (x, x)}  -- dup2 x

A ‘copy shape’ class may also be a useful addition to Linear Haskell in the future.

8 Discussion and Conclusion

Section 6 considered, in some detail, systems related to Granule and different typed analyses of pattern matching. Furthermore, in applying our approach to both Granule and Linear Haskell we have already provided some detailed comparison between the two. This section considers some wider related work alongside ideas for future work, then concludes the paper.

Generic Programming Methodology

The deriving mechanism for Granule is based on the methodology of generic functional programming [15], where functions may be defined generically for all possible data types in the language; generic functions are defined inductively on the structure of the types. This technique has notably been used before in Haskell, where there has been a strong interest in deriving type class instances automatically. Particularly relevant to this paper is the work on generic deriving [22], which allows Haskell programmers to automatically derive arbitrary class instances using standard datatype-generic programming techniques as described above. In this paper we opted to rely on compile-time metaprogramming using Template Haskell [27] instead, but it is possible that some of the combinators we describe could be implemented using generic deriving as well, which is future work.

Non-graded Distributive Laws

Distributive laws are standard components in abstract mathematics. Distributive laws between categorical structures used for modelling modalities (like monads and comonads) are well explored. For example, Brookes and Geva defined a categorical semantics using monads combined with comonads via a distributive law capturing both intensional and effectful aspects of a program [6]. Power and Watanabe study in detail different ways of combining comonads and monads via distributive laws [26]. Such distributive laws have been applied in the programming languages literature, e.g., for modelling streams of partial elements [30].

Graded Distributive Laws

Gaboardi et al. define families of graded distributive laws for graded monads and comonads [11]. These include the ability to interact the grades, e.g., with operations such as $\Box_{\iota(r,f)}\Diamond_{f}A \rightarrow \Diamond_{\kappa(r,f)}\Box_{r}A$ between a graded comonad $\Box_{r}$ and graded monad $\Diamond_{f}$, where $\iota$ and $\kappa$ capture information about the distributive law in the grades. In comparison, our distributive laws here are more prosaic since they involve only a graded comonad (semiring-graded necessity) distributed over a functor and vice versa. That said, the scheme of Gaboardi et al. suggests that there might be interesting graded distributive laws between $\Box_{r}$ and the indexed types, for example, $\Box_{r}(\mathsf{Vec}\,n\,A) \rightarrow \mathsf{Vec}\,(r*n)\,(\Box_{1}A)$, which internally replicates a vector. However, it is less clear how useful such combinators would be in general or how systematic their construction would be. In contrast, the distributive laws explained here appear frequently and have a straightforward uniform calculation.

We noted in Section 3 that neither of our distributive laws can be derived over graded modalities themselves, i.e., we cannot derive $\textit{push} : \Box_{r}\Box_{s}A \rightarrow \Box_{s}\Box_{r}A$. Such an operation would itself be a distributive law between two graded modalities, which may have further semantic and analysis consequences beyond the derivations given here for regular types. Exploring this is future work, for which the previous work on graded distributive laws provides a useful scheme for considering the possibilities. Furthermore, Granule has both graded comonads and graded monads, so there is scope for exploring possible graded distributive laws between these in the future, following Gaboardi et al. [11].
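At the level of values such an exchange is trivial, as the following hypothetical Haskell sketch shows (using a phantom type-level grade of our own devising); the difficulty lies entirely in the grading analysis, since commuting $r$ and $s$ changes what the types assert about usage:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import GHC.TypeNats (Nat)

-- A grade-indexed box where the grade is a phantom type-level
-- natural, playing no computational role in this sketch.
newtype Box (r :: Nat) a = Box a deriving (Show, Eq)

-- The value-level content of a hypothetical 'push' between two graded
-- modalities: operationally an identity, but its typing commutes the
-- grades, which is exactly what the analysis cannot justify in general.
exchange :: Box r (Box s a) -> Box s (Box r a)
exchange (Box (Box x)) = Box (Box x)
```

This underlines that the obstacle is not in constructing a term of the right shape, but in giving that term a sound graded typing.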

In work that is somewhat adjacent to this paper, we define program synthesis procedures for graded linear types [17]. This program synthesis approach can synthesise pull distributive laws that are equivalent to the algorithmically derived operations of this paper. Interestingly, however, it cannot derive the push laws, as its core theory has a simplified notion of pattern typing that does not capture the full power of Granule's pattern matching as described in this paper.

Conclusions

The slogan of graded types is to imbue types with information reflecting and capturing the underlying program semantics and structure. This provides a mechanism for fine-grained intensional reasoning about programs at the type level, advancing the power of type-based verification. Graded types are a burgeoning technique that is gaining traction in mainstream functional programming and is being explored from multiple different angles in the literature. The work described here addresses the practical aspects of applying these techniques in real-world programming. Our hope is that this aids the development of the next generation of programming languages with rich type systems for high-assurance programming.

Acknowledgments

Thanks to Harley Eades III and Daniel Marshall for discussion and comments on a draft and to the reviewers and participants at TLLA-Linearity 2020. This work was supported by the EPSRC, grant EP/T013516/1 Verifying Resource-like Data Use in Programs via Types. The first author is also supported by an EPSRC DTA.

References

  • [2] Andreas Abel & Jean-Philippe Bernardy (2020): A Unified View of Modalities in Type Systems. Proc. ACM Program. Lang. 4(ICFP), 10.1145/3408972.
  • [3] Nick Benton, Gavin Bierman, Valeria De Paiva & Martin Hyland (1992): Linear lambda-calculus and categorical models revisited. In: International Workshop on Computer Science Logic, Springer, pp. 61–84, 10.1007/3-540-56992-8_6.
  • [4] Jean-Philippe Bernardy, Mathieu Boespflug, Ryan R. Newton, Simon Peyton Jones & Arnaud Spiwack (2017): Linear Haskell: Practical Linearity in a Higher-Order Polymorphic Language. Proc. ACM Program. Lang. 2(POPL), 10.1145/3158093.
  • [5] Edwin Brady (2021): Idris 2: Quantitative Type Theory in Practice. In Anders Møller & Manu Sridharan, editors: 35th European Conference on Object-Oriented Programming (ECOOP 2021), Leibniz International Proceedings in Informatics (LIPIcs) 194, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl, Germany, pp. 9:1–9:26, 10.4230/LIPIcs.ECOOP.2021.9. Available at https://drops.dagstuhl.de/opus/volltexte/2021/14052.
  • [6] Stephen Brookes & Kathryn V Stone (1993): Monads and Comonads in Intensional Semantics. Technical Report, Pittsburgh, PA, USA. Available at https://dl.acm.org/doi/10.5555/865105.
  • [7] Aloïs Brunel, Marco Gaboardi, Damiano Mazza & Steve Zdancewic (2014): A core quantitative coeffect calculus. In: European Symposium on Programming Languages and Systems, Springer, pp. 351–370, 10.1007/978-3-642-54833-8_19.
  • [8] Pritam Choudhury, Harley Eades III, Richard A. Eisenberg & Stephanie Weirich (2021): A Graded Dependent Type System with a Usage-Aware Semantics. Proc. ACM Program. Lang. 5(POPL), 10.1145/3434331.
  • [9] Richard A Eisenberg, Stephanie Weirich & Hamidhasan G Ahmed (2016): Visible type application. In: European Symposium on Programming, Springer, pp. 229–254, 10.1007/978-3-662-49498-1_10.
  • [10] Marco Gaboardi, Andreas Haeberlen, Justin Hsu, Arjun Narayan & Benjamin C Pierce (2013): Linear dependent types for differential privacy. In: Proceedings of the 40th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pp. 357–370, 10.1145/2429069.2429113.
  • [11] Marco Gaboardi, Shin-ya Katsumata, Dominic Orchard, Flavien Breuvart & Tarmo Uustalu (2016): Combining Effects and Coeffects via Grading. In: Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming, ICFP 2016, Association for Computing Machinery, New York, NY, USA, pp. 476–489, 10.1145/2951913.2951939.
  • [12] Dan R. Ghica & Alex I. Smith (2014): Bounded linear types in a resource semiring. In: Programming Languages and Systems, Springer, pp. 331–350, 10.1007/978-3-642-54833-8_18.
  • [13] Jean-Yves Girard (1987): Linear logic. Theoretical computer science 50(1), pp. 1–101, 10.1016/0304-3975(87)90045-4.
  • [14] Jean-Yves Girard, Andre Scedrov & Philip J Scott (1992): Bounded linear logic: a modular approach to polynomial-time computability. Theoretical computer science 97(1), pp. 1–66, 10.1016/0304-3975(92)90386-T.
  • [15] Ralf Hinze (2000): A new approach to generic functional programming. In: Proceedings of the 27th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pp. 119–132, 10.1145/325694.325709.
  • [16] Jack Hughes, Daniel Marshall, James Wood & Dominic Orchard (2021): Linear Exponentials as Graded Modal Types. In: 5th International Workshop on Trends in Linear Logic and Applications (TLLA 2021), Rome (virtual), Italy. Available at https://hal-lirmm.ccsd.cnrs.fr/lirmm-03271465.
  • [17] Jack Hughes & Dominic Orchard (2020): Resourceful Program Synthesis from Graded Linear Types. In: Logic-Based Program Synthesis and Transformation - 30th International Symposium, LOPSTR 2020, Bologna, Italy, September 7-9, 2020, Proceedings, pp. 151–170, 10.1007/978-3-030-68446-4_8.
  • [18] Jack Hughes, Michael Vollmer & Dominic Orchard (2021): Deriving Distributive Laws for Graded Linear Types (Additional Material), 10.5281/zenodo.5575771. Available at https://doi.org/10.5281/zenodo.5575771.
  • [19] C Barry Jay & J Robin B Cockett (1994): Shapely types and shape polymorphism. In: European Symposium on Programming, Springer, pp. 302–316, 10.1007/3-540-57880-3_20.
  • [20] Shin-ya Katsumata (2014): Parametric effect monads and semantics of effect systems. In Suresh Jagannathan & Peter Sewell, editors: The 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’14, San Diego, CA, USA, January 20-21, 2014, ACM, pp. 633–646, 10.1145/2535838.2535846.
  • [21] Shin-ya Katsumata (2018): A Double Category Theoretic Analysis of Graded Linear Exponential Comonads. In Christel Baier & Ugo Dal Lago, editors: Foundations of Software Science and Computation Structures - 21st International Conference, FOSSACS 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14-20, 2018, Proceedings, Lecture Notes in Computer Science 10803, Springer, pp. 110–127, 10.1007/978-3-319-89366-2_6.
  • [22] José Pedro Magalhães, Atze Dijkstra, Johan Jeuring & Andres Löh (2010): A Generic Deriving Mechanism for Haskell. SIGPLAN Not. 45(11), pp. 37–48, 10.1145/2088456.1863529.
  • [23] Dominic Orchard, Vilem-Benjamin Liepelt & Harley Eades III (2019): Quantitative program reasoning with graded modal types. PACMPL 3(ICFP), pp. 110:1–110:30, 10.1145/3341714.
  • [24] Tomas Petricek, Dominic Orchard & Alan Mycroft (2014): Coeffects: a calculus of context-dependent computation. In: Proceedings of the 19th ACM SIGPLAN international conference on Functional programming, ACM, pp. 123–135, 10.1145/2692915.2628160.
  • [25] Tomas Petricek, Dominic A. Orchard & Alan Mycroft (2013): Coeffects: Unified Static Analysis of Context-Dependence. In Fedor V. Fomin, Rusins Freivalds, Marta Z. Kwiatkowska & David Peleg, editors: Automata, Languages, and Programming - 40th International Colloquium, ICALP 2013, Riga, Latvia, July 8-12, 2013, Proceedings, Part II, Lecture Notes in Computer Science 7966, Springer, pp. 385–397, 10.1007/978-3-642-39212-2_35.
  • [26] John Power & Hiroshi Watanabe (2002): Combining a monad and a comonad. Theoretical Computer Science 280(1-2), pp. 137–162, 10.1016/S0304-3975(01)00024-X.
  • [27] Tim Sheard & Simon Peyton Jones (2002): Template Meta-Programming for Haskell. In: Proceedings of the 2002 ACM SIGPLAN Workshop on Haskell, Haskell ’02, Association for Computing Machinery, New York, NY, USA, pp. 1–16, 10.1145/581690.581691.
  • [28] Ross Street (1972): The formal theory of monads. Journal of Pure and Applied Algebra 2(2), pp. 149–168, 10.1016/0022-4049(72)90019-9.
  • [29] K. Terui (2001): Light Affine Lambda Calculus and Polytime Strong Normalization. In: LICS ’01, IEEE Computer Society, pp. 209–220, 10.1109/LICS.2001.932498.
  • [30] Tarmo Uustalu & Varmo Vene (2006): The Essence of Dataflow Programming. Lecture Notes in Computer Science 4164, pp. 135–167, 10.1007/11894100_5.