Certified Mergeable Replicated Data Types
(Extended Version)
Abstract.
Replicated data types (RDTs) are data structures that permit concurrent modification of multiple, potentially geo-distributed, replicas without coordination between them. RDTs are designed in such a way that conflicting operations are eventually deterministically reconciled, ensuring convergence. Constructing correct RDTs remains a difficult endeavour due to the complexity of reasoning about independently evolving states of the replicas. With the focus on the correctness of RDTs (and rightly so), existing approaches to RDTs are less efficient compared to their sequential counterparts in terms of the time- and space-complexity of local operations. This is unfortunate since RDTs are often used in a local-first setting where local operations far outweigh remote communication.
In this paper, we present Peepul, a pragmatic approach to building and verifying efficient RDTs. To make reasoning about correctness easier, we cast RDTs in the mould of a distributed version control system, and equip them with a three-way merge function for reconciling conflicting versions. Further, we go beyond just verifying convergence, and provide a methodology to verify arbitrarily complex specifications. We develop a replication-aware simulation relation to relate RDT specifications to their efficient purely functional implementations. We have developed Peepul as an F* library that discharges proof obligations to an SMT solver. The verified efficient RDTs are extracted as OCaml code and used in Irmin, a Git-like distributed database.
1. Introduction
Modern cloud-based software services often replicate data across multiple geographically distributed locations in order to tolerate partial failures of servers and to minimise latency by bringing data closer to the user. While services like Google Docs allow several users to concurrently edit a document, the conflicts are resolved with the help of a centralised server. On the other hand, services like GitHub and GitLab, built on the decentralised version control system Git, avoid the need for a centralised server, and permit the different replicas (forks) to synchronize with each other in a peer-to-peer fashion. By avoiding a centralised server, local-first software (Kleppmann et al., 2019) such as Git brings additional benefits of security, privacy and user ownership of data.
While Git is designed for line-based editing of text files and requires manual intervention in the presence of merge conflicts, RDTs generalise this concept to arbitrary general-purpose data structures such as lists and hash maps, and ensure convergence without manual intervention. Conflict-free Replicated Data Types (CRDTs) (Shapiro et al., 2011), which arose from distributed systems research, are complete reimplementations of their sequential counterparts aimed at providing convergence without user intervention, and have been deployed in distributed databases such as AntidoteDB (Shapiro et al., 2018) and Riak (Riak, 2021).
In order to resolve conflicting updates, CRDTs generally need to carry their causal contexts as metadata (Yu and Rostad, 2020). Managing this causal context is often expensive and complicated. For example, consider the observed-removed set CRDT (OR-set) (Shapiro et al., 2011), where, in the case of concurrent addition and removal of the same element, the addition wins. A typical OR-set implementation uses two grow-only sets, one for elements added to the set (A) and another for elements that are removed (R). An element e is removed from the OR-set by adding it to the set R, thus creating a tombstone for e. The set membership is given by the difference between the two, A \ R, and two concurrent versions can be merged by unioning the individual A and R sets. Observe that the tombstones for removed elements cannot be garbage collected, as that would require all the replicas to remove the element at the same time, which requires global coordination. This leads to an inefficient implementation. Several techniques have been proposed to minimise this metadata overhead (Yu and Rostad, 2020; Almeida et al., 2018), but the fundamental problem remains.
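To make the cost concrete, the following OCaml sketch shows one way such a tombstone-based OR-set could look; the module and field names are ours, and unique ids are assumed to be supplied by the caller.

(* A minimal sketch of the tombstone-based OR-set described above.
   Elements are tagged with a unique id so that a re-added element
   survives a concurrent remove of its older copies. *)
module Tombstone_orset = struct
  type elt = int * int                 (* (element, unique id) *)
  type t = { added : elt list; removed : elt list }

  let empty = { added = []; removed = [] }

  let add e id s = { s with added = (e, id) :: s.added }

  (* Removing e tombstones every (e, id) pair currently visible. *)
  let remove e s =
    let tombs = List.filter (fun (x, _) -> x = e) s.added in
    { s with removed = tombs @ s.removed }

  let read s =
    List.filter (fun p -> not (List.mem p s.removed)) s.added
    |> List.map fst |> List.sort_uniq compare

  (* Merge is a pairwise union; tombstones are never discarded. *)
  let merge a b =
    { added   = List.sort_uniq compare (a.added @ b.added);
      removed = List.sort_uniq compare (a.removed @ b.removed) }
end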
1.1. Mergeable Replicated Data Types
As an alternative to CRDTs, mergeable replicated data types (MRDTs) (Kaki et al., 2019) have been proposed, which extend the idea of distributed version control to arbitrary data types. The causal context necessary for resolving conflicts is maintained by the MRDT middleware. MRDTs allow ordinary purely functional data structures (Okasaki, 1999) to be promoted to RDTs by equipping them with a three-way merge function that describes the conflict-resolution policy. When conflicting updates need to be reconciled, the causal history is used to determine the lowest common ancestor (LCA) for use in the three-way merge function along with the conflicting states. The MRDT middleware garbage collects the causal histories when appropriate (Dubey, 2021), so causal-history management is no longer a concern for the RDT library developer. This branch-consistent view of replication not only makes it easier to develop individual data types, but also leads to a natural transactional semantics (Crooks et al., 2016; Dubey et al., 2020).
An efficient OR-set MRDT that avoids tombstones can be implemented as follows. We represent the OR-set as a list of pairs of the element and a unique id, which is generated per operation. The list may have duplicate entries of an element with different ids. Adding an element appends the element and id pair to the head of the list (an O(1) operation). Removing an element removes all occurrences of the element from the list (an O(n) operation). Given two concurrent versions of the OR-set a and b, and their lowest common ancestor l, the merge is implemented as (a − l) @ (b − l) @ (l ∩ a ∩ b), where @ stands for list append. Intuitively, we append the lists formed by the newly added elements in a and b with the list of elements that are present in all three versions. The unique id associated with the element ensures that in the presence of concurrent addition and removal of the same element, the newly added element with the fresh id, which has not been seen by the concurrent remove, will remain in the merged result. The merge operation can be implemented in O(n log n) time by sorting the individual lists. In §2.1.2, we show how to make this implementation even more efficient by removing the duplicate elements with different ids from the OR-set.
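The following OCaml sketch illustrates this tombstone-free OR-set; the names are ours, and for clarity the merge below uses naive list operations rather than the sorted O(n log n) variant described above.

(* A sketch of the tombstone-free OR-set from this section: the state is a
   list of (element, id) pairs and merge is a three-way function over the
   two divergent versions and their lowest common ancestor (lca). *)
module Orset_mrdt = struct
  type t = (int * int) list                             (* (element, unique id) *)

  let add e id s = (e, id) :: s                          (* O(1) *)
  let remove e s = List.filter (fun (x, _) -> x <> e) s (* O(n) *)
  let read s = List.map fst s |> List.sort_uniq compare

  let diff xs ys = List.filter (fun x -> not (List.mem x ys)) xs
  let inter xs ys = List.filter (fun x -> List.mem x ys) xs

  (* merge = (a - lca) @ (b - lca) @ (lca ∩ a ∩ b); with sorted lists each
     of these set operations can be done in O(n log n) as described above. *)
  let merge ~lca a b =
    diff a lca @ diff b lca @ inter lca (inter a b)
end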
1.2. Efficiency and correctness
The key question is: how do we guarantee that such efficient implementations still preserve the intent of the OR-set in a sound manner? Optimisations such as removing duplicate elements are notoriously difficult to get right since the replica states evolve independently. Moreover, individually correct RDTs may fail to preserve convergence when put together (Kleppmann, 2020). Kaki et al. (Kaki et al., 2019) opine that merge functions should not be written by hand, but automatically derived from a relational representation of the sequential data type. Their idea is to capture the key properties of the algebraic data type as relations over its constituent elements. Then, the merge function devolves to a merge of these relations (sets) expressed as MRDTs. During merge, the concrete implementations are reified to their relational representations expressed in terms of sets, merged using set semantics, and the final concrete state is reconstructed from the relational set representation.
Unfortunately, mapping complex data types to sets does not lead to efficient implementations. For example, a queue in Kaki et al. is represented by two characteristic relations: a unary relation for membership and a binary relation for ordering. For a queue with n elements, the ordering relation contains O(n²) elements. Reifying the queue to its characteristic relations and back to its concrete representation for every merge is inefficient and impractical. This technique does not scale as the structure of the data type gets richer (red-black trees, JSON, file systems, etc.). The more complex the data type, the more complex the characteristic relations become, with a corresponding impact on the cost of merge. Further, Kaki et al. do not consider the functional correctness of MRDT implementations, and instead focus only on the correctness of convergence.
1.3. Certified MRDTs
Precisely specifying and verifying the functional correctness of efficient RDT implementations is not straightforward due to the complexity of handling conflicts between divergent versions. This results in a huge gap between the high-level specifications and efficient implementations. In this work, we propose to bridge this gap by using Burckhardt et al.’s replication-aware simulation relation (Burckhardt et al., 2014). However, Burckhardt et al.’s simulation is only applicable to CRDTs and cannot be directly extended to MRDTs which assume a different system model.
We first propose a system model and an operational semantics for MRDTs, and precisely define the problem of convergence and functional correctness for MRDTs. We also introduce a new notion of convergence modulo observable behaviour, which allows replicas to converge to different states, as long as their observable behaviour to clients remains the same. This notion allows us to build and verify even more efficient MRDTs.
Further, we go beyond Burckhardt et al.'s work (Burckhardt et al., 2014) in the use of simulation relations by mechanizing and automating (to an extent) the complete verification process. We instantiate our technique as an F* library named Peepul and mechanically verify a number of efficient purely functional MRDT implementations, including an efficient replicated two-list queue. Our replicated queue supports constant-time push and pop operations and a linear-time merge operation, and does not have any tombstones. To the best of our knowledge, ours is the first formal declarative specification of a distributed queue (§6), and its mechanised proof of correctness.
Being an SMT-solver-aided programming language, F* allows us to discharge many of the proof obligations automatically through the SMT solver. Even though our approach requires the simulation relation as input, we observe that in most cases, the simulation relation directly follows from the declarative specification.
Our technique also supports composition, and we demonstrate how parametric polymorphism allows composition of not just the MRDT implementations but also their proofs of correctness. From our MRDT implementations in F*, we extract verified OCaml implementations and execute them on top of Irmin, a Git-like distributed database. Our experimental evaluation shows that our efficient MRDT implementations scale significantly better than other RDT implementations.
To summarize, we make the following contributions:
• We propose a store semantics for MRDT implementations and formally define the convergence and functional correctness problem for MRDTs, including a new notion of convergence modulo observable behaviour.
• We propose a technique to verify both the convergence and functional correctness of MRDTs by adapting the notion of replication-aware simulation relation (Burckhardt et al., 2014) to the MRDT setting.
• We mechanize and automate the complete verification process using F*, and apply our technique to a number of complex MRDT implementations, including a new time- and space-efficient OR-set and a queue MRDT.
• We provide experimental results which demonstrate that our efficient MRDT implementations perform much better than previous implementations, and also show the trade-off between proof automation and verification time in F*.
The rest of the paper is organized as follows. §2 presents the implementation model and the declarative specification framework for MRDTs. §3 presents the formal semantics of the Git-like replicated store on which MRDT implementations run. In §4, we present a new verification strategy for MRDTs based on the notion of replication-aware simulation. §5 highlights the compositionality of our technique in verifying complex MRDTs by reusing the proofs of simpler ones. §6 presents the formally verified efficient replicated queue. §7 presents the experimental evaluation, and §8 discusses related work.
2. Implementing and Specifying MRDTs
In this section, we present the formal model for describing MRDT implementations and their specifications.
2.1. Implementation
Our model of a replicated datastore is similar to a distributed version control system like Git (Git, 2021), with replication centred around versioned states in branches and explicit merges. A typical replicated datastore will have a key-value interface with the capability to store arbitrary objects as values (Riak, 2021; Irmin, 2021). Since our goal is to verify correct implementations of individual replicated objects, our formalism models a store with a single object.
A replicated datastore consists of an object which is replicated across multiple branches. Clients interact with the store by performing operations on the object at a specified branch, modifying its local state. The different branches may concurrently update their local states and progress independently. We also allow dynamic creation of a new branch by copying the state of an existing branch. A branch at any time can get updates from any other branch by performing a merge with that branch, updating its local copy to reflect the merge. Conflicts might arise when the same object is modified in two or more branches, and these are resolved in a data-type-specific way.
An object has a type τ, whose type signature (Op_τ, Val_τ) determines the set of supported operations Op_τ and the set of their return values Val_τ. A special value ⊥ ∈ Val_τ is used for operations that return no value.
Definition 2.1.
A mergeable replicated data type (MRDT) implementation for a data type τ is a tuple D_τ = (Σ, σ0, do, merge) where:
• Σ is the set of all possible states at a branch,
• σ0 ∈ Σ is the initial state,
• do : Σ × Timestamp × Op_τ → Σ × Val_τ implements every data type operation,
• merge : Σ × Σ × Σ → Σ implements the three-way merge strategy.
An MRDT implementation provides two methods, do and merge, that the datastore invokes appropriately. We assume that these methods execute atomically. A client request to perform an operation o at a branch triggers the call do(σ, t, o). This takes the current state σ of the object at the branch where the request is issued and a timestamp t provided by the datastore, and produces the updated object state and the return value of the operation.
The datastore guarantees that the timestamps are unique across all of the branches, and for any two operations o1 and o2 with timestamps t1 and t2, if o1 happens-before o2, then t1 < t2. The data type implementation can use the timestamp provided to implement the conflict-resolution strategy, but is also free to ignore it. For simplicity of presentation, we assume that the timestamps are positive integers. The datastore may choose to implement the timestamps using Lamport clocks (Lamport, 1978), along with the unique branch id to provide uniqueness of timestamps.
A branch b1 may get updates from another branch b2 by performing a merge, which modifies the state of the object in branch b1. In this case, the datastore will invoke merge(σlca, σ1, σ2), where σ1 and σ2 are the current states of branches b1 and b2 respectively, and σlca is the state of the lowest common ancestor (LCA) of the two branches. The LCA of two branches is the most recent state from which the two branches diverged. We assume that execution of the store will begin with a single branch, from which new branches may be dynamically created. Hence, for any two branches, the LCA will always exist.
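As a running illustration of this interface, a hypothetical OCaml signature for an MRDT implementation might look as follows; the names (init, do_, merge) are ours and do not correspond to any concrete Peepul module.

(* A hypothetical OCaml rendering of an MRDT implementation (Σ, σ0, do, merge);
   the store calls do_ with a unique timestamp and merge with the lowest
   common ancestor of the two branches. *)
module type MRDT = sig
  type state                                   (* Σ *)
  type op
  type value
  val init  : state                            (* σ0 *)
  val do_   : state -> timestamp:int -> op -> state * value
  val merge : lca:state -> state -> state -> state
end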
2.1.1. OR-set
We illustrate MRDT implementations using the example of an OR-set. Recall from §1 that the OR-set favours the addition in the case where there is a concurrent addition and removal of the same element on different branches.
Let us assume that the elements in the OR-set are natural numbers. Its type signature would be Op = {add(a), remove(a) | a ∈ ℕ} ∪ {read} and Val = 𝒫(ℕ) ∪ {⊥}. Figure 1 shows an MRDT implementation of the OR-set data type. The state of the object is a set of pairs of the element and the timestamp. The operations and the merge remain the same as described in §1.1. Note that we use the fst and snd functions to obtain the first and second elements respectively from a tuple. This implementation may have duplicate entries of the same element with different timestamps.
2.1.2. Space-efficient OR-set (OR-set-space)
One possibility to make this OR-set implementation more space-efficient is to remove the duplicate entries from the set. A duplicate element will appear in the set if the client calls add(e) for an element e which is already in the set. Can we reimplement add such that we leave the set as is if the set already has e? Unfortunately, this breaks the intent of the OR-set. In particular, if there were a concurrent remove of e on a different branch, then e will be removed when the branches are merged. The key insight is that the effect of the duplicate add has to be recorded so as to not lose additions.
Figure 2 provides the implementation of the space-efficient OR-set. The read and the remove operations remain the same as the earlier implementation. If the element being added is not already present in the set, then the element is added to the set along with the timestamp. Otherwise, the timestamp of the existing entry is updated to the new timestamp. Given that our timestamps are unique, the new operation’s timestamp will be distinct from the old timestamp. This prevents a concurrent remove from deleting this new addition.
Another possibility of duplicates is that the same element may concurrently be added on two different branches. The implementation of the merge function now has to take care of this possibility and not include duplicates. An element in the merged set was either in the lca and the two concurrent states (line 8), or was only added in one of the branches (lines 9 and 10), or was added in both the branches in which case we pick the entry with the larger timestamp (lines 11–14).
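A possible rendering of this merge in OCaml is sketched below, following the three cases just described; the helper names are ours, and the actual Figure 2 implementation may differ in detail.

(* A sketch of the OR-set-space merge over duplicate-free states. *)
type t = (int * int) list            (* (element, timestamp), no duplicate elements *)

let merge ~(lca : t) (a : t) (b : t) : t =
  let mem s p = List.mem p s in
  (* case 1: pairs untouched on both branches *)
  let stable = List.filter (fun p -> mem a p && mem b p) lca in
  (* pairs newly added (or re-added with a fresh timestamp) on each branch *)
  let new_a = List.filter (fun p -> not (mem lca p)) a in
  let new_b = List.filter (fun p -> not (mem lca p)) b in
  (* case 2: element added on exactly one branch *)
  let only_a = List.filter (fun (e, _) -> not (List.mem_assoc e new_b)) new_a in
  let only_b = List.filter (fun (e, _) -> not (List.mem_assoc e new_a)) new_b in
  (* case 3: element added on both branches: keep the larger timestamp *)
  let both =
    List.filter_map
      (fun (e, ta) ->
        match List.assoc_opt e new_b with
        | Some tb -> Some (e, max ta tb)
        | None -> None)
      new_a
  in
  stable @ only_a @ only_b @ both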
2.2. Specification
Given that there are several candidates for implementing an MRDT, we need a way to specify the behaviour of an MRDT so that we may ask the question of whether a given implementation satisfies the specification. We now present a declarative framework for specifying MRDTs which closely follows the framework presented by Burckhardt et al. (Burckhardt et al., 2014). We define our specifications on an abstract state, which captures the state of the distributed store. It consists of the events in an execution of the distributed store, along with a visibility relation among them.
Definition 2.2.
An abstract state for a data type τ is a tuple I = (E, oper, rval, time, vis), where
• E is a set of events,
• oper : E → Op_τ associates the data type operation with each event,
• rval : E → Val_τ associates the return value with each event,
• time : E → Timestamp associates the timestamp at which an event was performed,
• vis ⊆ E × E is an irreflexive, asymmetric and transitive visibility relation.
Given (e1, e2) ∈ vis, e1 is said to causally precede e2. In our setting, it may be the case that the operation of e2 follows the operation of e1 on the same branch, or the operations of e1 and e2 were performed on different branches b1 and b2, but before the operation of e2, the branch on which the operation of e1 was performed was merged into b2.
We specify a data type by a function F_τ which determines the return value of an operation o based on the prior operations applied on that object. F_τ also takes as a parameter the abstract state that is visible to the operation. Note that the abstract state contains all the information that is necessary to specify the return value of o.
Definition 2.3.
A replicated data type specification for a type τ is a function F_τ that, given an operation o ∈ Op_τ and an abstract state I for τ, specifies a return value F_τ(o, I) ∈ Val_τ.
2.2.1. OR-set specification
As an illustration of the specification language, let us consider the OR-set. For the OR-set, both add and remove operations always return ⊥. We can formally specify the 'add-wins' conflict resolution strategy as follows:

F_orset(read, (E, oper, rval, time, vis)) = { a | ∃e ∈ E. oper(e) = add(a) ∧ ¬(∃f ∈ E. oper(f) = remove(a) ∧ (e, f) ∈ vis) }
In words, the read operation returns all those elements for which there exists an add operation of the element which is not visible to a remove operation of the same element. Hence, if an add and a remove operation are concurrent, then the add would win. Notice that the specification, while precisely encoding the required semantics, is far removed from the MRDT implementations of the OR-set that we saw earlier. Providing a framework for bridging this gap in an automated and mechanized manner is one of the principal contributions of this work.
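To make the specification concrete, the following OCaml sketch evaluates the add-wins read specification over an explicit abstract state; the event record and visibility representation are illustrative choices, not the paper's formal definitions.

(* An executable rendering of the add-wins specification: read returns every
   element with an add event that is not visible to any remove of the same
   element. *)
type op = Add of int | Remove of int | Read
type event = { id : int; oper : op; time : int }
type abs_state = { events : event list; vis : (int * int) list }  (* (e1.id, e2.id) pairs *)

let spec_read (i : abs_state) : int list =
  let visible e1 e2 = List.mem (e1.id, e2.id) i.vis in
  List.filter_map
    (fun e ->
      match e.oper with
      | Add a when
          not (List.exists
                 (fun f -> f.oper = Remove a && visible e f)
                 i.events) -> Some a
      | _ -> None)
    i.events
  |> List.sort_uniq compare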
3. Store Semantics and MRDT Correctness
In this section, we formally define the semantics of a replicated datastore consisting of a single object with data type implementation D_τ. Note that the store semantics can be easily generalized to multiple objects (with possibly different data types), since the store treats each object independently. We then formally define what it means for data type implementations to satisfy their specifications. We also introduce a novel notion of convergence across all the branches called convergence modulo observable behaviour that differs from the standard notion of eventual consistency. This property allows us to have more efficient but verified merges.
The semantics of the store is a set of all its executions. In order to easily relate the specifications, which are in terms of abstract states, to the implementation, we maintain both the concrete state (as given by the data type implementation) and the abstract state at every branch in our store semantics. Formally, the semantics of the store are parametrised by a data type τ and its implementation D_τ, and are represented by a labelled transition system. Assume that B is the set of all possible branches. Each state of the transition system is a tuple (φ, δ, t) where,
• φ : B ⇀ Σ is a partial function that maps branches to their concrete states,
• δ : B ⇀ I is a partial function that maps branches to their abstract states,
• t ∈ ℕ maintains the current timestamp to be supplied to operations.
The initial state of the labelled transition system consists of only one branch b0, and is represented by (φ0, δ0, 0) where φ0 = [b0 ↦ σ0] and δ0 = [b0 ↦ I0].
Here, σ0 is the initial state as given by the implementation D_τ, while I0 is the empty abstract state, whose event set is empty. In order to describe the transition rules, we first introduce abstract operations do#, merge# and lca#, which respectively perform a data type operation, a merge operation and the computation of the lowest common ancestor on abstract states.
In terms of abstract states, do# simply adds the new event to the set of events, appropriately setting the various event properties and the visibility relation. The merge# of two abstract states simply takes the union of the events in the two states. Similarly, the lca# of two abstract states is the intersection of the events in the two states.
Figure 3 describes the transition function. The first rule describes the creation of a new branch b' from the current branch b. Both the concrete and abstract states of the new branch will be the same as those of b. The second rule describes a branch performing an operation, which triggers a call to the do method of the corresponding data type implementation. The return value is recorded using the rval function. A similar update is also performed on the abstract state of the branch using do#. The third rule describes the merging of a branch b2 into a branch b1, which triggers a call to the merge method of the data type implementation. We assume that the store provides another branch whose abstract and concrete states correspond to the lowest common ancestor of the two branches.
Definition 3.1.
An execution is a finite but unbounded sequence of transitions of this labelled transition system, starting from the initial state:

(φ0, δ0, 0) → (φ1, δ1, t1) → (φ2, δ2, t2) → ⋯ → (φn, δn, tn)    (1)
Definition 3.2.
An execution satisfies the specification F_τ for the data type, if for every transition in it that performs an operation o ∈ Op_τ at a branch b and returns a value v, we have v = F_τ(o, δ(b)), where δ(b) is the abstract state of branch b at which the operation is applied.
That is, for every operation, the return value computed by the implementation on the concrete state must be equal to the return value of the specification function computed on the abstract state. Next, we define the notion of convergence (i.e. strong eventual consistency) in our setting:
Definition 3.3.
An execution (as in equation 1) is convergent if for every state (φi, δi, ti) in the execution and for all branches b1, b2 ∈ dom(φi), δi(b1) = δi(b2) implies φi(b1) = φi(b2).
That is, two branches with the same abstract states–which corresponds to having seen the same set of events–must also have the same concrete state. We note that even though eventual consistency requires two branches to converge to the same state, from the point of view of a client that uses the data store, this state is never directly visible. That is, a client only notices the operations and their return values. Based on this insight, we define the notion of observational equivalence between states, and a new notion of convergence modulo observable behaviour that requires branches to converge to states that are observationally equivalent.
Definition 3.4.
Two states σa and σb are observationally equivalent, written as σa ~ σb, if the return value of every operation supported by the data type applied on the two states is the same. Formally, σa ~ σb ⟺ ∀o ∈ Op_τ, t. snd(do(σa, t, o)) = snd(do(σb, t, o)).
Definition 3.5.
An execution (as in equation 1) is convergent modulo observable behavior if for every state (φi, δi, ti) in the execution and for all branches b1, b2 ∈ dom(φi),

δi(b1) = δi(b2) ⟹ φi(b1) ~ φi(b2)    (2)
The idea behind convergence modulo observable behaviour is that the state of the object at different replicas may not converge to the same (structurally equal) representation, but the object has the same observable behaviour in terms of its operations. For example, in the OR-set implementation, if the set is implemented internally as a binary search tree (BST), then branches can independently decide to perform balancing operations on the BST to improve the complexity of subsequent read operations. This would mean that the actual states of the BSTs at different branches may eventually not be structurally equal, but they would still contain the same set of elements, resulting in the same observable behaviour. Note that the standard notion of eventual consistency implies convergence modulo observable behaviour.
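A tiny OCaml example of this situation: two structurally different binary search trees that are observationally equivalent with respect to a read operation (the tree shapes below are illustrative).

(* Two structurally different BST states that are observationally
   equivalent: read returns the same sorted list from both. *)
type tree = Leaf | Node of tree * int * tree

let rec read = function
  | Leaf -> []
  | Node (l, x, r) -> read l @ [x] @ read r

let t1 = Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf))   (* balanced *)
let t2 = Node (Leaf, 1, Node (Leaf, 2, Node (Leaf, 3, Leaf)))   (* right spine *)
let () = assert (read t1 = read t2)                             (* [1; 2; 3] in both cases *)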
Definition 3.6.
A data type implementation D_τ is correct if every execution of the store satisfies the specification F_τ and is convergent modulo observable behavior.
4. Proving Data Type Implementations Correct
In the previous section, we have defined what it means for an MRDT implementation to be correct with respect to the specification. In this section, we show how to prove the correctness of an MRDT implementation with the help of replication-aware simulation relations.
4.1. Replication-aware simulation
For proving the correctness of a data type implementation D_τ, we use replication-aware simulation relations R_sim. While similar to the simulation relations used in Burckhardt et al. (Burckhardt et al., 2014), in this work, we apply them to MRDTs rather than CRDTs. Further, we also mechanize and automate simulation-based proofs by deriving simple sufficient conditions which can be easily discharged by tools such as F*. Finally, we apply our proof technique on a wide range of MRDTs, with substantially complex specifications (e.g. the queue MRDT described in §6).
The relation R_sim essentially associates the concrete state of the object at a branch with the abstract state at the branch. This abstract state consists of all events which were applied on the branch. Verifying the correctness of an MRDT through simulation relations involves two steps: (i) first, we show that the simulation relation holds at every transition in every execution of the replicated store, and (ii) we show that the simulation relation meets the requirements of the data type specification and is sufficient for convergence. The first step is essentially an inductive argument, for which we require the simulation relation between the abstract and concrete states to hold for every data type operation instance and merge instance. These two steps are depicted pictorially in figures 4 and 5, respectively.
Figure 4 considers the application of a data type operation (through the do function) at a branch. Assuming that the simulation relation R_sim holds between the abstract state and the concrete state at the branch, we have to show that R_sim continues to hold after the application of the operation through the concrete do function of the implementation and the abstract do# function on the abstract state.
Figure 5 considers the application of a merge operation between two branches. In this case, assuming R_sim between the abstract and concrete states at the two branches and for the LCA, we then show that R_sim continues to hold between the concrete and abstract states obtained after merge. Note that since the concrete merge operation also uses the concrete LCA state, we also assume that R_sim holds between the concrete and abstract LCA states.
These conditions consider the effect of concrete and abstract operations locally and thus enable automated verification. In order to discharge these conditions, we also consider two store properties, Ψ_ts and Ψ_lca, that hold across all executions (shown in Table 1). Ψ_ts pertains to the nature of the timestamps associated with each operation, while Ψ_lca characterizes the lowest common ancestor used for merge. These properties hold naturally due to the semantics of the replicated store, and they play an important role in discharging the conditions required for validity of the simulation relation.
In particular, Ψ_ts asserts that in an abstract state, causally related events have increasing timestamps, and no two events have the same timestamp. Ψ_lca is instantiated with the LCA of two abstract states Ia and Ib (i.e. lca#(Ia, Ib)), and asserts that the visibility relation between events which are present in both Ia and Ib (and hence also in the LCA) will be the same in all three abstract states. Further, every event in the LCA will be visible to newly added events in either of the two branches. These properties follow naturally from the definition of LCA and are also maintained by the store semantics.
Table 2 shows the conditions required for proving the validity of the simulation relation R_sim. In particular, the conditions for do and merge exactly encode the scenarios depicted in figures 4 and 5. Note that for the do condition, we assume Ψ_ts for the input abstract state on which the operation will be performed. Similarly, for the merge condition, we assume Ψ_ts for all events in the merged abstract state (thus ensuring Ψ_ts also holds for events in the original branches) and Ψ_lca for the LCA of the abstract states.
Once we show that the simulation relation is maintained at every transition in every execution inductively, we also have to show that it is strong enough to imply the data type specification as well as guarantee convergence. For this, we define two more conditions (also in Table 2). The first says that if an abstract state and a concrete state are related by R_sim, then the return value of an operation performed on the concrete state should match the value of the specification function on the abstract state. Since the relation is maintained at every transition, if this condition is valid, then the implementation will clearly satisfy the specification. Finally, for convergence, we require that if two concrete states are related to the same abstract state, then they should be observationally equivalent. This corresponds to our proposed notion of convergence modulo observable behavior.
Definition 4.1.
Given an MRDT implementation D_τ of data type τ, a replication-aware simulation relation R_sim is valid if it satisfies all the conditions of Table 2.
Theorem 4.2 (Soundness).
Given an MRDT implementation D_τ of data type τ, if there exists a valid replication-aware simulation relation R_sim, then the data type implementation is correct. (The proof of the soundness theorem can be found in Appendix A.)
4.2. Verifying OR-sets using simulation relations
Let us look at the simulation relations for verifying OR-set implementations in §2.1 against the specification in §2.2.1.
OR-set.
Following is a candidate valid simulation relation for the unoptimized OR-set from §2.1.1:
R_sim(I, σ) ⟺ ∀(a, t) ∈ σ. ∃e ∈ I.E. oper(e) = add(a) ∧ time(e) = t ∧ ¬(∃f ∈ I.E. oper(f) = remove(a) ∧ (e, f) ∈ vis)    (3)
The simulation relation says that for every pair of an element and a timestamp in the concrete state, there should be an event in the abstract state which adds the element with the same timestamp, and there should not be a remove event of the same element which witnesses that add event. This simulation relation is maintained by all the set operations as well as by the merge operation, and it also matches the OR-set specification and guarantees convergence. We use F* to automatically discharge all the proof obligations of Table 2.
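An executable reading of this simulation relation (in its concrete-to-abstract direction) might look as follows in OCaml; the event and state representations are illustrative, not the F* definitions.

(* For every (element, timestamp) pair in the concrete state there is an add
   event with that timestamp and no visible remove of the same element. *)
type op = Add of int | Remove of int
type event = { id : int; oper : op; time : int }
type abs_state = { events : event list; vis : (int * int) list }
type conc_state = (int * int) list          (* (element, timestamp) pairs *)

let rsim (i : abs_state) (s : conc_state) : bool =
  let visible e f = List.mem (e.id, f.id) i.vis in
  let justified (a, t) =
    List.exists
      (fun e ->
        e.oper = Add a && e.time = t
        && not (List.exists (fun f -> f.oper = Remove a && visible e f) i.events))
      i.events
  in
  List.for_all justified s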
Space-efficient OR-set.
Following is a candidate valid simulation relation for the space-efficient OR-set (OR-set-space) from §2.1.2:
R_sim(I, σ) ⟺ ∀a, t. ((a, t) ∈ σ ⟺ (∃e ∈ I.E. oper(e) = add(a) ∧ time(e) = t ∧ ¬(∃f ∈ I.E. oper(f) = remove(a) ∧ (e, f) ∈ vis) ∧ (∀e' ∈ I.E. (oper(e') = add(a) ∧ ¬(∃f ∈ I.E. oper(f) = remove(a) ∧ (e', f) ∈ vis)) ⟹ time(e') ≤ time(e))))    (4)
The simulation relation in this case captures all the constraints of the one for the OR-set with duplicates, but has additional constraints on the timestamps of the elements in the concrete state. In particular, for an element in the concrete state, the timestamp associated with that element will be the greatest timestamp among all the add events of the same element in the abstract state which have not been witnessed by a remove event. Finally, we also need to capture the constraint in the abstract-to-concrete direction. If there is an add event not seen by a remove event on the same element, then the element is a member of the concrete state. As before, the proof obligations of Table 2 are discharged through F*.
5. Composing MRDTs
A key benefit of our technique is that compound data types can be constructed by the composition of simpler data types through parametric polymorphism. The proofs of correctness of the compound data types can be constructed from the proofs of the underlying data types.
5.1. IRC-style chat
To illustrate the benefits of compositionality, we consider a decentralised IRC-like chat application with multiple channels. Each channel maintains the list of messages in reverse chronological order so that the most recent message may be displayed first. For simplicity, we assume that the channels are append-only; while new messages can be posted to the channels, old messages cannot be deleted. We also assume that while new channels may be created, existing channels may not be deleted.
The chat application supports sending a message to a channel and reading the messages in a channel. The specification of this chat application is given in Figure 6. For this we define a predicate ord such that ord(m1, m2, l) holds iff the messages m1 and m2 both occur in the list l and m1 occurs before m2. The specification essentially says the log of messages contains all (and only those) messages that were sent, and messages are ordered in reverse chronological order.
Rather than implement this chat application from scratch, we may quite reasonably build it using existing MRDTs. We may use an MRDT map to store the association between the channel names and the list of messages. Given that the conversations take place in a decentralized manner, the list of messages in each channel should also be mergeable. For this purpose, we use a mergeable log, an MRDT list that totally orders the messages based on the message timestamp, to store the messages in each of the channels. As mentioned earlier, for simplicity we will assume that the map and the log are grow-only.
5.2. Mergeable log
The mergeable log MRDT supports operations to append messages to the log and to read the log. The log maintains messages in reverse chronological order. Figure 7 presents the specification, implementation and the simulation relation of the mergeable log. The sorting function used there orders the list in reverse chronological order based on the timestamps associated with the messages.
5.3. Generic map
We introduce a generic map MRDT, α-map, which associates string keys with values, where the value stored in the map is itself an MRDT. The α-map is parameterised on an MRDT α and its implementation D_α, and supports get and set operations, each of which takes a key and an operation drawn from Op_α, the set of operations on the underlying value MRDT.
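A sketch of how such a generic map could be realised as an OCaml functor over an arbitrary value MRDT is shown below; it reuses the hypothetical MRDT signature sketched in §2.1, and the actual Figure 8 implementation may differ in detail.

(* A functor building an α-map over any value MRDT V: get and set delegate to
   V's do_, and merge applies V's three-way merge key by key. *)
module type MRDT = sig
  type state
  type op
  type value
  val init  : state
  val do_   : state -> timestamp:int -> op -> state * value
  val merge : lca:state -> state -> state -> state
end

module Alpha_map (V : MRDT) = struct
  type state = (string * V.state) list            (* key -> value-MRDT state *)

  let init = []

  let lookup k (m : state) =
    match List.assoc_opt k m with Some v -> v | None -> V.init

  (* get applies the operation without updating the binding;
     set updates the binding with the new value state. *)
  let get k op ~timestamp m = snd (V.do_ (lookup k m) ~timestamp op)
  let set k op ~timestamp m =
    let v', _ = V.do_ (lookup k m) ~timestamp op in
    (k, v') :: List.remove_assoc k m

  (* Merge every key present in any of the three versions using V.merge. *)
  let merge ~lca a b : state =
    let keys = List.sort_uniq compare (List.map fst (lca @ a @ b)) in
    List.map
      (fun k -> (k, V.merge ~lca:(lookup k lca) (lookup k a) (lookup k b)))
      keys
end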
Figure 8 shows the specification, implementation and the simulation relation of α-map. The implementations of the get and set operations both fetch the current value associated with the key (or the initial state of α if the key is not present in the map), and apply the given operation from the implementation D_α on this value. While set updates the binding in the map, get does not do so and simply returns the value returned by the operation. The merge operation merges the values for each key using the merge function of D_α. The specification and simulation relation of α-map use the specification and simulation relation of the underlying MRDT α, by projecting the events associated with each key to an abstract execution of α. We now provide the details of this projection function.
5.4. Projection function
Figure 9 gives the projection function which, when given an abstract execution of α-map, projects all the α-events associated with a particular key to define an abstract execution of α. There is a one-to-one correspondence between the α-events on that key in the α-map execution and the events in the projected execution, with the corresponding events preserving the operation type, return values, timestamps and the visibility relation. The projection function, as used in the specification of α-map, ensures that the return values of α-events obey the specification F_α as applied to the projected α-execution.
Similarly, the simulation relation of α-map requires the simulation relation of α to hold for every key, between the value associated with the key and the corresponding projected execution for the key. We can now verify the correctness of the generic α-map MRDT by relying on the correctness of α. That is, if R_sim^α is a valid simulation relation for the implementation D_α, then the lifted relation is a valid simulation relation for α-map. This allows us to build the proof of correctness of α-map using the proof of correctness of α.
For our chat application, we instantiate α-map with the mergeable log. The chat application itself is a wrapper around the log-map MRDT as shown in Fig. 10. In order to verify the correctness of the chat application, we only need to separately verify the α-map and the mergeable log. Note that one can instantiate α with any verified MRDT implementation to obtain a verified α-map MRDT.
6. Case study: A Verified Queue MRDT
Okasaki (Okasaki, 1999) describes a purely functional queue with amortized O(1) time complexity for enqueue and dequeue operations. This queue is made up of two lists that hold the front and rear parts of the queue. Elements are enqueued to the rear list and dequeued from the front list (both are O(1) operations). If the front list is found to be empty at dequeue, then the rear list is reversed and made to be the front list (an O(n) operation). Since each element is part of exactly one reverse operation, the enqueue and the dequeue have an amortized time complexity of O(1). In this section, we show how to convert this efficient sequential queue into an MRDT by providing additional semantics to handle concurrent operations.
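For reference, a minimal OCaml version of this standard two-list queue is given below; the function names are ours.

(* The classic two-list functional queue: enqueue pushes onto the rear list,
   dequeue pops from the front list, reversing the rear into the front when
   the front is exhausted (amortised O(1)). *)
type 'a queue = { front : 'a list; rear : 'a list }

let empty = { front = []; rear = [] }

let enqueue x q = { q with rear = x :: q.rear }

let rec dequeue q =
  match q.front with
  | x :: front' -> Some (x, { q with front = front' })
  | [] ->
    (match q.rear with
     | [] -> None                                        (* queue is empty *)
     | _ -> dequeue { front = List.rev q.rear; rear = [] })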
For simplicity of specification, we tag each enqueued element with the unique timestamp of the enqueue operation, which ensures that all the elements in the queue are unique. The queue supports two operations, enqueue(v) and dequeue, where v is drawn from some value domain. Unlike a sequential queue, we follow an at-least-once dequeue semantics: an element inserted into the queue may be consumed by concurrent dequeues on different branches. At-least-once semantics is common for distributed queueing services such as Amazon Simple Queue Service (SQS) (Amazon, 2006) and RabbitMQ (RabbitMQ, 2007). At a merge, concurrent enqueues are ordered according to their timestamps.
6.1. Merge function of the replicated queue
To illustrate the three-way merge function, consider the execution presented in figure 11. For simplicity, we assume that the timestamps are the same as the values enqueued. Starting from the LCA, each branch performs a sequence of dequeue and enqueue operations. The resulting versions are then merged. Observe that in the merged result, the elements 1 and 2 which were dequeued (with 1 dequeued on both the branches!) are not present. Elements 3, 4 and 5 which are present in all three versions are present in the merged result. Newly inserted elements appear at the suffix, sorted according to their timestamps.
The merge function first converts each of the queues to a list, and finds the longest common contiguous subsequence between the three versions ([3,4,5]). The newly enqueued elements are the suffixes following this common subsequence: [8,9] and [6,7] in queues A and B, respectively. The final merged result is obtained by appending to the common subsequence the two suffixes merged according to their timestamps. Each of these operations has a time complexity of O(n), where n is the length of the longest list. Hence, the merge function is also an O(n) operation. (The implementation of the queue operations and the merge function is available in Appendix B.)
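The following OCaml sketch captures the spirit of this merge under a simplifying assumption: elements are unique (value, timestamp) pairs and dequeues happen only at the front, so the lca elements surviving in both branches serve as the common part. The actual implementation in Appendix B computes a longest common contiguous subsequence instead.

(* Queues are represented as lists of (value, timestamp) pairs, front first. *)
let rec merge_by_ts xs ys =
  match xs, ys with
  | [], l | l, [] -> l
  | ((_, tx) as x) :: xs', ((_, ty) as y) :: ys' ->
    if tx <= ty then x :: merge_by_ts xs' ys else y :: merge_by_ts xs ys'

let merge ~lca a b =
  let mem l x = List.mem x l in
  (* lca elements still present in both branches form the common part *)
  let common = List.filter (fun x -> mem a x && mem b x) lca in
  (* elements not in the lca were newly enqueued on that branch *)
  let fresh branch = List.filter (fun x -> not (mem lca x)) branch in
  common @ merge_by_ts (fresh a) (fresh b)

(* On the example of figure 11 this yields [3; 4; 5; 6; 7; 8; 9]. *)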
6.2. Specification of the replicated queue
We now provide the specification for the queue MRDT, which is based on the declarative queue specification of Nagar et al. (Nagar et al., 2020). In particular, compared to the sequential queue, the only constraint that we relax is allowing multiple dequeues of the same element.
In order to describe the specification, we first introduce a number of axioms which declaratively specify different aspects of queue behaviour. Consider the predicate match defined for a pair of events in an abstract execution: match(e1, e2) holds iff e1 is an enqueue event, e2 is a dequeue event, and e2 returns the value enqueued by e1. Let EMPTY be the value returned by a dequeue when the queue is empty. We define the following axioms:
• AddRem: every dequeue event which does not return EMPTY has a matching enqueue event.
• EmptyRem: if a dequeue event returns EMPTY, there is no unmatched enqueue visible to it.
• FIFO-1 and FIFO-2: together these capture the first-in-first-out ordering; if an enqueue e1 is visible to an enqueue e2, then the element inserted by e1 is dequeued before the element inserted by e2.
These axioms essentially encode queue semantics. AddRem says that for every dequeue event which does not return EMPTY, there must exist a matching enqueue event. EmptyRem says that if a dequeue event returns EMPTY, there should not be an unmatched enqueue visible to it. Finally, FIFO-1 and FIFO-2 encode the first-in-first-out nature of the queue. These axioms ensure that if an enqueue event e1 was visible to another enqueue event e2, then the element inserted by e1 will be dequeued first. Notice that a sequential queue would also have an injectivity axiom, which disallows multiple dequeues to be matched to an enqueue, but we do not enforce this requirement for the replicated queue.
To define the specification function F_queue, we first note that the enqueue operation always returns ⊥. For an abstract state I, F_queue(dequeue, I) returns a value v such that if we add the new dequeue event returning v to the abstract state I, then the resulting abstract state satisfies all the queue axioms.
Notice how the queue axioms are substantially different from the way the MRDT queue is actually implemented. The simulation relation that we use to bridge this gap and relate the implementation with the abstract state is actually very straightforward: we simply say that for every element present in the concrete state of the queue, there must be an enqueue event without a matching dequeue. We also assert the other direction, and enforce the queue axioms on the abstract state. The complete simulation relation can be found in the supplemental material. We were able to successfully discharge the conditions for validity of the simulation relation using F*.
7. Evaluation
In this section, we evaluate the instantiation of the formalism developed thus far in Peepul, an F* library of certified efficient MRDTs. We first discuss the verification effort followed by the performance evaluation of efficient MRDTs compared to existing work. These results were obtained on a 2-socket Intel® Xeon® Gold 5120 x86-64 (Intel Xeon Gold 5120, 2020) server running Ubuntu 18.04 with 64 GB of main memory.
7.1. Verification in F*
F*’s core is a functional programming language inspired by ML, with support for program verification using refinement types and monadic effects. Though F* has support for built-in effects, the Peepul library only uses the pure fragment of the language. Given that we can extract OCaml code from our verified implementations in F*, we are able to directly utilise our MRDTs on top of Irmin (Irmin, 2021), a Git-like distributed database, whose execution model fits the MRDT system model.
As part of the Peepul library, we have implemented and verified 9 MRDTs – increment-only counter, PN counter, enable-wins flag, last-writer-wins register, grows-only set, grows-only map, mergeable log, observed-remove set and functional queue. Our specifications capture both the functional correctness of local operations as well as the semantics of the concurrent conflicting operations.
F*’s support for type classes provides a modular way to implement and verify MRDTs. The Peepul library defines an MRDT type class that captures the sufficient conditions to be proved for each MRDT as given in Table 2. This library contains 124 lines of F* code. Each MRDT is a specific instance of the type class which satisfies the conditions. It is useful to note that our MRDT sets, maps and queues are polymorphic in their contents and may be plugged with other MRDTs to construct more complex MRDTs, as seen in §5.
MRDTs verified | #Lines code | #Lines proof | #Lemmas | Verif. time (s) |
Increment-only counter | 6 | 43 | 2 | 3.494 |
PN counter | 8 | 43 | 2 | 23.211 |
Enable-wins flag | 20 | 58 | 3 | 1074 |
(with additional lemmas) | | 81 | 6 | 171
(with additional lemmas) | | 89 | 7 | 104
LWW register | 5 | 44 | 1 | 4.21 |
G-set | 10 | 23 | 0 | 4.71 |
(with additional lemmas) | | 28 | 1 | 2.462
(with additional lemmas) | | 33 | 2 | 1.993
G-map | 48 | 26 | 0 | 26.089 |
Mergeable log | 39 | 95 | 2 | 36.562 |
OR-set (§2.1.1) | 30 | 36 | 0 | 43.85 |
(with additional lemmas) | | 41 | 1 | 21.656
(with additional lemmas) | | 46 | 2 | 8.829
OR-set-space (§2.1.2) | 59 | 108 | 7 | 1716 |
OR-set-spacetime | 97 | 266 | 7 | 1854 |
Queue | 32 | 1123 | 75 | 4753 |
Table 3 tabulates the verification effort for each MRDT in the Peepul library. We include three versions of OR-sets:
• OR-set: the unoptimized one from §2.1.1 which uses a list for storing the elements and contains duplicates.
• OR-set-space: the space-optimized one from §2.1.2 which also uses a list but does not have duplicates.
• OR-set-spacetime: a space- and time-optimized one which uses a binary search tree for storing the elements and has no duplicates. The merge function produces a height-balanced binary tree.
The lines of code column represents the number of lines for implementing the data structure without counting the lines for refinements, lemmas, theorems and proofs. This is approximately the number of lines of code there would be if the data structures were implemented in OCaml. Everything else that has to do with verification is included in the lines of proof. It is useful to note that the lines of proof for simple MRDTs such as the counter and the last-writer-wins (LWW) register are high compared to the lines of code, since we also specify and prove their full functional correctness.
For many of the proofs, F* is able to automatically verify the properties either without any lemmas or just a few, thanks to F* discharging the proof obligations to the SMT solver. Most of the proofs are a few tens of lines of code with the exception of queues. In queues, the implementation is far removed from the specification, and hence, additional lemmas were necessary to bridge this gap.
F* allows the user to provide additional lemmas that help the solver arrive at the proof faster. We illustrate this for the enable-wins flag, G-set and OR-set by adding additional lemmas (the extra rows in Table 3). Correspondingly, we observe that the verification time reduces significantly. Thanks to F*, the developers of new MRDTs in Peepul can strike a balance between verification time and manual verification effort.
In this work, we have not used F*'s support for tactics and interactive proofs. We believe that some of the time-consuming calls to the SMT solver may be profitably replaced by a few interactive proofs. On the whole, the choice of F* for Peepul reduces manual effort, and most of the proofs are checked within a few seconds.
7.2. Performance evaluation
In this section, we evaluate the runtime performance of efficient MRDTs in Peepul.
7.2.1. Peepul vs Quark
We first compare the performance of Peepul MRDTs against the MRDTs presented in Kaki et al. (Kaki et al., 2019) (Quark). Recall that Quark lifts sequential data types to MRDTs by equipping them with a merge function, which converts the concrete state of the MRDT to a relational (set-based) representation that captures the characteristic relations of the data type. The actual merge is implemented as a merge of these sets for each of the characteristic relations. After merge of the relational representations, the final result is obtained by a concretization function. Compared to this, Peepul merges are implemented directly on the concrete representations.
To highlight the impact of the efficient merge function in Peepul, we evaluate the performance of merge in queues. Both Peepul and Quark use the same sequential queue representation; the only difference is the merge function. For this experiment, we start with an empty queue, and perform a series of randomly generated operations with a 75:25 split between enqueues and dequeues. We use this version as the LCA and subsequently perform two different sets of operations to get the two divergent versions. We then merge these versions and measure the time taken for the merge.
The results are reported in figure 12. For a queue, Quark needs to reify the ordering relation as a set which will contain O(n²) elements for a queue of size n. In addition, there is also the cost of abstracting and concretising the queue to and from the relational representation. As a result, the merge function takes 10 seconds for 1000 operations, increasing to 178 seconds for 5000 operations. On the other hand, Peepul's linear-time merge took less than a millisecond in all of the cases. This shows that Quark's merge is unacceptably slow even for reasonably sized queues, while Peepul remains fast and practical.
We also compare the performance of OR-set in Peepul and Quark. Since the merge function in Quark is based on automatic relational reification, Quark does not allow duplicate elements to be removed from the OR-set. To highlight the impact of duplicate elements, we perform an experiment similar to the queue one except that we pick a 50:50 split between add and remove operations. The values added are randomly picked in the range (0:1000). For Peepul, we pick the space-optimized OR-set (OR-set-space). We report the number of elements in the final set including duplicates.
The results are presented in figure 13. Due to the duplicates, the size of the Quark set increases with increasing number of operations; the growth is not linear due to the stochastic interplay between add and remove. For Peepul, the set size always remains below 1000 which is the range of the values picked. The results show that MRDTs in Peepul are much more efficient than in Quark.
7.2.2. Peepul OR-set performance
We also compare the overall performance of the three OR-set implementations in Peepul. Our workload consists of 70% lookups, 20% adds and 10% remove operations starting from an initial empty set on two different branches. We trigger a merge every 500 operations. We measure the overall execution time for the entire workload and the maximum size of the set during the execution.
The results are reported in figures 14 and 15. The results show that OR-set-spacetime is the fastest, and is around 5× faster than OR-set-space due to the fast reads and writes enabled by the binary search tree in OR-set-spacetime. Both OR-set-space and OR-set-spacetime consume a similar amount of memory. The unoptimized OR-set is both slower and consumes more memory than the other variants due to the duplicates. The results show that Peepul enables the construction of efficient certified MRDTs that have significant performance benefits compared to unoptimised ones.
8. Related Work
Reconciling concurrent updates is an important problem in distributed systems. Some of the works proposing new designs and implementations of RDTs (Shapiro et al., 2011; Roh et al., 2011; Bieniusa et al., 2012) neither provide their formal specification nor verify them. Due to the concurrently evolving state of the replicas, informally reasoning about the correctness of even simple RDTs is tedious and error prone. In this work, our focus is on mechanically verifying efficient implementations of MRDTs.
There are several works that focus on the specification and verification of CRDTs (Burckhardt et al., 2014; Liu et al., 2020; Zeller et al., 2014; Nair et al., 2020; Attiya et al., 2016; Gomes et al., 2017; Nagar and Jagannathan, 2019). CRDTs typically assume a system model which involves several replicas communicating over a network with asynchronous message passing. Correspondingly, the specification and verification techniques for CRDTs have to take into account the properties of message passing such as message ordering and delivery guarantees. On the other hand, MRDTs are described over a Git-like distributed store with branching and merging, which in turn may be implemented over asynchronous message passing. We believe that, by lifting the level of abstraction, MRDTs are easier to specify, implement and verify compared to CRDTs.
In terms of mechanised verification of RDTs, prior work has used both automated and interactive verification. Zeller et al. (Zeller et al., 2014) verify state-based CRDTs with the help of the interactive theorem prover Isabelle/HOL. Gomes et al. (Gomes et al., 2017) develop a foundational framework for proving the correctness of operation-based CRDTs. In particular, they construct a formal network model that may delay, drop or reorder messages sent between replicas. Under these assumptions, they verify several op-based CRDTs using Isabelle/HOL. Nair et al. (Nair et al., 2020) present an SMT-based verification tool to specify state-based CRDTs and verify invariants over their state. Nagar and Jagannathan (Nagar and Jagannathan, 2019) also utilise an SMT solver to automatically verify the convergence of CRDTs under different weak consistency policies. Liu et al. (Liu et al., 2020) present an extension of the SMT-solver-aided Liquid Haskell to allow refinement types on type classes and use it to implement a framework to verify operation-based CRDTs. Similar to Liu et al., Peepul also uses an SMT-solver-aided programming language, F*. We find that an SMT-solver-aided programming language offers a useful trade-off between manual verification effort and verification time.
Our verification framework for MRDTs builds on the concept of replication-aware simulation introduced by Burckhardt et al. (Burckhardt et al., 2014). Burckhardt et al. present precise specifications for RDTs and (non-mechanized) proofs of correctness for a few CRDT implementations. Burckhardt et al.'s specifications are presented over the CRDT system model with explicit message passing between replicas. In this work, we lift these specifications to a higher level by abstracting out the guarantees provided by the low-level store (Ψ_ts and Ψ_lca). Further, we also observe that the simulation relation cannot be used as an inductive invariant on its own; instead, a conjunction of R_sim with Ψ_ts and Ψ_lca is required (see the conditions in Table 2). In order to enable mechanised verification, we identify the relationship between R_sim and the functional correctness and convergence of MRDTs. This leads to a formal specification framework that is suitable for mechanized and automated verification. We demonstrate this by successfully verifying a number of complex MRDT implementations in F*, including the first formally verified replicated queue.
MRDTs were first introduced by Farinier et al. (Farinier et al., 2015) for Irmin (Irmin, 2021), a distributed database built on the principles of Git. Quark (Kaki et al., 2019) automatically derives merge functions for MRDTs using an invertible relational specification. However, their merge semantics focuses only on convergence, and not on the functional correctness of the data type. Our evaluation (§7.2.1) shows that merging through automatically derived invertible relational specifications is prohibitively expensive for data types with rich structure such as queues. Tardis (Crooks et al., 2016) also uses a branch-and-merge approach to weak consistency, but does not focus on verifying the correctness of the RDTs.
Not all application logic can be expressed only using eventually consistent and convergent RDTs. For example, a replicated bank account which guarantees non-negative balance requires coordination between concurrent withdraw operations. Several previous works have explored RDTs that utilize on-demand coordination based on application invariants (Sivaramakrishnan et al., 2015; Gotsman et al., 2016; Kaki et al., 2018; Najafzadeh et al., 2016; Houshmand and Lesani, 2019; De Porre et al., 2021). We leave the challenge of extending Peepul to support on-demand coordination to future work.
Acknowledgements.
We thank our shepherd, Constantin Enea, and the anonymous reviewers for their reviewing effort and high-quality reviews. We also thank Aseem Rastogi and the F* Zulip community for helping us with F*-related queries.
References
- Almeida et al. (2018) Paulo Sérgio Almeida, Ali Shoker, and Carlos Baquero. 2018. Delta state replicated data types. J. Parallel and Distrib. Comput. 111 (Jan 2018), 162–173. https://doi.org/10.1016/j.jpdc.2017.08.003
- Amazon (2006) Amazon. 2006. Simple Queue Service by Amazon. https://aws.amazon.com/sqs/
- Attiya et al. (2016) Hagit Attiya, Sebastian Burckhardt, Alexey Gotsman, Adam Morrison, Hongseok Yang, and Marek Zawirski. 2016. Specification and Complexity of Collaborative Text Editing. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing (Chicago, Illinois, USA) (PODC ’16). Association for Computing Machinery, New York, NY, USA, 259–268. https://doi.org/10.1145/2933057.2933090
- Bieniusa et al. (2012) Annette Bieniusa, Marek Zawirski, Nuno Preguiça, Marc Shapiro, Carlos Baquero, Valter Balegas, and Sérgio Duarte. 2012. An optimized conflict-free replicated set. arXiv:1210.3368 [cs.DC]
- Burckhardt et al. (2014) Sebastian Burckhardt, Alexey Gotsman, Hongseok Yang, and Marek Zawirski. 2014. Replicated Data Types: Specification, Verification, Optimality. In Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (San Diego, California, USA) (POPL ’14). Association for Computing Machinery, New York, NY, USA, 271–284. https://doi.org/10.1145/2535838.2535848
- Crooks et al. (2016) Natacha Crooks, Youer Pu, Nancy Estrada, Trinabh Gupta, Lorenzo Alvisi, and Allen Clement. 2016. TARDiS: A Branch-and-Merge Approach To Weak Consistency. In Proceedings of the 2016 International Conference on Management of Data (San Francisco, California, USA) (SIGMOD ’16). Association for Computing Machinery, New York, NY, USA, 1615–1628. https://doi.org/10.1145/2882903.2882951
- De Porre et al. (2021) Kevin De Porre, Carla Ferreira, Nuno Preguiça, and Elisa Gonzalez Boix. 2021. ECROs: Building Global Scale Systems from Sequential Code. Proc. ACM Program. Lang. 5, OOPSLA, Article 107 (oct 2021), 30 pages. https://doi.org/10.1145/3485484
- Dubey (2021) Shashank Shekhar Dubey. 2021. Banyan: Coordination-free Distributed Transactions over Mergeable Types. Ph. D. Dissertation. Indian Institute of Technology, Madras, India. https://thesis.iitm.ac.in/thesis?type=FinalThesis&rollno=CS17S025
- Dubey et al. (2020) Shashank Shekhar Dubey, K. C. Sivaramakrishnan, Thomas Gazagnaire, and Anil Madhavapeddy. 2020. Banyan: Coordination-Free Distributed Transactions over Mergeable Types. In Programming Languages and Systems, Bruno C. d. S. Oliveira (Ed.). Springer International Publishing, Cham, 231–250.
- Farinier et al. (2015) Benjamin Farinier, Thomas Gazagnaire, and Anil Madhavapeddy. 2015. Mergeable persistent data structures. In Vingt-sixièmes Journées Francophones des Langages Applicatifs (JFLA 2015), David Baelde and Jade Alglave (Eds.). JFLA, Le Val d’Ajol, France, 1–13. https://hal.inria.fr/hal-01099136
- Git (2021) Git. 2021. Git: A distributed version control system. https://git-scm.com/
- Gomes et al. (2017) Victor B. F. Gomes, Martin Kleppmann, Dominic P. Mulligan, and Alastair R. Beresford. 2017. Verifying Strong Eventual Consistency in Distributed Systems. Proc. ACM Program. Lang. 1, OOPSLA, Article 109 (Oct. 2017), 28 pages. https://doi.org/10.1145/3133933
- Gotsman et al. (2016) Alexey Gotsman, Hongseok Yang, Carla Ferreira, Mahsa Najafzadeh, and Marc Shapiro. 2016. ’Cause I’m Strong Enough: Reasoning about Consistency Choices in Distributed Systems. In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (St. Petersburg, FL, USA) (POPL ’16). Association for Computing Machinery, New York, NY, USA, 371–384. https://doi.org/10.1145/2837614.2837625
- Houshmand and Lesani (2019) Farzin Houshmand and Mohsen Lesani. 2019. Hamsaz: Replication Coordination Analysis and Synthesis. Proc. ACM Program. Lang. 3, POPL, Article 74 (jan 2019), 32 pages. https://doi.org/10.1145/3290387
- Intel Xeon Gold 5120 (2020) Intel 2020. Intel® Xeon® Gold 5120 Processor Specification. Intel. https://ark.intel.com/content/www/us/en/ark/products/120474/intel-xeon-gold-5120-processor-19-25m-cache-2-20-ghz.html
- Irmin (2021) Irmin. 2021. Irmin: A distributed database built on the principles of Git. https://irmin.org/
- Kaki et al. (2018) Gowtham Kaki, Kapil Earanky, KC Sivaramakrishnan, and Suresh Jagannathan. 2018. Safe Replication through Bounded Concurrency Verification. Proc. ACM Program. Lang. 2, OOPSLA, Article 164 (Oct. 2018), 27 pages. https://doi.org/10.1145/3276534
- Kaki et al. (2019) Gowtham Kaki, Swarn Priya, KC Sivaramakrishnan, and Suresh Jagannathan. 2019. Mergeable Replicated Data Types. Proc. ACM Program. Lang. 3, OOPSLA, Article 154 (Oct. 2019), 29 pages. https://doi.org/10.1145/3360580
- Kleppmann (2020) Martin Kleppmann. 2020. CRDT composition failure. University of Cambridge. https://twitter.com/martinkl/status/1327020435419041792
- Kleppmann et al. (2019) Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. 2019. Local-First Software: You Own Your Data, in Spite of the Cloud. In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Athens, Greece) (Onward! 2019). Association for Computing Machinery, New York, NY, USA, 154–178. https://doi.org/10.1145/3359591.3359737
- Lamport (1978) Leslie Lamport. 1978. Time, Clocks, and the Ordering of Events in a Distributed System. Commun. ACM 21, 7 (jul 1978), 558–565. https://doi.org/10.1145/359545.359563
- Liu et al. (2020) Yiyun Liu, James Parker, Patrick Redmond, Lindsey Kuper, Michael Hicks, and Niki Vazou. 2020. Verifying Replicated Data Types with Typeclass Refinements in Liquid Haskell. Proc. ACM Program. Lang. 4, OOPSLA, Article 216 (Nov. 2020), 30 pages. https://doi.org/10.1145/3428284
- Nagar and Jagannathan (2019) Kartik Nagar and Suresh Jagannathan. 2019. Automated Parameterized Verification of CRDTs. In Computer Aided Verification - 31st International Conference, CAV 2019, New York City, NY, USA, July 15-18, 2019, Proceedings, Part II (Lecture Notes in Computer Science, Vol. 11562), Isil Dillig and Serdar Tasiran (Eds.). Springer, New York City, NY, USA, 459–477. https://doi.org/10.1007/978-3-030-25543-5_26
- Nagar et al. (2020) Kartik Nagar, Prasita Mukherjee, and Suresh Jagannathan. 2020. Semantics, Specification, and Bounded Verification of Concurrent Libraries in Replicated Systems. In Computer Aided Verification - 32nd International Conference, CAV 2020, Los Angeles, CA, USA, July 21-24, 2020, Proceedings, Part I (Lecture Notes in Computer Science, Vol. 12224), Shuvendu K. Lahiri and Chao Wang (Eds.). Springer, Los Angeles, CA, USA, 251–274. https://doi.org/10.1007/978-3-030-53288-8_13
- Nair et al. (2020) Sreeja S. Nair, Gustavo Petri, and Marc Shapiro. 2020. Proving the Safety of Highly-Available Distributed Objects. In Programming Languages and Systems, Peter Müller (Ed.). Springer International Publishing, Cham, 544–571.
- Najafzadeh et al. (2016) Mahsa Najafzadeh, Alexey Gotsman, Hongseok Yang, Carla Ferreira, and Marc Shapiro. 2016. The CISE Tool: Proving Weakly-Consistent Applications Correct. In Proceedings of the 2nd Workshop on the Principles and Practice of Consistency for Distributed Data (London, United Kingdom) (PaPoC ’16). Association for Computing Machinery, New York, NY, USA, Article 2, 3 pages. https://doi.org/10.1145/2911151.2911160
- Okasaki (1999) Chris Okasaki. 1999. Purely Functional Data Structures. Cambridge University Press, USA.
- RabbitMQ (2007) RabbitMQ. 2007. Message Brokering Service by RabbitMQ. https://www.rabbitmq.com/queues.html
- Riak (2021) Riak. 2021. Resilient NoSQL Databases. https://riak.com/
- Roh et al. (2011) Hyun-Gul Roh, Myeongjae Jeon, Jin-Soo Kim, and Joonwon Lee. 2011. Replicated Abstract Data Types: Building Blocks for Collaborative Applications. J. Parallel Distrib. Comput. 71, 3 (March 2011), 354–368. https://doi.org/10.1016/j.jpdc.2010.12.006
- Shapiro et al. (2018) Marc Shapiro, Annette Bieniusa, Nuno Preguiça, Valter Balegas, and Christopher Meiklejohn. 2018. Just-Right Consistency: reconciling availability and safety. arXiv:1801.06340 [cs.DC]
- Shapiro et al. (2011) Marc Shapiro, Nuno Preguiça, Carlos Baquero, and Marek Zawirski. 2011. Conflict-Free Replicated Data Types. In Stabilization, Safety, and Security of Distributed Systems, Xavier Défago, Franck Petit, and Vincent Villain (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 386–400.
- Sivaramakrishnan et al. (2015) KC Sivaramakrishnan, Gowtham Kaki, and Suresh Jagannathan. 2015. Declarative Programming over Eventually Consistent Data Stores. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (Portland, OR, USA) (PLDI ’15). Association for Computing Machinery, New York, NY, USA, 413–424. https://doi.org/10.1145/2737924.2737981
- Yu and Rostad (2020) Weihai Yu and Sigbjørn Rostad. 2020. A Low-Cost Set CRDT Based on Causal Lengths. In Proceedings of the 7th Workshop on Principles and Practice of Consistency for Distributed Data (Heraklion, Greece) (PaPoC ’20). Association for Computing Machinery, New York, NY, USA, Article 5, 6 pages. https://doi.org/10.1145/3380787.3393678
- Zeller et al. (2014) Peter Zeller, Annette Bieniusa, and Arnd Poetzsch-Heffter. 2014. Formal Specification and Verification of CRDTs. In 34th Formal Techniques for Networked and Distributed Systems (FORTE) (Formal Techniques for Distributed Objects, Components, and Systems, Vol. LNCS-8461), Erika Ábrahám and Catuscia Palamidessi (Eds.). Springer, Berlin, Germany, 33–48. https://doi.org/10.1007/978-3-662-43613-4_3 Part 1: Specification Languages and Type Systems.
Appendix
Appendix A Proof of Theorem 4.2
Theorem 4.2. Given an MRDT implementation of a data type, if there exists a valid replication-aware simulation relation, then the data type implementation is correct.
Proof.
We first show that, for every execution of the store LTS, the simulation relation holds at every transition. At every step, we also show that the return value of every operation obeys the data type specification and that convergence modulo observable behavior is satisfied. The proof is by induction on the length of the execution.
BASE CASE:
Consider the labelled transition system in its initial state, with only a single branch.
To prove:
The simulation relation holds in the initial state, the execution satisfies the data type specification, and it is convergent modulo observable behavior.
Proof:
For every operation o of the data type, according to the corresponding condition in Table 2,
(5)
which is the necessary condition for the simulation relation to hold in the initial state.
The condition of eq. (2) in Section 3 is also satisfied, since there is only one branch in the execution. Hence the execution is convergent modulo observable behavior.
Thus the base case is proved.
INDUCTIVE CASE:
Consider an execution of the store LTS.
To prove:
For such an execution, if the simulation relation holds and the execution is convergent modulo observable behavior, then on applying a single step of the store execution, the new execution obtained satisfies the specification and is convergent modulo observable behavior.
Proof (the simulation relation holds):
We prove it by case-analysis on labels in the labelled transition system.
Case 1:
The first case is the label being CREATEBRANCH.
We need to prove that the simulation relation holds after the CREATEBRANCH transition.
For every operation o of the data type, according to the corresponding condition in Table 2,
(6)
which is the necessary condition for the simulation relation to hold.
Case 2:
The second case is the label being DO.
We need to prove that the simulation relation holds after the DO transition.
For every operation o of the data type, according to the corresponding condition in Table 2,
(7)
which is the necessary condition for the simulation relation to hold.
Case 3:
The third case is the label being MERGE.
We need to prove that the simulation relation holds after the MERGE transition.
For every operation o of the data type, according to the corresponding condition in Table 2,
(8)
which is the necessary condition for the simulation relation to hold.
Proof (the new execution is convergent modulo observable behavior):
Let the new execution be the one obtained after applying any of the transitions (CREATEBRANCH, DO, MERGE) to the original execution.
For the new execution to be convergent modulo observable behavior, we need to show
(9)
We know that there is a valid simulation relation between the abstract and concrete states of each branch:
(10)
Substituting according to (9) in (10), and applying the corresponding condition in Table 2, we conclude that the new execution is convergent modulo observable behavior.
∎
Appendix B Functional queue simulation relation and implementation
B.1. Simulation relation
Consider the predicate defined in Section 5.1 that matches an enqueue operation with its corresponding dequeue:
(11)
We define the simulation relation between an abstract state and a concrete state as follows:
(12)
where the ordering predicate states that one element occurs before another in the concrete state.
The simulation relation consists of two parts. The first part states that for any element in the concrete state of the queue, there exists an enqueue operation in the abstract state that is not matched with any dequeue operation, and conversely. The second part states that for any two elements in the concrete state of the queue, one occurring before the other, there exist two corresponding enqueue operations in the abstract state, neither matched with any dequeue operation, such that either the enqueue of the earlier element was performed before the enqueue of the later element, or the two enqueues are concurrent, and conversely. The first part accounts for the membership of elements in the queue, while the second part accounts for the ordering of elements in the queue.
B.2. Functional queue implementation
The definition of the state of the functional queue is as follows:
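A minimal sketch of such a state, assuming the standard two-list ("Okasaki-style") functional queue in which every element is paired with a unique timestamp; the type and field names below are illustrative rather than taken verbatim from the Peepul sources:

```ocaml
type timestamp = int

(* Two-list functional queue of timestamped elements. *)
type 'a queue = {
  front : (timestamp * 'a) list;  (* elements are dequeued from the head of front *)
  back  : (timestamp * 'a) list;  (* elements are enqueued onto back, kept in reverse order *)
}

let empty : 'a queue = { front = []; back = [] }
```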
The enqueue and dequeue functions for the functional queue are defined as follows:
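A sketch of the two operations over the representation above; norm is an assumed helper that moves the reversed back list to the front when the front is exhausted:

```ocaml
(* norm moves the reversed back list to the front when the front is empty,
   so that the next element to dequeue sits at the head of front. *)
let norm q =
  match q.front with
  | [] -> { front = List.rev q.back; back = [] }
  | _  -> q

(* enqueue adds a timestamped element at the back of the queue. *)
let enqueue (ts, x) q = { q with back = (ts, x) :: q.back }

(* dequeue removes the oldest element, if any, returning it together with
   the remaining queue; dequeue on an empty queue is a no-op. *)
let dequeue q =
  let q = norm q in
  match q.front with
  | []        -> (None, q)
  | e :: rest -> (Some e, { q with front = rest })
```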
The following are the definitions of the helper functions used in the three-way merge. In all of these definitions, the first element of the pair is taken to be the timestamp, and the second element to be the actual enqueued element. union1 is used to merge two lists of pairs (with unique first elements) that are sorted according to the first element of the pair:
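A sketch of union1 under these assumptions, written as a standard merge of two sorted lists:

```ocaml
(* Merge two lists of (timestamp, element) pairs, each sorted by timestamp
   and with all timestamps unique, into a single sorted list. Linear time. *)
let rec union1 xs ys =
  match xs, ys with
  | [], l | l, [] -> l
  | (t1, x) :: xs', (t2, _) :: _ when t1 < t2 -> (t1, x) :: union1 xs' ys
  | _, y :: ys' -> y :: union1 xs ys'
```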
diff_s is used to find the difference between two lists of pairs with unique first elements, sorted according to the first element of the pair. Additionally, in the context where diff_s is used, a is a child of l. This function is used to find the newly enqueued elements in a. This simplifies the function to finding the longest contiguous subsequence that is present in a but not in l. It can also be interpreted as finding the longest suffix of a that is not present in l, since all the newly enqueued elements occur after the existing elements. This task can be completed in O(n) time, where n is the length of the longer of the two lists.
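A possible linear-time sketch of diff_s, exploiting the fact that both lists are sorted by timestamp:

```ocaml
(* Return the elements of the child list a whose timestamps do not occur in
   the ancestor list l, i.e. the newly enqueued elements. Both lists are
   sorted by timestamp, so a single linear scan suffices. *)
let rec diff_s a l =
  match a, l with
  | _, [] -> a                          (* nothing left in l: the rest of a is new *)
  | [], _ -> []
  | (ta, x) :: a', (tl, _) :: l' ->
      if ta = tl then diff_s a' l'      (* shared element: not newly enqueued *)
      else if ta > tl then diff_s a l'  (* element of l dequeued in a: skip it *)
      else (ta, x) :: diff_s a' l       (* present only in a: newly enqueued *)
```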
intersection is used to find the longest common contiguous subsequence between l, a and b. Again, in the context where intersection is used, a and b are children of l. This finds the list of elements that have not been dequeued in either a or b. Hence, the problem is simplified to finding the longest contiguous subsequence of l that is a prefix of both a and b. Since all three lists are sorted according to the first element of the pair, this task can be completed in O(n) time, where n is the length of the longest of the three lists.
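A possible linear-time sketch of intersection as a three-way merge-style scan over the sorted lists:

```ocaml
(* Keep the elements of l whose timestamps are still present in both
   children a and b, i.e. the elements dequeued in neither branch.
   All three lists are sorted by timestamp, so the scan is linear. *)
let rec intersection l a b =
  match l, a, b with
  | [], _, _ | _, [], _ | _, _, [] -> []
  | (tl, x) :: l', (ta, _) :: a', (tb, _) :: b' ->
      if tl = ta && tl = tb then (tl, x) :: intersection l' a' b'
      else
        (* advance whichever list currently has the smallest timestamp *)
        let m = min tl (min ta tb) in
        if m = tl then intersection l' a b
        else if m = ta then intersection l a' b
        else intersection l a b'
```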
tolist is used to convert a functional queue to a single list. This function takes O(n) time, where n is the length of the functional queue.
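A sketch of tolist for the two-list representation assumed above:

```ocaml
(* Flatten the two-list queue into a single list in dequeue order;
   linear in the length of the queue. *)
let tolist q = q.front @ List.rev q.back
```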
For the three-way merge between l, a and b, we first find the elements that have not been dequeued in either a or b. Then we find the newly enqueued elements in a and b, and append them, merged in timestamp order, to the list of undequeued elements. The core of the three-way merge is defined as follows:
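A minimal sketch of this core merge in terms of the helper sketches above; the name merge_lists is illustrative:

```ocaml
(* Keep the elements of l dequeued in neither branch, then append the newly
   enqueued elements of a and b, merged in timestamp order. *)
let merge_lists l a b =
  intersection l a b @ union1 (diff_s a l) (diff_s b l)
```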
The three-way merge for l, a and b is defined as follows:
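A sketch of the queue-level merge, assuming the list-level merge_lists above and rebuilding the result as a queue whose front is the merged list:

```ocaml
(* Convert the lowest common ancestor l and the two divergent versions a and b
   to lists, merge them, and rebuild a queue from the merged list. *)
let merge l a b =
  { front = merge_lists (tolist l) (tolist a) (tolist b); back = [] }
```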
Since all the tasks involved in merge take time linear in the length of the longest list, which equals the length of the longest queue, merge takes O(n) time, where n is the length of the longest queue.