Preprint
Modeling and Control of bittide Synchronization
Abstract
Distributed system applications rely on a fine-grain common sense of time. Existing systems maintain this by keeping each independent machine as close as possible to wall-clock time through a combination of software protocols and precision hardware references. This approach is expensive, requiring protocols to deal with asynchrony and its performance consequences. Moreover, at data-center scale it is impractical to distribute a physical clock as is done on a chip or printed circuit board. In this paper we introduce a distributed system design that removes the need for physical clock distribution or mechanisms for maintaining close alignment to wall-clock time, and instead provides applications with a perfectly synchronized logical clock. We discuss the abstract frame model (AFM), a mathematical model that underpins the system synchronization. The model is based on the rate of communication between nodes in a topology without requiring a global clock. We show that there are families of controllers that satisfy the properties required for existence and uniqueness of solutions to the AFM, and give examples.
1S. Lall is Professor of Electrical Engineering at Stanford University, Stanford, CA 94305, USA, and is a Visiting Researcher at Google. [email protected]
2Călin Caşcaval, Martin Izzard, and Tammo Spalink are with Google.
1 Introduction
The bittide system is designed to enable synchronous execution at large scale without the need for a global clock. Synchronous communication and processing offers significant benefits for determinism, performance, utilization, and robustness. Synchronization in bittide is decentralized – every node in the system adjusts its frequency based on the observed communication with its neighbors. This mechanism, first proposed in [12], defines a synchronous logical clock that is resilient to variations in physical clock frequencies.
The design objective is for bittide systems to possess shared logical time. It is not a requirement for this logical time to perfectly match physical time. All machines on the network share a logical discrete clock that ticks in lockstep. This idea is called logical synchronization, to distinguish it from physical synchronization. Viewed from the inside, the behavior of a logically synchronized system is identical to that of a system with a single shared physical clock. Viewed from the outside, the logical time is fully disconnected from physical wall-clock time, meaning that logical time steps can vary in physical duration, both over time and between nodes. Applications running on the system use the logical time to coordinate their actions, which replaces the need to reference physical time. Thus, bittide enables perfect logical synchronization, using imperfect physical synchronization.
The decentralized nature of the bittide synchronization mechanism enables building large-scale systems that are synchronized with an accuracy that is otherwise hard or prohibitively expensive to achieve. Simply overlaying synchronization information onto asynchronous communication layers is possible, but in practice has led to large communication requirements and limited accuracy [10, 3, 9]. The bittide system instead achieves synchronization using low-level data flows inherent to serial data links. There is no communication overhead (in-band signaling) required by the synchronization mechanism, as the continuous data (meaningful or not) exchanged at the physical layer is sufficient to provide the necessary input to our control system. As we will demonstrate in this paper, the logical synchronization is accurate, even though the underlying substrate is only approximately synchronized. This enables building a much wider class of synchronized systems.
Prior work.
This particular scheme for synchronization using low-level network mechanisms originates in [12]. However, other synchronous network protocols exist, including the heavily-used SONET [15]. High-level protocols for clock-synchronization such as NTP are also widely used [10]. Large systems such as Spanner use proprietary hardware and protocols to keep clocks as close to each other as possible [3, 9].
The literature discussing synchronization dynamics is large, and we can only touch upon it briefly here. The behavior of networked systems that achieve synchronization via feedback control mechanisms has been widely studied, and the clock frequency behavior here is similar to that of several other systems which have been analyzed in the literature. These include flocking models [11], Markov chain averaging models [5, 2], congestion control protocols [7], power networks [13], vehicle platooning [14], and nearest-neighbor coordination of mobile autonomous agents [6]. The earliest work to study such coupled oscillator models of synchronization is Winfree [16]. Several aspects of the bittide control system are still open challenges, and we discuss them in Section 8.
2 The bittide synchronization mechanism
We now describe in more detail the structure of a bittide system. For the sake of clarity, we describe the simplest implementation here and omit several possible variations and extensions.
We have a network of computers, represented as an undirected graph, each node being a computer with a single processor. Each edge connects a pair of computers, and corresponds to a pair of links, one in each direction. These connections are direct; in this simple case there are no switches or routers between neighboring nodes. Bits are sent across these links grouped into frames. In the simplest case, all frames are the same (fixed) size. Implementations may choose the frame size, which is unrestricted by the bittide system definition. Because the frames are of fixed size, determining the boundaries between frames is straightforward and has low overhead.
Consider two neighboring nodes. At each node there is a queue (called the elastic buffer). Frames are added to the tail of the elastic buffer as they arrive. At the head of the elastic buffer, frames are removed from the buffer and read by the processor. Whenever a frame is removed from the buffer, a new frame is sent on the outgoing link back to the sender. Thus each edge between two nodes on the graph corresponds to four objects: a link in each direction, and an elastic buffer at each node. Because removing a frame from the head of the queue always coincides with sending a new frame on the outgoing link, if we are interested solely in the network dynamics (and not the actual data in the frames) we can conceptually view these frames as identical; it would be the same if each node simply sent back the frames it received, after they propagated through the elastic buffer. In this sense, the two links and two buffers form a closed cycle, with frames flowing around perpetually, as illustrated in Figure 1.
Figure 1: An edge between two neighboring nodes $i$ and $j$: a link in each direction, with an elastic buffer at each node, so that frames circulate around the resulting ring.
There is one more wrinkle to this behavior. If a node has, say, $n$ neighbors, then it has $n$ incoming links (with elastic buffers), and $n$ outgoing links. The critical feature here is the timing; frames are sent simultaneously on all outgoing links. It is worth pointing out that exact simultaneity is not necessary. Instead we simply need the different transmissions to remain in discrete lockstep with each other, so that transmissions occur in stages of $n$ frames, one on each link. The next stage does not start until the previous stage has finished.
We can view each edge as a ring with two buffers and two links, and frames moving around the ring. Because of the above timing behavior, each ring turns in lockstep with all other rings. This holds even though the links may have different physical latency and the buffer occupancies may vary. The rings behave somewhat analogously to mechanical gears; even if gears have different sizes and hence different angular rates, at any point where two gears meet, the teeth of both gears pass the point at the same rate.
At each node there is an oscillator, which drives the processor clock, which on a modern system might run at a few GHz. On a bittide system, the oscillator also drives the network (possibly via frequency multiplication or division), so that with each clock tick a set of frames is sent, one to each neighbor. Each network clock tick corresponds to a stage above.
Because the same oscillator is used for the transmission of frames as for the processor, the lockstep behavior of the network induces an identical lockstep behavior for all processors. All nodes coordinate to synchronize these clocks, as will be discussed below. At a node, with each tick of the clock, a frame is read from each elastic buffer, a frame is sent on each outgoing link, and a processor instruction is executed.
The bittide system operates at Layer 1 (physical level) in the OSI network layer model. This is a significant advantage for synchronization, as it ensures that there is minimal buffering. This is in contrast with higher-level protocols such as NTP [10], where synchronization is at a comparatively coarser level.
The frames that are being transmitted around the network have a dual purpose; they are used for communication, and they are used for synchronization. For the latter, the content of the frames is irrelevant to the synchronization, and a processor must simply send a frame whenever it reads one. If there is no data to be communicated, it must therefore send a frame containing garbage data. (This is not generally wasteful, because most standard wired network links operate this way transparently to the user by default.) Additionally, since each link is between directly-connected neighbors, it is the responsibility of higher-level network protocols to perform multi-hop communication and routing.
The bittide system enables an entire datacenter, and possibly even larger networks, to operate in logical synchrony. Such behavior is a significant departure from how current datacenters work, where large applications run via networks of asynchronous processes. For many applications, one of the disadvantages of implementing systems in this way is tail-latency [4], where delays caused by a few occasionally-slow processes compound to limit overall performance. The bittide system offers a way to eliminate sources of latency variability via deterministic ahead-of-time scheduling of processes.
3 Using feedback control to maintain synchrony
Consider two neighboring nodes $i$ and $j$. Each node transmits a frame on an outgoing link whenever it removes a frame from the corresponding elastic buffer, thus conserving the total number of frames in the cycle of two links and two buffers. If the oscillator of $i$ is faster than that of $j$, then $i$ will pull frames from its buffer more frequently than they are being supplied, and consequently its elastic buffer will become emptier. Conversely, $j$'s receive buffer will become fuller. The number of frames in each buffer is called buffer occupancy. Each node has a controller process that measures buffer occupancy, which provides a feedback signal indirectly capturing the relative speed of its oscillator compared to the other. It will then adjust its oscillator speed appropriately.
A node which has $n$ neighbors has $n$ elastic buffers, which are all emptied according to the local clock frequency, and filled according to the frequency of the corresponding neighbor. The collection of occupancies of these buffers is therefore a feedback signal, providing information regarding the relative frequencies of all of the neighboring nodes. The question that remains is how best to adjust the local oscillator frequency in response to these signals. This is the responsibility of the control algorithm. We also need to address the challenge that the information that each node has about its neighbor is delayed by the corresponding latency of the physical link; that is, the amount of time between a frame being sent from the head of the sender's elastic buffer and it being received into the tail of the recipient's elastic buffer. When a node changes its frequency, the effects are felt by all of its neighbors delayed by the corresponding latencies, and this in turn leads to the neighbors adjusting their frequencies, and so on. The effects of frequency adjustment therefore propagate through the network. As a result of these dynamics, both the latencies and the graph topology can have a significant effect on the overall behavior of our decentralized synchronization mechanism.
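The following toy Python sketch illustrates this feedback signal for two directly connected nodes; the frequencies, initial occupancies, and duration are made-up illustrative values, and link latency is ignored. The faster node empties the buffer it reads from, while the buffer at the slower node fills, which is exactly the occupancy signal the controller observes.

```python
# Toy illustration of buffer-occupancy drift between two nodes with mismatched clocks.
# All values here are made up for illustration; link latency is ignored.
freq      = {"A": 1.00, "B": 1.01}   # local clock frequencies (ticks per time unit)
occupancy = {"A": 16, "B": 16}       # elastic buffer occupancy at each node
phase     = {"A": 0.5, "B": 0.5}     # local clock phases

dt = 0.01                            # wall-clock simulation step
for _ in range(int(1000 / dt)):
    for node, other in (("A", "B"), ("B", "A")):
        before = int(phase[node])
        phase[node] += freq[node] * dt
        if int(phase[node]) > before:    # local clock ticked:
            occupancy[node] -= 1         #   read one frame from this node's buffer ...
            occupancy[other] += 1        #   ... and send one frame to the neighbor

print(occupancy)   # e.g. {'A': 26, 'B': 6}: the occupancies have drifted apart
```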
4 Modeling the dynamics of frames
Let the set of all nodes in the graph be $\mathcal{V}$. Each node $i \in \mathcal{V}$ has a clock, whose value at any time $t$ is a real number $\theta_i(t)$, called the phase of the clock. It is driven by an oscillator whose frequency is $\omega_i(t)$, and so satisfies the dynamics
$$\frac{d\theta_i(t)}{dt} = \omega_i(t).$$
The value of $\theta_i(t)$ is called the local time at node $i$, or the time in local ticks. The frequency $\omega_i$ may vary over time, both as a result of physical variations such as temperature, and as a result of adjustments by the controller at node $i$. This clock drives the frame reading and transmission processes in the following way. Every time $t$ at which the clock $\theta_i(t)$ is an integer, node $i$ removes a frame from the head of the elastic buffer corresponding to each neighbor $j$, and sends a frame on the link from $i$ to $j$. Therefore, if $\omega_i$ does not change over time, node $i$ sends $\omega_i$ frames per second on each outgoing link.
This simple model is enough to determine the location of all of the frames within the system, as follows. Suppose $0 \leq s \leq t$; then the number of frames that have been sent on the time interval $[s,t]$ by node $i$ is $\lfloor\theta_i(t)\rfloor - \lfloor\theta_i(s)\rfloor$, one frame for each integer crossed by the clock. For example, if $\theta_i(s) = 2.3$ and $\theta_i(t) = 5.7$, then $\lfloor 5.7\rfloor - \lfloor 2.3\rfloor = 3$ frames were sent.
Between any two nodes $i$ and $j$, there are two links, one in each direction. The link from node $j$ to node $i$ has latency $l_{ij}$. Note that the latency includes the time to serialize a frame, transmit it across the physical link, and deserialize the frame into the elastic buffer. It does not include the time for the frame to propagate from the tail to the head of the elastic buffer, and so is not a measure of communication delay between nodes.
On this link, the number of frames at time $t$ is therefore
$$\lfloor\theta_j(t)\rfloor - \lfloor\theta_j(t - l_{ij})\rfloor.$$
Note that this implies a specific interpretation of boundary points, so that frames which at time $t$ are exactly at the start of the link are considered as on the link, and frames that are at the end of the link are considered as no longer on the link.
Define the number of frames received into the elastic buffer at $i$ from $j$ over the interval $[s,t]$ to be $R_{i,j}(s,t)$. Since the links do not drop frames, the number received is simply the number sent, delayed by the latency. We have
$$R_{i,j}(s,t) = \lfloor\theta_j(t - l_{ij})\rfloor - \lfloor\theta_j(s - l_{ij})\rfloor.$$
To determine the occupancy of the elastic buffer, we need to specify additional information, because while knowledge of $R_{i,j}$ and $\theta_i$ informs us of how many frames arrive and leave the buffer within a particular time interval, the total number of frames in the buffer also depends on how many it contained beforehand. So we define $\beta_{i,j}(t)$ to be the occupancy of the elastic buffer at node $i$ associated with the link from node $j$ at time $t$, and $\beta_{i,j}(0)$ to be the occupancy at time $0$. The following gives a mathematical definition of $\beta_{i,j}$. We construct $\beta_{i,j}$ as the unique function for which $\beta_{i,j}(0)$ equals the given initial occupancy,
and for which the following difference relationship holds for all $t \geq s \geq 0$. The difference between the occupancy at time $t$ and the occupancy at time $s$ is simply the number of frames received minus the number sent over that interval. That is,
$$\beta_{i,j}(t) - \beta_{i,j}(s) = R_{i,j}(s,t) - \bigl(\lfloor\theta_i(t)\rfloor - \lfloor\theta_i(s)\rfloor\bigr) = \lfloor\theta_j(t - l_{ij})\rfloor - \lfloor\theta_j(s - l_{ij})\rfloor - \lfloor\theta_i(t)\rfloor + \lfloor\theta_i(s)\rfloor. \qquad (1)$$
Now define
$$\lambda_{i,j}(t) = \beta_{i,j}(t) + \lfloor\theta_i(t)\rfloor - \lfloor\theta_j(t - l_{ij})\rfloor, \qquad (2)$$
and notice that equation (1) implies that $\lambda_{i,j}(t) = \lambda_{i,j}(s)$ for all $t \geq s \geq 0$.
Hence $\lambda_{i,j}$ is constant. Evaluating equation (2) at $t = 0$ shows that it may be determined from $\beta_{i,j}(0)$ and the initial conditions for $\theta_j$, which specify the value of $\theta_j(t)$ for all $t \in [-l_{ij}, 0]$. Note that the initial conditions are not simply given by $\theta_j(0)$ since the system contains delays, and is therefore infinite-dimensional. Then we have
$$\beta_{i,j}(t) = \lambda_{i,j} + \lfloor\theta_j(t - l_{ij})\rfloor - \lfloor\theta_i(t)\rfloor.$$
Note in particular that $\lambda_{i,j}$ and $\lambda_{j,i}$ may differ.
Using the above definition of $\lambda_{i,j}$, it is convenient to write it as
$$\lambda_{i,j} = \beta_{i,j}(t) + \bigl(\lfloor\theta_j(t)\rfloor - \lfloor\theta_j(t - l_{ij})\rfloor\bigr) + \bigl(\lfloor\theta_i(t)\rfloor - \lfloor\theta_j(t)\rfloor\bigr).$$
This number is the buffer occupancy, plus the link occupancy, plus the clock offset between the nodes. We can interpret its invariance as follows. It is constant because the first two terms sum to the number of frames on the path from the head of the buffer at $j$ to the head of the buffer at $i$. The only way this sum can change is via one or other clock increasing by one, and those two actions correspond to a frame being added to or removed from this path; each such change is exactly cancelled by the corresponding change in the third term, the clock offset. Furthermore, we have
$$\lambda_{i,j} + \lambda_{j,i} = \beta_{i,j}(t) + \beta_{j,i}(t) + \bigl(\lfloor\theta_j(t)\rfloor - \lfloor\theta_j(t - l_{ij})\rfloor\bigr) + \bigl(\lfloor\theta_i(t)\rfloor - \lfloor\theta_i(t - l_{ji})\rfloor\bigr),$$
which means that the total number of frames on the two links plus two buffers is conserved.
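The following short Python sketch evaluates the occupancy both by counting sent and received frames and from the closed-form expression involving $\lambda_{i,j}$, and checks that the two agree. The frequencies, latency, phases, and initial occupancy are made-up illustrative values, and the clocks are held at constant rate; with mismatched frequencies the occupancy drifts while $\lambda_{i,j}$ stays fixed.

```python
import math

# Illustration of the occupancy bookkeeping. All parameter values are made up,
# and both clocks run at constant frequency for simplicity.
l_ij = 0.3                        # latency of the link from j to i (seconds)
w_i, w_j = 1.00, 1.02             # clock frequencies (ticks per second)
theta_i0, theta_j0 = 0.25, 0.65   # initial clock phases (not integers)
beta_ij0 = 10                     # initial occupancy of the buffer at i fed by j

def theta_i(t): return theta_i0 + w_i * t
def theta_j(t): return theta_j0 + w_j * t

# The constant lambda_{i,j}, from equation (2) evaluated at t = 0.
lam_ij = beta_ij0 + math.floor(theta_i(0)) - math.floor(theta_j(-l_ij))

for t in [0.0, 10.0, 100.0, 1000.0]:
    # occupancy by counting: initial occupancy + frames received - frames read
    received = math.floor(theta_j(t - l_ij)) - math.floor(theta_j(-l_ij))
    read     = math.floor(theta_i(t)) - math.floor(theta_i(0))
    beta_count = beta_ij0 + received - read
    # occupancy from the closed-form expression beta = lambda + floor(theta_j(t-l)) - floor(theta_i(t))
    beta_closed = lam_ij + math.floor(theta_j(t - l_ij)) - math.floor(theta_i(t))
    assert beta_count == beta_closed
    print(f"t={t:7.1f}  beta_ij={beta_closed:4d}")   # occupancy grows since w_j > w_i
```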
In the above description we have for simplicity only discussed the case where all links transmit at the same rate. However, it is straightforward to extend this model to include links which send at different rates. To do this, one adds gearboxes so that node $j$ sends $\nu_{ij}$ frames onto the link to node $i$ for every tick of $\theta_j$ (with the same ratio used at both ends of the edge). Then the buffer occupancies become
$$\beta_{i,j}(t) = \lambda_{i,j} + \lfloor \nu_{ij}\,\theta_j(t - l_{ij})\rfloor - \lfloor \nu_{ij}\,\theta_i(t)\rfloor.$$
Since this additional complexity does not affect the control mechanism responsible for synchronization we do not discuss it further here.
In a hardware implementation, each node has memory dedicated to the elastic buffer for each network interface. Two pointers are stored which keep track of each end of the buffer, so that adding or removing a frame does not require data to be moved in memory. However, the buffer has a fixed size, and so can overflow. Both overflow and underflow at any node are fatal errors for the bittide system.
The requirement that the elastic buffers neither overflow nor underflow means that the difference in clock frequencies at the two ends of a link cannot stay too large for too long; if it does, either the buffer at the low-frequency end will overflow or the buffer at the high-frequency end will underflow, or both.
Define the elastic buffer length to be $b$. Then we must ensure that the frequencies are such that the occupancies satisfy
$$0 \leq \beta_{i,j}(t) \leq b \quad \text{for all } i, j \text{ and all } t \geq 0.$$
This is the fundamental performance requirement that the control system must enforce. Additionally, it is preferable that $b$ be small, since smaller buffers mean smaller communication latency. Here, by communication latency we mean the amount of real time (wall-clock time) that it takes for a frame to leave the head of the source elastic buffer and arrive at the head of the destination elastic buffer.
Achieving this requirement means we must have clocks that are operating (on average) at the same frequency at all nodes. In practice, left alone, no two clocks will remain perfectly synchronized, and over time their counters will diverge. Some of the most stable clocks are atomic clocks, which offer a relative error of about 1 part in $10^{11}$. This much error means that the buffer will accumulate about 1 bit every 100 seconds on a gigabit link. To avoid buffer overflow and underflow, we therefore need to use feedback control to stabilize the buffer occupancies.
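To spell out the arithmetic behind this estimate: a relative frequency error of $10^{-11}$ between the two ends of a link carrying $10^{9}$ bits per second changes the elastic buffer occupancy at a rate of about
$$10^{-11} \times 10^{9}\ \text{bits/s} = 10^{-2}\ \text{bits/s},$$
that is, roughly one bit every 100 seconds; left uncorrected, this eventually overflows or underflows any finite buffer.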
5 Connecting a controller
The basic idea of the control system is that it can measure the buffer occupancies $\beta_{i,j}$ and set the frequencies $\omega_i$. However, there are several complicating factors. The most fundamental is that the controller cannot actually set the exact frequency $\omega_i$. The oscillator at node $i$ has a frequency at which it will operate when it is not corrected, which is called the uncorrected frequency, denoted by $\omega^{\mathrm{u}}_i$. When the oscillator is controlled, the frequency is
$$\omega_i(t) = \omega^{\mathrm{u}}_i + c_i(t),$$
where $c_i(t)$ is the correction set by the controller. The uncorrected frequency $\omega^{\mathrm{u}}_i$ is typically not known exactly, and is subject to both manufacturing tolerances and the effects of aging, temperature, and other physical effects. It changes with time, and is not measurable while the system is running. Models for the change in $\omega^{\mathrm{u}}_i$ over time include phenomena such as drift and jitter [1].
At each node, by design, there is no way to measure wall-clock time $t$; the best the processor can do is observe the local clock $\theta_i$. It is this notion of time that determines when the occupancy of the elastic buffers is sampled and when the frequency is updated. Therefore, the sample rate at each node varies, depending on the state of the system. This is one of the sources of nonlinear behavior in the system.
The model of the system is written using wall-clock time $t$ as independent variable. However, we do not assume that the controller can observe $t$. Node $i$ measures the buffer occupancies at times $t^0_i, t^1_i, t^2_i, \dots$. These sample times are defined by
$$\theta_i(t^k_i) = \theta_i(0) + kp, \qquad k = 0, 1, 2, \dots$$
Here $p$ is the sample period, in local ticks. Notice that, since the initial conditions specify the value of $\theta_i(0)$, the first sample time is $t^0_i = 0$. After sampling at time $t^k_i$, node $i$ sets the frequency correction at a time $d$ local ticks later. Specifically, the correction is set at times $u^k_i$, defined by
$$\theta_i(u^k_i) = \theta_i(0) + kp + d.$$
It is important that the initial phase $\theta_i(0)$ is not an integer. Otherwise, the buffer occupancy is measured at exactly the times when $\theta_i$ is integral, and those are precisely the times at which a frame is removed from the elastic buffer. While this is mathematically well defined, in practice we cannot measure buffer occupancy exactly at this time. The interpretation of the fractional part of $\theta_i(0)$ is that it specifies when the samples are made, relative to the removal of frames from the buffer.
At each time $t^k_i$, the controller at node $i$ measures $\beta_i(t^k_i)$, the set of buffer occupancies at that node, that is
$$\beta_i(t^k_i) = \bigl(\beta_{i,j}(t^k_i)\bigr)_{j \in \mathcal{N}_i},$$
where $\mathcal{N}_i$ denotes the set of neighbors of node $i$.
Each buffer occupancy is labeled with the neighboring node that supplies it. The controller is a function $f_i$ which maps the history of these measurements to the correction $c_i(k)$, according to
$$c_i(k) = f_i\bigl(\beta_i(t^0_i), \beta_i(t^1_i), \dots, \beta_i(t^k_i)\bigr). \qquad (3)$$
This correction is applied on the interval $[u^k_i, u^{k+1}_i)$, and as a result the frequency is piecewise constant, with
$$\omega_i(t) = \omega^{\mathrm{u}}_i + c_i(k) \quad \text{for } t \in [u^k_i, u^{k+1}_i). \qquad (4)$$
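As a concrete illustration of this timing, using made-up values $p = 100$, $d = 10$, and $\theta_i(0) = 0.5$: node $i$ samples its buffer occupancies when its local clock reads $0.5, 100.5, 200.5, \dots$, and the correction computed from the sample taken at local time $100k + 0.5$ takes effect when the local clock reads $100k + 10.5$; the frequency $\omega_i$ then remains constant until the next correction takes effect at local time $100(k+1) + 10.5$.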
5.1 The abstract frame model
The model defined by the above equations is called the abstract frame model for bittide. It defines the connection between the controller and the clock, using an ideal abstraction for the frames, by which one can determine the location of all of the frames using only the history of $\omega$. Since this is a transport model, one might also model it using an advection partial differential equation, but here we use the delay-differential equation form. We summarize the model here. For all $i$, $j$, and $t \geq 0$,
$$\begin{aligned}
\frac{d\theta_i(t)}{dt} &= \omega_i(t), \\
\beta_{i,j}(t) &= \lambda_{i,j} + \lfloor\theta_j(t - l_{ij})\rfloor - \lfloor\theta_i(t)\rfloor, \\
\theta_i(t^k_i) &= \theta_i(0) + kp, \qquad \theta_i(u^k_i) = \theta_i(0) + kp + d, \\
c_i(k) &= f_i\bigl(\beta_i(t^0_i), \dots, \beta_i(t^k_i)\bigr), \\
\omega_i(t) &= \omega^{\mathrm{u}}_i + c_i(k) \quad \text{for } t \in [u^k_i, u^{k+1}_i), \ k = 0, 1, 2, \dots
\end{aligned} \qquad (5)$$
The initial conditions of the model are
$$\theta_i(t) = \theta_i(0) + \omega_i(0)\,t \quad \text{for } t \leq 0, \qquad\qquad \omega_i(t) = \omega_i(0) \quad \text{for } t \in [0, u^0_i). \qquad (6)$$
The first of these equations ensures that the dynamics are well-defined, by specifying the initial value of $\theta_i$ on a sufficiently large interval of time for the delay dynamics. The second equation defines $\omega_i$ on the period between time zero and the time at which the first controller action takes effect.
A controller is called admissible if
$$\omega^{\mathrm{u}}_i + f_i\bigl(\beta_i(t^0_i), \dots, \beta_i(t^k_i)\bigr) \geq \omega_{\min}$$
for all $i$ and $k$. This ensures that
$$\omega_i(t) \geq \omega_{\min} > 0 \quad \text{for all } t \geq 0. \qquad (7)$$
The parameters of the model are as follows. The minimum frequency is . The epoch is , and it must satisfy
The initial buffer occupancies are $\beta_{i,j}(0)$. The sampling period is $p$, the number of clock cycles between controller updates. The delay $d$ models the time (in local ticks) taken to compute the controller update and for the oscillator to respond to a frequency change. We assume $0 < d < p$. The initial frequencies are $\omega_i(0)$.
The initial clock phases are $\theta_i(0)$. The uncorrected oscillator frequencies are $\omega^{\mathrm{u}}_i$. The constants $\lambda_{i,j}$ are computed according to
$$\lambda_{i,j} = \beta_{i,j}(0) + \lfloor\theta_i(0)\rfloor - \lfloor\theta_j(-l_{ij})\rfloor.$$
With a state-space decentralized controller, we have
$$x_i(k+1) = g_i\bigl(x_i(k), \beta_i(t^k_i)\bigr), \qquad c_i(k) = h_i\bigl(x_i(k), \beta_i(t^k_i)\bigr),$$
where $x_i(k)$ is the state of the controller at node $i$ and step $k$, and $g_i$, $h_i$ define its dynamics and output. This results in an input-output controller map of the form (3).
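As one possible instance of this state-space form, the sketch below implements a proportional-integral law whose state accumulates the occupancy error. The class name, gains, nominal occupancy, and the correction floor are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class PIController:
    kP: float = 2e-3        # proportional gain (illustrative)
    kI: float = 1e-5        # integral gain (illustrative)
    beta_off: float = 16.0  # nominal buffer occupancy (illustrative)
    c_min: float = -0.5     # lower bound on the correction, chosen so that
                            # omega_u + c >= omega_min for the oscillators in use
    x: float = 0.0          # controller state: accumulated occupancy error

    def step(self, occupancies: list) -> float:
        """One update: x_i(k+1) = g_i(x_i(k), beta_i(t_i^k)), c_i(k) = h_i(x_i(k), beta_i(t_i^k))."""
        err = sum(b - self.beta_off for b in occupancies)
        c = self.kP * err + self.kI * self.x
        self.x = self.x + err
        return max(c, self.c_min)   # enforce admissibility, so that omega_i >= omega_min

# one controller per node, fed only with that node's own buffer occupancies
ctrl = PIController()
print(ctrl.step([18.0, 15.0, 17.0]))
```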
6 Existence and uniqueness of solutions
It is important to establish the existence and uniqueness of solutions for the abstract frame model. The model’s dynamics are a type of hybrid system, with nonlinearities, state-dependent multi-rate sampling and delays, making the analysis challenging. In addition, the buffer occupancies are discrete, and this allows for the possibility that the dynamics will exhibit Zeno behavior. We will establish that admissibility (7) is a critical condition to avoid Zeno behavior when frames enter or leave the elastic buffer. As such, we can ensure the existence and uniqueness of solutions for any controller that satisfies the assumptions above.
We summarize the controller and sampling behavior of the model by writing it as
$$\omega(t) = F(\theta, t).$$
Here the function $F$ contains the construction of the sample times $t^k_i$, the measurement, and the controller. Notice that the first argument of $F$ is the entire history of $\theta$, not just its value at a particular time. To state this precisely, define the (non-minimal) state space for the system as follows. Consider functions with or , which are piecewise linear, continuous, and satisfy
Let denote the set of all such functions, and . The function has domain , and . Specifically, for , , and , we have if and for some . Under these conditions, for , let be such that
which must exist, since is strictly increasing. Evaluate
and define . Notice that evaluating requires evaluating at times , which lie within the domain of by the requirements on .
We can now prove the following result.
Theorem 1. Suppose the controller at every node is admissible. Then the abstract frame model (5), with initial conditions (6), has a unique solution defined for all $t \geq 0$.
Proof. First let $\theta$ be defined by the initial conditions (6). We can now define a map $G$ as follows. Given $\theta$, define the index $i$ according to
$$i = \min\Bigl\{\, m \in \mathcal{V} \;:\; \sup\operatorname{dom}\theta_m \leq \sup\operatorname{dom}\theta_n \ \text{for all } n \in \mathcal{V} \Bigr\}, \qquad (8)$$
which simply finds the component of which has the smallest domain, breaking ties by choosing the smallest index. Let . We have lies in , and so let . We now extend the piecewise linear function by adding a point so that
and extending to this point via linear interpolation. Let this newly extended function be $\theta'$, whose derivative is by construction bounded below by $\omega_{\min}$. We define $G$ by $G(\theta) = \theta'$. The proof now proceeds by induction, constructing a sequence of functions by repeatedly applying $G$ to $\theta$. At each step, the domain of one component of $\theta$ is extended by at least a fixed positive amount. This is always the component with the smallest domain, and so in the limit the domain of $\theta$ extends to $[0, \infty)$ as desired. Uniqueness follows by observing that the order in which updates are performed, as determined by the breaking of ties in (8), does not affect the resulting solution.
7 Simulation
The above proof of existence also leads immediately to the following algorithm for simulating the system.
while $\min_i\,\bigl(\text{right endpoint of } \operatorname{dom}\theta_i\bigr) < t_{\text{final}}$:
    select the node $i$ whose clock $\theta_i$ has the smallest domain
    compute the next sample time $t^k_i$ and measure the occupancies $\beta_{i,j}(t^k_i)$
    compute the correction $c_i(k) = f_i\bigl(\beta_i(t^0_i), \dots, \beta_i(t^k_i)\bigr)$
    extend $\theta_i$ by adding a knot at the next correction time $u^k_i$; from $u^k_i$ onward the slope of $\theta_i$ is $\omega^{\mathrm{u}}_i + c_i(k)$
Here $\theta_i$ is a piecewise linear increasing function, stored as a list of knots, i.e., pairs of real numbers. It can be evaluated via linear interpolation, as can the inverse function $\theta_i^{-1}$. The operation that extends $\theta_i$ simply adds a knot to the end of the list. Also, the right endpoint of the domain of $\theta_i$ is given by the independent variable of the last knot.
Code to simulate this system is available at https://bittide.googlesource.com/callisto.
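The sketch below follows this algorithm under illustrative assumptions: a triangular topology, equal link latencies, a simple proportional controller with an occupancy offset, and made-up parameter values. It is a compact restatement of the algorithm in Python, not the callisto implementation.

```python
import math
from bisect import bisect_right

# Each clock theta_i is a piecewise-linear function stored as a list of (time, phase)
# knots; at every step the node whose clock has the smallest domain is extended by one
# segment, up to its next correction time. All parameter values are illustrative.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}                       # triangle topology
lat     = {i: {j: 3.0 for j in neighbors[i]} for i in neighbors}    # latency of link j -> i
omega_u = {0: 0.98, 1: 1.00, 2: 1.03}                               # uncorrected frequencies
theta0  = {i: 0.5 for i in neighbors}                               # initial phases (non-integer)
beta0   = {i: {j: 16 for j in neighbors[i]} for i in neighbors}     # initial occupancies
p, d    = 10, 9                                                     # sample period and delay (ticks)
kP, beta_off = 1e-2, 16.0                                           # proportional controller

def pwl(knots, t):
    """Evaluate a piecewise-linear function given as a list of (time, value) knots."""
    ts = [kt for kt, _ in knots]
    assert ts[0] <= t <= ts[-1], "query outside the constructed domain"
    idx = min(max(bisect_right(ts, t) - 1, 0), len(knots) - 2)
    (t0, v0), (t1, v1) = knots[idx], knots[idx + 1]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

L = max(lat[i][j] for i in lat for j in lat[i])
# initial conditions: linear clocks on [-L, 0], taking omega_i(0) = omega_u_i here
theta = {i: [(-L, theta0[i] - omega_u[i] * L), (0.0, theta0[i])] for i in neighbors}
freq  = {i: omega_u[i] for i in neighbors}          # current frequency of each clock
nxt   = {i: 0 for i in neighbors}                   # index k of the next correction
lam   = {i: {j: beta0[i][j] + math.floor(theta0[i]) - math.floor(pwl(theta[j], -lat[i][j]))
             for j in neighbors[i]} for i in neighbors}

def beta(i, j, t):
    """Occupancy of the buffer at i fed by j, via the closed-form expression of Section 4."""
    return lam[i][j] + math.floor(pwl(theta[j], t - lat[i][j])) - math.floor(pwl(theta[i], t))

T_END = 300.0
while min(kn[-1][0] for kn in theta.values()) < T_END:
    i = min(theta, key=lambda n: theta[n][-1][0])   # node whose clock has the smallest domain
    t_dom, phase_dom = theta[i][-1]
    k = nxt[i]
    # extend theta_i linearly (slope = current frequency) up to the correction time u_i^k,
    # at which the phase reaches theta_i(0) + k*p + d
    u = t_dom + (theta0[i] + k * p + d - phase_dom) / freq[i]
    theta[i].append((u, theta0[i] + k * p + d))
    t_k = u - d / freq[i]                           # sample time: phase equals theta_i(0) + k*p
    c = kP * sum(beta(i, j, t_k) - beta_off for j in neighbors[i])
    freq[i] = omega_u[i] + c                        # frequency applied from u onward
    nxt[i] = k + 1

for i in neighbors:
    print(i, round(freq[i], 4), [beta(i, j, T_END) for j in neighbors[i]])
```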
7.1 Limitations of the model
The abstract frame model allows one to determine the location of all of the individual frames on the network. For control systems, one often uses models at a coarser level of precision. The AFM does, however, omit certain phenomena. In particular, the model assumes that a frame is inserted into the elastic buffer as soon as it has traversed the link.
The model also does not include limitations which may be imposed by the particular physical oscillator used. The oscillator frequency is set by writing the desired frequency offset into a register, the number of bits of which determines the number of quantization levels available for control. Depending on the specific system parameters, this quantization may have a significant effect on the control system.
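As a simple illustration of this quantization (a sketch, with a made-up register width and full-scale range, not those of any particular oscillator), the following rounds a requested correction to the nearest representable level of a hypothetical N-bit offset register.

```python
def quantize(c, bits=8, c_max=1e-4):
    """Round a correction c to an N-bit register spanning [-c_max, c_max] (illustrative)."""
    levels = 2 ** bits - 1                 # number of steps across the full range
    step = 2 * c_max / levels
    c = min(max(c, -c_max), c_max)         # saturate to the register range
    q = round((c + c_max) / step)          # nearest representable level
    return q * step - c_max

print(quantize(3.3e-6))   # the applied correction differs from the request by up to step/2
```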
8 Control objectives and requirements
The objective of bittide is the maintenance of logical synchronization. No matter what frequencies the nodes run at, the logical synchronization happens as a consequence of the lockstep behavior of the system, until, that is, a buffer overflows or underflows. Thus, the primary objective of the control system is to manage the buffer occupancy: keep it within limits and make it as small as possible. Buffer occupancy translates directly into communication latency, so smaller is better, as long as the buffer does not underflow. Keeping the buffer within limits also requires that frequencies not deviate too much, or for too long, from each other. Minimizing the frequency deviation is not a goal. However, the physical oscillator may drift, and the controller should tolerate this drift.
A secondary objective is to keep the frequency as large as possible; if there were no frequency limits, a trivial control strategy for managing buffer occupancy would simply be to reduce all nodes to close to zero frequency. Since the processor cores are clocked exactly in lockstep with frame transmission, higher frequencies are better because they result in faster computation. The controller should behave well when encountering constraints imposed by frequency limits.
All of the above objectives must be achieved with a decentralized controller. This has profound consequences for the application layer, enabling strong security guarantees. For the control algorithm, it means that the individual controllers cannot communicate directly with each other to share measurement information and coordinate their actions.
We impose specific design choices on the controllers for a bittide system, motivated by the desire for strong isolation. They cannot use Paxos-style consensus algorithms, or elect a leader. They cannot mark or inspect frames. They cannot observe the real time . They cannot broadcast information on the network. Each node must update its frequency using only observations of its elastic buffers.
There is however an additional freedom available to the bittide controller. After booting, once the buffer occupancies and frequencies have reached equilibrium, each node may adjust its buffer, discarding frames or adding frames, and thereby reset its buffer occupancy to a desired value. This can happen only at bootup, because at this stage the frames do not yet contain application data. Therefore the objective that buffer occupancies should be regulated applies only after the initial transients of the boot phase.
The bittide control system presented here measures buffer occupancy and uses this directly to choose the frequency correction. More sophisticated schemes which estimate frequencies at neighbors are possible and may offer performance benefits.
There are further practical considerations for the bittide system controller. These include allowing nodes to leave or join the network gracefully, and detection of node and link failures. It must also work over a wide range of network topologies, and it is preferable that minimal configuration should be necessary to inform the controller of the topology and link latencies. The controller should handle a broad range of frequencies. There are many ways to formalize these objectives, and the design of a controller that achieves all of these objectives remains a subject for research.
9 Example
Figure 2: Simulation of the three-node example. The upper plot shows the buffer occupancies $\beta_{i,j}$ and the lower plot shows the frequencies $\omega_i$, both as functions of time $t$.
Figure 2 shows an example simulation for a graph with three nodes and three edges in a triangular topology. The parameters for this system are
for all , and the uncorrected frequencies are
These parameters are chosen for illustrative purposes only (so that the variations in occupancy and frequency are visible in the figure). For example, on a modern system the uncorrected frequencies would typically be much closer together. The controller used here is a simple proportional controller, given by
$$c_i(k) = k_P \sum_{j \in \mathcal{N}_i} \beta_{i,j}(t^k_i),$$
with proportional gain $k_P > 0$. A stability analysis of this system can be found in [8].
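A direct transcription of this proportional law in Python is shown below; the gain value is illustrative, since the specific gain used to produce Figure 2 is not given here.

```python
def proportional_correction(occupancies, k_p=1e-3):
    """c_i(k) = k_p * sum over the node's elastic buffer occupancies (illustrative gain)."""
    return k_p * sum(occupancies)

print(proportional_correction([18, 15, 17]))   # 1e-3 * 50 = 0.05
```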
Some features of this mechanism are apparent. The frequency at a node is proportional to the sum of the buffer lengths, and so short buffer lengths at a node will cause that node to have a low rate of transmission, and so its buffer lengths will increase. Consequently we expect that frequencies and buffer lengths will tend to equilibrate, as seen in these plots, and that at equilibrium all frequencies will be close to each other. We can also observe that the buffer occupancies at each end of a link almost sum to a constant. This mirroring of the buffer occupancies is not perfect, due to the effects of latency.
10 Conclusions
In this paper we have presented a model for the bittide synchronization mechanism. We have discussed the unique features and requirements of the control design problem. As these systems are deployed, we anticipate that further research on them will follow.
11 Acknowledgments
We thank Sam Grayson, Sahil Hasan, Sarah Aguasvivas Manzano, Jean-Jacques Slotine, and Tong Shen for many fruitful discussions. We particularly thank Sarah Aguasvivas Manzano for carefully reading the manuscript.
References
- [1] D. W. Allan. Time and frequency (time-domain) characterization, estimation, and prediction of precision clocks and oscillators. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 34(6):647–654, 1987.
- [2] S. P. Boyd, P. Diaconis, and L. Xiao. Fastest mixing Markov chain on a graph. SIAM Review, 46(4):667–689, 2004.
- [3] J. C. Corbett et al. Spanner: Google’s globally distributed database. ACM Transactions on Computer Systems, 31(3):1–22, Aug. 2013.
- [4] J. Dean and L. A. Barroso. The tail at scale. Communications of the ACM, 56(2):74–80, 2013.
- [5] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.
- [6] A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6):988–1001, 2003.
- [7] F. P. Kelly, A. K. Maulloo, and D. Tan. Rate control for communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society, 49(3):237–252, 1998.
- [8] S. Lall, C. Caşcaval, M. Izzard, and T. Spalink. Resistance distance and control performance for bittide synchronization. European Control Conference, 2022.
- [9] Y. Li et al. Sundial: fault-tolerant clock synchronization for datacenters. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 1171–1186, Nov. 2020.
- [10] D. L. Mills. Internet time synchronization: the network time protocol. IEEE Transactions on Communications, 39(10):1482–1493, 1991.
- [11] C. W. Reynolds. Flocks, herds and schools: a distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pages 25–34, 1987.
- [12] T. Spalink. Deterministic sharing of distributed resources. PhD thesis, Princeton University, 2006.
- [13] S. Strogatz. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D: Nonlinear Phenomena, 143(1-4):1–20, 2000.
- [14] D. Swaroop and J. K. Hedrick. String stability of interconnected systems. IEEE Transactions on Automatic Control, 41(3):349–357, 1996.
- [15] Telcordia GR-253. Synchronous Optical Network Transport Systems: Common Generic Criteria, 2000.
- [16] A. T. Winfree. Biological rhythms and the behavior of populations of coupled oscillators. Journal of Theoretical Biology, 16(1):15–42, 1967.