Asynchronous Spatial-Temporal Allocation for Trajectory Planning of Heterogeneous Multi-Agent Systems
Abstract
To plan the trajectories of a large-scale heterogeneous swarm, sequential or synchronous distributed methods usually become intractable due to the lack of global clock synchronization. To address this, we propose a novel asynchronous spatial-temporal allocation method. Specifically, between a pair of agents, the allocation determines their respective feasible time-stamped spaces and can be updated asynchronously by inserting a waiting duration between two consecutive replanning steps. Through theoretical analysis, inter-agent collisions are proved to be avoided, and the allocation is shown to be updated in a timely manner. Comprehensive simulations and comparisons with five baselines validate the effectiveness of the proposed method and illustrate its improvements in completion time and moving distance. Finally, hardware experiments are carried out, in which heterogeneous unmanned ground vehicles with onboard computation navigate cluttered scenarios with high agility.
Supplementary Materials
Video: https://youtu.be/au3fhqbySOE
Code: https://github.com/CYDXYYJ/ASAP
I Introduction
Collision-free trajectory planning plays an essential role for a swarm of agents navigating in a shared workspace [1]. The simplest method is to adopt a central coordinator that decides every agent’s motion [2, 3]. However, this centralized solution becomes unrealistic and intractable for large-scale swarms, due to the limited interaction range and unbearable computation time. Thus, an increasing number of works, e.g., [4, 5, 6], pursue distributed solutions, where the underlying agents decide their own actions via local communication with others. Related research has gradually advanced from sequential distributed methods [7] to synchronous concurrent distributed ones [8], and further to asynchronous distributed ones [9].
For decentralized planning, the commonly used methods are sequential distributed ones, e.g., [4, 7, 10, 11, 12], in which the participants replan their trajectories one after another. As such, the runtime only grows linearly with the number of agents. By comparison, concurrent distributed methods, e.g., [13, 14, 15, 16, 17], where different agents replan their trajectories simultaneously, can further decrease the computational complexity. Concurrent methods can be divided into two categories: synchronous and asynchronous. The former requires both global clock synchronization and timetable coordination to synchronize the calculation durations. This, nevertheless, becomes unrealistic for heterogeneous multi-agent systems, since diversified calculation times and indirect communication are inherent characteristics of such systems.
To fulfill asynchronous motion planning, the authors in [18] extend the reciprocal velocity obstacles method [19] to agents with kinodynamic constraints. Nonetheless, the short time horizon in [18] makes it unsuitable for high-agility cases. MADER [5] introduces a novel Check-Recheck deconfliction scheme, in which the Check period verifies whether an agent’s optimized trajectory collides with those committed by other agents, while the Recheck period assesses whether the agent has received any new trajectories during the Check period. However, each agent only considers other agents’ current trajectories, neglecting upcoming ones. A similar problem can be found in [20] and [21]. Meanwhile, the authors in [9] propose an asynchronous decentralized trajectory planner capable of ensuring safety despite communication delays and interruptions. This method extends the Buffered Voronoi Cells [22] to the asynchronous setting by enlarging the makespan of the constraints, which may over-constrain the solution space [20]. Another noteworthy work [23] employs the asynchronous alternating direction method of multipliers (ADMM) to resolve the non-convex constraints. To ensure safety despite packet loss, the method inflates the collision region and continues local ADMM iterations using the latest available predictions of neighbors. However, it lacks a formal proof of collision avoidance. In summary, asynchronous distributed planning methods with safety guarantees and high efficiency are still in demand.

To this end, a novel method named Asynchronous Spatial-Temporal Allocation (ASTA) is proposed to solve the online trajectory planning problem for heterogeneous multi-agent systems in an asynchronously distributed manner. For a single agent, the allocation toward a neighbor is defined as a sequence of time-stamped half spaces. The allocation is updated asynchronously based on the agents’ trajectories through local communication. To achieve this, an interval called the waiting time is inserted between two consecutive replanning steps of each agent to allow for allocation updates. To avoid potential inter-agent collisions when the agents adopt different versions of the allocation, the update is purposely restricted to a specific duration. It is theoretically proved that, by obeying this allocation, inter-agent collisions are avoided. In addition, the update frequency of the allocation is shown to have a tailored lower bound.
The contributions of the paper are summarized as follows:
-
Numerical simulations and comparisons with five state-of-the-art methods [4, 5, 8, 13, 16] validate the effectiveness of our method and illustrate the improvements in completion time and moving distance. In the hardware experiments (Fig. 1), heterogeneous unmanned ground vehicles (UGVs) with onboard computation finish two tasks with high agility, where the UGVs’ average speed can reach up to of their maximum speed.
II Problem Formulation
Consider a group of agents with different dynamic models in a shared 2D workspace. The objective is to drive these agents from their current states to their corresponding targets without any collision under asynchronous local communication.
II-A Dynamic Models
For agent , its state and control input at time are denoted as and , respectively. Its dynamic model is described as follows:
(1)
where and denote the set of available states and control inputs, respectively; reflects the differential constraint.
Note that the dynamic models of the agents in (1) are heterogeneous and include the commonly encountered double integrator, unicycle and bicycle models as special cases, elaborated as follows:
-
•
Double integrator: , , , where , ;
-
•
Unicycle: , , , where , ;
-
•
Bicycle: , , , where , , ;
in which , , , , , and represent the agent’s position, linear velocity, acceleration, heading angle, angular velocity, steering angle and wheelbase, and the subscripts , denote the , coordinates, respectively.
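The three dynamic models listed above can be sketched as simple forward-Euler updates. This is our own illustrative rendering, not the paper's implementation: the state layouts (e.g., `(px, py, vx, vy)`) and the default wheelbase value are assumptions for demonstration only.

```python
import math

# Minimal sketches of the three heterogeneous models (Section II-A),
# integrated with a forward-Euler step. State layouts are illustrative.

def double_integrator(state, u, dt):
    # state = (px, py, vx, vy), input u = (ax, ay)
    px, py, vx, vy = state
    ax, ay = u
    return (px + vx * dt, py + vy * dt, vx + ax * dt, vy + ay * dt)

def unicycle(state, u, dt):
    # state = (px, py, theta, v), input u = (a, omega)
    px, py, theta, v = state
    a, omega = u
    return (px + v * math.cos(theta) * dt,
            py + v * math.sin(theta) * dt,
            theta + omega * dt,
            v + a * dt)

def bicycle(state, u, dt, wheelbase=0.3):
    # state = (px, py, theta, v), input u = (a, delta),
    # with steering angle delta and an assumed wheelbase
    px, py, theta, v = state
    a, delta = u
    return (px + v * math.cos(theta) * dt,
            py + v * math.sin(theta) * dt,
            theta + v * math.tan(delta) / wheelbase * dt,
            v + a * dt)
```

In an actual planner these continuous dynamics enter the OCP as differential constraints rather than fixed-step integrations; the Euler step here only illustrates the heterogeneity of the state and input spaces.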
II-B Inter-Agent Collision Avoidance
In order to characterize the inter-agent collision avoidance, we first define the representation of the agents’ trajectories as follows:
Definition 1
The trajectory of agent between time span is defined as
The shape of agent at time is denoted by a convex polygon .
At any time , for any pair of agents and , they are collision-free if and only if . Accordingly, their trajectories are collision-free if and only if .
II-C Problem Statement
In this work, the trajectory planning is carried out in a receding horizon way, wherein is the planned state at in agent ’s -th replanning. Additionally, the planning horizon of the -th replanning is denoted as . The time when agent finishes its -th replanning step is denoted as , which is also the starting time of the -th trajectory. The -th replanned trajectory is denoted as . Particularly, for the time beyond the planning horizon, i.e., , we enforce .
Then, for agent in its -th replanning, the trajectory generation problem can be formulated as the following optimal control problem (OCP):
s.t.
(2a)
(2b) |
where ; is the predicted state at ; and is the objective function, defined as
which is utilized to drive agent to its target and to penalize the control inputs, where , and are the weighting matrices. In the ideal case, the lower-level controller is assumed to follow the planned trajectory perfectly such that .
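An objective of this kind can be sketched as a quadratic stage cost. The function name, argument shapes, and the identity weighting matrices below are our own illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def stage_cost(x, u, x_target, Q, R):
    """Quadratic stage cost: penalize deviation from the target state
    (weighted by Q) plus control effort (weighted by R)."""
    e = x - x_target
    return float(e @ Q @ e + u @ R @ u)

# Example with illustrative weights: unit deviation in one state
# coordinate plus a control input of 0.5.
cost = stage_cost(np.array([1.0, 0.0]), np.array([0.5, 0.0]),
                  np.zeros(2), np.eye(2), np.eye(2))
```

The total objective would sum such stage costs over the planning horizon; the relative scaling of Q and R trades off target attraction against control aggressiveness.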
Note that the collision avoidance constraints (2b) cannot be directly established at the replanning stage, since the exact future motions of other agents are unavailable. To solve this problem, we propose a method that dynamically reformulates constraints (2b) such that the OCP (2) can be formulated and solved in an asynchronous manner while inter-agent collisions are theoretically avoided.
III Proposed Method
As illustrated in Fig. 2, the proposed method has three main components: i) the Spatial-Temporal Allocation (STA), consisting of a sequence of half spaces with time stamps (), which determines the feasible time-stamped space w.r.t. another agent (the yellow part); ii) between a pair of agents, the STA is asynchronously updated based on their trajectories over a specific duration (the gray part); iii) trajectory planning via STA, which confines each agent’s planned trajectory to its respective feasible time-stamped space (the red part). These components are presented in the following three subsections.

III-A Spatial-Temporal Allocation
STA between agents and aims to allocate the time-stamped feasible region for replanning, so as to avoid collision between agents. The definition of STA is given below.
Definition 2
The Spatial-Temporal Allocation (STA) of agent w.r.t. agent is
where represents a half space in 2D space, is a normal vector of the boundary of pointing to the interior of and is the offset. We call a Time-Stamped Half Space (TSHS). Moreover, is the time when these agents establish communication. Conversely, we have , where , . As a result, it is clear that and .
Since STA is updated along with the replanning process, we use to represent the allocation after the -th update in the sequel.
III-B Allocation Update
The allocation update is a core part of our method, keeping the collision avoidance constraints up-to-date. For agent , there is a waiting time between two consecutive trajectory calculations (see Fig. 3). In this work, we particularly enforce that , in order to ensure a lower bound on the update frequency.
At time , when the -th replanning step is finished, the newly planned trajectory is broadcast to the other waiting agents. Specifically, if agent is in its waiting time after its -th replanning, it reaches a renewal with agent based on and . Fig. 3 provides an illustration. During its waiting time, agent can also receive other agents’ data and reach renewals. A renewal is defined as follows:
where is the start time of the -th renewal. Moreover, a part of this renewal from to is denoted as .
A renewal based on and is produced by generating a hyperplane that splits the space into two half spaces and , based on and , for . According to the separating hyperplane theorem [24], if , then we can always find a hyperplane that separates the two polygons. One possible implementation of the separating hyperplane is via the GJK algorithm [25, 13].
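The separating-hyperplane computation can be sketched as follows. Instead of a full GJK implementation [25], this illustration brute-forces the closest pair of points over all vertex-segment pairs, which is adequate for small 2D convex polygons; function names and the (normal, offset) convention are our own assumptions.

```python
def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment from a to b (all 2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    return (ax + t * dx, ay + t * dy)

def separating_hyperplane(P, Q):
    """For disjoint convex polygons P and Q (vertex lists), return (n, b)
    such that n·x <= b for all x in P and n·x >= b for all x in Q.
    The hyperplane passes through the midpoint of the closest pair."""
    best = None
    for A, B, flip in ((P, Q, False), (Q, P, True)):
        for p in A:
            for i in range(len(B)):
                q = closest_point_on_segment(p, B[i], B[(i + 1) % len(B)])
                d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                if best is None or d2 < best[0]:
                    pair = (q, p) if flip else (p, q)  # order: (point in P, point in Q)
                    best = (d2, pair)
    (px, py), (qx, qy) = best[1]
    n = (qx - px, qy - py)                 # normal pointing from P toward Q
    mid = ((px + qx) / 2, (py + qy) / 2)
    b = n[0] * mid[0] + n[1] * mid[1]
    return n, b
```

For convex polygons, the closest pair is always attained at a vertex-vertex or vertex-edge pair, so checking both directions suffices; a production planner would use GJK or a small QP for efficiency.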

Notably, although the duration concerned for the -th renewal is , only the duration , where , needs to be considered, because for , and hold according to the assumption in Section II-C that for , . Thus, we have as well as for .
For STA, when the first renewal starts, it is initialized as . In particular, for the first renewal, we enforce to avoid collision during . Then, for a new renewal , the STA is updated as follows:
(3)
where . In this updating mechanism, the STA is only updated for the time beyond , which is also reflected in Fig. 3.
A pair of agents reach a renewal if and only if one of them has just finished its replanning and the other is in its waiting time. This is a typical asynchronous mechanism, and the method is thus called Asynchronous Spatial-Temporal Allocation.
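The update rule (3) can be sketched on a simplified data structure: the allocation as a time-ordered list of time-stamped half spaces `(t_start, t_end, a, b)`, with a new renewal overwriting only the part of the allocation beyond a cutoff time. The layout and the cutoff argument are illustrative assumptions, not the paper's exact notation.

```python
def update_sta(sta, renewal, t_cut):
    """Partially update an allocation: entries of `sta` before `t_cut`
    are preserved (truncating any entry that straddles the cutoff),
    and entries of `renewal` after `t_cut` replace the rest.
    Each entry is (t_start, t_end, a, b) for the TSHS a·x <= b."""
    kept = []
    for (t0, t1, a, b) in sta:
        if t1 <= t_cut:
            kept.append((t0, t1, a, b))      # entirely before the cutoff: keep
        elif t0 < t_cut:
            kept.append((t0, t_cut, a, b))   # straddles the cutoff: truncate
    new = [(max(t0, t_cut), t1, a, b)
           for (t0, t1, a, b) in renewal if t1 > t_cut]
    return kept + new
```

Restricting the update to times beyond the cutoff is what prevents a pair of agents from disagreeing about the constraints governing the near future, which is the key to the safety argument in Section III-E.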
III-C Trajectory Planning Using STA
In OCP (2), during agent ’s -th replanning, given as the latest STA with agent , the constraints (2b) can be replaced by
(4)
where is the set of agents that have ever established communication with agent . Based on this replacement, we further reformulate the OCP (2):
s.t.
(5a)
(5b) |
where and . Note that the constraints (4) consider an intractably infinite time interval . We replace them with constraints (5a) and (5b), which only consider a finite horizon, thus facilitating implementation. The equivalence of OCP (2) with constraints (4) and OCP (5) is established in Lemma 1 of Appendix A. In implementation, the underlying continuous-time OCP is reformulated and solved in a discrete form, as stated in Section IV-A.
III-D The Overall Algorithm
The overall planning algorithm built upon ASTA is outlined in Alg. 1. At the beginning, each agent initializes its trajectory (Line 1), where we enforce that . Afterwards, the agent scans its communication network to find all neighbors (Line 1) and then broadcasts its trajectory (Line 1). Then, for each neighbor, it opens an independent thread to wait for the other agents’ feedback (Line 1). Once a confirmation signal is received, the allocation is updated as in equation (3). Thereafter, the receivers for all neighbors are closed (Line 1) and a new trajectory is planned by solving OCP (5). Note that the feasibility of the OCP cannot be guaranteed in this work, which highlights an area for future work; if OCP (5) is infeasible, the previous trajectory is adopted instead. Finally, the trajectory is sent to the lower-level controller, which controls the agent’s motion (Line 1). The whole loop is repeated until the agent reaches its target position.
III-E Algorithm Analysis
Note that a potential collision may occur between a pair of agents due to conflicts in their safety constraints, since they may adopt different versions of the allocation. To avoid this, the proposed method is designed to partially update the STA after a specific time, as in (3). Thanks to this design, inter-agent collisions can be theoretically avoided, as supported by the following theorem.
Theorem 1
For a pair of agents and , if i) agents ’s and ’s trajectories do not collide with each other when they establish their allocation, and ii) in every following replanning step, they obey their corresponding allocations, then , they will not collide with each other.
Proof:
Please refer to Appendix B. ∎
Besides the safety guarantee, another key point is that, since the allocation is updated in a triggered way, a lower bound on the update frequency is required. Otherwise, out-of-date allocations would adversely affect the efficiency of the swarm’s trajectory planning. Therefore, we enforce that the waiting time is longer than the computation time in Section III-B. Thus, the following property can be obtained.
Theorem 2
Between agents and , the lower bound of the frequency of the allocation update is
Proof:
Please refer to Appendix C. ∎
IV Simulations and Experiments
In this section, we first illustrate the implementation of the proposed method and then validate its performance via numerical simulations and hardware experiments.
IV-A Algorithm Implementation
This subsection demonstrates how to implement the proposed planning algorithm on digital platforms.
Since the OCP (5) is characterized in a continuous-time manner, it needs to be reformulated as a numerical optimization problem via discretization based on a sampling time interval . Accordingly, the length of the horizon is determined as . Consequently, in the -th replanning step, the planned states at times and the control inputs at times , where , are chosen as the optimization variables. The constraints (5a) and (5b) can be rewritten as , where , , is the collection of vertices of the polygon regarding the planned state , and represent the hyperplane obtained at time . The other constraints can be similarly discretized at these time steps.
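The discretization step can be sketched as follows: sample the planning horizon into steps, then impose the half-space constraint on every polygon vertex at each sampled state. Function names and argument shapes are our own illustrative assumptions.

```python
import math

def discretize_horizon(T, dt):
    """Split a continuous horizon of length T into N = ceil(T / dt)
    steps and return N together with the sampled time grid."""
    N = math.ceil(T / dt)
    return N, [k * dt for k in range(N + 1)]

def vertex_constraints_satisfied(vertices, a, b, tol=1e-9):
    """Check the discretized TSHS constraint a·v <= b for every vertex
    of the agent's polygon at one planned state."""
    return all(a[0] * vx + a[1] * vy <= b + tol for (vx, vy) in vertices)
```

Because the agent's shape is a convex polygon and the half space is convex, enforcing the constraint at the vertices enforces it over the whole shape at that time step.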
In terms of implementation, an agent only needs to consider its neighbors within a specified radius. This reduces the number of optimization constraints and keeps the computational complexity manageable and the method scalable. The obstacle avoidance scheme from our previous work [17] can be introduced to apply the proposed method in environments with obstacles. Furthermore, in our implementation, the agents can determine their relative time differences via communication, so global clock synchronization is eliminated. Lastly, we adopt the deadlock resolution scheme in [16] to prevent deadlocks.
IV-B Numerical Simulations
In this subsection, the OCP discretized as in Section IV-A is formulated and solved by acados [26]. In addition, the proposed communication mechanism is realized via the Robot Operating System (ROS). All the simulation cases are run on a single computer with an Intel Core [email protected], using multiple processes.
Index | Model | |||||||
---|---|---|---|---|---|---|---|---|
1 | Bicycle | 0.07 | 0.09 | 0.15 | 20 | 23/11 | 4.2 | 4.5 |
2 | Double. | 0.12 | 0.14 | 0.15 | 20 | 17/9 | 4.4 | 8.8 |
3 | Unicycle | 0.16 | 0.21 | 0.15 | 20 | 19/11 | 4.4 | 7.7 |
4 | Double. | 0.10 | 0.17 | 0.15 | 20 | 19/10 | 4.3 | 7.4 |
5 | Bicycle | 0.08 | 0.10 | 0.15 | 20 | 22/11 | 4.3 | 6.0 |
6 | Double. | 0.10 | 0.14 | 0.15 | 20 | 17/9 | 4.8 | 7.3 |
7 | Unicycle | 0.12 | 0.16 | 0.15 | 20 | 26/13 | 4.4 | 7.7 |
8 | Unicycle | 0.16 | 0.18 | 0.15 | 20 | 19/10 | 4.6 | 8.9 |
In the first simulation example, agents navigate to their antipodal positions on a circle with a diameter of m. Detailed information about the underlying agents and the simulation results is listed in Table I. The agents’ diameters are uniformly set as m, but their maximum speeds range from to . In this simulation, as elaborated in Section II-A, agents with different dynamic models are considered. The agents’ trajectories and inter-agent distances are depicted in Fig. 4, from which it can be observed that the minimum inter-agent distance is around m, larger than the safety distance , so safety is guaranteed throughout the simulation. The task is finished within s and the maximum transition length is m, as shown in Table I.
Three more complicated scenarios are further simulated. In the first scenario of Fig. 5, we consider a system composed of agents, confirming the method’s effectiveness for large-scale systems. In the second scenario, UGVs with bicycle models exchange their lateral positions concurrently, which simulates complicated lane changes in urban road traffic. The result shows a relatively smooth and fast position exchange, which further exhibits the practicality of our method for real traffic management. To further evaluate the proposed method in an environment with obstacles, we conducted the simulation shown in Fig. 6. The result indicates that the method can be extended to handle asynchronous trajectory planning in obstacle scenarios while ensuring safety.






Method | Dis. | |||
---|---|---|---|---|
Ego [4] | Seq. | 10.0 / 11.6 | 3 / 11 | 8.33 / 8.80 |
IMPC-DR [16] | Syn. | 9.5 / 10.1 | 95 / 130 | 8.57 / 8.79 |
DMPC [8] | Syn. | 9.4 / 10.6 | 51 / 90 | 8.43 / 8.89 |
LSC [13] | Syn. | 11.1 / 12.6 | 15 / 46 | 9.11 / 9.73 |
MADER [5] | Asyn. | 11.5 / 14.5 | 24 / 38 | 9.70 / 11.7 |
Ours | Asyn. | 9.0 / 9.6 | 8 / 21 | 8.24 / 8.34 |
Then, we compare the proposed ASTA with five state-of-the-art methods [4, 16, 8, 13, 5]. The dynamic models of the underlying robots are uniformly set as double integrators with the maximum velocity and acceleration set as and , respectively. Furthermore, all robots have the same planning horizon . In this scenario, eight agents need to navigate to their antipodal positions without inter-agent collision. The results are depicted in Fig. 7 and listed in Table II. Executed in a sequential (Seq.) way, Ego-Swarm [4] outperforms the other methods in computing time and achieves considerably good transition time and distance. LSC [13], IMPC-DR [16] and DMPC [8] all adopt synchronous (Syn.) concurrent replanning. All of them complete this task effectively, but with relatively longer transition distances compared to Ego-Swarm. In addition, replanning in an asynchronous (Asyn.) way, MADER [5] performs relatively worse in this crowded scenario, possibly due to its Check-Recheck scheme, which results in more conservative trajectories. In contrast, our method provides quicker and shorter transitions in addition to a relatively lower computational cost. In summary, with the additional capability of handling the task asynchronously, the proposed method not only outperforms its asynchronous counterpart [5], but also performs better than the other methods [4, 16, 8, 13].
IV-C Hardware Experiments


In this subsection, we verify the effectiveness of our method via hardware experiments, in which UGVs of different hardware models are involved, as shown in Fig. 8. Their dynamic models include unicycles, double integrators, and kinematic bicycles. The radii of these agents range from m to m, and their maximum velocities vary from to . The architecture of this multi-robot system is depicted in Fig. 8. The agents’ positions are obtained via the indoor motion capture system OptiTrack and transmitted to the other agents via WiFi and the ROS-based communication system. An MPC-based trajectory tracker is utilized to track the planned trajectory, and its output is sent to the actuator controller. The computation of the trajectory planner, the MPC tracker and the actuator controller is carried out on an onboard Raspberry Pi 4B computer. To mitigate the impact of communication delays in the experiments, we implemented a feedforward compensation mechanism.
The first scenario, named “crossing”, lets eight UGVs cross a rectangular ground. Since the maximum radius of the underlying agents reaches up to m, the testing ground is crowded. The actual moving snapshots are provided in Fig. 1, where the agents finish the navigation without any inter-agent collision. Furthermore, the average speed of all UGVs can reach up to of their corresponding maximum velocities, which demonstrates considerably high agility.

The second scenario shows the scalability of our method when an agent encounters new neighbors at an unsignalized intersection. Six agents perform reciprocating motion in a specific area. After tens of seconds, two additional Ackermann UGVs launch their trajectory planning procedures and set up communication with the others. Then, these two agents are sent to pass through this area. The result is illustrated in Fig. 9. The two Ackermann UGVs take less than s to finish this task, which means that their average speeds reach almost , i.e., of the maximum velocity. For more information about these experiments, please refer to the supplementary video.
V Conclusion
This work has proposed a novel distributed asynchronous trajectory planning method called Asynchronous Spatial-Temporal Allocation for multi-agent systems with heterogeneous constrained dynamics. Theoretical analysis, numerical simulations and hardware experiments all validate its effectiveness and performance. Future work will extend the proposed method to address scenarios involving communication delays or malicious neighbors.
Appendix A The Equivalence of Optimization Problems
Proof:
To begin with, we will prove that a trajectory satisfying constraints (5a) and (5b) can also satisfy constraints (4). For , if we have , i.e., equation (5a), then we can get that
(6)
If , for , and hold, then also holds, since . Combined with (6), we can obtain that
(7)
which is equal to .
Otherwise, we will consider the situation that . Case 1): For , and hold. Then, exists since . Case 2): For , we have enforced that in (5b). Combining with (6), we can also obtain (7). To sum up, a trajectory satisfying constraints (5a) and (5b) also satisfies constraints (4).
Next we intend to prove that a trajectory satisfying constraints (4) can also satisfy constraints (5a) and (5b). At first, given condition that , it is clear that equation (6) holds which means constraints (5a) are satisfied. For , we have such that can induce that holds for . This can further imply constraints (5b). As a result, the proof is completed. ∎
Appendix B Proof of Theorem 1
The actual trajectory for agent during can be written as
(8)
where , , and
(9)
represents the set of all the indices of the replanning steps whose trajectories are executed during .
Considering that agents and obey their corresponding STA, we have and , where and . According to Definition 2, we have and the following equation:
where , , and similar notations are used for agent . Incorporated with (8), the above equation can be converted into
This proves that the actual trajectories of agents and during time interval have no overlap. Moreover, this property holds for , which means that the inter-agent collision can be avoided after .
Then, we discuss the time interval . Note that no matter how STA is updated, STA in this interval is fixed as (). Thus, for agents and , any of their planned trajectories satisfies and , respectively, thus is collision-free. In particular, they are collision-free inherently when reaching their first renewal, because of the precondition. Therefore, in a similar way as above we can prove that . Then, the proof is completed.

Appendix C Proof of Theorem 2
We list all the possible situations in Fig. 10 to investigate the time interval between two adjacent updates. Without loss of generality, assume that the first renewal is reached after agent finishes its calculation. Accordingly, is defined as the time interval between the first renewal and the ending time of agent ’s first waiting time. Since all the situations enumerated in Fig. 10 admit exact expressions of the time interval except for cases (c) and (f), we pay special attention to these two scenarios.
Regarding case (c), owing to the assumption in Section III-B, it can be easily verified that this case cannot occur. For case (f), since , it can be verified that .
After analyzing all the situations, it is clear that holds and the lower bound of the updating frequency can be easily obtained.
References
- [1] Z. Liu, M. Guo, W. Bao, and Z. Li, “Fast and adaptive multi-agent planning under collaborative temporal logic tasks via poset products,” Research, vol. 7, Article 0337, 2024.
- [2] W. Hönig, J. A. Preiss, T. K. S. Kumar, G. S. Sukhatme, and N. Ayanian, “Trajectory planning for quadrotor swarms,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 856–869, 2018.
- [3] B. Li, Y. Ouyang, Y. Zhang, T. Acarman, Q. Kong, and Z. Shao, “Optimal cooperative maneuver planning for multiple nonholonomic robots in a tiny environment via adaptive-scaling constrained optimization,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1511–1518, 2021.
- [4] X. Zhou, X. Wen, Z. Wang, Y. Gao, H. Li, Q. Wang, T. Yang, H. Lu, Y. Cao, C. Xu, and F. Gao, “Swarm of micro flying robots in the wild,” Science Robotics, vol. 7, no. 66, p. eabm5954, 2022.
- [5] J. Tordesillas and J. P. How, “Mader: Trajectory planner in multiagent and dynamic environments,” IEEE Transactions on Robotics, vol. 38, no. 1, pp. 463–476, 2021.
- [6] F. Rey, Z. Pan, A. Hauswirth, and J. Lygeros, “Fully decentralized admm for coordination and collision avoidance,” in 2018 European Control Conference (ECC). IEEE, 2018, pp. 825–830.
- [7] A. Richards and J. How, “Decentralized model predictive control of cooperating uavs,” in 2004 IEEE Conference on Decision and Control (CDC), vol. 4. IEEE, 2004, pp. 4286–4291.
- [8] C. E. Luis and A. P. Schoellig, “Trajectory generation for multiagent point-to-point transitions via distributed model predictive control,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 375–382, 2019.
- [9] B. Şenbaşlar and G. S. Sukhatme, “Asynchronous real-time decentralized multi-robot trajectory planning,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 9972–9979.
- [10] Y. Chen, M. Cutler, and J. P. How, “Decoupled multiagent path planning via incremental sequential convex programming,” in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 5954–5961.
- [11] D. Morgan, G. P. Subramanian, S.-J. Chung, and F. Y. Hadaegh, “Swarm assignment and trajectory optimization using variable-swarm, distributed auction assignment and sequential convex programming,” International Journal of Robotics Research, vol. 35, no. 10, pp. 1261–1285, 2016.
- [12] M. Čáp, P. Novák, A. Kleiner, and M. Selecký, “Prioritized planning algorithms for trajectory coordination of multiple mobile robots,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 3, pp. 835–849, 2015.
- [13] J. Park, D. Kim, G. C. Kim, D. Oh, and H. J. Kim, “Online distributed trajectory planning for quadrotor swarm with feasibility guarantee using linear safe corridor,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4869–4876, 2022.
- [14] C. Toumieh and A. Lambert, “Decentralized multi-agent planning using model predictive control and time-aware safe corridors,” IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 11 110–11 117, 2022.
- [15] R. Firoozi, L. Ferranti, X. Zhang, S. Nejadnik, and F. Borrelli, “A distributed multi-robot coordination algorithm for navigation in tight environments,” arXiv preprint arXiv:2006.11492, 2020.
- [16] Y. Chen, M. Guo, and Z. Li, “Deadlock resolution and recursive feasibility in mpc-based multi-robot trajectory generation,” IEEE Transactions on Automatic Control, in press, 2024.
- [17] Y. Chen, C. Wang, M. Guo, and Z. Li, “Multi-robot trajectory planning with feasibility guarantee and deadlock resolution: An obstacle-dense environment,” IEEE Robotics and Automation Letters, vol. 8, no. 4, pp. 2197–2204, 2023.
- [18] J. Alonso-Mora, P. Beardsley, and R. Siegwart, “Cooperative collision avoidance for nonholonomic robots,” IEEE Transactions on Robotics, vol. 34, no. 2, pp. 404–420, 2018.
- [19] J. Van Den Berg, S. J. Guy, M. Lin, and D. Manocha, “Reciprocal n-body collision avoidance,” in Robotics Research: The 14th International Symposium ISRR. Springer, 2011, pp. 3–19.
- [20] K. Kondo, J. Tordesillas, R. Figueroa, J. Rached, J. Merkel, P. C. Lusk, and J. P. How, “Robust mader: Decentralized and asynchronous multiagent trajectory planner robust to communication delay,” in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 1687–1693.
- [21] C. Ma, Z. Han, T. Zhang, J. Wang, L. Xu, C. Li, C. Xu, and F. Gao, “Decentralized planning for car-like robotic swarm in cluttered environments,” in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023, pp. 9293–9300.
- [22] D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered Voronoi cells,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 1047–1054, 2017.
- [23] L. Ferranti, L. Lyons, R. R. Negenborn, T. Keviczky, and J. Alonso-Mora, “Distributed nonlinear trajectory optimization for multi-robot motion planning,” IEEE Transactions on Control Systems Technology, vol. 31, no. 2, pp. 809–824, 2023.
- [24] S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
- [25] E. G. Gilbert, D. W. Johnson, and S. S. Keerthi, “A fast procedure for computing the distance between complex objects in three-dimensional space,” IEEE Journal on Robotics and Automation, vol. 4, no. 2, pp. 193–203, 1988.
- [26] R. Verschueren, G. Frison, D. Kouzoupis, J. Frey, N. v. Duijkeren, A. Zanelli, B. Novoselnik, T. Albin, R. Quirynen, and M. Diehl, “acados—a modular open-source framework for fast embedded optimal control,” Mathematical Programming Computation, vol. 14, no. 1, pp. 147–183, 2022.