
Sensor-based Multi-agent Coverage Control with Spatial Separation in Unstructured Environments

Xinyi Wang1, Jiwen Xu1, Chuanxiang Gao1, Yizhou Chen1, Jihan Zhang1,
Chenggang Wang2, Yulong Ding3, Ben M. Chen1
The work was supported in part by the Research Grants Council of Hong Kong SAR (14206821 and 14217922), in part by the Hong Kong Centre for Logistics Robotics, and in part by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 62303354 and 62303316).1Xinyi Wang, Jiwen Xu, Yizhou Chen, Chuanxiang Gao, Jihan Zhang, and Ben M. Chen are with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China (e-mail: [email protected];[email protected]; [email protected]; [email protected]; [email protected]; [email protected]);2Chenggang Wang is with the Department of Automation, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai Jiao Tong University, Shanghai 200240, P. R. China. ([email protected]).3Yulong Ding is with the Department of Control Science and Engineering, Tongji University, Shanghai, China, and with the Frontiers Science Center for Intelligent Autonomous Systems, Ministry of Education, China; (e-mail: [email protected]);
Abstract

Multi-robot systems have increasingly become instrumental in tackling coverage problems. However, the challenge of optimizing task efficiency without compromising task success persists, particularly in expansive, unstructured scenarios with dense obstacles. This paper presents an innovative, decentralized Voronoi-based coverage control approach that reactively navigates these complexities while guaranteeing safety. The approach leverages the active sensing capabilities of multi-robot systems to supplement GIS (Geographic Information System) data, offering a more comprehensive and real-time understanding of environments such as post-disaster sites. Based on point cloud data, which is inherently non-convex and unstructured, the method efficiently generates collision-free Voronoi regions using only local sensing information through spatial decomposition and spherical mirroring techniques. A deadlock-aware guided map, integrated with a gradient-optimized, centroidal Voronoi-based coverage control policy, is then constructed to improve efficiency by avoiding exhaustive searches and local sensing pitfalls. The effectiveness of our algorithm has been validated through extensive numerical simulations in high-fidelity environments, demonstrating significant improvements in task success rate, coverage ratio, and task execution time compared with other methods.

I Introduction

Multi-robot systems are extensively employed in coverage tasks across diverse applications, including gas leak detection, pollution source identification, and search and rescue operations [1, 2]. These systems can efficiently locate Targets of Interest (ToI) in unknown and unstructured environments through interactive data collection. Building on their capabilities, integration with Geographic Information Systems (GIS) offers new opportunities for improving efficiency and accuracy of such operations [3].

Figure 1: GIS provides maps as prior information for multi-robot systems in post-disaster search and rescue.

While GIS offers extensive pre-collected data, multi-robot information sharing, and advanced analysis features like heat maps as shown in Fig. 1, its limitations are evident in uncertain situations such as post-disaster rescue efforts. In these contexts, the active sensing capabilities of multi-robot systems can serve as real-time supplements to GIS, providing a more comprehensive and up-to-date understanding of the environment.

Despite the unprecedented advancements in this field, the challenge of searching for ToI in unknown environments persists, particularly in balancing efficiency with successful task execution. Some search-based approaches resort to exhaustive exploration of the surrounding environment to identify feasible paths [4]. However, such approaches compromise efficiency and necessitate complete environmental traversal. This inefficiency has critical implications for timely hazard avoidance or life-saving actions, especially given the battery limitations in large-scale environments.

On the other hand, some reactive approaches compute search policies that focus on the immediate rewards associated with ToI, thereby enhancing efficiency within the local sensing range. These methods, rigorously examined in the context of Centroidal Voronoi Tessellation (CVT) for multi-robot coverage problems [2, 5, 6, 7], offer numerous advantages, including asynchronous operation, distributed execution, adaptability, and verifiable asymptotic correctness [8]. However, they are susceptible to local minima due to short-sighted decisions, which can lead to failure to complete the task. Furthermore, the majority of existing methods are confined to two-dimensional, well-structured environments [9]. They fall short in accounting for unstructured settings, including raw sensor data, where robots are often trapped in the presence of non-convex obstacles when attempting to expedite their greedy coverage policy [10, 11, 12].

In this paper, we introduce a novel decentralized Voronoi-based active coverage policy for unknown and unstructured environments with guaranteed safety. This research is driven by scenarios where robots aim to balance coverage efficiency and success rate while adhering to safety constraints. We aim to develop a cooperative control policy that enables individual robots to make intelligent decisions while enhancing overall team efficiency. First, we utilize realistic point cloud data as input, characterized by its highly non-convex and unstructured nature. By integrating spatial decomposition and spherical mirroring techniques, collision-free Voronoi regions are efficiently generated using only local sensing and position information from other robots. Second, explored information is employed to construct a deadlock-aware guided map that refines subsequent processes for optimal decision-making. This map is integrated with a centroidal Voronoi-based coverage control policy to improve efficiency by avoiding exhaustive searches and local sensing pitfalls. Third, the effectiveness of our proposed method has been extensively validated through numerical simulations in high-fidelity large-scale environments, including cluttered, thin-structured, and narrow settings. Compared with other Voronoi-based methods, our approach significantly improves the coverage success ratio and reduces the coverage time.

To the best of our knowledge, we are the first to propose a Voronoi-based coverage algorithm that mitigates the issue of local minima trapping in general unstructured obstacle environments supported by convergence guarantees.

II Related Work

This work integrates contributions related to its functionality, building upon collision avoidance methods and our previous research on coverage control [12].

A crucial consideration in collision avoidance is the formulation of obstacles derived from point cloud data. Most existing methods rely on maintaining a resource-intensive cost map, such as the Euclidean Distance Transform (EDT) map, to calculate the minimum distance to obstacles for optimization [13, 14, 15], which may suffer from time-consuming computations and redundant processes [16]. Alternatively, some strategies depend on down-sampling and spatial partitioning using convex clusters [17]. Unfortunately, such techniques may sacrifice free space or lead to unsafe areas [18]. Instead, our approach directly generates safe Voronoi cells on a voxel-based environment representation, inspired by the concept presented in [19, 20]. The advantages are threefold: 1) it operates exclusively on point cloud data, eliminating the need for a post-processed map and preserving the fidelity of raw sensor measurements; 2) it does not assume convexity or structure of the environment, making it particularly suitable for real-world sensors influenced by noise and adaptable to a range of scenarios; 3) the strategy requires only the sensing of other robots' positions, which simplifies communication requirements and bolsters system robustness.

Another challenge in active coverage control design arises from the complexities of raw sensor measurements and unstructured environments, which often cause robots to become trapped in local minima. This issue is commonly encountered in coverage methods that rely on CVT, a form of non-convex optimization [10, 11, 12]. While techniques such as random restarts [21] and iterative refinement [22] address this issue, it has not been fully solved. In this work, we introduce a deadlock-aware guided map that incorporates a move-to-centroid policy and dynamically updates feasible directions as new obstacles are encountered. The gradient of this map effectively guides the robot away from obstacles and pitfalls while simultaneously directing it toward the target, thereby preventing entrapment in local minima.

III Preliminaries

Consider a team of $n$ robots localized at $\mathbf{P}(t)=[\mathbf{p}_{1}(t),\dots,\mathbf{p}_{n}(t)]$ at time $t$ in a workspace $\mathcal{W}\subset\mathbb{R}^{3}$. The dynamics of each robot can be modeled as a single integrator, described by $\dot{\mathbf{p}}_{i}=\mathbf{u}_{i}$, and its geometry can be modeled as a ball with center position $\mathbf{p}_{i}=(p_{i,x},p_{i,y},p_{i,z})$ and radius $r_{i}$, where $i\in 1,\dots,n$. The environment is represented by a grid map with discretized points $\mathbf{p}\in\mathcal{W}$. It can be uniformly partitioned into non-overlapping regions corresponding to each robot using Voronoi partitioning. These partitions are generated based on $\mathbf{P}$ and can be regarded as the intersection of a set of maximum-margin separating hyperplanes [23]. For each robot $i$, the Voronoi cell is defined as follows:

$$\mathcal{V}_{i}=\Big\{\mathbf{p}\in\mathcal{W}\;\Big|\;\|\mathbf{p}-\mathbf{p}_{i}^{*}\|\leq\|\mathbf{p}-\mathbf{p}_{j}^{*}\|,\ (\mathbf{p}_{i}^{*},\mathbf{p}_{j}^{*})=\underset{\mathbf{p}_{i}\in\mathcal{R}_{i},\,\mathbf{p}_{j}\in\mathcal{R}_{j}}{\arg\min}\ d(\mathbf{p}_{i},\mathbf{p}_{j})\Big\}, \quad (1)$$

where $\|\cdot\|$ denotes the $l_{2}$-norm. It can be observed that the discrete points within the Voronoi cell $\mathcal{V}_{i}$ are closer to its generator $\mathbf{p}_{i}$ than to any other robot. Therefore, it is reasonable to assume that each robot $i$ is tasked with its corresponding region $\mathcal{V}_{i}$ using a limited-range sensor.
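As a concrete illustration, the grid-based assignment of Eq. (1) can be sketched as follows. This is a minimal sketch assuming point generators (each robot reduced to its position $\mathbf{p}_{i}$) rather than the set-based generators $\mathcal{R}_{i}$; the function name and array layout are illustrative.

```python
import numpy as np

def voronoi_partition(grid_points, robot_positions):
    """Assign each discretized workspace point to its nearest robot (Eq. (1),
    simplified to point generators).

    grid_points:     (M, 3) array of workspace grid points.
    robot_positions: (n, 3) array of robot positions P.
    Returns a length-M array of robot indices, i.e. the cell label of each point.
    """
    # Pairwise l2 distances between grid points and generators, shape (M, n)
    d = np.linalg.norm(grid_points[:, None, :] - robot_positions[None, :, :], axis=-1)
    # Each point belongs to the cell of its closest generator
    return np.argmin(d, axis=1)
```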

III-A Sensor and Obstacle Representation

Let $\mathcal{O}=\{\mathbf{q}_{o}\in\mathcal{W}\,|\,\mathbf{q}_{o}=(q_{o,x},q_{o,y},q_{o,z}),\ 1\leq o\leq m\}$ be the set of obstacles, where $m$ is the total number of point cloud points. The free configuration space is defined as $\mathcal{W}_{\mathrm{free}}=\mathcal{W}\backslash\mathcal{O}$. Suppose the robot's perception is solely sensor-based, with the equipped sensor having a fixed sensing range denoted as $R_{\mathrm{sensor}}\in\mathbb{R}_{>0}$. This implies that the robot's knowledge about the location of environmental obstacles is restricted to the structure it perceives within a nearby area around its current position. Let the robot be surrounded by $m_{r}$ obstacle points inside the sensor range $R_{\mathrm{sensor}}$. The observable obstacle environment, represented by a series of discrete points and denoted as the set $\mathcal{Q}=\{\mathbf{q}_{o}\in\mathcal{W}\mid\|\mathbf{q}_{o}-\mathbf{p}_{i}\|\leq R_{\mathrm{sensor}},\ 1\leq o\leq m_{r}\}$, constitutes a spherical region centered at the robot's position $\mathbf{p}_{i}$. Any region outside the robot's sensory footprint is assumed to be obstacle-free until updated when the robot perceives it.
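The observed set $\mathcal{Q}$ can be extracted from a raw point cloud as sketched below, assuming the cloud is given as an $(m,3)$ NumPy array; the helper name is illustrative.

```python
import numpy as np

def observed_obstacles(p_i, point_cloud, R_sensor):
    """Set Q: obstacle points inside the spherical sensor footprint of robot i."""
    dist = np.linalg.norm(point_cloud - p_i, axis=1)
    return point_cloud[dist <= R_sensor]
```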

III-B Voronoi-based Coverage Control

To evaluate the coverage of the Targets of Interest (ToI) by a multi-robot system, two key factors must be considered: the value of each sensed point and the sensing capability, which is influenced by the distance between the robot and the point on the grid to be sensed. The value of these points is indicative of the importance of the information to be covered. To quantify this, we introduce a density function $\phi(\cdot):\mathcal{W}\rightarrow\mathbb{R}^{+}$ based on a Gaussian mixture model for reference coverage information. As the distance between the robot and the point to be sensed increases, the sensing capability generally diminishes. To optimize the deployment locations of the multi-robot system, we utilize a cost function based on $\phi(\mathbf{p})$ [8, 11, 24]:

$$\mathcal{H}(\mathbf{P})=\sum_{i=1}^{n}\mathcal{H}_{i}(\mathbf{P})=\sum_{i=1}^{n}\int_{\mathcal{V}_{i}}\|\mathbf{p}-\mathbf{p}_{i}\|^{2}\phi(\mathbf{p})\,d\mathbf{p}. \quad (2)$$

The derivative of $\mathcal{H}$ indicates that moving to the centroids of the robots' corresponding Voronoi cells reduces the coverage cost. Hence, a gradient descent control policy $\mathbf{u}_{i}$ is designed to steer the robots to a CVT:

$$\mathbf{u}_{i}=-u_{\max}\,\frac{\mathbf{p}_{i}-\mathbf{C}_{\mathcal{V}_{i}}}{\|\mathbf{p}_{i}-\mathbf{C}_{\mathcal{V}_{i}}\|}, \quad (3)$$

where $\mathbf{C}_{\mathcal{V}_{i}}=\underset{\mathbf{p}_{i}}{\arg\min}\ \mathcal{H}_{i}(\mathbf{P})$.
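The move-to-centroid policy of Eq. (3) can be sketched as follows. This minimal sketch assumes $\mathbf{C}_{\mathcal{V}_{i}}$ is evaluated as the $\phi$-weighted centroid of the cell's grid points (the discrete minimizer of $\mathcal{H}_{i}$ in Eq. (2)) and reuses the convergence threshold from Section V; names and the maximum-speed constant are illustrative.

```python
import numpy as np

U_MAX = 2.5  # maximum speed [m/s], matching the simulation setting

def move_to_centroid(p_i, cell_points, phi_values):
    """Control input of Eq. (3): steer robot i toward the phi-weighted centroid
    of its Voronoi cell.

    p_i:         (3,) robot position.
    cell_points: (M, 3) grid points belonging to V_i.
    phi_values:  (M,) density phi evaluated at those points.
    """
    mass = phi_values.sum()
    if mass <= 0.0:              # degenerate cell with no information mass: stay put
        return np.zeros(3), p_i
    centroid = (phi_values[:, None] * cell_points).sum(axis=0) / mass
    err = p_i - centroid
    dist = np.linalg.norm(err)
    if dist < 1e-2:              # convergence threshold ||u|| < 0.01 m/s
        return np.zeros(3), centroid
    return -U_MAX * err / dist, centroid
```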

IV Sensor-based Coverage Control with Deadlocks and Collision Avoidance

Figure 2: Multi-robot coverage mission in unstructured environments with three sparsely distributed information sources.

This section introduces a novel reactive coverage method that enables a multi-robot team to cover the ToI using only its current position and the structure perceived within its local sensor footprint. A method overview is depicted in Fig. 2.

IV-A Spatial Decomposition

To separate the movement space of the multi-robot team into disjoint safe domains and avoid executing duplicated tasks, a safe convex region needs to be constructed for each robot at discrete time $k\Delta t$, where $\Delta t$ is the time interval. At each time step, the Voronoi cell of each robot is determined only by neighboring robots and obstacles, and can thus be formed as the intersection of half-spaces separating robot $i$ from obstacles or other robots, with linear separators $\mathbf{a}_{ij}$, $b_{ij}$ and $\mathbf{a}_{io}$, $b_{io}$, where $i,j\in 1,\dots,n$ and $o\in 1,\dots,m$.

IV-A1 Robot-robot Collision Avoidance Hyperplane

To achieve a spatially load-balanced deployment that provides locally optimal sensor coverage of a target, it is essential to define a separation strategy ensuring that each voxel in the deployment region is closer to the position of the robot that serves as the generator of its Voronoi cell than to any other robot in the environment. This separation condition is fundamental to the definition of the Voronoi cell, which partitions the environment into non-overlapping regions based on each robot's location. We can calculate $\mathbf{a}_{ij}$ and $b_{ij}$ as the perpendicular bisector of the positions $\mathbf{p}_{i},\mathbf{p}_{j}$ of any two robots, defined as follows:

$$\mathbf{a}_{ij}=\mathbf{p}_{ij}=\mathbf{p}_{i}-\mathbf{p}_{j},\quad b_{ij}=\mathbf{p}_{ij}^{\mathrm{T}}\,\frac{\mathbf{p}_{i}+\mathbf{p}_{j}}{2}. \quad (4)$$

IV-A2 Robot-obstacle Collision Avoidance Hyperplane

The robot operates within a realistic world containing unstructured obstacles represented by point cloud data. In traditional methods, these raw data are processed with computationally demanding techniques such as surface reconstruction and the maintenance of additional cost maps. To develop a more efficient and lightweight algorithm, we devise a method that directly extracts collision avoidance hyperplanes from the point clouds within the visible range $R_{\mathrm{sensor}}$ of the robot's sensor. Consequently, our approach substantially reduces the computational and storage resources required for clustering obstacles.

Figure 3: Illustration of spatial decomposition and safe region construction in environments densely filled with obstacles represented by point clouds. The original squares within the sensor range $R_{\mathrm{sensor}}$ are projected outside the circle with a one-to-one mapping. The robot is positioned at an obstacle-free location $\mathbf{p}_{i}$. By finding the convex hull of the mirrored points $\mathbf{q}^{\prime}_{1},\dots,\mathbf{q}^{\prime}_{5}$ (the yellow squares), we can identify the squares closest to the robot, i.e., $\mathbf{q}_{1},\dots,\mathbf{q}_{5}$ (the green squares). Subsequently, a safe convex region $\bar{\mathcal{V}}_{i}$, colored in green, can be obtained by the separating hyperplane theorem.

Assume robot $i$ is initially collision-free, $\mathbf{p}_{i}\in\mathcal{W}_{\mathrm{free}}$, and surrounded by $m_{r}$ obstacle points inside the sensor range $R_{\mathrm{sensor}}$. We use the spherical mirroring operation introduced in [19] to map each $\mathbf{q}_{o}\in\mathcal{Q}$ to $\mathbf{q}_{o}^{\prime}$ with respect to $R_{\mathrm{sensor}}$:

$$\mathbf{q}_{o}^{\prime}=F(\mathbf{q}_{o})=\mathbf{q}_{o}-\mathbf{p}_{i}+2\,\big(R_{\mathrm{sensor}}-\|\mathbf{q}_{o}-\mathbf{p}_{i}\|\big)\,\frac{\mathbf{q}_{o}-\mathbf{p}_{i}}{\|\mathbf{q}_{o}-\mathbf{p}_{i}\|}, \quad (5)$$

where $\mathbf{q}_{o}^{\prime}$ is the mirrored point, $\mathbf{q}_{o}$ is the original point, and $F(\mathbf{q}_{o})$ is the spherical mapping function. As illustrated in Fig. 3 (a), the purpose of spherical mirroring is to reflect any point $\mathbf{q}_{o}$ located within the sphere along its corresponding ray. This mirroring operation effectively relocates obstacle points from the interior of the sphere to its exterior, creating a mirrored representation. Using the QuickHull technique [25], we can efficiently determine the transformed points required to construct a convex hull, represented by a vertex matrix $\Omega_{c}=[\mathbf{q}_{1}^{\prime},\dots,\mathbf{q}_{m_{c}}^{\prime}]\in\mathbb{R}^{3\times m_{c}}$, from the mirrored points.

Moreover, due to the monotonically decreasing nature of $\|F(\cdot)\|$, points closer to the robot are mapped farther away. Since no mirrored point lies outside the convex hull $\Omega_{c}$, the region bounded by the corresponding original points $\mathbf{q}_{1},\dots,\mathbf{q}_{m_{c}}$ is obstacle-free. With the described spherical mirroring operation, extracting the points needed to generate robot-obstacle collision avoidance hyperplanes becomes straightforward even for an unordered point cloud.
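A minimal sketch of this mirroring-plus-QuickHull step, using scipy.spatial.ConvexHull as the QuickHull implementation [25]; the helper name and array layout are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull  # QuickHull-based convex hull

def closest_obstacle_points(p_i, Q, R_sensor):
    """Sphere flipping (Eq. (5)) followed by a convex hull of the mirrored
    points; the hull vertices correspond to the obstacle points closest to
    the robot, the only ones needed for the separating hyperplanes.

    p_i:      (3,) robot position (assumed collision-free).
    Q:        (m_r, 3) observed obstacle points within R_sensor.
    R_sensor: scalar sensing radius.
    Returns the (m_c, 3) subset of Q selected by the hull.
    """
    rel = Q - p_i                                           # robot-centered coordinates
    dist = np.linalg.norm(rel, axis=1)
    mirrored = rel + 2.0 * (R_sensor - dist)[:, None] * rel / dist[:, None]
    hull = ConvexHull(mirrored)                             # QuickHull on the mirrored cloud
    return Q[hull.vertices]                                 # original points q_1 ... q_mc
```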

IV-B Safe Region Construction

Based on the above, $\mathbf{a}_{io}$ can be quickly calculated by solving the following low-dimensional quadratic programming (QP) problem:

$$\begin{aligned} \min_{\mathbf{a}_{io}}\quad & \mathbf{a}_{io}^{\mathrm{T}}\mathbf{a}_{io} \\ \text{s.t.}\quad & (\mathbf{q}_{o}-\mathbf{p}_{i})^{\mathrm{T}}\mathbf{a}_{io}\geq 1,\quad\forall o\in 1,\dots,m_{c}. \end{aligned} \quad (6)$$

We then shift the hyperplane so that it is tight against the obstacle points; thus, $b_{io}=\min\,\mathbf{a}_{io}^{\mathrm{T}}\mathbf{q}_{o}$. Solving a QP problem requires $O(N^{3})$ time, where $N$ denotes the number of decision variables [26]. In our method, these decision variables are linearly correlated with the number of obstacle points $m_{c}$. Consequently, the total computational complexity for each robot, influenced by the local density of obstacles, is $O(m_{c}^{3})$. Leveraging advanced optimization utilities, such as CVXOPT, we can solve the problem within tens of milliseconds.
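A minimal sketch of the QP in Eq. (6) and the subsequent shift of $b_{io}$ is given below; it uses cvxpy as a convenient modeling front-end (the paper's implementation relies on CVXOPT), and the helper name is illustrative.

```python
import numpy as np
import cvxpy as cp

def robot_obstacle_hyperplane(p_i, hull_points):
    """Max-margin separator of Eq. (6): min ||a||^2 s.t. (q_o - p_i)^T a >= 1
    for all selected obstacle points q_o, followed by b_io = min_o a^T q_o so
    that the plane is tight against the nearest obstacle point.
    """
    a = cp.Variable(3)
    constraints = [(hull_points - p_i) @ a >= 1.0]
    cp.Problem(cp.Minimize(cp.sum_squares(a)), constraints).solve()
    a_io = a.value
    b_io = float(np.min(hull_points @ a_io))
    return a_io, b_io
```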

Moreover, we take into account the geometric dimensions of the robots by employing a buffered term [27]. We introduce safety buffer variables $\beta_{ij}=r_{i}\|\mathbf{a}_{ij}\|$ and $\beta_{io}=r_{i}\|\mathbf{a}_{io}\|$ to keep the body of each robot within its corresponding buffered Voronoi cell (BVC). It should be noted that these buffer variables can be further generalized to accommodate robots of varying dimensions along different axes. Then, Eq. (1) is extended to the following definition.

Definition 1

For a team of $n$ robots localized at $\mathbf{P}$ in an obstacle-free workspace $\mathcal{W}_{\mathrm{free}}$, the buffered Voronoi cell of each robot $i$ is:

$$\begin{aligned} \bar{\mathcal{V}}_{i}=\{\mathbf{p}\in\mathcal{W}_{\mathrm{free}}\;|\;& \mathbf{a}_{ij}^{\mathrm{T}}\mathbf{p}\leq b_{ij}-\beta_{ij},\ \forall j\neq i,\ i,j\in 1,\dots,n, \\ & \mathbf{a}_{io}^{\mathrm{T}}\mathbf{p}\leq b_{io}-\beta_{io},\ \forall o\in 1,\dots,m_{c}\}. \end{aligned} \quad (7)$$
Lemma 1 (Properties of BVC)

If $\|\mathbf{p}_{i}(t)-\mathbf{p}_{j}(t)\|\geq r_{i}+r_{j},\ i\neq j$, and $\|\mathbf{p}_{i}(t)-\mathbf{q}_{o}\|\geq r_{i}$, $\forall i,j\in 1,\dots,n,\ o\in 1,\dots,m_{c}$ at time $t$, we have: 1) $\bar{\mathcal{V}}_{i}\neq\emptyset$; 2) $\bar{\mathcal{V}}_{i}\subset\mathcal{V}_{i}$; 3) $\|\bar{\mathbf{p}}_{i}-\bar{\mathbf{p}}_{j}\|\geq r_{i}+r_{j},\ \forall\bar{\mathbf{p}}_{i}\in\bar{\mathcal{V}}_{i},\ \bar{\mathbf{p}}_{j}\in\bar{\mathcal{V}}_{j},\ \forall i\neq j$; 4) $\bar{\mathcal{V}}_{i}\cap\bar{\mathcal{V}}_{j}=\emptyset$; 5) $\|\bar{\mathbf{p}}_{i}-\mathbf{q}_{o}\|\geq r_{i},\ \forall\bar{\mathbf{p}}_{i}\in\bar{\mathcal{V}}_{i},\ \mathbf{q}_{o}\in\Omega$; 6) $\mathbf{q}_{o}\notin\bar{\mathcal{V}}_{i}$.

Proof: According to Section II-A in [26], 1)-4) have been proved. 5) According to Eq. (7) and Eq. (6), $\mathbf{a}_{io}^{\mathrm{T}}\bar{\mathbf{p}}_{i}\leq b_{io}-\beta_{io}$ and $\mathbf{a}_{oi}^{\mathrm{T}}\mathbf{q}_{o}\leq b_{oi}$. Since $\mathbf{a}_{io}=-\mathbf{a}_{oi}$ and $b_{io}=-b_{oi}$, adding them yields $\mathbf{a}_{io}^{\mathrm{T}}(\bar{\mathbf{p}}_{i}-\mathbf{q}_{o})\leq-\beta_{io}$. Because $\beta_{io}=r_{i}\|\mathbf{a}_{io}\|\in\mathbb{R}^{+}$, we have $\|\mathbf{a}_{io}^{\mathrm{T}}(\bar{\mathbf{p}}_{i}-\mathbf{q}_{o})\|\geq r_{i}\|\mathbf{a}_{io}\|$. Since $\|\mathbf{a}_{io}\|\|\bar{\mathbf{p}}_{i}-\mathbf{q}_{o}\|\geq\|\mathbf{a}_{io}^{\mathrm{T}}(\bar{\mathbf{p}}_{i}-\mathbf{q}_{o})\|$ by the Cauchy-Schwarz inequality, it follows that $\|\bar{\mathbf{p}}_{i}-\mathbf{q}_{o}\|\geq r_{i}$. 6) If $\mathbf{q}_{o}\in\bar{\mathcal{V}}_{i}$, then Eq. (7) must be satisfied, i.e., $\mathbf{a}_{io}^{\mathrm{T}}\mathbf{q}_{o}\leq b_{io}-\beta_{io}$. Rewriting this, we have $\mathbf{a}_{io}^{\mathrm{T}}\mathbf{q}_{o}+r_{i}\|\mathbf{a}_{io}\|\leq b_{io}$. According to 5), $\mathbf{a}_{oi}^{\mathrm{T}}\mathbf{q}_{o}\leq b_{oi}$, i.e., $\mathbf{a}_{io}^{\mathrm{T}}\mathbf{q}_{o}\geq b_{io}$, which contradicts the definition of $\bar{\mathcal{V}}_{i}$.

$\Box$

By combining the separating hyperplane theorem and the sphere-flipping transformation, our method can efficiently extract a collision-free region using only raw measurement points, as shown in Fig. 3 (b). Consequently, each robot is capable of independently calculating its operational domain $\mathcal{V}_{i}$ based on the relative positions of its peers in a decentralized fashion at each time step.
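A minimal sketch assembling the buffered Voronoi cell of Definition 1 from the hyperplanes above is given below. The obstacle planes $(\mathbf{a}_{io},b_{io})$ are assumed to come from the QP of Eq. (6); each robot-robot bisector is oriented here so that robot $i$'s own position satisfies the inequality, matching the containment intended by Definition 1 (a sign-convention assumption), and all names are illustrative.

```python
import numpy as np

def build_bvc(p_i, r_i, neighbor_positions, obstacle_planes):
    """Assemble the buffered Voronoi cell of Definition 1 as half-spaces
    a^T p <= b - beta.

    neighbor_positions: iterable of (3,) positions of the other robots.
    obstacle_planes:    list of (a_io, b_io) pairs from Eq. (6).
    """
    halfspaces = []
    for p_j in neighbor_positions:
        a = p_i - p_j                      # bisector normal, Eq. (4)
        b = a @ (p_i + p_j) / 2.0
        if a @ p_i > b:                    # orient so robot i's side satisfies a^T p <= b
            a, b = -a, -b
        halfspaces.append((a, b - r_i * np.linalg.norm(a)))   # beta_ij = r_i ||a_ij||
    for a, b in obstacle_planes:
        halfspaces.append((a, b - r_i * np.linalg.norm(a)))   # beta_io = r_i ||a_io||
    return halfspaces

def in_bvc(p, halfspaces):
    """Membership test used to restrict the coverage grid points to the BVC."""
    return all(a @ p <= b for a, b in halfspaces)
```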

Theorem 1

If a team of robots is initialized at a collision-free configuration, i.e., $\|\mathbf{p}_{i}(0)-\mathbf{p}_{j}(0)\|\geq r_{i}+r_{j},\ \forall i\neq j$, and $\|\mathbf{p}_{i}(0)-\mathbf{q}_{o}\|\geq r_{i}$, then by using Eq. (3), for all $t>0$ we have $\|\mathbf{p}_{i}(t)-\mathbf{p}_{j}(t)\|\geq r_{i}+r_{j},\ \forall i\neq j$, and $\|\mathbf{p}_{i}(t)-\mathbf{q}_{o}\|\geq r_{i}$.

Proof: Given that the robots are initially collision-free, according to Lemma 1, a safe $\bar{\mathcal{V}}_{i}$ can be calculated from Eq. (7), taking into account the obstacles and the other robots. Since the policy in Eq. (3) ensures that each robot's position $\mathbf{p}_{i}$ is updated inside its corresponding Voronoi cell, i.e., $\mathbf{C}_{\bar{\mathcal{V}}_{i}}\in\bar{\mathcal{V}}_{i}$, it can be shown by mathematical induction that the robots' movements remain confined to a sequence of safe convex regions for all future time [26].

$\Box$

Therefore, by constructing these safe convex regions, the multi-robot system strictly prevents duplicated task execution and collisions.

IV-C Deadlock-aware Guided Map

Despite its efficiency in convex environments, the control policy in Eq. (3) may cause the robot to get stuck in many realistic scenarios. These challenges arise primarily from the complex, non-convex arrangements of obstacles. To address this, we introduce a guided map constructed in real time and dynamically adjusted in response to environmental changes, thereby enabling the robot to avoid pitfalls and enhancing both coverage efficiency and the system's adaptability.

In the proposed method, we consider the density map $\phi$ in Eq. (2), which characterizes the distribution of information. Each point on the grid map is assigned a corresponding value. As the robot moves, it employs local sensing to identify a point $\mathbf{p}_{g}$ within its sensor range $R_{\mathrm{sensor}}$ that represents a local maximum of the information distribution, which then becomes the robot's navigation goal. To facilitate reaching this local objective, we define the navigation function (NF) $\mathcal{M}_{\text{go}}:\mathcal{W}_{\mathrm{free}}\rightarrow\mathbb{R}^{+}$ as a continuous function that approximates the minimum length of a collision-free path from any point $\mathbf{q}\in\mathcal{W}_{\mathrm{free}}$ to $\mathbf{p}_{g}$, i.e., a cost-to-goal function.

Lemma 2 (Properties of NF [28])

1) $\mathcal{M}_{\text{go}}(\mathbf{p}_{g})=0$, and $\mathcal{M}_{\text{go}}(\mathbf{p})=\infty$ iff no point in $\mathcal{W}$ is reachable from $\mathbf{p}$, e.g., an obstacle position $\mathbf{q}_{o},\ o\in 1,\dots,m_{c}$; 2) For every reachable position $\mathbf{p}(t)$ at time $t$, executing an action yields a subsequent position $\mathbf{p}(t+\Delta t)$ that satisfies $\mathcal{M}_{\text{go}}(\mathbf{p}(t+\Delta t))<\mathcal{M}_{\text{go}}(\mathbf{p}(t))$.

From Lemma 2, it can be observed that the NF exhibits no local minima other than at the goal, ensuring a cycle-free execution of actions that inevitably leads to the goal position. According to [28], the NF determined by Dijkstra's algorithm working backward from the goal is optimal. Here, to balance efficiency and optimality, we utilize the A* algorithm to compute cost-to-go values for each grid cell on a voxel-based map representation. Consequently, based on $\mathcal{M}_{\text{go}}$, the map $\phi$ in the definition of $\mathcal{H}(\mathbf{P})$ in Eq. (2) is modified as follows:

$$\phi=e^{-\gamma\mathcal{M}_{\text{go}}(\mathbf{p})},\quad\forall\mathbf{p}\in\mathcal{W}_{\mathrm{free}}, \quad (8)$$

where the gain $\gamma\geq 0$ serves as a design parameter that influences the magnitude of the value. By integrating centroidal Voronoi tessellation with $\phi$ and computing $\mathbf{C}_{\bar{\mathcal{V}}_{i}}$, the regions are shifted to align with higher values in the density map, thereby providing feasible heading directions for coverage. The negative gradient of $\mathcal{M}_{\text{go}}$ always effectively shortens the shortest path to the destination.
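A minimal sketch of the deadlock-aware density of Eq. (8) on a voxel grid is shown below. It uses a Dijkstra expansion backward from the local goal in place of the A* variant adopted in the paper, with 6-connected neighbors; names, the occupancy-grid layout, and the default resolution are assumptions.

```python
import heapq
import numpy as np

def cost_to_goal(occupancy, goal, resolution=0.25):
    """Cost-to-go M_go over a voxel grid, expanded backward from the local goal
    p_g with Dijkstra's algorithm.  `occupancy` is a boolean array that is True
    at obstacle voxels; occupied/unreachable voxels keep an infinite cost."""
    M = np.full(occupancy.shape, np.inf)
    M[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        cost, idx = heapq.heappop(heap)
        if cost > M[idx]:
            continue
        for axis in range(3):                    # 6-connected neighborhood
            for step in (-1, 1):
                nxt = list(idx)
                nxt[axis] += step
                nxt = tuple(nxt)
                if not all(0 <= nxt[d] < occupancy.shape[d] for d in range(3)):
                    continue
                if occupancy[nxt]:
                    continue
                new_cost = cost + resolution
                if new_cost < M[nxt]:
                    M[nxt] = new_cost
                    heapq.heappush(heap, (new_cost, nxt))
    return M

def guided_density(M_go, gamma=1.0):
    """Deadlock-aware density of Eq. (8): phi = exp(-gamma * M_go).
    Infinite cost (obstacles, unreachable cells) maps to zero density."""
    return np.exp(-gamma * M_go)
```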

Lemma 3 (Convergence [10])

Assuming the workspace $\mathcal{W}$ is a finite set, by adopting the controller in Eq. (3), each sensor decreases its configuration cost and asymptotically converges to a static location.

This approach synergizes local sensing with global information distribution, resulting in more optimal coverage outcomes. Finally, a sequence of coverage waypoints over the time interval, reflecting the optimized deployment location of each robot, can be obtained.

V Simulations

This section analyzes the applicability of the proposed method across diverse and unstructured settings, including real forest and indoor office datasets, and evaluates performance in terms of computational time and collision distance. Besides, benchmarking our method against established coverage control protocols helps quantify the improvements in task efficiency. The accompanying video provides a clear visual aid for understanding our method's mechanics and advantages (https://youtu.be/wOunvjGHhBQ). The simulations are conducted on a laptop equipped with an Intel i7-9700 CPU and 16GB of RAM. A team of robots is simulated through multiprocessing. For real-world scenario validation, we utilize the Robot Operating System (ROS) with C++ and Python. MATLAB is employed for comparative analysis. Velocity constraints are established with a maximum limit of $u_{\max}=2.5$ m/s. The time interval for each replanning step is $\Delta t=0.1$ s. The obstacles are inflated by the robot radius, $r_{i}=0.25$ m. The grid map resolution is also set to 0.25 m for all axes. The sensor's radius is restricted to a range of 15 m. If $\|\mathbf{u}\|<0.01$ m/s, the robot is considered to have converged.
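For reference, the stated simulation parameters can be collected in a small configuration object; this is purely illustrative and the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CoverageConfig:
    """Simulation parameters as stated above (field names are illustrative)."""
    u_max: float = 2.5               # maximum speed [m/s]
    dt: float = 0.1                  # replanning interval [s]
    robot_radius: float = 0.25       # robot radius / obstacle inflation [m]
    grid_resolution: float = 0.25    # voxel size on all axes [m]
    sensor_range: float = 15.0       # R_sensor [m]
    convergence_speed: float = 0.01  # ||u|| threshold for convergence [m/s]
```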

V-A High-fidelity Validation

Unlike other coverage methods, such as Obstacle-Aware Voronoi Cells (OAVC) [10, 11], which are restricted to hyperspherical obstacle models, or the method in [12], which presumes obstacles to be convex polytopes, our approach accommodates a broader range of obstacle shapes and obstacle densities.

We execute tests utilizing high-fidelity point cloud data, sourced from forested areas by the University of Hong Kong and facilitated by the MARSIM simulator [29]. This dataset is characterized by extensive cluttered and thin structural elements. We expanded the scope of the forest environment to large-scale dimensions of 140 m x 140 m x 10 m, and the coverage result is depicted in Fig. 4 (a). We also use data from indoor office settings, courtesy of the Technical University of Munich [30], with dimensions of 80 m x 80 m x 5 m. This environment features narrow passageways and multiple layers, and the coverage result is depicted in Fig. 4 (b).

Figure 4: Coverage of ToI in two 3D large-scale scenarios by 16 robots. (a) Cluttered and thin structural forest environment. (b) Narrow indoor office environment.

We tested the two scenarios mentioned above with randomized initialization and ToI. We evaluate the two most time-consuming components, local map and Voronoi cell generation, and consider both neighboring robots and obstacles when evaluating collision avoidance performance. One trial in the forest is shown in Fig. 4 (a) and Fig. 5. The time required for generating a local map depends on the complexity of computing a feasible path. This complexity increases with a high density of obstacles, where conditions such as non-convex obstacles can potentially trap the robots. Despite these challenges, our proposed method still enables robots to create a secure area in real time while ensuring safety margins (1.95 m) that substantially exceed the robot's radius (0.25 m).

We also tested with varying numbers of robots (n = 2, 4, 8, 16), each configuration undergoing 10 trials, as depicted in Fig. 6. Due to the decentralized nature of our algorithm, computational time remains largely invariant as the number of robots scales up. Additionally, in more expansive environments with nearly double the obstacle density, the algorithm still satisfies real-time processing criteria. The incidence of collisions remains zero, even when covering obstacle-dense areas, and there are only negligible fluctuations in the minimum distance as the number of robots increases. This demonstrates both the scalability of the proposed method in larger contexts and its robustness in intricate environments.

Figure 5: One trial performance in terms of computational time and minimum distance during coverage by 16 robots.

Figure 6: The variation of computational time and minimum distance with an increasing number of robots in two scenarios with different obstacle densities.

Figure 7: Coverage in a cluttered environment using the proposed method and the deadlock-aware guided map of the pink robot (t = 12.10 s). The black points represent obstacles, and the colored contour lines represent the ToI with three peaks.

Figure 8: Coverage paths obtained by (a) OAVC [11] and (b) the adaptive method [9] at convergence. (a) The colored contour lines represent the ToI computed by a Gaussian density map. (b) The yellow contour lines represent the density map, combining both attractive (ToI coverage) and repulsive fields.

V-B Comparisons of Coverage Performance

We undertake comparative evaluations against other Voronoi-based coverage methods: 1) the OAVC method in [11], which formulates secure areas by computing tangential boundary lines to circular obstacles and follows the gradient of target information; 2) the adaptive coverage method in [9], which utilizes a repulsive density function around detected obstacles, ensuring that the newly computed centroid is positioned away from the obstacle. For fairness, the obstacles are modeled with the same shape as in [11]. We evaluate these methods over 30 trials, each with randomly initialized starting positions within an 80 m x 80 m x 16 m workspace. The ToI is defined by a Gaussian distribution featuring three peaks. The performance metrics are as follows: 1) Coverage ratio: the ratio of detected peaks to the total number of peaks; 2) Success rate: the fraction of trials in which robots successfully converge to the ToI without experiencing deadlock or collisions within a limited time (30 s) or until all peaks are detected; 3) Average time: task execution time, considering only successful trials. The results are shown in Tab. I.

As shown in Fig. 7, utilizing the proposed method, all robots can maintain safe distances between each other and avoid both local minima and potential collisions, even in highly irregular environments. In contrast, in Fig. 8 (a), the OAVC method directs the robots toward areas of high density without accounting for the obstacle effect, thereby causing them to get stuck in corners. In Fig. 8 (b), when multiple repulsive fields and attractive fields (target coverage) interact, the adaptive method in [9] predisposes robots to entrapment. Its constrained avoidance space further exacerbates this issue, as it reduces the navigable space and can lead a group of robots to become trapped when moving collectively, thereby increasing the risk of failure.

TABLE I: Comparisons of coverage performance
Alg.         | Sensor | Cover. Ratio | Succ. Rate | Aver. Time
Ours         | Local  | 27/30        | 96.67%     | 15.48 s
OAVC [11]    | Global | 22/30        | 46.67%     | 17.14 s
Adaptive [9] | Local  | 13/30        | 10.00%     | 13.30 s

VI Conclusions

In this work, we propose a method that incorporates collision avoidance without entrapment and is effectively applicable across a range of unstructured settings. The approach leverages spatial decomposition and the sphere-flipping transformation to construct safe Voronoi regions in complex environments with considerable point cloud data. The NF is integrated to steer the robot away from pitfalls while simultaneously directing it toward the ToI. Comparisons with other methods highlight that our approach performs extremely well, with high success ratios and guaranteed safety. Our method also extends the traditional coverage control method, the move-to-centroid policy, to more realistic scenarios. Future work includes testing our algorithm on physical robot swarms to address hardware-specific challenges and further validate its real-world applicability.

References

  • [1] S. Huang, R. S. H. Teo, W. W. L. Leong, N. Martinel, G. L. Forest, and C. Micheloni, “Coverage control of multiple unmanned aerial vehicles: A short review,” Unmanned Systems, vol. 6, no. 02, pp. 131–144, 2018.
  • [2] C. Wang, S. Zhu, W. Yu, L. Song, and X. Guan, “Minimum norm coverage control of auvs for underwater surveillance with communication constraints,” in 2022 American Control Conference (ACC).   IEEE, 2022, pp. 1222–1229.
  • [3] J. Zhang, R. Wang, G. Yang, K. Liu, C. Gao, Y. Zhai, X. Chen, and B. M. Chen, “Sim-in-real: Digital twin based UAV inspection process,” in 2022 International Conference on Unmanned Aircraft Systems (ICUAS).   IEEE, 2022, pp. 784–801.
  • [4] X. Lan and M. Schwager, “Rapidly exploring random cycles: Persistent estimation of spatiotemporal fields with multiple sensing robots,” IEEE Transactions on Robotics, vol. 32, no. 5, pp. 1230–1244, 2016.
  • [5] W. Luo and K. Sycara, “Voronoi-based coverage control with connectivity maintenance for robotic sensor networks,” in 2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS).   IEEE, 2019, pp. 148–154.
  • [6] H. Mahboubi, F. Sharifi, A. G. Aghdam, and Y. Zhang, “Distributed coordination of multi-agent systems for coverage problem in presence of obstacles,” in 2012 American Control Conference (ACC).   IEEE, 2012, pp. 5252–5257.
  • [7] C. Wang, S. Zhu, B. Li, L. Song, and X. Guan, “Time-varying constraint-driven optimal task execution for multiple autonomous underwater vehicles,” IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 712–719, 2022.
  • [8] J. Cortes, S. Martinez, T. Karatas, and F. Bullo, “Coverage control for mobile sensing networks,” IEEE Transactions on Robotics and Automation, vol. 20, no. 2, pp. 243–255, 2004.
  • [9] Y. Bai, Y. Wang, X. Xiong, M. Svinin, and E. Magid, “Adaptive multi-agent control with dynamic obstacle avoidance in a limited region,” in 2022 American Control Conference (ACC).   IEEE, 2022, pp. 4695–4700.
  • [10] A. Pierson and D. Rus, “Distributed target tracking in cluttered environments with guaranteed collision avoidance,” in 2017 International Symposium on Multi-Robot and Multi-Agent Systems (MRS).   IEEE, 2017, pp. 83–89.
  • [11] A. Abdulghafoor and E. Bakolas, “Distributed coverage control of multi-agent networks with guaranteed collision avoidance in cluttered environments,” IFAC-PapersOnLine, vol. 54, no. 20, pp. 771–776, 2021.
  • [12] X. Wang, C. Gao, X. Chen, and B. M. Chen, “Fast and secure distributed multi-agent coverage control with an application to infrastructure inspection and reconstruction,” in Proceedings of the 42nd Chinese Control Conference.   IEEE, 2023, pp. 5998–6005.
  • [13] X. Wang, L. Xi, Y. Chen, S. Lai, F. Lin, and B. M. Chen, “Decentralized mpc-based trajectory generation for multiple quadrotors in cluttered environments,” Guidance, Navigation and Control, vol. 1, no. 02, p. 2150007, 2021.
  • [14] L. Xi, X. Wang, L. Jiao, S. Lai, Z. Peng, and B. M. Chen, “GTO-MPC-based target chasing using a quadrotor in cluttered environments,” IEEE Transactions on Industrial Electronics, vol. 69, no. 6, pp. 6026–6035, 2021.
  • [15] Y. Chen, S. Lai, J. Cui, B. Wang, and B. M. Chen, “GPU-accelerated incremental euclidean distance transform for online motion planning of mobile robots,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6894–6901, 2022.
  • [16] X. Zhou, Z. Wang, H. Ye, C. Xu, and F. Gao, “Ego-planner: An esdf-free gradient-based local planner for quadrotors,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 478–485, 2020.
  • [17] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, “Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-d complex environments,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1688–1695, 2017.
  • [18] C. Toumieh and A. Lambert, “Decentralized multi-agent planning using model predictive control and time-aware safe corridors,” IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 11 110–11 117, 2022.
  • [19] S. Katz, G. Leifman, and A. Tal, “Mesh segmentation using feature point and core extraction,” The Visual Computer, vol. 21, pp. 649–658, 2005.
  • [20] X. Zhong, Y. Wu, D. Wang, Q. Wang, C. Xu, and F. Gao, “Generating large convex polytopes directly on point clouds,” arXiv preprint arXiv:2010.08744, 2020.
  • [21] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Galceran, “Continuous-time trajectory optimization for online UAV replanning,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 5332–5339.
  • [22] F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, and S. Shen, “Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments,” IEEE Transactions on Robotics, vol. 36, no. 5, pp. 1526–1545, 2020.
  • [23] O. Arslan and D. E. Koditschek, “Sensor-based reactive navigation in unknown convex sphere worlds,” The International Journal of Robotics Research, vol. 38, no. 2-3, pp. 196–223, 2019.
  • [24] A. Breitenmoser, M. Schwager, J.-C. Metzger, R. Siegwart, and D. Rus, “Voronoi coverage of non-convex environments with a group of networked robots,” in 2010 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2010, pp. 4982–4989.
  • [25] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, “The quickhull algorithm for convex hulls,” ACM Transactions on Mathematical Software (TOMS), vol. 22, no. 4, pp. 469–483, 1996.
  • [26] D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered Voronoi cells,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 1047–1054, 2017.
  • [27] X. Wang, L. Xi, Y. Ding, and B. M. Chen, “Distributed encirclement and capture of multiple pursuers with collision avoidance,” IEEE Transactions on Industrial Electronics, pp. 1–11, 2023.
  • [28] S. M. LaValle, Planning algorithms.   Cambridge university press, 2006.
  • [29] F. Kong, X. Liu, B. Tang, J. Lin, Y. Ren, Y. Cai, F. Zhu, N. Chen, and F. Zhang, “Marsim: A light-weight point-realistic simulator for lidar-based uavs,” IEEE Robotics and Automation Letters, vol. 8, no. 5, pp. 2954–2961, 2023.
  • [30] S. Ikehata, H. Yang, and Y. Furukawa, “Structured indoor modeling,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1323–1331.