Adaptation to Team Composition Changes for Heterogeneous
Multi-Robot Sensor Coverage
Abstract
We consider the problem of multi-robot sensor coverage, which deals with deploying a multi-robot team in an environment and optimizing the sensing quality over that environment. As real-world environments involve a variety of sensory information, and individual robots are limited in their available number of sensors, successful multi-robot sensor coverage requires the deployment of robots in such a way that each individual team member’s sensing quality is maximized. Additionally, because individual robots have varying complements of sensors and both robots and sensors can fail, robots must be able to adapt and adjust how they value each sensing capability in order to obtain the most complete view of the environment, even through changes in team composition. We introduce a novel formulation for sensor coverage by multi-robot teams with heterogeneous sensing capabilities that maximizes each robot’s sensing quality, balancing the varying sensing capabilities of individual robots based on the overall team composition. We propose a solution based on regularized optimization that uses sparsity-inducing terms to ensure a robot team focuses on all possible event types, and which is proven to converge to the optimal solution. Through extensive simulation, we show that our approach is able to effectively deploy a multi-robot team to maximize the sensing quality of an environment, responding to failures in the multi-robot team more robustly than non-adaptive approaches.
I Introduction
Multi-robot sensor coverage is the problem of deploying a team of robots in an environment in order to maximize the observation of events or phenomena [1, 2]. Distributing coverage of an environment among the members of a multi-robot team allows their heterogeneous capabilities to be fully realized, with robots capable of sensing specific events moving to the best positions possible for that particular sensing modality. In order to maximize the overall capability of a multi-robot team, the robots must be able to deploy themselves in a way that balances their individual sensing capabilities. Effectively solving this problem is crucial for the deployment of multi-robot systems to address real-world applications such as search and rescue [3] and security and surveillance [4].
As real-world environments consist of multiple types of events and robots can possess only a limited variety of sensing capabilities, individual robots must be capable of dynamically balancing their sensor inputs in order to construct the most complete view of the environment. For example, in a disaster, a robot may possess the ability to sense both fire and radiation, while a teammate possesses only a radiation sensor. A more complete view of this disaster environment would be gained if the first robot focuses its attention on fire, while the second robot focuses on radiation. As real-world environments can be chaotic, these overall team capabilities can change, as sensors or entire robots fail. Multi-robot teams must be able to adapt to changes that occur, and continuously balance their available capabilities in a way that provides the best overall observation of the environment. Figure 1 shows a motivating example of this, where a team of five robots is tasked with sensing and monitoring fire and radiation in an environment. As the robots operate, a robot that has been observing the radiation source fails. An adaptive approach enables the multi-robot team to react to this change in team capability, and a robot that had focused on the fire now shifts towards sensing the radiation.

Because of its relevance to many real-world applications, multi-robot sensor coverage has seen significant recent research. Many early approaches focused on sensor coverage of environments with only a single form of sensory information by homogeneous robot teams [5, 6]. However, this limited view of the problem fails to properly address real-world environments with a multitude of event types. Several methods have been proposed to address heterogeneous environments where multiple types of events can occur and teams consist of robots with mixtures of sensing capabilities, basing coverage on mixtures of probability distributions [7], information maximization [8], and defined control laws [9]. These methods all have the drawback of determining their balance of sensing capabilities through fixed parameters, as opposed to balancing the sensing capabilities based on the environment and the team composition (i.e., with possible changes due to robot failure).
In this paper, we introduce a novel formulation of multi-robot sensor coverage that integrates the competing utilities provided by multiple sensing capabilities into a unified framework. We consider a heterogeneous team of robots, where each individual robot possesses only a subset of possible sensing capabilities, operating in an environment where multiple types of events have occurred. The team is tasked with maximizing the overall sensing quality of these events. We propose an approach based on regularized optimization that finds weights to optimally balance the utilities corresponding to the various sensing capabilities, with an iterative solver proven to converge to the optimal solution. At each point in time, our approach identifies the optimal action for each member of the multi-robot team, allowing each individual robot to balance its competing sensing capabilities in order to maximize its overall sensing quality and enabling the team as a whole to adapt to changes in the environment and team composition.
We introduce two important contributions:
•
We propose a novel formulation of heterogeneous multi-robot sensor coverage, integrating multiple sensing capabilities into a unified mathematical framework based on regularized optimization. Our formulation identifies an optimal balance between these competing utilities at each time step, enabling adaptation to changes in the environment and the composition of the robot team.
•
We introduce an iterative algorithm to solve the proposed problem, which is hard to solve directly due to its non-smooth terms, and we prove that this algorithm converges to the optimal solution.
II Related Work
As multi-robot sensor coverage has connections to many real-world robotics applications, it is an active research area with multiple approaches that address various aspects of it. The key divisions of research are sensor coverage approaches in homogeneous systems and heterogeneous systems.
Homogeneous sensor coverage addresses environments with only a single type of event or a multi-robot system with the capability to only sense a single event modality. Accordingly, most homogeneous sensor coverage approaches address the problem of evenly distributing multiple robots spatially in an environment, as there is no need to consider individual capabilities [10, 11]. This has been accomplished through Voronoi distributions [12], decomposition of an environment into cells [13], estimating density functions [14], or representing an environment as a graph and utilizing graph partitioning methods to assign robots to regions [15, 16, 17] or teams [18]. Additionally, partitioning an environment has been done by calculating the information gain estimated from different regions [19], using a market-based system to assign robots based on information gain [20], or by planning paths that use greedy algorithms to maximize spatial coverage [21].
Homogeneous sensor coverage has also been extensively studied with the addition of real-world constraints. Maintaining communication is important for the success of multi-robot operations, and multiple methods have focused on the deployment of robots with constraints on communication [22, 23, 24, 25, 26]. These methods have been based on both line of sight and distance thresholds, and have been applied to open environments and obstructed ones such as hallways. Methods have also examined the physical limitations of sensors and attempted to incorporate this into the control laws that dictate their coverage approaches. For example, visibility constraints of cameras [27], limited range sensors [28], or limited field of view sensors [29] have all been integrated into deployment methods. These constraints provide realistic representations of events and sensing. Finally, physical limitations on the robots themselves have also been studied. Power limitations were studied in [30, 31], where the real-world limitations on mobile robot batteries were used to constrain the area that a team could cover. Limitations on motors [6] and effects of traction and slippage [32] also have been used to analyze paths to coverage positions. Turning radius was used as a key constraint in multiple works, particularly with maneuverability-restricted robots such as boats [33, 34, 35].
While these various approaches towards homogeneous multi-robot sensor coverage have been effective, they have the key limitation of addressing only a single sensing modality, which is a poor representation of real-world uses and applications. To address this, heterogeneous multi-robot sensor coverage attempts to solve the problem of coverage of multiple event types with a multi-robot team that possesses multiple forms of sensors. Small multi-robot systems have been enabled to do this through fixed scheduling algorithms [36], following (e.g., robots with heterogeneous capabilities move together so each can provide a perspective based on their sensor complement) [37], or integrating observations from robots performing other tasks [38].
For larger multi-robot teams, most approaches have utilized different methods to optimize the ‘combined sensing quality’, or the total information available to sensors across the possible event types [9]. This has been done by optimizing a cost function [39] or by identifying a distribution of robots that matches an estimated sensing quality function [8]. Voronoi regions have also been applied here, with their boundaries based on multiple event types as opposed to a single one [40, 12]. These approaches are generally fixed, assuming a static event is occurring and modeling robots as valuing each of their available sensors equally.
Limited approaches have been proposed to adapt to dynamic changes in the environment. In [7], robots learn a model of the events occurring from sensor observations, and base their behavior on this model. This has also been accomplished with a mixture of density functions to model complex events [41], or by making online estimations of information gain in various parts of the environment [2]. However, even these methods lack the ability to adapt to changes in the team capabilities, and so are unable to respond to sensor or robot failures.
In contrast to these reviewed approaches, our novel approach to heterogeneous multi-robot sensor coverage is able to balance available sensing capabilities in order to provide a more complete view of the environment. Additionally, our formulation allows a multi-robot team to adapt to changes in the environment and team composition, responding to sensor or robot failures.
III Our Proposed Approach
In this section, we introduce our novel approach to heterogeneous multi-robot sensor coverage that balances sensing quality based on the capabilities available to each robot. We denote matrices with uppercase bold letters and vectors with lowercase bold letters. Given a matrix , we denote its -th column as and its -th row as .
III-A Problem Formulation
We address the problem of a heterogeneous multi-robot team tasked with covering an environment where multiple event types occur. We define robot team members, each located at a position denoted as for the -th robot. Each robot has a set of sensing capabilities denoted by a vector , where is the number of possible event types and if the -th robot has the -th sensing capability, and otherwise. We additionally define events occurring in the environment, modelling each event with one or more density functions centered on one or more positions (e.g., in a disaster scenario, smoke may be spreading from a single fire or from several). Events can be any of different types, corresponding to the set of available sensing capabilities.
Each robot estimates the density functions corresponding to the events based on its own observations of them, where denotes the -th robot’s estimation of the -th event type and returns a scalar value for a given position. At each time step, each robot incorporates sensor observations at its position based on its heterogeneous capabilities and updates the corresponding functions. As a robot moves towards a source of an event, the value returned by the associated rises; similarly, if a robot were to move away from a source of an event the value would fall. If a robot does not possess the necessary sensing capability (i.e., ) then for all positions.
To quantify the value of each of a robot’s heterogeneous sensing capabilities, we define the utility associated with moving towards an event type, and thus increasing the sensing quality with respect to it. We introduce as the utility associated with sensing quality, where describes the value of the -th team member moving in the direction of the -th event type, given its current estimate of that event. This utility is calculated using the gradient of with respect to the robot’s current position . We denote the movement implied by this gradient as , which is a movement in the direction of the -th event, based on the -th robot’s estimate of that event. Formally, the utility is based on the value returned by if this movement is taken:
(1)
We note that just as the utility if the -th robot cannot sense the -th event type, the movement is also equal to the zero vector.
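Since the inline symbols were lost in this copy, the following sketch of the utility in Eq. (1) uses our own illustrative names: `phi` is one robot's density estimate for one event type (identically zero if the capability is absent), and the candidate movement is taken along the numerical gradient of `phi` at the robot's position.

```python
import numpy as np

def utility(phi, pos, step=0.5, eps=1e-5):
    """Sketch of the utility in Eq. (1): the value of the density estimate
    after moving in the direction of its gradient.

    phi : callable position -> scalar, one robot's estimate of one event
          type (returns 0 everywhere if the capability is absent).
    pos : the robot's current position (2-vector).
    """
    # Numerical gradient of the density estimate at the current position.
    grad = np.zeros_like(pos, dtype=float)
    for d in range(len(pos)):
        e = np.zeros_like(pos, dtype=float)
        e[d] = eps
        grad[d] = (phi(pos + e) - phi(pos - e)) / (2 * eps)
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        # Capability absent (or flat estimate): zero utility, zero movement.
        move = np.zeros_like(pos, dtype=float)
    else:
        move = step * grad / norm  # movement toward the event source
    return phi(pos + move), move
```

Moving toward an event source raises the returned value, matching the behavior described above; a missing capability yields zero utility and the zero movement vector.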
Given the described utility , the objective of our problem formulation is to maximize the overall sensing utility based on each robot’s current estimation of the events occurring in the environment. Each robot, given the utility of its various capabilities, must find an optimal balance among them. We apply this balance to the possible actions for each robot, generating movements that allow them to maximize their individual sensing quality based on their available capabilities.
III-B Optimization to Balance Sensing Capabilities
We introduce an optimization-based formulation to identify an optimal balance of the competing utilities of the various sensing capabilities. First, we introduce the base objective function, where we maximize the overall utility provided by :
(2)
where denotes element-wise matrix multiplication and denotes the element-wise -norm of a matrix. We introduce , which weights the sensing utilities, with specifically representing the weight that the -th robot assigns to sensing the -th event type.
To control the formation of this weight matrix, we introduce the following constraints:
(3)
where is a vector of s of length . We introduce these constraints to ensure that all weights are positive and that the weights assigned to each individual robot sum to 1 (i.e., no weight for a sensing capability can grow unreasonably large).
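One standard way to enforce such constraints (nonnegative weights whose per-robot row sums to one) is Euclidean projection onto the probability simplex. This is an illustrative alternative to the ALM treatment the paper develops in Section III-C, using the well-known sort-based projection; the function name is our own.

```python
import numpy as np

def project_rows_to_simplex(W):
    """Project each row of a weight matrix onto the probability simplex
    (nonnegative entries summing to 1), matching the constraints in Eq. (3).
    Uses the classic sort-based projection."""
    W = np.asarray(W, dtype=float)
    out = np.empty_like(W)
    for i, w in enumerate(W):
        u = np.sort(w)[::-1]                     # sort row in descending order
        css = np.cumsum(u)
        # Largest index where the projected coordinate stays positive.
        rho = np.nonzero(u + (1 - css) / (np.arange(len(u)) + 1) > 0)[0][-1]
        theta = (css[rho] - 1) / (rho + 1)       # shift that restores the sum
        out[i] = np.maximum(w - theta, 0.0)
    return out
```

For example, a row [2, 0] projects to [1, 0] and a row [0.3, 0.3] projects to [0.5, 0.5]; every output row is nonnegative and sums to 1.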
Next, we introduce a regularization term to encourage the assignment of at least one robot to each event type in the environment. To do this, we introduce the -norm on each column of and define the event norm:
(4)
Because of the constraints introduced above, the values in are bounded to be between 0 and 1, and each row sums to 1. Maximizing this norm encourages values to form in each column of , meaning that each event receives weights from a robot. Otherwise, multiple robots could assign the maximum weight of 1 to a single event, leaving others unattended.
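A small numerical check, with an illustrative 2-robot, 2-event weight matrix, shows why maximizing the sum of column-wise L2 norms spreads weight across event types rather than letting robots stack on one event:

```python
import numpy as np

def event_norm(W):
    """Sum of column-wise L2 norms of the weight matrix, as in Eq. (4)."""
    return float(np.sum(np.linalg.norm(W, axis=0)))

# Both matrices satisfy the row constraints (rows sum to 1, entries >= 0).
W_spread = np.eye(2)                    # each event covered by one robot
W_stacked = np.array([[1.0, 0.0],
                      [1.0, 0.0]])      # both robots on the same event

# Spreading weight across columns yields the larger event norm (2 > sqrt(2)),
# so maximizing this term discourages leaving an event type unattended.
assert event_norm(W_spread) > event_norm(W_stacked)
```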
We also introduce a regularization term to enforce temporal consistency in the weight matrix. We note that if a robot is moving in the direction of an event in order to improve its sensing quality, abruptly switching directions at the next time step to move towards an alternative event is not ideal; changes should be gradual so as to not lose progress made towards improved sensing quality. To enforce this, we index the weight matrix by time step and add a penalty term based on the difference between its current value and its value at the previous time step:
(5)
where denotes the squared Frobenius norm. We initialize to give equal weights to all available sensing modalities, i.e. if the -th robot is capable of sensing of the possible sensing modalities, then each entry .
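A minimal sketch of this initialization and of the temporal penalty in Eq. (5), assuming a binary capability matrix `C` (our notation, since the original symbols were not preserved):

```python
import numpy as np

def init_weights(C):
    """Initial weight matrix: equal weight on each available capability.
    C is a binary capability matrix (n robots x m event types)."""
    C = np.asarray(C, dtype=float)
    counts = C.sum(axis=1, keepdims=True)   # capabilities per robot
    counts[counts == 0] = 1.0               # avoid division by zero
    return C / counts

def temporal_penalty(W, W_prev):
    """Squared Frobenius norm of the change in the weight matrix, Eq. (5)."""
    return float(np.sum((W - W_prev) ** 2))
```

A robot with two of three capabilities starts with weight 1/2 on each of them, and an unchanged weight matrix incurs zero penalty.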
Our final objective function combines these introduced terms into a unified regularized optimization problem that identifies an optimal balance between the competing utilities of the available sensing capabilities:
s.t. (6)
where and are hyperparameters controlling the importance of the two introduced regularization terms.
Earlier, we introduced , or the movement of the -th robot in the direction of the -th event. The overall movement for the -th robot is based on a combination of these movements, weighted by the weight matrix computed for time step in the objective function:
(7)
This overall movement update is scaled to unit length and added to the previous position to arrive at the new position:
(8)
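Eqs. (7)–(8) can be sketched as follows, with `moves` holding the per-event movement vectors introduced earlier and `w` one robot's row of the optimized weight matrix; the names are our assumptions:

```python
import numpy as np

def position_update(p, moves, w):
    """Combine per-event movements into one unit-length step, Eqs. (7)-(8).

    p     : current position of the robot (2-vector)
    moves : (m, 2) array, row j is the movement toward event type j
    w     : length-m weight row for this robot
    """
    combined = w @ moves                 # weighted combination, Eq. (7)
    norm = np.linalg.norm(combined)
    if norm > 0:
        combined = combined / norm       # scale the update to unit length
    return p + combined                  # new position, Eq. (8)
```

With all weight on one event the robot steps straight toward it; with mixed weights the step is a unit-length compromise direction.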
III-C Optimization Algorithm
Because of the non-smooth terms and equality constraints, Eq. (6) is hard to solve. We propose an iterative solution based on the Augmented Lagrangian Multiplier (ALM) method, similar to [42], in which we can transform constraints into penalty terms in the objective formulation.
We consider problems of the form
(9)
Constrained optimization problems in this form can be solved by the general ALM method described in Algorithm 1. The equality constraint of is transformed into the penalty term added to in Line 3. This line and the updates to and are repeated until the value of converges.
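As an illustration of this general ALM pattern, the sketch below minimizes a toy quadratic under a single equality constraint. The gradient-descent inner solver and all parameter choices here are illustrative assumptions; the paper's Algorithm 2 uses a closed-form inner update instead.

```python
import numpy as np

def alm_minimize(f, h, x0, mu=1.0, rho=1.5, outer=15, inner=100):
    """Sketch of the general ALM loop: minimize f(x) subject to h(x) = 0
    by folding the constraint into the penalty lam*h(x) + (mu/2)*h(x)**2,
    re-minimizing, then updating the multiplier lam and growing mu."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer):
        def aug(z):
            return f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
        lr = 1.0 / (2.0 + 2.0 * mu)   # step size shrinks as the penalty grows
        for _ in range(inner):        # inner minimization (numerical gradient)
            g = np.zeros_like(x)
            for d in range(len(x)):
                e = np.zeros_like(x)
                e[d] = 1e-6
                g[d] = (aug(x + e) - aug(x - e)) / 2e-6
            x = x - lr * g
        lam += mu * h(x)              # multiplier update
        mu *= rho                     # penalty growth, rho > 1
    return x
```

On min x² + y² subject to x + y = 1 this converges to (0.5, 0.5), the constrained optimum.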
Following this general form, we can rewrite our final objective function in Eq. (6) and move the constraint of into the objective function as a penalty term. At the same time, we rewrite our objective as a minimization problem as opposed to maximization:
(10)
s.t.
where and are introduced as multiplier variables.
We also note that , the element-wise -norm of the Hadamard product, or element-wise matrix multiplication, can be rewritten as the Frobenius inner product, which is equal to the trace of the matrix product. For the first term in our objective function, this means that
(11)
This makes our actual objective function
(12)
s.t.
To solve this rewritten objective function, we take the derivative with respect to and set it equal to 0:
(13)
Here, is a diagonal matrix such that
(14)
After rearranging Eq. (13), we see that the update to at each step is:
(15)
Finally, to ensure the constraint is incorporated, we threshold the values in :
(16)
After updating , we also update and :
(17)
(18)
where is a value chosen such that . These steps are repeated until the value of converges. This process is formally defined in Algorithm 2.
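The repeat-until-convergence structure of Algorithm 2 can be sketched as below. Since the closed-form update of Eq. (15) depends on quantities not reproduced in this copy, `step_fn` is a hypothetical stand-in for that update; only the thresholding of Eq. (16) and the convergence test are shown concretely.

```python
import numpy as np

def solve_weights(step_fn, W0, tol=1e-8, max_iter=500):
    """Outer loop of Algorithm 2 (sketch): apply the update, enforce
    nonnegativity by element-wise thresholding (Eq. (16)), and stop once
    the iterate converges."""
    W = np.asarray(W0, dtype=float)
    W_next = W
    for _ in range(max_iter):
        W_next = np.maximum(step_fn(W), 0.0)   # update, then threshold
        if np.linalg.norm(W_next - W) < tol:   # convergence check
            break
        W = W_next
    return W_next
```

With a contractive stand-in update, the loop converges to the update's fixed point while keeping all weights nonnegative.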
Computational Complexity. In Algorithm 2, Lines 3, 5, 6, and 7 are trivial and can be computed in linear time. The computational complexity of our proposed solution is determined solely by Line 4, which computes both a matrix inverse and a matrix multiplication. Respectively, these have complexities of and . Typically, will be much larger than (i.e., a scenario where the number of possible event types exceeds the number of available robots is not one that will be able to be comprehensively sensed, and so the number of robots will need to increase). When this is the case, the overall complexity of each iteration of our proposed solution algorithm is .
Convergence. Under the condition that , the general ALM approach described in Algorithm 1 is proven to converge to an optimal value of [43]. As we initialize , then holds at . We also initialize the parameter such that , and this parameter controls the only update to in Line 6. Thus, cannot be less than , as this would require that , and so holds at every step.
IV Experiments
IV-A Experimental Setup
In order to comprehensively evaluate our adaptive multi-robot sensor coverage approach, we performed extensive simulations in a high-fidelity simulator, integrating real-world control considerations. This simulator also required our approach to integrate with the Robot Operating System (ROS), as would be necessary on physical robots.
We evaluate the effects of various combinations of multi-robot team sizes () and numbers of event types (). Evaluation is conducted with each event type being randomly generated at two positions. Members of a multi-robot team are also initialized at randomly generated positions near a chosen start area in this environment, with only a subset of possible sensors available to each robot. We conduct each simulation until . In order to demonstrate the adaptive abilities of our approach, we simulate various numbers of robot failures during the simulation.
In all evaluations, we considered the metric of sensing quality, or the improvement of sensing performance over the base deployment of the multi-robot system. This metric relates the sensing quality at a specific point to the initial sensing quality when robots are randomly positioned in an environment. That is, if a system begins with robots deployed within an environment and they each proceed intelligently based on their sensors, then the overall sensing quality would improve. Specifically, we look at the improvement of sensing quality at the end of the simulation.
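As a concrete reading of this metric (the numbers below are hypothetical), the reported value is simply the ratio of the sensing quality at a given point to the initial sensing quality:

```python
def quality_improvement(final_quality, initial_quality):
    """Sensing-quality improvement, reported as a multiple of the quality
    at the initial random deployment."""
    return final_quality / initial_quality
```

For instance, a run that ends at triple its initial sensing quality is reported as an improvement of 3.0.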
As our approach adapts not only to the distribution of sensors but also to their availability as the dynamic system proceeds (i.e., in the real world, robots can fail or lose sensing capabilities due to environmental factors), we compare against three alternate approaches in order to evaluate our proposed method for multi-robot sensor coverage:
TABLE I: Results are reported as multiples of the initial sensing quality, e.g., an initial sensing quality of and a final sensing quality of will be reported as . For each combination of and , we report results with 0, 1, 2, and 3 robot failures. The best improvements are highlighted in bold text.
# of Robots ():
Approach | 2 Events () | 3 Events () | 4 Events ()
Baseline | 9.99 / 6.69 / 5.75 / 2.88 | 8.85 / 8.05 / 5.12 / 3.24 | 10.02 / 9.00 / 5.50 / 2.23
Equally Weighted | 8.87 / 5.69 / 4.94 / 2.77 | 9.47 / 7.61 / 4.99 / 2.89 | 6.47 / 5.50 / 3.73 / 2.38
Single Capability | 9.66 / 6.68 / 6.18 / 2.71 | 7.88 / 6.45 / 4.96 / 2.83 | 6.37 / 3.96 / 2.98 / 1.60
Our Full Approach | 12.94 / 11.44 / 7.22 / 4.30 | 11.55 / 11.45 / 6.26 / 5.15 | 10.84 / 8.41 / 5.27 / 3.92

# of Robots ():
Approach | 2 Events () | 3 Events () | 4 Events ()
Baseline | 10.35 / 10.30 / 7.76 / 5.93 | 11.85 / 8.89 / 7.69 / 6.83 | 7.99 / 7.48 / 6.29 / 4.24
Equally Weighted | 10.59 / 10.12 / 8.39 / 6.48 | 7.09 / 6.77 / 6.30 / 2.05 | 6.28 / 5.80 / 4.99 / 3.46
Single Capability | 9.84 / 9.06 / 6.50 / 5.77 | 7.20 / 5.63 / 4.98 / 4.05 | 6.36 / 5.90 / 5.40 / 4.87
Our Full Approach | 13.19 / 11.25 / 9.69 / 8.92 | 11.35 / 9.56 / 8.33 / 7.87 | 12.64 / 9.66 / 6.56 / 5.79




1. Baseline: This approach sets and , and so continues to find a weight matrix that maximizes the available utility but does not utilize regularization to influence the development of . This approach still attempts to adapt each robot’s weighting of its capabilities in order to provide a complete view of the environment.
2. Equally Weighted: This approach defines an equally weighted , where each robot assigns identical values to each of its available sensing capabilities (i.e., if a robot has two sensing capabilities, it assigns a weight of 1/2 to each of them). This approach does not adapt to the availability of sensors or the failure of robots during the sensor coverage task.
3. Single Capability: This approach randomly selects an available sensing capability for each robot and only allows the robot to use that sensor (i.e., if a robot has an RGB camera and a depth camera, this approach limits the robot to only one and ignores the other). Similar to the Equally Weighted approach, this approach does not adapt to changes in the system as the robots operate.
IV-B Evaluation on Simulated Multi-Robot Systems
We present extensive quantitative results in Table I. For each combination of and , we conduct simulations with 0, 1, 2, and 3 robot failures. Each combination of parameters and failures is simulated 100 times. We report the improvement in sensing quality at the end of the simulation and at its highest point. This is reported as a multiple of the initial sensing quality, e.g., if the initial sensing quality is and the sensing quality at the end of the simulation is , we report an improvement of .
We observe that our full approach consistently provides the largest sensing quality improvement across nearly every combination of , , and number of robot failures. This demonstrates the effectiveness of our approach at identifying an optimal weighting of available sensing capabilities, assigning weights in the context of the capabilities available to the overall team. Additionally, we see that as the number of robot failures increases, our approach widens its performance gap over the compared approaches, indicating that its ability to adapt to changes in the multi-robot team best enables it to overcome robot failure and continue to provide effective sensing performance. In some cases, our approach provides multiple times as much sensing improvement as the compared approaches. For example, for , , and three robot failures, our approach increases sensing quality to 7.87 times the initial value, while the Equally Weighted approach only doubles it. This shows the main strength of our approach: it is able to adapt to changes in team composition (i.e., failure) and continue to optimally balance the remaining available sensing capabilities.
In a few combinations, our baseline approach with and slightly edges out the full approach; when it does not, it still consistently performs second best or very near to it. This shows that even without our introduced regularization terms, which distribute weights among event types and maintain temporal consistency, our approach’s ability to identify an optimal balance between sensing capabilities is much more effective than relying on either a single capability or an equal weighting of capabilities.
Figure 2 shows qualitative results from an example simulation of multi-robot sensor coverage in an urban environment. The initial state is seen in Figure 2(a), with five Husky ground robots. Three events are simulated, located down each road entering the three-way intersection. Figure 2(b) shows the ground robots deploying towards the simulated events. Two robots are moving towards the left road, one moving up the road entering the top of the frame, and the remaining two towards the road on the right. Figure 2(c) shows the simulated failure, with the Husky robot marked with the large red arrow failing. As this was the only robot moving towards the event at the top of the frame, existing approaches that cannot adapt would lose observations of this event. In Figure 2(d) we see that our approach is able to adapt to this failure. One of the robots that had been moving left has shifted its weighting of its sensing capabilities and is now moving towards the top of the frame to provide observations of that event. Approaches that prioritize only a single sensing modality or that do not adjust the weighting of sensing modalities would not be able to adapt to this failure, leaving the event type completely unobserved.
V Conclusion
Multi-robot sensor coverage is the problem of deploying a multi-robot team in an environment in order to maximize the overall sensing quality. Real-world environments consist of a variety of event modalities, and so in order to provide a complete and comprehensive view of an environment, a multi-robot team must deploy intelligently based on its available sensing capabilities. In addition, failures can occur to both sensors and robots, and so a multi-robot team must be able to adapt to these, and change its behavior to continue to provide high-quality sensing. In this paper, we present a novel formulation of heterogeneous multi-robot sensor coverage in which we provide an adaptive approach based on regularized optimization. We propose a problem formulation that integrates multiple sensing capabilities and identifies an optimal balance of these capabilities at each time step, adapting to not only the available capabilities but also changes in the environment and the multi-robot system. We introduce an iterative algorithm to solve this formulated problem, which we prove converges to an optimal solution. Through extensive simulation, we demonstrate that our approach provides effective multi-robot sensor coverage, outperforming methods that focus on a single capability or that are unable to adapt to changes in robot capabilities.
References
- [1] J. Cortés, S. Martinez, T. Karatas, and F. Bullo, “Coverage control for mobile sensing networks: Variations on a theme,” in Mediterranean Conference on Control and Automation, 2002.
- [2] M. Schwager, J. McLurkin, and D. Rus, “Distributed coverage control with sensory feedback for networked robots,” in Robotics: Science and Systems, 2006.
- [3] V. Zadorozhny and M. Lewis, “Information fusion based on collective intelligence for multi-robot search and rescue missions,” in International Conference on Mobile Data Management, 2013.
- [4] S. Meguerdichian, F. Koushanfar, G. Qu, and M. Potkonjak, “Exposure in wireless ad-hoc sensor networks,” in International Conference on Mobile Computing and Networking, 2001.
- [5] J. Cortes, S. Martinez, T. Karatas, and F. Bullo, “Coverage control for mobile sensing networks,” Transactions on Robotics and Automation, vol. 20, no. 2, pp. 243–255, 2004.
- [6] A. Pierson, L. C. Figueiredo, L. C. Pimenta, and M. Schwager, “Adapting to performance variations in multi-robot coverage,” in International Conference on Robotics and Automation, 2015.
- [7] W. Luo and K. Sycara, “Adaptive sampling and online learning in multi-robot sensor coverage with mixture of gaussian processes,” in International Conference on Robotics and Automation, 2018.
- [8] A. Sadeghi and S. L. Smith, “Coverage control for multiple event types with heterogeneous robots,” in International Conference on Robotics and Automation, 2019.
- [9] M. Santos, Y. Diaz-Mercado, and M. Egerstedt, “Coverage control for multirobot teams with heterogeneous sensing capabilities,” Robotics and Automation Letters, vol. 3, no. 2, pp. 919–925, 2018.
- [10] I. Rekleitis, V. Lee-Shue, A. P. New, and H. Choset, “Limited communication, multi-robot team based coverage,” in International Conference on Robotics and Automation, 2004.
- [11] M. A. Batalin and G. S. Sukhatme, “Spreading out: A local approach to multi-robot coverage,” in Distributed Autonomous Robotic Systems, vol. 5, pp. 373–382, Springer, 2002.
- [12] K. Guruprasad and D. Ghose, “Performance of a class of multi-robot deploy and search strategies based on centroidal voronoi configurations,” International Journal of Systems Science, vol. 44, no. 4, pp. 680–699, 2013.
- [13] N. Hazon and G. A. Kaminka, “Redundancy, efficiency and robustness in multi-robot coverage,” in International Conference on Robotics and Automation, 2005.
- [14] S. G. Lee, Y. Diaz-Mercado, and M. Egerstedt, “Multirobot control using time-varying density functions,” Transactions on Robotics, vol. 31, no. 2, pp. 489–493, 2015.
- [15] S.-k. Yun and D. Rus, “Distributed coverage with mobile robots on a graph: Locational optimization,” in International Conference on Robotics and Automation, 2012.
- [16] S.-k. Yun and D. Rus, “Distributed coverage with mobile robots on a graph: locational optimization and equal-mass partitioning,” Robotica, vol. 32, no. 2, pp. 257–277, 2014.
- [17] C. S. Kong, N. A. Peng, and I. Rekleitis, “Distributed coverage with multi-robot system,” in International Conference on Robotics and Automation, 2006.
- [18] B. Reily, C. Reardon, and H. Zhang, “Representing multi-robot structure through multimodal graph embedding for the selection of robot teams,” in International Conference on Robotics and Automation, 2020.
- [19] N. Fung, J. Rogers, C. Nieto, H. I. Christensen, S. Kemna, and G. Sukhatme, “Coordinating multi-robot systems through environment partitioning for adaptive informative sampling,” in International Conference on Robotics and Automation, 2019.
- [20] R. Zlot, A. Stentz, M. B. Dias, and S. Thayer, “Multi-robot exploration controlled by a market economy,” in International Conference on Robotics and Automation, 2002.
- [21] M. Corah and N. Michael, “Efficient online multi-robot exploration via distributed sequential greedy assignment,” in Robotics: Science and Systems, 2017.
- [22] F. Amigoni, J. Banfi, N. Basilico, I. Rekleitis, and A. Q. Li, “Online update of communication maps for exploring multirobot systems under connectivity constraints,” in Distributed Autonomous Robotic Systems, pp. 513–526, 2019.
- [23] J. Banfi, A. Q. Li, N. Basilico, I. Rekleitis, and F. Amigoni, “Asynchronous multirobot exploration under recurrent connectivity constraints,” in International Conference on Robotics and Automation, 2016.
- [24] J. Banfi, A. Q. Li, N. Basilico, I. Rekleitis, and F. Amigoni, “Multirobot online construction of communication maps,” in International Conference on Robotics and Automation, 2017.
- [25] P. K. Penumarthi, A. Q. Li, J. Banfi, N. Basilico, F. Amigoni, J. O’Kane, I. Rekleitis, and S. Nelakuditi, “Multirobot exploration for building communication maps with prior from communication models,” in International Symposium on Multi-Robot and Multi-Agent Systems, 2017.
- [26] B. Reily, C. Reardon, and H. Zhang, “Leading multi-agent teams to multiple goals while maintaining communication,” in Robotics: Science and Systems, 2020.
- [27] Y. Kantaros, M. Thanou, and A. Tzes, “Distributed coverage control for concave areas by a heterogeneous robot–swarm with visibility sensing constraints,” Automatica, vol. 53, pp. 195–207, 2015.
- [28] L. C. Pimenta, V. Kumar, R. C. Mesquita, and G. A. Pereira, “Sensing and coverage for a network of heterogeneous robots,” in Conference on Decision and Control, 2008.
- [29] A. Gusrialdi, T. Hatanaka, and M. Fujita, “Coverage control for mobile networks with limited-range anisotropic sensors,” in Conference on Decision and Control, 2008.
- [30] A. Kwok and S. Martinez, “Deployment algorithms for a power-constrained mobile sensor network,” International Journal of Robust and Nonlinear Control, vol. 20, no. 7, pp. 745–763, 2010.
- [31] X. Wang, S. Han, Y. Wu, and X. Wang, “Coverage and energy consumption control in mobile heterogeneous wireless sensor networks,” Transactions on Automatic Control, vol. 58, no. 4, pp. 975–988, 2012.
- [32] A. Pierson, L. C. Figueiredo, L. C. Pimenta, and M. Schwager, “Adapting to sensing and actuation variations in multi-robot coverage,” International Journal of Robotics Research, vol. 36, no. 3, pp. 337–354, 2017.
- [33] G. Notomista, M. Santos, S. Hutchinson, and M. Egerstedt, “Sensor coverage control using robots constrained to a curve,” in International Conference on Robotics and Automation, 2019.
- [34] I. Vandermeulen, R. Groß, and A. Kolling, “Turn-minimizing multirobot coverage,” in International Conference on Robotics and Automation, 2019.
- [35] N. Karapetyan, J. Moulton, J. S. Lewis, A. Q. Li, J. M. O’Kane, and I. Rekleitis, “Multi-robot Dubins coverage with autonomous surface vehicles,” in International Conference on Robotics and Automation, 2018.
- [36] F. Shkurti, A. Xu, M. Meghjani, J. C. G. Higuera, Y. Girdhar, P. Giguere, B. B. Dey, J. Li, A. Kalmbach, C. Prahacs, et al., “Multi-domain monitoring of marine environments using a heterogeneous robot team,” in International Conference on Intelligent Robots and Systems, 2012.
- [37] S. Hood, K. Benson, P. Hamod, D. Madison, J. M. O’Kane, and I. Rekleitis, “Bird’s eye view: Cooperative exploration by UGV and UAV,” in International Conference on Unmanned Aircraft Systems, 2017.
- [38] P. Gao, R. Guo, H. Lu, and H. Zhang, “Regularized graph matching for correspondence identification under uncertainty in collaborative perception,” in Robotics: Science and Systems, 2020.
- [39] M. Santos and M. Egerstedt, “Coverage control for multi-robot teams with heterogeneous sensing capabilities using limited communications,” in International Conference on Intelligent Robots and Systems, 2018.
- [40] O. Arslan and D. E. Koditschek, “Voronoi-based coverage control of heterogeneous disk-shaped robots,” in International Conference on Robotics and Automation, 2016.
- [41] M. Schwager, D. Rus, and J.-J. Slotine, “Decentralized, adaptive coverage control for networked robots,” International Journal of Robotics Research, vol. 28, no. 3, pp. 357–375, 2009.
- [42] R. T. Rockafellar, “Augmented Lagrange multiplier functions and duality in nonconvex programming,” Journal on Control, vol. 12, no. 2, pp. 268–285, 1974.
- [43] D. P. Bertsekas, Constrained optimization and Lagrange multiplier methods. Academic Press, 2014.