Avoiding dynamic small obstacles with onboard sensing and computation on aerial robots
Abstract
In practical applications, autonomous quadrotors still face significant challenges, such as the detection and avoidance of very small and even dynamic obstacles (e.g., tree branches, power lines). In this paper, we propose a compact, integrated, and fully autonomous quadrotor system that can fly safely in cluttered environments while avoiding dynamic small obstacles. Our quadrotor platform is equipped with a forward-looking three-dimensional (3D) light detection and ranging (lidar) sensor to perceive the environment and an onboard embedded computer to perform all the estimation, mapping, and planning tasks. Specifically, the computer estimates the current pose of the UAV, maintains a local map (time-accumulated point clouds organized as KD-Trees), and computes a safe trajectory to the goal point using kinodynamic A* search. With careful optimization, the whole perception and planning system runs onboard at 50Hz. Various indoor and outdoor experiments show that the system can avoid dynamic small obstacles (down to a 20mm-diameter bar) while flying at 2m/s in cluttered environments. Our code and hardware design are open-sourced at https://github.com/hku-mars/dyn_small_obs_avoidance.git.
I Introduction
Unmanned aerial vehicles (UAVs) have shown increasing potential in a variety of applications, such as delivery, inspection, mapping, and search-and-rescue missions. To enable these widespread applications, a fully autonomous UAV that is able to fly in cluttered environments is crucial. One major challenge in achieving this goal is the detection and avoidance of very small and/or dynamic objects such as tree branches, power lines, small pipes, and stair rails that are common in real-world environments.
Despite the many works on autonomous UAVs developed recently, two critical issues remain unsolved. Most existing works cannot handle dynamic obstacles with fully onboard UAV resources due to the limited rate of sensing and/or planning. Avoiding small obstacles is another challenge for UAVs due to the low mapping resolution of existing works.
These two issues are rooted in the limited sensing capabilities available to existing UAVs. The widely used RGB-D sensors have limited sensing range (a few meters) and resolution, preventing the timely and reliable detection of small obstacles (e.g., pipes with diameters smaller than 20mm). The noisy measurements of RGB-D sensors also require additional probabilistic filtering of each grid's occupancy (e.g., OctoMap [1]), which imposes an inherent trade-off between processing time and affordable resolution. Multi-line spinning lidars are another type of sensor that has been used on UAVs. Besides the high cost, their bulky size, heavy weight (about 1kg), low resolution (e.g., 16 lines), and low frame rate (e.g., 10Hz) have significantly hindered their adoption on UAVs for obstacle avoidance.

In this paper, we present an autonomous UAV system aiming to achieve intelligent flight in cluttered and dynamic environments. The system-level contribution is the proof of feasibility of using only a low-cost, lightweight, and small form factor 3D lidar with onboard processing to achieve high-frequency, safe navigation in unknown cluttered environments. We summarize our contributions as follows:
1. A complete autonomous lidar-based UAV. It is lightweight (1.8kg in total) and compact (280mm wheelbase) while carrying onboard sensing (a solid-state lidar, the Livox AVIA) and computing modules (a DJI Manifold 2-C with an Intel i7-8550U CPU). The system performs full state estimation and high-resolution mapping at 50Hz with onboard computing.
2. A lightweight motion-primitive-based motion planner running directly on lidar point clouds. The planner achieves a replanning rate of up to 50Hz on the actual onboard system (as opposed to the maximal 30Hz update rate of RGB-D cameras such as the Intel RealSense), allowing timely detection and avoidance of dynamic obstacles.
3. The UAV system is able to avoid dynamic small obstacles. Extensive real-world experiments show that our system can reliably detect and avoid both static and dynamic small obstacles (down to 20mm in diameter) in various indoor and outdoor environments.
II RELATED WORK
II-A Navigation in cluttered environments
In recent years, localization and mapping technologies for unmanned vehicles have developed rapidly, especially vision-based and lidar-based methods. For vision-based UAVs, visual-inertial odometry (VIO) is a state-of-the-art technique that provides stable odometry using only camera images and IMU measurements [2, 3, 4, 5]. Most existing autonomous aerial vehicle systems are based on visual sensors and have achieved outstanding performance in static complex environments.
A standard pipeline for a vision-guided aerial vehicle is to obtain the location from a VIO algorithm, combine it with the depth image from a depth camera to build a map (e.g., an occupancy grid map [1], a Euclidean Signed Distance Field (ESDF) map [6], or a point cloud map [7]), and then generate and execute a trajectory on the established map. Liu et al. [8] use a uniform-resolution volumetric occupancy grid map to build a short-range planner for high-speed quadrotor navigation. Oleynikova et al. [9] present a UAV system with a high-rate local replanner based on an ESDF map, which can avoid newly-detected obstacles and generate feasible trajectories online within 50ms. Lopez et al. [10] propose state-based motion primitive planning on the current point cloud and achieve high-speed avoidance. NanoMap [11] and the Mapless planner [12] are more elegant in that they check collisions not only in the current FoV point cloud but also in past FoVs. NanoMap [11] even achieved 10m/s flight in forests while accounting for pose uncertainty.
Besides vision-based UAVs, there are also many works on lidar-based UAVs. Liu et al. [13] generate Safe Flight Corridors (SFC) consisting of multiple ellipsoids from the point clouds of a 360° lidar and optimize the trajectory within the corridor. Dharmadhikari et al. [14] propose a method to explore a mine autonomously, utilizing control-based motion primitives to find safe future paths in an ESDF map built from lidar data. Zhang et al. [15] achieved 10m/s flight in forests using a set of static motion primitives under the assumption of a constant flying velocity in the lidar frame.
II-B Navigation in dynamic environments
The approaches mentioned above share a common assumption that the environment is static, so their rates of map updating and replanning are usually low. Avoiding dynamic obstacles requires both environment perception (i.e., mapping) and trajectory replanning to be performed at a very high rate. Sanchez-Lopez et al. [16] generate a progressively optimal trajectory allowing the UAV to avoid moving people at a very low speed. Allen et al. [17] use a kinodynamic RRT* algorithm to make a UAV rapidly avoid a moving sword, but both perception and replanning are performed by offboard sensors (i.e., a motion capture system) and computers. Gao et al. [18] successfully demonstrate the avoidance of dynamic obstacles (e.g., moving people) using an RGB-D camera on a quadrotor UAV, but the system requires a global map saved on an offboard server. Falanga et al. [19] use event cameras to achieve high-speed obstacle avoidance (10m/s relative speed) with only 3.5ms overall latency. However, event cameras cannot provide a complete perception of the environment.
II-C Navigation considering small obstacles
In practical applications, it is common for UAVs to encounter small obstacles (e.g., power lines outdoors, small pipes and stair rails indoors). Existing works are usually based on visual sensors and detect such small objects by reasoning over the image. Madaan [20] trains a convolutional network to segment wires from images and reconstructs them to enable avoidance. Zhang et al. [21] propose a method to detect power lines using convolutional and structured features. These methods are usually neither general nor computationally cheap for UAVs.
Our work considers all three challenges at once. We emphasize a holistic system design spanning sensing, perception, estimation, and planning, instead of focusing on one component alone. Compared with existing work, our system is lightweight, fast, and able to safely navigate complex indoor and outdoor environments while avoiding dynamic small obstacles.
III SYSTEM OVERVIEW
In this section, we present the hardware design of our quadrotor testbed, which aims at good autonomous flight performance as an integrated, durable, and aggressive UAV, as shown in Fig. 2.

Thrust-to-weight ratio | 3
---|---
Weight (lidar included) | 1.8kg
Size (with propellers) | 44cm × 38cm × 15cm
Size (without propellers) | 26cm × 27cm × 15cm
Maximum flight time | 10min
Battery | 5S 3300mAh 50C
III-A Fully onboard hardware architecture
The UAV hardware system consists of a Livox AVIA lidar, a DJI Manifold 2-C onboard computer (Intel i7-8550U CPU, 1.8GHz base frequency, quad-core), and a Pixhawk4-mini flight controller. We also mount a monocular camera on the UAV to provide first-person-view (FPV) images for better visualization. Note that this camera is not used in the perception system.
Compared with vision-based UAVs, our system uses only a lidar (with a built-in consumer-grade IMU) to navigate the UAV and sense the environment. The measured point cloud and IMU data are transmitted to the onboard computer, which performs all state estimation, mapping, and motion (re-)planning (see Section IV). Finally, the Pixhawk4-mini flight controller performs trajectory tracking control.
Besides the avionics mentioned above, we also design and optimize a custom quadrotor frame. To give the UAV sufficient mobility in a small size, we choose T-motor F90 motors and 7042 propellers for propulsion. As shown in Table I, the final UAV has a moderate weight (1.8kg in total), size (280mm wheelbase), and flight endurance (up to 10min). With a thrust-to-weight ratio over 3, the UAV is able to perform aggressive maneuvers to quickly respond to dynamic obstacles.
III-B Lidar characteristics
Lidar is a type of sensor that emits laser pulses and receives the reflected pulses to obtain depth information about the environment. Unlike the multi-line spinning lidars used on existing UAVs [13], we use a solid-state lidar, the Livox AVIA, which has many advantages suitable for aerial applications: (1) as shown in Table II, the sensor has a weight and size that fit a small-scale UAV; (2) the sensor has a very long detection range, allowing obstacle detection and avoidance during high-speed flight; (3) the sensor has an extremely low false alarm rate, producing only three noise points per one million points. This can eliminate the time-consuming ray-casting occupancy filtering used in OctoMap [1] and ESDF maps [6]; (4) the sensor outputs point cloud data at a very high rate (up to 2500Hz), as opposed to the 10-20Hz output rates of multi-line spinning lidars, allowing timely detection of dynamic obstacles; (5) the sensor uses six laser heads to scan the front area simultaneously in a rosette pattern, forming a circular FoV (see Fig. 3). Since it covers the entire FoV very quickly, the sensor is well suited to detecting line-shaped small objects. Furthermore, the sensor uses non-repetitive scanning [22], which significantly boosts the resolution even when the sensor is static. An example of the sensor measurements of a 9mm bar is shown in Fig. 4: the bar is detected after only 20ms, and the resolution increases with the accumulation time.

Maximum detection range | 450m
---|---
FoV | 70.4° × 77.2°
Point rate | 240,000 points/s
Maximum frame rate | 2500Hz
Built-in IMU rate | 200Hz
Range precision | 2cm
False alarm ratio | <0.0003
Weight | 498g
Size | 76mm × 65mm × 91mm

IV Integrated Onboard Perception And Motion Planning

In this section, we detail the software and algorithm design. An overview of the system workflow is shown in Fig. 5. A typical autonomous UAV consists of three functional parts: navigation, planning, and control. In our system, the navigation and planning modules both run on the onboard computer, while the tracking control runs on the onboard flight controller Pixhawk4-mini. For the navigation task, we use the FAST-LIO lidar SLAM (Simultaneous Localization And Mapping) algorithm [23] to compute the location of the UAV and build a point cloud map of the environment, both at 50Hz. After receiving a new scan of point cloud and IMU data from the lidar, FAST-LIO estimates the current UAV state with a tightly-coupled iterated Kalman filter. The estimated state is, in turn, used to project the new scan of points into the world frame. The new scan is then added to a local map organized as two time-accumulated KD-Trees, detailed in Section IV-A. The updated local map triggers a collision check on the trajectory currently under tracking. If any collision exists, re-planning is triggered, and a new safe trajectory generated by the kinodynamic A* search (detailed in Section IV-B) replaces the current trajectory. Otherwise, the current trajectory, along with the state estimate from FAST-LIO, is sent to the controller for tracking. The trajectory tracking controller is a cascaded PID controller implemented on the Pixhawk4-mini, whose parameters we tune beforehand. Furthermore, we carefully optimize all software modules to enhance their real-time performance; for example, we replace the KD-tree structure in FAST-LIO with an incremental KD-tree [24] to speed up its computation. The per-scan workflow is summarized in the sketch below.
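As a minimal sketch of this per-scan workflow (all types and function names below are hypothetical placeholders standing in for the modules described above, not the released code; see the open-sourced repository for the actual implementation):

```cpp
// Minimal per-scan workflow sketch. All interfaces are hypothetical
// placeholders for the modules described in the text.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

struct State { /* pose, velocity, ... estimated by FAST-LIO */ };
struct Trajectory { /* trajectory from the kinodynamic A* search */ };

// Hypothetical module interfaces (assumptions, not the authors' API).
State fastLioUpdate(const Cloud& scan /*, IMU data */);        // 50Hz estimate
Cloud toWorldFrame(const Cloud& scan, const State& s);
void addScanToLocalMap(const Cloud& world_scan);               // Section IV-A
bool trajectoryCollides(const Trajectory& t, double clearance);
Trajectory kinodynamicAStar(const State& s /*, goal, map */);  // Section IV-B
void sendToController(const Trajectory& t, const State& s);    // Pixhawk4-mini

void onNewLidarScan(const Cloud& scan, Trajectory& current) {
  State s = fastLioUpdate(scan);             // state estimation
  addScanToLocalMap(toWorldFrame(scan, s));  // update time-accumulated KD-Trees
  if (trajectoryCollides(current, 0.45))     // 45cm clearance (Section V)
    current = kinodynamicAStar(s);           // re-plan only on collision
  sendToController(current, s);              // tracking control
}
```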
IV-A Time-accumulated KD-Tree
Most existing works rely on occupancy grid maps [8, 18, 9], where the space is discretized into small grids. To filter out sensor noise, ray-casting is usually conducted to update the occupancy probability of each grid along the ray of a point measurement (e.g., OctoMap [1]). Euclidean Signed Distance Field (ESDF) maps [6, 25] build on top of the occupancy grid map and further maintain the signed distance from each grid to its nearest occupied grid. This distance provides gradient information that allows the optimization of a smooth trajectory, but it takes additional time to compute.
Due to the very long measuring range of our lidar (up to 450m, see Table II), ray-casting is very time-consuming, especially when the grid size is small enough to accommodate small objects. On the other hand, the lidar has a very low noise ratio (below 0.0003), which makes the ray-casting used in OctoMap unnecessary. In this paper, we plan directly on the raw point clouds received from the estimation and mapping module. To enable efficient collision checks [10], we organize the raw point clouds into a KD-tree structure.
Considering all historical points in the collision check leads to overly conservative motion planning, since the space traveled by dynamic objects would be mistakenly considered occupied [26]. Moreover, maintaining all points in a single KD-tree is time-consuming due to their large number. On the other hand, using only the points in the current scan [10] leads to incomplete coverage of the FoV and misses potential obstacles on the trajectory.

We propose to use a temporal local map: only the most recent point clouds are used for the collision check. We accumulate the point cloud over a certain time, called the accumulation time. For the Livox AVIA lidar, the accumulation time is set to one second, which produces a sufficiently high-resolution point cloud (see Fig. 4); at the 240,000 points/s rate in Table II, this corresponds to up to 240,000 raw points. We further downsample the raw point clouds at a prescribed resolution (e.g., 10cm) to lower the computation load. Note that the downsampling resolution, unlike the grid size in OctoMap, does not prevent the detection of objects smaller than the resolution; it only inflates the obstacle space slightly.
The local map is organized into two KD-Trees, one static and one dynamically growing (see Fig. 6); each KD-Tree stores up to the accumulation time of point cloud data. The update of each KD-Tree is detailed in Algorithm 1, where N denotes the maximum number of lidar scans stored in one KD-Tree and M denotes the number of KD-Trees (two in our case). In the beginning, both KD-Trees are empty. Once a new scan of points arrives, the KD-Tree that is not yet full is selected (Lines 1-6). Then, the new scan of points is added to the point cloud already on that tree (Line 11). The updated point cloud is downsampled at the prescribed resolution (Line 13) and used to build a new KD-Tree replacing the existing one (Line 14). This process repeats until the current KD-Tree is full (Line 7), upon which the new scan is accumulated from scratch (Lines 8-9) and saved to the other KD-Tree (overriding the point cloud already on that tree).
Input: the point cloud of the new scan
Output: the two KD-Trees
Parameters: the maximum number of scans per KD-Tree N; the number of KD-Trees M
The two KD-Trees save point clouds spanning up to twice the accumulation time, which is sufficient to cover the lidar FoV in great detail. When performing a collision check, both KD-Trees should be used: the static one provides information about the environment, while the dynamic one detects dynamic obstacles in time at the lidar scan rate (i.e., 50Hz).
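For illustration, a collision check of a single trajectory sample point against both trees can be sketched as follows, assuming pcl::KdTreeFLANN as the underlying tree type (the function name and interface are ours):

```cpp
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>
#include <vector>

// Returns true if any map point lies within `clearance` of the query point.
// Both the static and the dynamically growing KD-Tree must be queried.
bool inCollision(const pcl::KdTreeFLANN<pcl::PointXYZ>& static_tree,
                 const pcl::KdTreeFLANN<pcl::PointXYZ>& dynamic_tree,
                 const pcl::PointXYZ& query, float clearance) {
  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  // A single nearest neighbor per tree suffices to test the clearance sphere.
  for (const auto* tree : {&static_tree, &dynamic_tree}) {
    if (tree->nearestKSearch(query, 1, idx, sq_dist) > 0 &&
        sq_dist[0] < clearance * clearance)
      return true;  // an obstacle point violates the safety clearance
  }
  return false;
}
```

In our experiments, such checks run on points sampled along each motion primitive (Section IV-B) with the 45cm clearance stated in Section V.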
Remark: when adding a new scan of point cloud, the entire KD-Tree is rebuilt from scratch (Line 14, Algorithm 1) using the standard PCL library [27]. This rebuilding can be very inefficient when the number of new points is significantly smaller than the number of existing points on the KD-Tree. In this case, an incremental KD-Tree [24] could be used, which saves much building time by only partially updating and re-balancing the KD-Tree.
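The update itself can be condensed as below. This is our own illustrative wrapper around PCL (class and member names are assumptions), showing the two-tree rotation of Algorithm 1, the 10cm voxel-grid downsampling, and the full rebuild discussed in the remark:

```cpp
#include <pcl/filters/voxel_grid.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>

// Illustrative wrapper for Algorithm 1: two time-accumulated KD-Trees.
class TimeAccumulatedMap {
 public:
  // max_scans = accumulation time / scan period, e.g. 1s / 20ms = 50 scans.
  explicit TimeAccumulatedMap(int max_scans) : max_scans_(max_scans) {}

  void addScan(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan) {
    if (scan_count_[active_] >= max_scans_) {  // current tree is full:
      active_ = 1 - active_;                   // switch to the other tree and
      cloud_[active_]->clear();                // restart accumulation there,
      scan_count_[active_] = 0;                // overriding its old points
    }
    *cloud_[active_] += *scan;                 // append the new scan
    ++scan_count_[active_];

    // Downsample at the prescribed resolution (e.g., 10cm), then rebuild the
    // KD-Tree from scratch; an incremental KD-Tree [24] could avoid this.
    pcl::VoxelGrid<pcl::PointXYZ> ds;
    ds.setInputCloud(cloud_[active_]);
    ds.setLeafSize(0.1f, 0.1f, 0.1f);
    ds.filter(*cloud_[active_]);
    tree_[active_].setInputCloud(cloud_[active_]);
  }

 private:
  int max_scans_;
  int active_ = 0;
  int scan_count_[2] = {0, 0};
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_[2] = {
      pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>),
      pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>)};
  pcl::KdTreeFLANN<pcl::PointXYZ> tree_[2];
};
```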
IV-B Kinodynamic A* search
Once a collision is found on the trajectory currently under tracking (e.g., due to dynamic or newly detected obstacles), the re-planning module is triggered to plan a safe trajectory based on the most recent local map saved in the two KD-Trees and on the current UAV state estimated by FAST-LIO. To avoid dynamic obstacles, the re-planning module must be lightweight and run at a high frequency. In our system, we choose a motion primitive-based method, which has been successfully used in high-speed planning [10] and dynamic obstacle avoidance [19].
We directly adopt the kinodynamic A* search algorithm presented in [28], where a graph is constructed and searched via A*. The graph is initialized with the starting UAV state (e.g., position, velocity, etc.), and the control space is discretized into a certain number of discrete actions. For each action, we integrate it over a certain duration τ and obtain a dynamically feasible trajectory (called a motion primitive). A collision check is performed on each motion primitive using the points in the local map (i.e., the two time-accumulated KD-Trees). If the motion primitive is collision-free and satisfies the kinodynamic constraints, its end state is added to the graph. This process, called expansion, is repeated until the target point is reached. Similar to [28], we use analytical expansion to speed up the search.
Since a quadrotor UAV is differentially flat [29], its state and actual control inputs can be uniquely determined from the trajectory and its high-order derivatives (the yaw angle is always aligned with the velocity direction). This enables us to use a high-order derivative (e.g., acceleration, jerk, or snap) as the virtual control in the kinodynamic A* search. As shown in [30], planning in a higher-dimensional space such as jerk is usually very time-consuming, so we choose acceleration as the control action, as in [31]. More specifically, we discretize the acceleration of each axis into three actions, leading to a total of 27 motion primitives when expanding a node. More implementation details can be found in [31].
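To make the expansion step concrete, the sketch below integrates a double-integrator model over the duration τ for each of the 27 acceleration actions. The per-axis discretization levels {-a_max, 0, +a_max} are an assumption for illustration, as are all names:

```cpp
#include <Eigen/Core>
#include <vector>

// One node of the kinodynamic A* graph: the UAV is differentially flat, so
// position and velocity (with yaw aligned to velocity) define the state.
struct Node {
  Eigen::Vector3d p, v;
};

// Expand a node with 27 constant-acceleration motion primitives:
//   p(tau) = p + v*tau + 0.5*a*tau^2,   v(tau) = v + a*tau.
std::vector<Node> expand(const Node& n, double a_max, double tau) {
  const double levels[3] = {-a_max, 0.0, a_max};  // 3 actions per axis
  std::vector<Node> children;
  children.reserve(27);
  for (double ax : levels)
    for (double ay : levels)
      for (double az : levels) {
        const Eigen::Vector3d a(ax, ay, az);
        Node c;
        c.p = n.p + n.v * tau + 0.5 * a * tau * tau;  // primitive end position
        c.v = n.v + a * tau;                          // primitive end velocity
        // Each primitive is then collision-checked against the two KD-Trees
        // and pruned by the kinodynamic limits (e.g., |v| <= 2m/s) before its
        // end state is added to the graph.
        children.push_back(c);
      }
  return children;
}
```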
V Results
We present several real-world experiments to validate the hardware and software of our UAV system in three environments: an indoor office corridor, an outdoor forest, and cluttered tree crowns (see Fig. 7). All environments are natural except for the dynamic small obstacles (e.g., a narrow bar, tree branches) intentionally added to the scene. In all experiments, the system parameters are set as follows: the downsampling resolution is 10cm when building the two time-accumulated KD-Trees; the safety clearance between the UAV and obstacles is 45cm, accounting for the actual size of the UAV and the possible obstacle shrinkage caused by point cloud downsampling; and the kinodynamic limits of the motion planning module include a maximum velocity of 2m/s, together with corresponding limits on the acceleration (in m/s2) and the primitive duration (in s). Snapshots of the experiments are shown in Fig. 7(a-c). Owing to the use of lidar, a byproduct of our system is a high-resolution, high-accuracy 3D map of the environment built in real time, as shown in Fig. 7(d-f). We supply a video attachment (also available at https://youtu.be/pBHbQ_J1Qhc) that better visualizes the flight performance of our system in all three environments.


V-A Indoor experiments with dynamic and static obstacles
Instead of flying in a man-made indoor environment [31, 10], we choose a natural office corridor with no changes made to the site, as shown in Fig. 7(a). The UAV autonomously flies to a goal point 9 meters away while avoiding both static and dynamic obstacles at a maximum velocity of 2m/s. During the flight, we suddenly raise a narrow bar-shaped obstacle (20mm diameter) and hold it in the UAV's flight path. Our system detects this dynamic small obstacle and avoids it within a very short time. As shown in Fig. 8, a smooth trajectory (the yellow curve) is first generated to the target point. As the narrow bar rises, the trajectory is detected to collide with the updated environment, and the planning module starts replanning. After several rounds of replanning, a final safe trajectory (the red curve) passing under the bar is planned. In the upper right corner of Fig. 8, we show a zoomed-in view of the narrow bar, where the blue points are the points in the time-accumulated KD-Trees used for replanning and the white points are points from the past. It can be seen that the bar's motion profile is clearly captured by the lidar points. By using a temporal local map, the re-planner is able to utilize the space the dynamic obstacle has swept over (indicated by the white points in the upper right corner) to avoid the bar at its current position (indicated by the blue points in the upper right corner).
V-B Outdoor experiments with natural and dynamic obstacles
To validate the system in more natural and complex environments, we also carry out experiments in a forest. In the first scene, the quadrotor passes several trees and a dynamic branch (see Fig. 7(b)) to reach a target point 15m away. During the flight, we suddenly sweep down a tree branch (diameter 10-20mm) and hold it in the UAV's flight path (see Fig. 1). Fig. 9 shows the moment re-planning is triggered. The tree branch's motion profile is clearly visible (zoomed view in the upper right corner). Once the tree branch falls within the safety clearance of the trajectory currently under tracking (the yellow trajectory), re-planning is triggered to produce a new trajectory (the red trajectory) that avoids the branch. Unlike in the indoor experiment, all points on the tree branch are on the KD-Trees. This is because the branch fell within the safety clearance, triggering the re-planning, within twice the accumulation time.

The second outdoor environment is a very complex hillside full of cluttered tree crowns, stones, and water pipes, with significant height variation, as shown in Fig. 10. We set the target point of the UAV 18 meters away from its takeoff position. Our method finds a safe path through the cluttered tree crowns and flies to the target point. Part of the path lies in a narrow corridor, as shown in the upper right corner of Fig. 10, but the UAV still succeeds.

V-C Breakdown of running time
For a fully autonomous UAV system, the limitation of onboard computing resources cannot be ignored. It is impractical for a small quadrotor to carry a desktop-grade PC due to the weight constraint. Therefore, optimizing algorithm speed on the onboard computer is essential for UAVs. In our system, we use several methods to speed up the algorithms.
In the motion planning module, we carefully design the time-accumulated KD-Trees so that processing a new scan of point cloud data takes less than 5ms, as described in Section IV-A. The resulting build time of the time-accumulated KD-Trees varies periodically between 0ms and 4ms as the number of points involved in the build changes. The computing time of the kinodynamic A* search depends on the distance to the target point; for a target 7 meters away in the first outdoor forest environment, it takes about 12ms to generate a trajectory. The running time breakdown is detailed in Fig. 11.

In general, our system achieves real-time performance and is able to avoid dynamic small obstacles. For a target 7 meters away, our system takes 17ms on average: 4ms (average) for state estimation and mapping, 1ms for updating the time-accumulated KD-Trees, and 12ms for the kinodynamic A* search. Note that re-planning is triggered only when the current trajectory collides with an obstacle, rather than at a fixed rate; hence, the worst case in Fig. 11 rarely occurs.

V-D Comparison of mapping method
We finally study the applicability of OctoMap [1] for representing small dynamic objects. OctoMap is a popular occupancy grid map that has been widely used in existing works [18, 8] and also constitutes the basis of ESDF maps [6, 25]. Taking the indoor office corridor as an example, we build an OctoMap at 30cm resolution, as shown in Fig. 12, alongside the point cloud for comparison. At this resolution, only the root part of the 2cm bar is detected, owing to the larger human arm holding it, while the main body of the bar, although evident in the point cloud map, is not detected at all. The reason is that the grid cells containing the narrow bar are also traversed by rays from background objects. When the bar is very narrow, a grid cell receives many more pass-through rays from the background than return rays from the bar itself. As a result, ray-casting quickly lowers the occupancy probability of the cell because it is "seen through" by many background rays. The situation is even worse when the small bar is moving. We also tested higher OctoMap resolutions, including 20cm, 10cm, and 5cm, but none of them reliably detected the bar.
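To see why, consider for illustration OctoMap's default inverse sensor model (hit probability 0.7, miss probability 0.4; the paper does not report the parameters used here). Each return adds log-odds ln(0.7/0.3) ≈ +0.85 to its voxel, while each pass-through ray adds ln(0.4/0.6) ≈ −0.41. If, per scan, a voxel containing the bar receives one return but is traversed by k background rays, its net log-odds change is approximately 0.85 − 0.41k, which is negative once k ≥ 3: the voxel drifts toward "free" even though the bar is physically present.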
Updating the OctoMap also takes significant time because, for each new point, all grid cells along the ray must be updated. At a resolution of 30cm and a scan rate of 50Hz, the incremental OctoMap update takes 5-8ms. This update time increases dramatically at higher resolutions or in outdoor open areas, where new points are tens or hundreds of meters away and each ray traverses many grid cells. In contrast, our method plans directly on the point cloud, and the time for building (or updating) the KD-Trees depends on the number of points (which is nearly constant in every new scan) rather than on their locations.
VI Conclusion And Future Work
This paper proposes an autonomous UAV system where all the state estimation, mapping, and trajectory planning are performed onboard based solely on lidar and inertial measurements. The design of the hardware and software system is presented. Extensive indoor and outdoor experiments are conducted. Results show that the proposed system is able to fly safely in cluttered environments while avoiding dynamic small objects.
The proposed system uses a kinodynamic A* (re-)planning method in which the control (i.e., acceleration) space is discretized to produce motion primitives. The searched trajectory has a discontinuous acceleration profile, which hinders the accurate trajectory tracking required for high flying speeds. The small number (i.e., 27) of control-space motion primitives also limits the UAV's ability to operate in even tighter spaces or at higher re-planning rates. In the future, we would like to improve the re-planning module to enable flights in more challenging environments and at higher speeds.
References
- [1] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, “OctoMap: an efficient probabilistic 3D mapping framework based on octrees,” Autonomous Robots, vol. 34, no. 3, pp. 189–206, Apr. 2013.
- [2] C. Forster, Z. Zhang, M. Gassner, M. Werlberger, and D. Scaramuzza, “SVO: Semidirect visual odometry for monocular and multicamera systems,” IEEE Transactions on Robotics, vol. 33, no. 2, pp. 249–265, 2016.
- [3] M. Bloesch, M. Burri, S. Omari, M. Hutter, and R. Siegwart, “Iterated extended kalman filter based visual-inertial odometry using direct photometric feedback,” The International Journal of Robotics Research, vol. 36, no. 10, pp. 1053–1072, 2017.
- [4] Y. Lin, F. Gao, T. Qin, W. Gao, T. Liu, W. Wu, Z. Yang, and S. Shen, “Autonomous aerial navigation using monocular visual-inertial fusion,” Journal of Field Robotics, vol. 35, no. 1, pp. 23–51, Jan. 2018.
- [5] N. Bucki, J. Lee, and M. W. Mueller, “Rectangular Pyramid Partitioning using Integrated Depth Sensors (RAPPIDS): A Fast Planner for Multicopter Navigation,” IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4626–4633, July 2020, arXiv: 2003.01245.
- [6] H. Oleynikova, Z. Taylor, M. Fehr, J. Nieto, and R. Siegwart, “Voxblox: Incremental 3D Euclidean Signed Distance Fields for On-Board MAV Planning,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1366–1373, Sept. 2017, arXiv: 1611.03631.
- [7] M. Liu, “Robotic Online Path Planning on Point Cloud,” IEEE Transactions on Cybernetics, vol. 46, no. 5, pp. 1217–1228, May 2016, conference Name: IEEE Transactions on Cybernetics.
- [8] S. Liu, M. Watterson, S. Tang, and V. Kumar, “High speed navigation for quadrotors with limited onboard sensing,” in 2016 IEEE International Conference on Robotics and Automation (ICRA). Stockholm, Sweden: IEEE, May 2016, pp. 1484–1491.
- [9] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Galceran, “Continuous-time trajectory optimization for online UAV replanning,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Daejeon, South Korea: IEEE, Oct. 2016, pp. 5332–5339.
- [10] B. T. Lopez and J. P. How, “Aggressive 3-D collision avoidance for high-speed navigation,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). Singapore, Singapore: IEEE, May 2017, pp. 5759–5765.
- [11] P. R. Florence, J. Carter, J. Ware, and R. Tedrake, “NanoMap: Fast, Uncertainty-Aware Proximity Queries with Lazy Search Over Local 3D Data,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD: IEEE, May 2018, pp. 7631–7638.
- [12] J. Ji, Z. Wang, Y. Wang, C. Xu, and F. Gao, “Mapless-Planner: A Robust and Fast Planning Framework for Aggressive Autonomous Flight without Map Fusion,” arXiv:2011.03975 [cs], Nov. 2020, arXiv: 2011.03975.
- [13] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, “Planning Dynamically Feasible Trajectories for Quadrotors Using Safe Flight Corridors in 3-D Complex Environments,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1688–1695, July 2017.
- [14] M. Dharmadhikari, T. Dang, L. Solanka, J. Loje, H. Nguyen, N. Khedekar, and K. Alexis, “Motion Primitives-based Path Planning for Fast and Agile Exploration using Aerial Robots,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, France: IEEE, May 2020, pp. 179–185.
- [15] J. Zhang, C. Hu, R. G. Chadha, and S. Singh, “Falco: Fast likelihood‐based collision avoidance with extension to human‐guided navigation,” Journal of Field Robotics, p. rob.21952, Apr. 2020.
- [16] J. L. Sanchez-Lopez, M. Wang, M. A. Olivares-Mendez, M. Molina, and H. Voos, “A Real-Time 3D Path Planning Solution for Collision-Free Navigation of Multirotor Aerial Robots in Dynamic Environments,” Journal of Intelligent & Robotic Systems, vol. 93, no. 1-2, pp. 33–53, Feb. 2019.
- [17] R. E. Allen and M. Pavone, “A real-time framework for kinodynamic planning in dynamic environments with application to quadrotor obstacle avoidance,” Robotics and Autonomous Systems, vol. 115, pp. 174–193, May 2019.
- [18] F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, and S. Shen, “Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments,” IEEE Transactions on Robotics, vol. 36, no. 5, pp. 1526–1545, 2020.
- [19] D. Falanga, K. Kleber, and D. Scaramuzza, “Dynamic obstacle avoidance for quadrotors with event cameras,” Science Robotics, vol. 5, no. 40, p. eaaz9712, Mar. 2020.
- [20] R. Madaan, “Wire detection, reconstruction, and avoidance for unmanned aerial vehicles,” Master’s thesis, Carnegie Mellon University, Pittsburgh, PA, August 2018.
- [21] H. Zhang, W. Yang, H. Yu, H. Zhang, and G.-S. Xia, “Detecting power lines in uav images with convolutional features and structured constraints,” Remote Sensing, vol. 11, p. 1342, 06 2019.
- [22] Z. Liu, F. Zhang, and X. Hong, “Low-cost retina-like robotic lidars based on incommensurable scanning,” IEEE/ASME Transactions on Mechatronics, p. 1–1, 2021.
- [23] W. Xu and F. Zhang, “FAST-LIO: A Fast, Robust LiDAR-inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter,” arXiv:2010.08196 [cs], Oct. 2020, arXiv: 2010.08196.
- [24] Y. Cai, W. Xu, and F. Zhang, “ikd-Tree: An Incremental K-D Tree for Robotic Applications,” arXiv:2102.10808 [cs], Feb. 2021, arXiv: 2102.10808.
- [25] L. Han, F. Gao, B. Zhou, and S. Shen, “Fiesta: Fast incremental euclidean distance fields for online motion planning of aerial robots,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 4423–4430.
- [26] F. Gao, W. Wu, W. Gao, and S. Shen, “Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments,” Journal of Field Robotics, vol. 36, no. 4, pp. 710–733, 2019.
- [27] M. Muja and D. Lowe, “Flann-fast library for approximate nearest neighbors user manual,” Computer Science Department, University of British Columbia, Vancouver, BC, Canada, vol. 5, 2009.
- [28] S. Liu, N. Atanasov, K. Mohta, and V. Kumar, “Search-based motion planning for quadrotors using linear quadratic minimum time control,” in 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2017, pp. 2872–2879.
- [29] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in 2011 IEEE international conference on robotics and automation. IEEE, 2011, pp. 2520–2525.
- [30] S. Liu, K. Mohta, N. Atanasov, and V. Kumar, “Search-based motion planning for aggressive flight in se (3),” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2439–2446, 2018.
- [31] B. Zhou, F. Gao, L. Wang, C. Liu, and S. Shen, “Robust and Efficient Quadrotor Trajectory Generation for Fast Autonomous Flight,” arXiv:1907.01531 [cs], July 2019, arXiv: 1907.01531.