
ExtPerFC: An Efficient 2D and 3D Perception
Software-Hardware Framework for Mobile Cobot

Tuan Dang, Khang Nguyen, and Manfred Huber
University of Texas at Arlington
Abstract

As the reliability of the robot’s perception correlates with the number of integrated sensing modalities to tackle uncertainty, a practical solution is needed to manage these sensors from different computers, operate them simultaneously, and maintain their real-time performance on the existing robotic system with minimal effort. In this work, we present an end-to-end software-hardware framework, namely ExtPerFC, that supports both conventional hardware and software components and integrates machine learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We first design our framework to achieve real-time performance on the existing robotic system, guarantee configuration optimization, and concentrate on code reusability. We then mathematically model and utilize our transfer learning strategies for 2D object detection and fuse them into depth images for 3D depth estimation. Lastly, we systematically test the proposed framework on the Baxter robot with two 7-DOF arms, a four-wheel mobility base, and an Intel RealSense D435i RGB-D camera. The results show that the robot achieves real-time performance while executing other tasks (e.g., map building, localization, navigation, object detection, arm moving, and grasping) simultaneously with available hardware like Intel onboard CPUs/GPUs on distributed computers. Also, to comprehensively control, program, and monitor the robot system, we design and introduce an end-user application. The source code is available at https://github.com/tuantdang/perception_framework.

Index Terms:
robotics, software framework, perception

I Introduction

The past few years have seen an increasing number of different sensing modalities integrated into robots, significantly enhancing robot perception, especially for autonomous service mobile cobots that perform map building, localization, navigation, object detection, arm motion, and efficient grasping of objects in a safe manner [1]. However, besides support for conventional tasks in robot control and navigation, efficient techniques for 2D and 3D perception, which require expensive computational power, must also be deployed on the same system. For this reason, an efficient software-hardware framework that concurrently enables sensors, communication, perception, navigation, and motion planning to operate seamlessly is necessary for incremental robotic development.

Previous works focus on only one of the aspects mentioned above [2, 3], and flexible component integration is often omitted. Recently, Robot Operating System 2 (ROS 2) [4] has improved security and reliability, which are critical criteria in commercial products; meanwhile, ROS 1 remains popular among research communities and industry. Nevertheless, the single point of failure at the ROS master causes poor performance if multiple sensing modalities are initiated simultaneously, especially with high-bandwidth data from LiDAR sensors and RGB-D cameras. Moreover, integrating state-of-the-art machine learning (ML) with optimal configurations into the current ROS software stack can burden developers, as no official framework handles this task.

Figure 1: ExtPerFC’s concept for mobile cobot perception.

Recent works on deploying Deep Neural Networks (DNN) for a safe and secure automation domain [5, 6] propose a visionary hypervisor-centric architecture. Yet, the integration of ML modules is meticulously tailored for specific applications. Also, their complexity worsens when they are deployed on different computing hardware. Thus, a framework is needed to eliminate repetitive tasks (training, testing, and detection) and be compatible with available hardware on robot systems.

Figure 2: ExtPerFC includes a hardware layer, an OS layer, a software framework, and an application layer.

To fill this gap, we build an efficient hardware-software framework (Fig. 1) that allows simple integration of various tasks and different hardware driver versions. More importantly, we introduce a design that can support multiple state-of-the-art object detection models and execute them on low-end commodity devices in real time, enabling developers to manage computing tasks on available hardware in the system with high flexibility and minimum effort (i.e., choose a set of devices to process data from sensors and deploy ML models).

In this work, the contributions are (1) building a complete software-hardware framework for a mobile cobot system that supports map building, localization, navigation, and motion planning (Sec. IV), as well as 2D and 3D perception using state-of-the-art DNNs (Sec. V), (2) verifying the framework's feasibility and performance on a real robot system, and (3) producing a fast method to train multiple object detectors using transfer learning (Sec. V-A), with open-source code.

However, to build such an efficient framework for 2D and 3D perception and other robot tasks, we have to overcome several challenges: (1) integrating a large number of software libraries while guaranteeing their dependencies, (2) building and evaluating multiple DNN models, and (3) solving incompatible hardware issues due to the high variance in hardware selection. This work extends our previous preliminary work [7] by overcoming the issue of finding the best configuration between available sensors and computing devices. Furthermore, this work also focuses on building and testing 2D/3D perception more extensively.

II Related Work

Architecture of Robot Software: Many software architectures have been proposed and designed for industrial robot applications [8] to cope with issues of robot-language dependency. Although they solve the issue, they incidentally limit users from developing new functions. Specifically, adding more sensing modalities or additional hardware becomes complicated since developers can only access services from specific robot software development toolkits; access to services in the OS layer or other libraries is restricted. Moreover, robot-language dependency prevents developers from accessing many open-source libraries. For these reasons, we develop a software framework that enables developers to flexibly access multiple system layers and open-source libraries while maintaining simple integration with most state-of-the-art ML models.

Robot Mapping, Localization, and Navigation: Previous works on the Baxter robot [9, 10] primarily concentrate on pick-and-place and motion planning tasks. Thus, the integration of LiDAR sensors along with the mobility base into the Baxter is crucial for its map building and navigation. Moreover, past autonomous mobile service robots with both navigation and tracking modules [11, 12] embed lightweight localization and mapping algorithms due to their application simplicity (i.e., specific tasks and object tracking in a known environment), which makes these robots difficult for developers to expand and adjust. In this work, we introduce an expandable and user-adjustable software-hardware framework for mobile cobots with state-of-the-art laser-based SLAM algorithms deployed.

Motion Planning: The main objective of arm motion planning is to find a trajectory from the end-effector position to desired positions, avoiding collision and minimizing the path cost and time complexity under the constraint that every point in the trajectory must have an inverse kinematic solution. In dealing with efficient arm motion planning, many methods have been used [13, 14], such as AtlasRRT and CBIRRT2. Most of them are well-supported by MoveIt! [15]. Therefore, we reuse them for our framework implementation.

Robot Perception: 2D perception [16, 17] is widely used in research and industrial applications, while 3D perception [18, 19] is dominant in autonomous driving vehicles with LiDAR sensor support but limited for everyday object perception in the robotic domain. The lack of robotic research in 3D perception stems from the absence of diverse labeled datasets, since most existing ones are not built specifically for robotic applications. In this work, we use a hybrid method of state-of-the-art 2D detection and 3D estimation.

III Software-Hardware Framework

III-A Design Goals

In order to build a more reliable software-hardware framework for mobile cobots, we define our design goals as follows:

  • Reusability and simple integration into systems with mixed versions of OS and middleware: Linux & ROS.

  • Distributed computing with load balancing awareness.

  • State-of-the-art ML models deployed with optimal configurations of cameras and processing devices.

Hardware is often compatible with a specific Linux kernel, and a specific ROS distribution is only provided to a specific Linux version. For this reason, selecting hardware concurrently with selecting Linux kernels implies narrowing down options in choosing a ROS distribution for developers. Unfortunately, not all hardware works well with the same Linux kernel, leading to using various ROS versions in the same system. Therefore, calling the same APIs from different ROS distributions may cause backward incompatibility issues due to a slight change in the function prototype and the underlying implementation of that supported API. We adopt message conversion [27] between multiple communication protocols to implement the message translator between ROS versions.

The driver incompatibility problems can be solved by using suitable Linux kernels supporting these devices’ drivers. However, it may raise backward incompatibility between a certain ROS distribution and APIs from other ROS distributions. Indeed, we encountered these compatibility issues with the Baxter robot (Ubuntu 14.04 and ROS Indigo) when executing motion APIs on Linux machines with Ubuntu versions other than Ubuntu 14.04. Therefore, an OS bridge between APIs from different ROS distributions is needed.
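As one possible illustration of such a bridge, the sketch below relays a single topic between two machines that each run a rosbridge server, using the roslibpy client library. This is only a minimal sketch of the translator idea; the host names, port, and topic names are assumptions, and the framework's actual bridge uses the message conversion approach of [27] rather than this exact code.

import roslibpy

# One rosbridge endpoint per ROS distribution (hosts and port are assumed)
indigo = roslibpy.Ros(host='baxter.local', port=9090)        # ROS Indigo side
newer = roslibpy.Ros(host='perception-pc.local', port=9090)  # newer ROS side
indigo.run()
newer.run()

source = roslibpy.Topic(indigo, '/robot/joint_states', 'sensor_msgs/JointState')
sink = roslibpy.Topic(newer, '/bridged/joint_states', 'sensor_msgs/JointState')

def relay(message):
    # Forward the dictionary payload unchanged; any field renaming or type
    # translation between distributions would happen here.
    sink.publish(roslibpy.Message(message))

source.subscribe(relay)

try:
    indigo.run_forever()
finally:
    source.unsubscribe()
    indigo.terminate()
    newer.terminate()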

With the increasing requirements of computational tasks, a single-computer system may no longer fit a robotic application since the failure of that computer would cause the entire system to crash. We adopt distributed computing with load-balancing awareness, where computers periodically broadcast their status to their peers. With this scheme, we can obtain load balancing at run-time. Two aspects must be considered for robots with multiple vision cameras: (1) bandwidth to acquire data and (2) processing power to perform detection or recognition tasks. We propose a method for selecting the optimal configuration at run-time for the combination of cameras, ML models, and devices.

Input:  S = {s_1, s_2, ..., s_n} := available sensors
        D = {d_1, d_2, ..., d_m} := available devices
Output: C = {c_1, c_2, ..., c_n} := optimal configurations

function hardware_selection(S, D)
    if |S| = 0 or |D| = 0 then
        return C = {c_1, c_2, ..., c_n}
    S = sort_by_image_size(S)
    D = sort_by_comp_power(D)
    for s_i in S do
        for d_j in D do
            if (enet(s_i, d_j) or usb(s_i, d_j)) = 1 then
                c_i = (s_i, d_j)
                delete_from_list(s_i, S)
                delete_from_list(d_j, D)
            elif enet(s_i, d_j) = 0 and usb(s_i, d_j) = 0 then
                d_k = find_connected(s_i, D \ {d_j})
                c_i = (s_i, d_j, d_k)
                delete_from_list(s_i, S)
                delete_from_list([d_j, d_k], D)
    hardware_selection(S, D)
    return C = {c_1, c_2, ..., c_n}

Algorithm 1: Hardware Selection Algorithm

III-B Framework Description

As illustrated in Fig. 2, we structure the software-hardware framework as four different layers: hardware layer, OS layer, software framework, and application layer.

  1. Hardware Layer includes distributed computers, sensors (cameras, tactile sensors, LiDAR sensors), IMU, actuators, and ML accelerators and how they interface with each other by USB and Ethernet ports.

  2. OS Layer contains an OS and device drivers that support connecting devices at the hardware layer.

  3. Software Framework is the core contribution of this work. It bridges different Linux and ROS distributions, selects optimal configurations of sensors and ML models to available hardware to operate in real-time, and supports basic functionalities of the robot system, including map building, localization, navigation, motion planning, and arm moving for grasping. Most of the components are implemented on top of ROS and OpenVINO.

  4. Application Layer allows users to control and monitor the robot via a GUI, as shown in Fig. 4, which is written using the PyQt5 toolkit. The users can manually control the robot’s arms and joint positions, navigate the mobility base, and program the robot using Python. Also, native simulations such as RViz can be used to monitor the robot.

Figure 3: Illustration of the hardware selection algorithm.

III-C Hardware Configuration Optimization

Problem Formalization: We need to distribute the processing of data from $n$ sensors $S=\{s_1,s_2,\ldots,s_n\}$ across the distributed computers. Each distributed computer may contain a CPU, a GPU, and a VPU, and we denote these processing units as $d_i$ in $D$. The image stream acquired from a sensor comes either from a USB bus, if that sensor is connected to a device via a USB port, or from the network via an Ethernet switch, in which case a device may ask a neighboring device to share the image stream. Also, we define each configuration as $c_i=[s_i,d_j]$, where $s_i$ is from $S$ and $d_j$ is from $D$.

Explanation & Description: To allocate the most suitable resources (devices) to the inputs (sensors) in a best-fit manner as in [20], we prioritize the following criterion: (1) a higher-ranked device in terms of computational power should process a sensor that acquires larger input images; and we impose the following constraints corresponding to the hardware layer in Fig. 2: (2) any sensor is interfaced to at least one device via either Ethernet or USB ports only, and (3) any pair of devices can share data with each other via Ethernet ports.

Figure 4: Graphic User Interface (GUI) and Program Manager provide robot teaching, monitoring, navigation, and programming.

We first sort both SS and DD lists in descending order in terms of input image sizes and computational power, respectively, as depicted in sort_by_image_size(S) and sort_by_comp_power(D). We then establish configurations for sensors and devices by nested-looping through SS and DD and check the following booleans, as described below:

$\texttt{usb}(s_i,d_j)=\begin{cases}1, & \text{if } s_i \text{ and } d_j \text{ are connected via USB}\\ 0, & \text{otherwise}\end{cases}$

$\texttt{enet}(s_i,d_j)=\begin{cases}1, & \text{if } s_i \text{ and } d_j \text{ are connected via ENET}\\ 0, & \text{otherwise}\end{cases}$

If sensor $i$ interfaces to device $j$ via Ethernet or USB, we can group them as one configuration, as in configuration $M$ (with USB interface) and configuration $N$ (with Ethernet interface) in Fig. 3. However, if sensor $i$ does not directly connect to the corresponding device $j$, we will find an already-connected device as an intermediate device $k$ to share the sensing data stream with device $j$. Thus, in this case, we group that sensor with the processing device $j$ and the intermediate device $k$ as one configuration, as in configuration $P$ in Fig. 3.

Note that after completing grouping each configuration, we also have to remove the configured elements from the $S$ and $D$ lists: delete_from_list(s_i, S) and delete_from_list(d_j, D), and hence recursively call the hardware_selection(S, D) function until either the $S$ or $D$ list is empty. The pseudocode of the algorithm is presented in Alg. 1.
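For concreteness, the following is a compact Python sketch of Alg. 1 under simplifying assumptions: each sensor carries its image size, each device its compute rank, connectivity is given by usb/enet predicates, and the recursion is replaced by a loop with a termination guard when no feasible pairing remains. The names and data structures are illustrative, not the framework's API.

def hardware_selection(sensors, devices, usb, enet):
    """sensors: list of (name, image_size); devices: list of (name, power);
    usb/enet: callables (sensor, device) -> bool. Returns a list of configurations."""
    configs = []
    while sensors and devices:
        sensors = sorted(sensors, key=lambda s: s[1], reverse=True)  # sort_by_image_size
        devices = sorted(devices, key=lambda d: d[1], reverse=True)  # sort_by_comp_power
        progressed = False
        for s in list(sensors):
            for d in list(devices):
                if usb(s, d) or enet(s, d):              # directly connected: c_i = (s_i, d_j)
                    configs.append((s, d))
                    sensors.remove(s); devices.remove(d)
                    progressed = True
                    break
                # not directly connected: route through an already-connected device d_k
                k = next((x for x in devices if x is not d and (usb(s, x) or enet(s, x))), None)
                if k is not None:                        # c_i = (s_i, d_j, d_k)
                    configs.append((s, d, k))
                    sensors.remove(s); devices.remove(d); devices.remove(k)
                    progressed = True
                    break
            if progressed:
                break
        if not progressed:                               # no feasible pairing left
            break
    return configs

# Example: two cameras and two computers; the webcam is on USB, the D435i on Ethernet.
cams = [("d435i", 1280 * 720), ("webcam", 640 * 480)]
pcs = [("nuc-gpu", 3), ("laptop-cpu", 1)]
print(hardware_selection(cams, pcs,
                         usb=lambda s, d: s[0] == "webcam" and d[0] == "laptop-cpu",
                         enet=lambda s, d: s[0] == "d435i"))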

IV Robot Mapping, Localization, Navigation, and Arm Motion Planning

IV-A Robot Mapping and Localization

Deploying the robot in an unknown dynamic environment with high uncertainty in robot perception (e.g., cameras, LiDAR sensors) requires a robust simultaneous localization and mapping (SLAM) algorithm to attain the highest map-building and localization accuracy. Moreover, building an environment map and localizing within that map are essential for enabling path planning and path execution, and later for avoiding obstacles during robot navigation. On the Baxter robot, we examine the maps reconstructed by various ROS-based SLAM algorithms on the VLP-16 LiDAR sensors and prioritize GMapping for its highest accuracy and availability across multiple ROS versions.

The main goal of GMapping is to compute the posterior over maps and trajectories $p(x_{1:t}, m \mid z_{1:t}, u_{0:t})$ given the estimated posterior $p(x_{1:t} \mid z_{1:t}, u_{0:t})$, where $x_{1:t}$ are the robot poses up to time $t$, $z_{1:t}$ are the observations up to time $t$, and $u_{0:t}$ is the control sequence. GMapping is modeled as:

$p(x_{1:t}, m \mid z_{1:t}, u_{0:t}) = p(m \mid z_{1:t}, x_{1:t}) \cdot p(x_{1:t} \mid z_{1:t}, u_{0:t})$

Specifically, the posterior $p(x_{1:t} \mid z_{1:t}, u_{0:t})$ is estimated with the Rao-Blackwellized particle filter (RBPF), where each particle is associated with a proposed map. Meanwhile, the posterior over maps $p(m \mid z_{1:t}, x_{1:t})$ is built given the robot poses $x_{1:t}$ and observations $z_{1:t}$. As the robot obtains observations and control measurements, the RBPF for GMapping iteratively repeats the following steps: (i) sampling, (ii) importance weighting, (iii) resampling, and (iv) map estimating [21].
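To make the four RBPF steps concrete, the schematic NumPy sketch below runs one iteration over the pose part of the posterior. The Gaussian motion noise and the observation likelihood are deliberately simplistic placeholders, not GMapping's odometry model or scan matcher, and the function name is our own.

import numpy as np

rng = np.random.default_rng(0)

def rbpf_step(poses, weights, control, observation, motion_noise=0.05):
    # (i) sampling: propagate each particle through a (placeholder) motion model
    poses = poses + control + rng.normal(0.0, motion_noise, size=poses.shape)
    # (ii) importance weighting: score each pose against the (placeholder) observation
    likelihood = np.exp(-0.5 * np.sum((poses - observation) ** 2, axis=1))
    weights = weights * likelihood
    weights /= weights.sum()
    # (iii) resampling: draw particles proportionally to their weights
    idx = rng.choice(len(poses), size=len(poses), p=weights)
    poses, weights = poses[idx], np.full(len(poses), 1.0 / len(poses))
    # (iv) map estimating would update each particle's occupancy grid here
    return poses, weights

# One step with 100 particles in (x, y, theta)
poses = np.zeros((100, 3))
weights = np.full(100, 1.0 / 100)
poses, weights = rbpf_step(poses, weights,
                           control=np.array([0.1, 0.0, 0.02]),
                           observation=np.array([0.1, 0.0, 0.02]))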

IV-B Robot Navigation

To navigate the robot in an environment, we reconstruct the global cost map and local cost map, where obstacles appear. The trajectory from source to destination is generated while minimizing the path cost in both maps. In this work, we inherit these ROS-based features and translate them into our control command in the application layer (our APIs).

IV-C Arm Motion Planning

The main objective of arm motion planning is to find a trajectory from the end-effector source position to the desired positions avoiding collision and minimizing the path cost and time complexity under the constraint that every point in the trajectory must have an inverse kinematic solution. In dealing with efficient arm motion planning, many methods have been used [13, 14], such as AtlasRRT and CBIRRT2. Most of them are well-supported by MoveIt!. Thus, we reuse them for the implementation of our work.

IV-D Graphic User Interface & Program Manager

To control the robot, we not only command from one computer but also command from other computers for synchronization tasks. Furthermore, working on multiple computers requires considerable steps like login using SSH, editing source code, and uploading and executing code. These steps are repetitive and time-consuming, becoming a burden for developers. We build a graphic user interface (GUI) application, as shown in Fig. 4, in this framework to eliminate these burdens while developing robotic software.

The key idea is that this GUI application can run on a remote computer (Windows or Linux) and connect to the available computers on the robot so that it can execute specific tasks on each computer remotely. For example, tucking and untucking the arms must run programs on the Baxter robot, while moving an arm from one position to another can be conducted on another computer. Yet another advanced tool is the program manager, which allows developers to edit Python code on a remote computer and execute their own customized code. This GUI application eventually saves developers' programming time by getting rid of repetitive routines, thus making robotic software development more productive.

To allow the remote GUI application to execute a command or a Python script, a gateway on the robot system is needed. This gateway talks directly to the other computers via ROS messages and a ROS bridge adaptation layer. Specifically, it operates as a translator between the GUI application and the entire robot system by translating GUI commands into robot system commands and delivering them to specific computers. In the meantime, it acquires all information about the robot system from the distributed computers and periodically sends it to the connected remote computer.
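An illustrative sketch of this gateway idea follows: a small TCP/JSON server on the robot receives GUI commands and dispatches them to handlers, which in the real system would publish ROS messages toward the target computer. The protocol, port, and command names here are assumptions, not the framework's actual interface.

import json
import socketserver

# Hypothetical command handlers; the real ones would publish ROS messages.
HANDLERS = {
    "tuck_arms": lambda args: print("tucking arms", args),
    "move_arm": lambda args: print("moving arm to", args),
}

class GatewayHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                       # one JSON command per line
            cmd = json.loads(line)
            handler = HANDLERS.get(cmd.get("name"))
            status = "ok" if handler else "unknown_command"
            if handler:
                handler(cmd.get("args", {}))
            # reply with the robot-side status so the GUI stays in sync
            self.wfile.write((json.dumps({"status": status}) + "\n").encode())

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5050), GatewayHandler) as srv:
        srv.serve_forever()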

V Robot Perception

V-A Transfer Learning for 2D Object Detectors

To perform real-time detection and recognition of objects with limited richness compared to those in open datasets like ImageNet and MS COCO, we adopt two strategies: transfer learning [22] and single-shot detection [16]. We first transfer knowledge from a rich feature domain into a sparse feature domain, which represents our dataset. We then mathematically model the overall concept of transfer learning and address two questions: (i) what to transfer between models and (ii) how to transfer that knowledge.

Figure 5: Illustration of transfer learning strategies: instance (fine-tuning) and feature representation transfer learning.

Transfer Learning Formulation: A domain is defined by $\mathcal{D}=\{\mathcal{X},P(X)\}$, where $X=\{x_1,x_2,\ldots\}\in\mathcal{X}$, with $\mathcal{X}$ representing the feature space and $P(X)$ its marginal distribution. Let $\mathcal{T}=\{\mathcal{Y},P(Y|X)\}$ be the learning task that learns from training pairs $(x_i,y_i)$ with $y_i\in\mathcal{Y}$ in the label space. The objective of transfer learning is to improve the predictive function $P(Y_t|X_t)$ in the target domain $\mathcal{D}_t=\{\mathcal{X}_t,P(X_t)\}$ using knowledge from the source domain $\mathcal{D}_s=\{\mathcal{X}_s,P(X_s)\}$ and the source learning task $\mathcal{T}_s=\{\mathcal{Y}_s,P(Y_s|X_s)\}$.

Let $P(Y|X)=f(X,\beta)$, where $f$ is the task function. The minimizer for the trainable parameters $\beta$ is written in terms of the loss function $L(\cdot,\cdot)$ and the task function $f$ as follows:

$\mathop{\mathrm{argmin}}_{\beta}\sum_{X}L\left[f(X,\beta),Y\right]$ (1)

With respect to DL and computer vision concepts, we divide the task function into two components: feature extraction (backbone) and detection (head), such that $f(X,\beta)=(f^D\circ f^F)(X,\beta^D,\beta^F)$, where $f^D$ and $f^F$ are the task functions for detection and feature extraction, respectively, and $\beta^D$, $\beta^F$ are the parameters for detection and feature extraction, respectively. The analogous minimizer for $\beta_t^F$ and $\beta_t^D$ is:

$\mathop{\mathrm{argmin}}_{\{\beta_t^D,\beta_t^F\}}\sum_{X_t}L\left[\left(f_t^D\circ f_t^F\right)\left(X_t,\beta_t^D,\beta_t^F\right),Y_t\right]$ (2)

Since features in the source domain are more generalized and sufficiently cover our target domain, we assume that the feature spaces in the source and target domains are similar. However, our target labels are different ($\mathcal{Y}_s\neq\mathcal{Y}_t$) since we retrain the models on in-lab objects (cone, cube, and sphere). Here, we utilize two transfer learning strategies: (1) instance transfer, where the marginal distribution of source features differs from that of target features, and (2) feature representation transfer, where we fit the source feature domain into the target feature domain (Fig. 5).

To implement instance transfer, we transfer $(\beta_s^D,\beta_s^F)\rightarrow(\beta_t^D,\beta_t^F)$, where $\beta_s^D$ and $\beta_s^F$ are the resultants from the source task functions, and fine-tune $(\beta_t^D,\beta_t^F)$ using Eq. 2. For feature representation transfer, we separate the source task into two components (backbone and head) and transfer the entire source task's backbone into the target task. Specifically, we transfer $\beta_s^F\rightarrow\beta_t^F$ and train $\beta_t^D$ using Eq. 2. We also train with randomly initialized weights as a third strategy for accuracy comparisons in Sec. VI.
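The PyTorch sketch below illustrates the two transfer strategies on a toy detector split into a backbone ($\beta^F$) and a head ($\beta^D$). The TinyDetector class, its layer sizes, and the training setup are illustrative assumptions, not the models or code used in this work; in practice, the source model is pretrained on a richer label space and its head is re-initialized to match the target labels (cone, cube, sphere).

import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # backbone plays the role of beta^F, head the role of beta^D
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)  # stands in for the detection head

    def forward(self, x):
        return self.head(self.backbone(x))

def transfer(source: TinyDetector, target: TinyDetector, strategy: str):
    """Copy source weights into the target according to the chosen strategy."""
    if strategy == "instance":               # fine-tuning: copy beta_s^D and beta_s^F
        target.load_state_dict(source.state_dict())
    elif strategy == "feature":              # copy only the backbone beta_s^F and freeze it
        target.backbone.load_state_dict(source.backbone.state_dict())
        for p in target.backbone.parameters():
            p.requires_grad = False
    # strategy == "random": keep the random initialization (third baseline)

# Usage: only parameters with requires_grad=True are optimized by Eq. 2.
source = TinyDetector(num_classes=3)         # pretend this was pretrained on the source domain
target = TinyDetector(num_classes=3)         # cone, cube, sphere
transfer(source, target, strategy="feature")
optimizer = torch.optim.SGD(
    [p for p in target.parameters() if p.requires_grad], lr=1e-3)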

Figure 6: Validation losses with different transfer learning strategies and random weights training on MobileNetv1, MobileNetv2 Lite, VGG-16, and SqueezeNet models.

V-B Depth Estimation

We obtain depth images and RGB images simultaneously from the Intel RealSense D435i camera, which also handles the depth-image creation process well, including camera calibration, image rectification, and disparity computation.

Figure 7: AP for each class and mAP among all classes on MobileNetv1, MobileNetv2 Lite, VGG-16, SqueezeNet, and YOLOv7-tiny models. All models are trained with three different strategies, as described in Sec. V.

As the predicted bounding box also covers regions outside the detected object, averaging the depth over the whole bounding box would incur estimation errors. We therefore shrink the bounding box to a small region at its center and calculate the estimated depth of the object, $D$, by averaging the depth values of the pixels in the scaled region as follows:

$D = (w \times h)^{-1}\left[\sum_{i=x_0-w/2}^{x_0+w/2}\sum_{j=y_0-h/2}^{y_0+h/2} d(i,j)\right]$ (3)

where $d(i,j)$ returns the depth value at pixel $(i,j)$, $w$ and $h$ indicate the width and height of the scaled region, respectively, and $(x_0,y_0)$ are the center coordinates of the bounding box.

Note that, in this framework, we choose the scaled region as $20\times 20$ pixels to guarantee optimal selection for computational power while maintaining the correctness of depth estimation.
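A minimal sketch of the depth-averaging step in Eq. 3 is shown below, assuming the depth frame is a NumPy array in meters and $(x_0, y_0)$ is the bounding-box center; the function name and argument layout are illustrative, not the framework's API.

import numpy as np

def estimate_depth(depth: np.ndarray, x0: int, y0: int, w: int = 20, h: int = 20) -> float:
    """Average depth over a w x h region centered on the detected bounding box."""
    i0, i1 = max(x0 - w // 2, 0), min(x0 + w // 2, depth.shape[1])
    j0, j1 = max(y0 - h // 2, 0), min(y0 + h // 2, depth.shape[0])
    region = depth[j0:j1, i0:i1]
    valid = region[region > 0]          # depth cameras report 0 for missing depth
    return float(valid.mean()) if valid.size else float("nan")

# Example: a synthetic 480x640 depth frame with the object ~0.4 m away.
frame = np.full((480, 640), 0.4, dtype=np.float32)
print(estimate_depth(frame, x0=320, y0=240))   # ~0.4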

VI Evaluation & Demonstration

We evaluate the add-on components for system completeness, such as 2D and 3D vision, hardware configurations, and their performance. Other components, such as mapping, localization, navigation, and planning, are supported by well-established ROS packages: the 2D Navigation Stack and MoveIt!.

VI-A Data Preparation

To verify the correctness of our proposed method, we first collect data from three in-lab objects: cones, cubes, and spheres. We then label them with annotations in Pascal VOC format and split our custom dataset into three sets: training set (70%), validation set (20%), and test set (10%).

Figure 8: Experiment on simultaneous object detection using YOLOv7-tiny and depth estimation on Intel RS D435i camera.

VI-B Evaluation Metrics

To evaluate how well the transfer learning strategies perform during the training periods, we calculate the validation loss, average precision (AP), and mean average precision (mAP) for MobileNetv1 [23], MobileNetv2 [24], SqueezeNet [25], VGG-16 [26], and YOLOv7 [17]. We train each model with three different strategies, as illustrated in Fig. 5. Data augmentation is used in a preprocessing procedure to enrich the training dataset, including rotation, cropping, and color distortion. We also evaluate the detection performance on test sets using AP and mAP calculated based on multiple intersections over union (IoU) thresholds.

For each test image, the IoU is defined as $IoU=(B_p\cap B_t)/(B_p\cup B_t)$, where $B_p$ and $B_t$ are the predicted and ground-truth bounding boxes, respectively. If the IoU of a prediction is greater than or equal to the predefined threshold, that prediction is classified as a true positive (TP); otherwise, it is counted as a false positive (FP). Since the testing stage produces multiple detections among the classes, we first sort the confidence scores of all predictions in descending order, so that recall ($R$) ascends while precision ($P$) generally descends along the P-R curve. We then compute $P$ and $R$ for the detection proposals of each class as $P=cTP/(cTP+cFP)$ and $R=cTP/GT$, where $cTP$ and $cFP$ denote the cumulative TP and cumulative FP, respectively, and $GT$ is the number of ground-truth instances of that class. The AP for class $j$ is then $AP_j=\int_{0}^{1}p(r)\,dr$. However, since the values we obtain are discrete, we define the interpolated area under the precision curve along the recall axis of the P-R curve, in terms of recall levels $r_i$, as:

$AP_j = \sum_{i=1}^{n-1}\left(r_{i+1}-r_i\right)\cdot\max_{r'_{i+1}\geq r_{i+1}} p(r'_{i+1})$ (4)
Figure 9: Estimated depth measurements compared to ground truth distances from kinematic transformation.
Figure 10: Experiment setup of the Baxter robot simultaneously grasping the target-detected object (cube) and estimating its depth at (a) 0.3 meters, (b) 0.4 meters, and (c) 0.5 meters away from the mounted camera, respectively.
Figure 11: Experiment setup of the Baxter robot (a) simultaneously recognizing the target person by tracking facial landmarks and building the environment map, (b) starting following the tracked person, and (c) keeping following that person.

The IoU thresholds range from 0.01 to 1.00 with a step of 0.01. After evaluating detection proposals on all IoU thresholds, we calculate the mAP for each model as below:

$mAP = \frac{\sum_{k=1}^{c} AP_k}{c}$ (5)

where $c$ is the number of classes in our training set.

The AP and mAP results on multiple IoU thresholds ranging from 0.01 to 1.00 with a step of 0.01 are shown in Fig. 7.
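The small sketch below computes the per-class AP of Eq. 4 and the mAP of Eq. 5, assuming predictions have already been matched to ground truth at a fixed IoU threshold; the input format and example numbers are illustrative assumptions, not our dataset.

import numpy as np

def average_precision(scores, is_tp, num_gt):
    """scores: confidence per prediction; is_tp: 1 for TP, 0 for FP; num_gt: GT count."""
    order = np.argsort(-np.asarray(scores))          # descending confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    ctp, cfp = np.cumsum(tp), np.cumsum(1.0 - tp)    # cumulative TP / FP
    precision = ctp / (ctp + cfp)
    recall = ctp / max(num_gt, 1)
    # Interpolated area under the P-R curve (Eq. 4)
    r = np.concatenate(([0.0], recall))
    p = np.concatenate(([1.0], precision))
    ap = 0.0
    for i in range(len(r) - 1):
        ap += (r[i + 1] - r[i]) * p[i + 1:].max()    # max precision at recall >= r_{i+1}
    return ap

# mAP (Eq. 5): mean of per-class APs over two toy classes.
aps = [average_precision([0.9, 0.8, 0.4], [1, 0, 1], num_gt=2),
       average_precision([0.7, 0.6], [1, 1], num_gt=2)]
print(sum(aps) / len(aps))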

VI-C Results

Training Performance: We train MobileNetv1, MobileNetv2, SqueezeNet, VGG-16, and YOLOv7 on an NVIDIA RTX 3090 (24 GB) GPU with the three strategies mentioned in Sec. V for 1000 epochs. The trained models start to converge at the 200th epoch and finally converge at the 800th epoch, taking approximately 39 minutes. Note that YOLOv7 is trained with a different loss function than the other models, so we only compare YOLOv7 in prediction performance. Fine-tuning gives the lowest loss among the three methods, and VGG-16 obtains the best results among the models. Indeed, VGG-16 learns best with the fine-tuning transfer learning strategy, as shown in Fig. 6.

Testing Performance: We test the detectors on commodity computers using Intel processors (i.e., Core i3-3217U and HD Graphics 4000). As in the training process, the fine-tuning strategy gives the highest accuracy, while the feature extraction transfer learning strategy gives the best result except for YOLOv7, as shown in Fig. 7. The feature extraction transfer learning strategy performs better than the randomly initialized weights strategy for YOLOv7. When detecting a sphere, there is a slight difference in precision between the feature extraction transfer and fine-tuning transfer strategies. Fig. 7 also reveals that the source feature extraction in YOLOv7 works well with objects in our target domain, while the other models fail to extract features from objects in our target domain. Lastly, YOLOv7 achieves the highest precision at the maximum IoU threshold, while feature extraction transfer learning does not work for SqueezeNet.

Network     | #Params    | CPU (fps)    | Intel GPU (fps) | VPU (fps)
MobileNetv1 | 6,883,296  | 14.87 ± 0.12 | 19.37 ± 0.23    | 11.74 ± 0.07
MobileNetv2 | 3,087,328  | 17.35 ± 0.19 | 19.96 ± 0.22    | 10.15 ± 0.05
SqueezeNet  | 1,639,648  | 18.35 ± 0.22 | 22.53 ± 0.28    | 14.82 ± 0.11
VGG-16      | 24,013,744 | 2.49 ± 0.01  | 5.15 ± 0.02     | 2.22 ± 0.005
YOLOv7-tiny | 6,652,669  | 12.59 ± 0.07 | 21.47 ± 0.22    | 13.67 ± 0.08
TABLE I: Detection performance of models in frames per second (fps) on different hardware configurations (implemented using OpenVINO) with a confidence level of 95%.

Hardware Configuration: We run each detection model on the CPU, GPU (Intel), and VPU (Intel NCS2) for $n=300$ samples and calculate confidence intervals: $CI=\overline{fps}\pm z_{\alpha/2}\cdot(\sigma/\sqrt{n})$, where $\overline{fps}$ is the mean frame rate (fps), $\sigma$ is the standard deviation, and $z_{\alpha/2}$ is the critical value for a 95% confidence level. The test is implemented using OpenVINO, which enables ML models to run on the Intel onboard GPU. We also test on a VPU interfacing with a computer via USB. The onboard GPU outperforms the other computing devices in terms of frame rate. MobileNetv2 outperforms YOLOv7 when tested on the CPU but underperforms YOLOv7 on the onboard GPU and VPU. Lastly, the VPU maintains the most stable performance due to its lowest variance (Table I); the VPU achieves this low variance because, unlike the CPU or onboard GPU, it does not share its workload with other processes.
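A tiny sketch of the 95% confidence interval used for Table I is given below, assuming a list of measured per-run frame rates (fps) collected over n = 300 runs; the sample data are synthetic.

import numpy as np

def confidence_interval(fps_samples, z=1.96):        # z_{alpha/2} for 95%
    fps = np.asarray(fps_samples, dtype=float)
    mean, sigma = fps.mean(), fps.std(ddof=1)        # sample standard deviation
    half_width = z * sigma / np.sqrt(len(fps))
    return mean, half_width                          # reported as mean ± half_width

mean, hw = confidence_interval(np.random.normal(19.4, 1.8, size=300))
print(f"{mean:.2f} ± {hw:.2f} fps")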

Depth Estimation: We use the technique described in Eq. 3 to estimate the depth of detected objects (Fig. 8). To generate ground-truth distances, we teach the robot to grasp and hold an object in its gripper and then calculate the distance between the robot and that object using the kinematic transformation. Fig. 9 shows that the minimum error is 1.00 cm, the maximum error is 3.00 cm, and the mean error is 1.75 cm.

VI-D Demonstration

The demonstration video includes two scenarios: (1) the robot grasps an object while estimating the depth of the detected object (Fig. 10), and (2) the robot performs SLAM while following a person using a face recognition module (Fig. 11) running on the Intel NCS2: https://youtu.be/q4oz9Rixbzs.

VII Current Limitations

As our implementation is tied to ROS APIs and OpenVINO, the proposed design is only compatible with these software stacks. Porting to other software systems requires rewriting the adaptation layer, but the other components can be retained because they are independent of this framework.

VIII Conclusions & Future Works

This work proposes a software-hardware framework for mobile cobots focusing on building and optimizing 2D and 3D perception with multiple pieces of commodity hardware. We build the framework on top of multiple ROS distributions, Linux versions, and OpenVINO. By design, the framework supports multiple hardware devices and finds the optimal configurations of input devices/sensors and computing devices. We mathematically model our transfer learning strategies and evaluate them on different computing devices. We then test our framework on a two-arm Baxter robot with 7-DOF arms, performing 2D detection and 3D depth estimation. An end-user application is also introduced for system completeness and to facilitate software reusability. We reserve advanced techniques in robot 3D perception, such as segmentation, detection, and recognition from a point cloud perspective, for future work.

IX Acknowledgment

We would like to thank Christopher Collander (LEARN Lab) for his initial support in this project.

References

  • [1] K. Hsiao, P. Nangeroni, M. Huber, A. Saxena, and A. Y. Ng, “Reactive grasping using optical proximity sensors,” in 2009 IEEE International Conference on Robotics and Automation.   IEEE, 2009, pp. 2098–2105.
  • [2] B. Hmedan, D. Kilgus, H. Fiorino, A. Landry, and D. Pellier, “Adapting cobot behavior to human task ordering variability for assembly tasks,” in The International FLAIRS Conference Proceedings, vol. 35, 2022.
  • [3] J. Vice, G. Sukthankar, and P. K. Douglas, “Leveraging evolutionary algorithms for feasible hexapod locomotion across uneven terrain,” arXiv preprint arXiv:2203.15948, 2022.
  • [4] S. Macenski, T. Foote, B. Gerkey, C. Lalancette, and W. Woodall, “Robot operating system 2: Design, architecture, and uses in the wild,” Science Robotics, vol. 7, no. 66, p. eabm6074, 2022.
  • [5] A. Biondi, F. Nesti, G. Cicero, D. Casini, and G. Buttazzo, “A safe, secure, and predictable software architecture for deep learning in safety-critical systems,” IEEE Embedded Systems Letters, vol. 12, no. 3, pp. 78–82, 2019.
  • [6] E. Nazarova, O. Sautenkov, M. A. Cabrera, J. Tirado, V. Serpiva, V. Rakhmatulin, and D. Tsetserukou, “Cobotar: interaction with robots using omnidirectionally projected image and dnn-based gesture recognition,” in 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC).   IEEE, 2021, pp. 2590–2595.
  • [7] T. Dang, K. Nguyen, and M. Huber, “Perfc: An efficient 2d and 3d perception software-hardware framework for mobile cobot,” in The International FLAIRS Conference Proceedings, vol. 36, 2023.
  • [8] A. Rendiniello, A. Remus, I. Sorrentino, P. K. Murali, D. Pucci, M. Maggiali, L. Natale, S. Traversaro, E. Villagrossi, A. Polo et al., “A flexible software architecture for robotic industrial applications,” in 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), vol. 1.   IEEE, 2020, pp. 1273–1276.
  • [9] A. H. Qureshi, A. Simeonov, M. J. Bency, and M. C. Yip, “Motion planning networks,” in 2019 International Conference on Robotics and Automation (ICRA).   IEEE, 2019, pp. 2118–2124.
  • [10] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours,” in 2016 IEEE international conference on robotics and automation (ICRA).   IEEE, 2016, pp. 3406–3413.
  • [11] M. Veloso, J. Biswas, B. Coltin, and S. Rosenthal, “Cobots: Robust symbiotic autonomous mobile service robots,” in Twenty-fourth international joint conference on artificial intelligence, 2015.
  • [12] N. Bellotto and H. Hu, “Multisensor-based human detection and tracking for mobile service robots,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 1, pp. 167–181, 2008.
  • [13] Z. Kingston, M. Moll, and L. E. Kavraki, “Exploring implicit spaces for constrained sampling-based planning,” The International Journal of Robotics Research, vol. 38, no. 10-11, pp. 1151–1178, 2019.
  • [14] J. Ichnowski, M. Danielczuk, J. Xu, V. Satish, and K. Goldberg, “Gomp: Grasp-optimized motion planning for bin picking,” in 2020 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2020, pp. 5270–5277.
  • [15] S. Chitta, I. Sucan, and S. Cousins, “Moveit![ros topics],” IEEE Robotics & Automation Magazine, vol. 19, no. 1, pp. 18–19, 2012.
  • [16] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14.   Springer, 2016, pp. 21–37.
  • [17] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 7464–7475.
  • [18] J. Mao, S. Shi, X. Wang, and H. Li, “3d object detection for autonomous driving: a review and new outlooks,” arXiv preprint arXiv:2206.09474, 2022.
  • [19] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 652–660.
  • [20] T. Dang, T. Tran, K. Nguyen, T. Pham, N. Pham, T. Vu, and P. Nguyen, “iotree: a battery-free wearable system with biocompatible sensors for continuous tree health monitoring,” in Proceedings of the 28th Annual International Conference on Mobile Computing And Networking, 2022, pp. 769–771.
  • [21] G. Grisetti, C. Stachniss, and W. Burgard, “Improving grid-based slam with rao-blackwellized particle filters by adaptive proposals and selective resampling,” in Proceedings of the 2005 IEEE international conference on robotics and automation.   IEEE, 2005, pp. 2432–2437.
  • [22] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and Q. He, “A comprehensive survey on transfer learning,” Proceedings of the IEEE, vol. 109, no. 1, pp. 43–76, 2020.
  • [23] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
  • [24] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 4510–4520.
  • [25] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size,” arXiv preprint arXiv:1602.07360, 2016.
  • [26] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [27] J. Kim, T. Dang, J. Jeon, and B. Yeom, “Design of a seamless gateway for Mechatrolink?,” 2013 IEEE International Conference on Industrial Technology (ICIT), 2013.