
Occlusion-Free Image-Based Visual Servoing using Probabilistic Control Barrier Certificates

Yanze Zhang, Yupeng Yang, and Wenhao Luo
Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223 USA (e-mail: yzhang94, yyang52, wenhao.luo@uncc.edu).
Abstract

Image-based visual servoing (IBVS) is a widely used approach in robotics that employs visual information to guide robots towards desired positions. However, occlusions can cause visual servoing to fail and degrade the control performance, since they obstruct the vision feature points that provide essential visual feedback. In this paper, we propose a Control Barrier Function (CBF) based controller that enables occlusion-free IBVS tasks by automatically adjusting the robot’s configuration to keep the feature points in the field of view and away from obstacles. In particular, to account for measurement noise of the feature points, we develop Probabilistic Control Barrier Certificates (PrCBC) using control barrier functions that encode the chance-constrained occlusion avoidance constraints under uncertainty into a deterministic admissible control space for the robot; the resulting robot configurations ensure that the feature points stay free from occlusion by obstacles with a predefined satisfying probability. By integrating these constraints with a Model Predictive Control (MPC) framework, a sequence of optimized control inputs can be derived that achieves the primary IBVS task while enforcing occlusion avoidance during robot movements. Simulation results validate the performance of our proposed method.

This work was supported in part by the Faculty Research Grant award at the University of North Carolina at Charlotte.

1 Introduction

Image-Based Visual Servoing (IBVS) is a key problem in robotics that involves using image data to control the robot’s movement to a desired location. Specifically, this involves selecting specific feature points in images to generate a sequence of motions that move the robot in response to observations from a camera, ultimately reaching a goal configuration in the real world (Chaumette et al., 2016). Many IBVS methods have been proposed and used in real-world applications such as the Visual Servoing Platform (Marchand et al., 2005), the Amazon Picking Challenge (Huang and Mok, 2018), and surgery (Li et al., 2020). However, these methods often assume that no obstacles obstruct the camera’s field of view (FoV), allowing the feature points to remain visible during servoing. When obstacles enter the camera’s FoV, direct feedback from objects in the scene may be lost, making navigation more difficult.

To address this problem, some researchers focus on using environment knowledge to design robot controllers that handle occlusion during visual servoing. One idea is to use feature estimation methods to recover control properties during occlusions, either by estimating point feature depth with nonlinear observers (De Luca et al., 2008) or by using a geometric approach to reconstruct a dynamic object characterized by point and line features (Fleurmond and Cadenat, 2016). Another idea is to plan a camera trajectory that avoids occlusion by other obstacles altogether. Kazemi et al. (2010) presented an overview of the main visual servoing path-planning techniques that can guarantee occlusion-free and collision-free trajectories while accounting for FoV limitations. However, these methods rely on accurate knowledge of the (partial) workspace and may require long computation times.

Other researchers used potential fields (Mezouar and Chaumette, 2002) or variable-weighting Linear Quadratic control laws (Kermorgant and Chaumette, 2013) to preserve visibility and avoid self-occlusions. Although these approaches are suitable for real-time implementation, they may exhibit local minima and unwanted oscillations. Moreover, the existing methods generally do not address uncertainty in the environment, which can easily jeopardize the performance guarantee in the presence of realistic factors such as measurement noise on the image.

On the other hand, Model Predictive Control (MPC) has been widely used in robotic systems to generate a sequence of control inputs by solving a finite-time-horizon optimization. It can handle constraints and non-minimum-phase processes and implement robust control even when the system dynamics are time-varying. A large body of work on MPC-based visual servoing has been studied (Saragih et al., 2019; Nicolis et al., 2018).

Control Barrier Function (CBF) based methods have been used in safety-critical applications such as automobiles (Xu et al., 2017) and human-robot interaction (Landi et al., 2019), thanks to their provable theoretical guarantees that render a desired set forward invariant. CBFs have been used in visual servoing to keep the target in the FoV (Zheng et al., 2019). However, that method is not suitable for handling occlusion problems and does not account for possible measurement uncertainties.

In this paper, we present a control method that provides a chance-constrained occlusion-free guarantee for IBVS tasks under camera measurement uncertainty through the use of Probabilistic Control Barrier Certificates (PrCBC). The key idea of PrCBC, adopted in our IBVS problem, is to enforce the chance-constrained occlusion avoidance between the feature points and the obstacle in the camera view through deterministic constraints over an existing robotic controller, so that occlusion-free movements can be achieved with a satisfying probability. We further integrate the PrCBC control constraints into a standard Model Predictive Control (MPC) framework. Without loss of generality, we take the general case of a 6-DOF robot arm with an eye-in-hand configuration as an example for the IBVS task, and provide simulation results on this platform to demonstrate the effectiveness of our proposed approach. Our key insight is that the proposed PrCBC can filter out unsatisfying robot control actions from the primary IBVS controller that may lead to occlusion by the obstacles, while leveraging the optimized controller from the integrated MPC framework. This insight leads to the following contributions:

  1. We present a novel chance-constrained occlusion avoidance method for visual servoing tasks using Probabilistic Control Barrier Certificates (PrCBC) under camera measurement uncertainty, with theoretical analysis of the performance guarantee.

  2. We integrate PrCBC with Model Predictive Control (MPC) to provide a high-level planner with minimally invasive control behavior that guarantees occlusion avoidance for the IBVS task, and validate the approach through extensive simulation results.

2 Preliminaries

2.1 Feature Points Dynamics

Consider a moving camera fixed on the robot end-effector in the eye-in-hand configuration, viewing objects in the workspace. Let $\mathcal{F}_{c}$ be a right-handed orthogonal coordinate frame whose origin is at the principal point of the camera and whose $Z$ axis is collinear with the optical axis of the camera. Assuming $\mathbf{q}_{i}=[x_{i},y_{i}]^{\rm T}\in\mathbb{R}^{2}$ is the pixel coordinate of a static point in the image plane, we can define its normalized image plane coordinate $\bar{\mathbf{q}}_{i}\in\mathbb{R}^{2}$ according to the camera projection model (Corke and Khatib, 2011):

\bar{\mathbf{q}}_{i}=\left[\frac{X_{i}}{Z_{i}},\frac{Y_{i}}{Z_{i}}\right]^{\rm T}=\left[\begin{array}{cc}f&0\\0&f\end{array}\right]^{-1}\left(\mathbf{q}_{i}-\left[\begin{array}{c}p_{x}\\p_{y}\end{array}\right]\right) \quad (1)

where $f,p_{x},p_{y}\in\mathbb{R}$ are camera intrinsic parameters and $[X_{i},Y_{i},Z_{i}]^{\rm T}$ is the 3D coordinate of the point in $\mathcal{F}_{c}$.

The image vision feature can be constructed from the normalized image plane coordinates such that $\mathbf{p}_{i}=\bar{\mathbf{q}}_{i}$. Assume there are $m$ feature points extracted from the image, and denote the state vectors of the current and target vision feature points as $\mathbf{s}(t)=[\mathbf{p}_{1}(t);\cdots;\mathbf{p}_{m}(t)]\in\mathbb{R}^{2m}$ at time $t$ and $\mathbf{s}^{*}=[\mathbf{p}_{1}^{*};\cdots;\mathbf{p}_{m}^{*}]\in\mathbb{R}^{2m}$, respectively. The IBVS task then aims to regulate the image feature error vector $\mathbf{e}(t)=\mathbf{s}(t)-\mathbf{s}^{*}$ to zero through robot movements, which drives the robot to the desired position.

According to (Chaumette and Hutchinson, 2006), the dynamics of the $m$ feature points can be expressed as:

\dot{\mathbf{s}}=\mathbf{L}_{s}(Z)\mathbf{V}_{c} \quad (2)

where $\mathbf{L}_{s}=[\mathbf{L}_{s_{1}}\cdots\mathbf{L}_{s_{m}}]^{\rm T}\in\mathbb{R}^{2m\times d}$ is the image interaction matrix, which can be computed following (Chaumette and Hutchinson, 2006); $\mathbf{V}_{c}\in\mathbb{R}^{d}$ is the robot motion control vector expressing the camera translational and rotational velocity in the workspace; and $Z=[Z_{1}\cdots Z_{m}]^{\rm T}\in\mathbb{R}^{m}$ denotes the depths of the $m$ feature points. In this paper, we assume the depth information $Z$ of the feature points has been acquired. Therefore, in IBVS, (2) represents the system dynamics of the $m$ feature points with $\mathbf{s}$ as the system state and $\mathbf{V}_{c}$ as the system control input.

Since $\mathbf{s}^{*}$ is predefined and time-independent, we have:

\dot{\mathbf{e}}=\dot{\mathbf{s}}=\mathbf{L}_{s}\mathbf{V}_{c} \quad (3)

According to (Chaumette and Hutchinson, 2006), the unconstrained gradient-based IBVS controller driving $\mathbf{e}\to 0$ can be defined as:

\mathbf{V}_{c}=-\alpha\,\mathbf{L}_{s}^{+}\mathbf{e} \quad (4)

where $\alpha\in\mathbb{R}$ is a predefined constant control gain, and $\mathbf{L}_{s}^{+}\in\mathbb{R}^{d\times 2m}$ is the pseudo-inverse of $\mathbf{L}_{s}$, given by $\mathbf{L}_{s}^{+}=(\mathbf{L}_{s}^{\rm T}\mathbf{L}_{s})^{-1}\mathbf{L}_{s}^{\rm T}$. To ensure local asymptotic stability, we require $\mathbf{L}_{s}^{\rm T}\mathbf{L}_{s}>0$, i.e., at least three feature points should be available in the FoV.
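For concreteness, the following Python sketch (our illustration, not code from the paper) computes the unconstrained controller (4) for point features, using the standard point-feature interaction matrix of (Chaumette and Hutchinson, 2006) that is detailed later in (21):

```python
import numpy as np

def interaction_matrix(p, Z):
    """2x6 point-feature interaction matrix for a normalized image point
    p = (x, y) at depth Z (Chaumette and Hutchinson, 2006); cf. (21)."""
    x, y = p
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_controller(s, s_star, depths, alpha=0.5):
    """Unconstrained gradient-based IBVS law V_c = -alpha * L_s^+ e, Eq. (4).
    s, s_star: (m, 2) current/target normalized features; depths: (m,)."""
    e = (s - s_star).reshape(-1)                 # stacked error, shape (2m,)
    L = np.vstack([interaction_matrix(p, Z) for p, Z in zip(s, depths)])
    return -alpha * np.linalg.pinv(L) @ e        # 6-DOF camera twist

# Example with m = 4 feature points (at least three are required):
s      = np.array([[0.1, 0.1], [0.3, 0.1], [0.3, 0.3], [0.1, 0.3]])
s_star = np.array([[0.0, 0.0], [0.2, 0.0], [0.2, 0.2], [0.0, 0.2]])
V_c = ibvs_controller(s, s_star, depths=np.ones(4))
```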

2.2 Model Predictive Control Policy

To optimize the IBVS controller over a finite time horizon, the MPC policy is used to construct an $N$-step time-horizon planner rendering a sequence of candidate control actions. Consider the IBVS problem of regulating the current feature points to the target feature points through robot movements. According to the error dynamics (3), the discrete-time control system can be described by:

\mathbf{e}(t+1)=\mathbf{I}\,\mathbf{e}(t)+\mathbf{L}_{s}\mathbf{V}_{c}(t)=g(\mathbf{e}(t),\mathbf{V}_{c}(t)) \quad (5)

where $\mathbf{I}\in\mathbb{R}^{2m\times 2m}$ is the identity matrix and $\mathbf{e}(t)\in\mathbb{R}^{2m}$ represents the image feature error of the $m$ feature points at time step $t$. The system state is $\mathbf{s}(t)$, with $\mathbf{V}_{c}(t)\in\mathcal{U}\subset\mathbb{R}^{d}$ as the control input, and $g$ is locally Lipschitz.

Therefore, the finite-time optimal control problem can be solved at time step $t$ using the following policy $\pi$:

\begin{aligned}
\min_{\mathbf{V}_{c}(t:t+N-1)}\;&\Big\{\sum_{k=0}^{N-1}\big(\mathbf{e}(t+k|t)^{\rm T}\mathbf{Q}\,\mathbf{e}(t+k|t)+\mathbf{V}_{c}(t+k|t)^{\rm T}\mathbf{R}\,\mathbf{V}_{c}(t+k|t)\big)\\
&\quad+\mathbf{e}(t+N|t)^{\rm T}\mathbf{F}\,\mathbf{e}(t+N|t)\Big\} &(6)\\
\text{s.t.}\;\;&\mathbf{e}(t+k+1|t+k)=g(\mathbf{e}(t+k|t),\mathbf{V}_{c}(t+k|t)) &(7)\\
&\mathbf{e}(t+k|t)\in\mathcal{X},\;\mathbf{V}_{c}(t+k|t)\in\mathcal{U},\;k=0,\ldots,N-1 &(8)\\
&\mathbf{e}(t|t)=\mathbf{e}(t) &(9)
\end{aligned}

where $N$ is the prediction time horizon, and $\mathbf{Q}\in\mathbb{R}^{2m\times 2m}$, $\mathbf{R}\in\mathbb{R}^{d\times d}$, and $\mathbf{F}\in\mathbb{R}^{2m\times 2m}$ are weighting matrices that trade off small control input magnitude (larger $\mathbf{R}$) against fast response (larger $\mathbf{Q}$ and $\mathbf{F}$). $\mathbf{e}(t+k|t)$ denotes the error vector at time step $t+k$ predicted at time step $t$, obtained from the current image feature error $\mathbf{e}(t)$ and the control inputs $\mathbf{V}_{c}(t:t+N-1)$. Finally, the optimized control sequence is obtained as $\mathbf{V}_{c}^{t:t+N-1}$.
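As an illustrative sketch (the paper's implementation uses IPOPT; here we assume `cvxpy` and example weight choices), the policy $\pi$ becomes a convex QP when $\mathbf{L}_{s}$ is held constant over the prediction horizon:

```python
import cvxpy as cp
import numpy as np

def mpc_ibvs(e0, L_s, N=10, v_max=0.5):
    """Finite-horizon IBVS MPC (6)-(9). Freezing L_s over the horizon makes
    the prediction model e(k+1) = e(k) + L_s V(k) linear, so the problem is
    a convex QP. Returns the optimized control sequence, shape (d, N)."""
    n, d = L_s.shape                              # n = 2m, d = 6
    Q, R, F = np.eye(n), 0.1 * np.eye(d), 10.0 * np.eye(n)  # example weights
    e = cp.Variable((n, N + 1))
    V = cp.Variable((d, N))
    cost = cp.quad_form(e[:, N], F)               # terminal cost
    constr = [e[:, 0] == e0]                      # initial condition (9)
    for k in range(N):
        cost += cp.quad_form(e[:, k], Q) + cp.quad_form(V[:, k], R)
        constr += [e[:, k + 1] == e[:, k] + L_s @ V[:, k],  # dynamics (7)
                   cp.norm(V[:, k], 2) <= v_max]            # input set U (8)
    cp.Problem(cp.Minimize(cost), constr).solve()
    return V.value                                # column 0 is V_c^{mpc}(t)
```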

2.3 Obstacle Model and Occlusion Avoidance

Without loss of generality, consider an obstacle $O$ moving in the workspace that may occlude the feature points in the camera view. We model the obstacle as a rigid sphere with radius $R$ in the workspace. Similar to the feature points, the normalized image plane coordinate of the obstacle center and the normalized obstacle radius are defined as $\mathbf{s}_{\mathrm{o}}(t)=[\frac{X_{\mathrm{o}}}{Z_{\mathrm{o}}(t)},\frac{Y_{\mathrm{o}}}{Z_{\mathrm{o}}(t)}]^{\rm T}\in\mathbb{R}^{2}$ and $R_{n}(t)=\frac{R}{Z_{\mathrm{o}}(t)}$, respectively. To simplify notation, we use $\mathbf{s}_{\mathrm{o}}$ and $R_{n}$ to denote the real-time state of the obstacle center and radius in the normalized image plane.

In this way, for pair-wise occlusion avoidance between feature point $i$ and the obstacle $O$ in the normalized image plane, the occlusion-free condition and state set can be defined as follows:

h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}})=\left\|\mathbf{s}_{i}-\mathbf{s}_{\mathrm{o}}\right\|^{2}-R_{n}^{2},\;\forall i \quad (10)
\mathcal{H}^{c}_{i,\mathrm{o}}=\left\{\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}}\in\mathbb{R}^{2}\,|\,h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}})\geq 0,\;\forall i\right\} \quad (11)

The desired set of occlusion-free states over all feature points is thus defined as:

\mathcal{H}^{c}=\bigcap_{i}\mathcal{H}^{c}_{i,\mathrm{o}} \quad (12)
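These set definitions translate directly into code; a minimal sketch, with array shapes as our assumptions ($\mathbf{s}$ as an (m, 2) array):

```python
import numpy as np

def h_occ(s_i, s_o, R_n):
    """Barrier value (10): nonnegative iff feature point s_i lies outside
    the obstacle disk of normalized radius R_n centered at s_o."""
    return float(np.sum((s_i - s_o) ** 2) - R_n ** 2)

def occlusion_free(s, s_o, R_n):
    """Membership test for the intersection set H^c in (11)-(12)."""
    return all(h_occ(s_i, s_o, R_n) >= 0.0 for s_i in s)
```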

2.4 Occlusion-free Constraints using CBF

Control Barrier Functions (CBF) (Ames et al., 2019) have been widely applied to generate control constraints that render a set forward invariant, i.e., if the system state starts inside the set, it never leaves the set under a satisfying controller. The main idea of CBF is summarized in the following lemma.

Lemma 1

[summarized from Ames et al. (2019)] Given a dynamical system affine in control and a desired set $\mathcal{H}$ as the 0-superlevel set of a continuously differentiable function $h(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}$, the function $h$ is a control barrier function if there exists an extended class-$\mathcal{K}$ function $\kappa(\cdot)$ such that $\sup_{\mathbf{u}\in\mathcal{U}}[\dot{h}(\mathbf{x},\mathbf{u})+\kappa(h(\mathbf{x}))]\geq 0$ for all $\mathbf{x}$. The admissible control space $\mathcal{B}(\mathbf{x})$ of Lipschitz continuous controllers $\mathbf{u}$ rendering $\mathcal{H}$ forward invariant (i.e., keeping the system state $\mathbf{x}$ in $\mathcal{H}$ over time) thus becomes:

\mathcal{B}(\mathbf{x})=\{\mathbf{u}\in\mathcal{U}\,|\,\dot{h}(\mathbf{x},\mathbf{u})+\kappa(h(\mathbf{x}))\geq 0\} \quad (13)

Based on the occlusion avoidance condition (10)-(11) and Lemma 1, if the feature points are not occluded by the obstacle initially, then the admissible control space for the robot that keeps the feature points free from occlusion can be represented by the following control constraints over $\mathbf{V}_{c}$:

\mathcal{B}(\mathbf{s},\mathbf{s}_{\mathrm{o}})=\{\mathbf{V}_{c}\in\mathcal{U}\,|\,\dot{h}^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}},\mathbf{V}_{c})+\gamma(h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}}))\geq 0,\;\forall i\} \quad (14)

where $\mathcal{B}(\mathbf{s},\mathbf{s}_{\mathrm{o}})$ defines the Control Barrier Certificates (CBC) for feature point-obstacle occlusion avoidance, and $\gamma$ is a user-defined parameter in the particular choice of $\kappa(h(\mathbf{x}))=\gamma(h(\mathbf{x}))$ as in (Luo et al., 2020b). It renders the occlusion-free set $\mathcal{H}^{c}$ forward invariant, i.e., as long as the control input $\mathbf{V}_{c}$ lies in the set $\mathcal{B}(\mathbf{s},\mathbf{s}_{\mathrm{o}})$, the feature points will not be occluded by the obstacle at any time.

2.5 Chance Constraints for Measurement Uncertainty

Although (14) gives an explicit condition for occlusion avoidance given perfect knowledge of the actual position $\mathbf{s}$, measurement uncertainty on $\mathbf{s}$ makes it challenging to enforce (14), and even impossible when the measurement noise is unbounded, e.g., Gaussian noise. In this paper, we consider the realistic situation where the pixel coordinates of the extracted feature points acquired by the camera are corrupted by Gaussian noise, formulated as follows:

\hat{\mathbf{s}}_{i}=\mathbf{s}_{i}+\mathbf{w}_{i},\;\mathbf{w}_{i}\sim N(0,\Sigma_{i}),\qquad\hat{\mathbf{s}}_{\mathrm{o}}=\mathbf{s}_{\mathrm{o}}+\mathbf{w}_{\mathrm{o}},\;\mathbf{w}_{\mathrm{o}}\sim N(0,\Sigma_{\mathrm{o}}) \quad (15)

where $\mathbf{w}_{i},\mathbf{w}_{\mathrm{o}}\in\mathbb{R}^{2}$ are the Gaussian measurement noises, considered as independent random variables with zero mean and variances $\Sigma_{i},\Sigma_{\mathrm{o}}$. For the rest of the paper, we assume only the noisy positions $\hat{\mathbf{s}}_{i}$ and $\hat{\mathbf{s}}_{\mathrm{o}}$ of the extracted feature points are available when uncertainty is considered.

The occlusion avoidance condition in (10)-(11) can then be considered in a chance-constrained setting. Formally, given a user-defined probability threshold $\sigma\in(0,1)$ for satisfying occlusion avoidance, we have:

{\rm Pr}(\mathbf{s},\mathbf{s}_{\mathrm{o}}\in\mathcal{H}^{c})\geq\sigma \quad (16)

where ${\rm Pr}(\cdot)$ denotes the probability of an event. Note that as $\sigma$ approaches 1, the controller becomes more conservative in maintaining the probabilistic occlusion-free set. In Section 3.2, we discuss how to transform the chance constraints on the feature point states into a deterministic admissible control space $\mathcal{S}^{\sigma}(\hat{\mathbf{s}},\hat{\mathbf{s}}_{\mathrm{o}})\subset\mathcal{U}$ w.r.t. $\mathbf{V}_{c}$, so that occlusion-free performance is guaranteed with satisfying probability as implied by (16).

2.6 Problem Statement

To achieve occlusion-free IBVS, we first use the MPC as a planner to generate the unconstrained control sequence $\mathbf{V}_{c}^{t:t+N-1}$ at each time step $t$. To address the occlusion problem posed by the obstacle, we consider two situations and define the corresponding optimization problems.

Without Noise: First, we assume the camera can acquire accurate pixel coordinates from the image, i.e., we can extract the feature points precisely, and that the feature points are not occluded by the obstacle at the initial configuration. We can then formally impose the occlusion-free constraint with the following step-wise Quadratic Program (QP) at each time step $t$:

\mathbf{V}_{c}^{*}=\operatorname*{arg\,min}_{\mathbf{V}}\;||\mathbf{V}-\mathbf{V}_{c}^{mpc}||^{2} \quad (17)
\text{s.t.}\quad\mathbf{V}\in\mathcal{B}(\mathbf{s},\mathbf{s}_{\mathrm{o}}),\quad||\mathbf{V}||\leq V_{\mathrm{max}} \quad (18)

where $\mathbf{V}_{c}^{mpc}\in\mathbb{R}^{d}$ is the first step of the control sequence $\mathbf{V}_{c}^{t:t+N-1}$ generated by the MPC at time $t$, and $V_{\mathrm{max}}$ is the maximum velocity. The resulting $\mathbf{V}_{c}^{*}\in\mathbb{R}^{d}$ is the step-wise optimized controller executed at each $t$, which renders the occlusion-free set $\mathcal{H}^{c}$ in (12) forward invariant.

With Noise: In the presence of camera measurement noise, only the inaccurate vision feature information $\hat{\mathbf{s}}_{i}$ and $\hat{\mathbf{s}}_{\mathrm{o}}$ in the normalized image plane can be obtained for the IBVS task. In this case, the chance-constrained occlusion-free problem can be formally defined as:

\mathbf{V}_{c}^{*}=\operatorname*{arg\,min}_{\mathbf{V}}\;||\mathbf{V}-\mathbf{V}_{c}^{mpc}||^{2} \quad (19)
\text{s.t.}\quad\mathbf{V}\in\mathcal{S}^{\sigma}(\hat{\mathbf{s}},\hat{\mathbf{s}}_{\mathrm{o}}),\quad||\mathbf{V}||\leq V_{\mathrm{max}} \quad (20)

3 Method

We consider the IBVS task scenario shown in Fig. 1, where the camera is attached to the end-effector of a 6-DOF PUMA robot and four feature points (i.e., $m=4$) serve as the extracted image features guiding the IBVS task, as commonly assumed, e.g., in (Chaumette and Hutchinson, 2006). One moving obstacle with radius $R\in\mathbb{R}$ in the workspace may occlude the feature points during execution of the primary IBVS task. The objective of the IBVS task is therefore to move the PUMA robotic arm such that the feature point positions in the camera view converge to the desired positions while remaining unoccluded by the obstacle at all times.

Figure 1: The IBVS scenario with a moving obstacle.

With that, we have the feature point state $\mathbf{s}(t)=[\mathbf{p}_{1}(t);\mathbf{p}_{2}(t);\mathbf{p}_{3}(t);\mathbf{p}_{4}(t)]\in\mathbb{R}^{8}$ and the corresponding control input $\mathbf{V}_{c}=[v_{x},v_{y},v_{z},\omega_{x},\omega_{y},\omega_{z}]^{\rm T}$ expressing the 6-DOF motion of the camera, where $[v_{x},v_{y},v_{z}]^{\rm T}$ and $[\omega_{x},\omega_{y},\omega_{z}]^{\rm T}$ are the linear and angular velocities. $\mathbf{L}_{s}=[\mathbf{L}_{s_{1}};\mathbf{L}_{s_{2}};\mathbf{L}_{s_{3}};\mathbf{L}_{s_{4}}]\in\mathbb{R}^{8\times 6}$ is the interaction matrix, detailed in (21). Similar to $\mathbf{s}$, we can acquire the state of the obstacle center as $\mathbf{s}_{\mathrm{o}}(t)=\mathbf{p}_{\mathrm{o}}(t)$. According to (Chaumette and Hutchinson, 2006), the interaction matrix of the $i$-th feature point is:

{\mathbf{L}_{s_{i}}}=\left[\begin{array}{cccccc}\frac{-1}{Z_{i}}&0&\frac{p_{i,1}}{Z_{i}}&p_{i,1}p_{i,2}&-\left(1+p_{i,1}^{2}\right)&p_{i,2}\\0&\frac{-1}{Z_{i}}&\frac{p_{i,2}}{Z_{i}}&1+p_{i,2}^{2}&-p_{i,1}p_{i,2}&-p_{i,1}\end{array}\right] \quad (21)

where $[p_{i,1},p_{i,2}]^{\rm T}=\mathbf{p}_{i}$ and $Z_{i}$ is the depth of the $i$-th feature point in $\mathcal{F}_{c}$. Similarly, the interaction matrices of the obstacle center and obstacle radius, $\mathbf{L}_{\mathrm{o}}$ and $\mathbf{L}_{\mathrm{or}}$, can be derived as:

{\mathbf{L}_{\mathrm{o}}}=\left[\begin{array}{cccccc}\frac{-1}{Z_{\mathrm{o}}}&0&\frac{p_{\mathrm{o},1}}{Z_{\mathrm{o}}}&p_{\mathrm{o},1}p_{\mathrm{o},2}&-\left(1+p_{\mathrm{o},1}^{2}\right)&p_{\mathrm{o},2}\\0&\frac{-1}{Z_{\mathrm{o}}}&\frac{p_{\mathrm{o},2}}{Z_{\mathrm{o}}}&1+p_{\mathrm{o},2}^{2}&-p_{\mathrm{o},1}p_{\mathrm{o},2}&-p_{\mathrm{o},1}\end{array}\right] \quad (22)
{\mathbf{L}_{\mathrm{or}}}=\left[\begin{array}{cccccc}0&0&\frac{R}{Z_{\mathrm{o}}^{2}}&\frac{Rp_{\mathrm{o},2}}{Z_{\mathrm{o}}}&-\frac{Rp_{\mathrm{o},1}}{Z_{\mathrm{o}}}&0\end{array}\right] \quad (23)

where $\mathbf{p}_{\mathrm{o}}=[p_{\mathrm{o},1},p_{\mathrm{o},2}]^{\rm T}$ and $Z_{\mathrm{o}}$ is the obstacle depth in $\mathcal{F}_{c}$.
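In code, $\mathbf{L}_{\mathrm{o}}$ in (22) is just the point-feature matrix (21) evaluated at $(\mathbf{p}_{\mathrm{o}},Z_{\mathrm{o}})$ (e.g., `interaction_matrix(p_o, Z_o)` from the earlier sketch), and $\mathbf{L}_{\mathrm{or}}$ in (23) can be transcribed directly; a sketch:

```python
import numpy as np

def radius_interaction_matrix(p_o, Z_o, R):
    """1x6 interaction matrix L_or in (23): relates the camera twist V_c to
    the rate of change of the normalized obstacle radius R_n = R / Z_o."""
    x, y = p_o
    return np.array([[0.0, 0.0, R / Z_o**2, R * y / Z_o, -R * x / Z_o, 0.0]])
```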

Intuitively, the feature point-obstacle distance will be used to derive the analytical form of the occlusion-free set. Following the two situations discussed in Section 2.6, the CBC and PrCBC for occlusion-free performance are detailed as follows.

3.1 Control Barrier Certificates for Occlusion Avoidance

First, we consider the case where the camera can acquire accurate pixel coordinates from the workspace, i.e., there is no measurement noise. Given the occlusion-free condition in (10)-(11) and the form of CBC in (14), we now formally define the CBC for the occlusion avoidance condition as follows:

Theorem 3.1

Given the desired occlusion-free set $\mathcal{H}^{c}$ in (12) with the function $h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}})$ in (10), the admissible control space for Lipschitz continuous controllers defined below renders $\mathcal{H}^{c}$ forward invariant, i.e., keeps the feature points away from the obstacle in the image plane over time.

\mathcal{B}(\mathbf{s},\mathbf{s}_{\mathrm{o}})=\{\mathbf{V}_{c}\in\mathcal{U}\,|\,\dot{h}^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}},\mathbf{V}_{c})+\gamma(h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}}))\geq 0,\;\forall i\} \quad (24)
\dot{h}^{c}_{i,\mathrm{o}}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}},\mathbf{V}_{c})=2(\mathbf{s}_{i}-\mathbf{s}_{\mathrm{o}})^{\rm T}(\mathbf{L}_{s_{i}}-\mathbf{L}_{\mathrm{o}})\mathbf{V}_{c}-2R_{n}\mathbf{L}_{\mathrm{or}}\mathbf{V}_{c} \quad (25)
Proof.

We show that the proposed control barrier function $h^{c}_{i,\mathrm{o}}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}})$ in (10) is valid. As summarized in (Capelli and Sabattini, 2020), by Lemma 1 a function $h(\mathbf{x})$ is a valid CBF if it satisfies the following three conditions: (a) $h(\mathbf{x})$ is continuously differentiable, (b) the first-order time derivative of $h(\mathbf{x})$ depends explicitly on the control input $\mathbf{u}$ (i.e., $h(\mathbf{x})$ is of relative degree one), and (c) there exists an extended class-$\mathcal{K}$ function $\kappa(\cdot)$ such that $\sup_{\mathbf{u}\in\mathcal{U}}\{\dot{h}(\mathbf{x},\mathbf{u})+\kappa(h(\mathbf{x}))\}\geq 0$ for all $\mathbf{x}$.

Consider the proposed candidate CBF $h^{c}_{i,\mathrm{o}}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}})$ in (10). From the differential form in (25), it is straightforward that the first-order derivative of (10) depends explicitly on the control input $\mathbf{V}_{c}$. Thus, the function $h^{c}_{i,\mathrm{o}}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}})$ in (10) is (a) continuously differentiable and (b) of relative degree one.

For condition (c), $\sup_{\mathbf{V}_{c}\in\mathcal{U}}[\dot{h}^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}},\mathbf{V}_{c})+\gamma(h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}}))]\geq 0$, we need to show that the following inequality has at least one solution:

\dot{h}^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}},\mathbf{V}_{c})+\gamma(h^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}}))\geq 0,\;\forall i\in\{1,2,3,4\} \quad (26)

Given the form of $\dot{h}^{c}_{i,\mathrm{o}}(\mathbf{s},\mathbf{s}_{\mathrm{o}},\mathbf{V}_{c})$ in (25), (26) can be rewritten as:

\mathbf{M}\mathbf{V}_{c}\geq n \quad (27)

where $\mathbf{M}=2\big((\mathbf{s}-[\mathbf{s}_{\mathrm{o}};\mathbf{s}_{\mathrm{o}};\mathbf{s}_{\mathrm{o}};\mathbf{s}_{\mathrm{o}}])^{\rm T}(\mathbf{L}_{s}-[\mathbf{L}_{\mathrm{o}};\mathbf{L}_{\mathrm{o}};\mathbf{L}_{\mathrm{o}};\mathbf{L}_{\mathrm{o}}])-R_{n}\mathbf{L}_{\mathrm{or}}\big)\in\mathbb{R}^{1\times 6}$ and $n=\gamma(R_{n}^{2}-\|\mathbf{s}-[\mathbf{s}_{\mathrm{o}};\mathbf{s}_{\mathrm{o}};\mathbf{s}_{\mathrm{o}};\mathbf{s}_{\mathrm{o}}]\|^{2})\in\mathbb{R}$. Given specific states $\mathbf{s}$ and $\mathbf{s}_{\mathrm{o}}$, $n$ is a constant. It is also straightforward that the six entries of $\mathbf{M}$ cannot all be zero at the same time. Therefore, if we consider an unbounded control input $\mathbf{V}_{c}\in\mathcal{U}\subseteq\mathbb{R}^{6}$, we can always find a solution such that $\mathbf{M}\mathbf{V}_{c}\geq n$.

In the case that $\mathbf{V}_{c}\in\mathcal{U}\subseteq\mathbb{R}^{6}$ is bounded, several existing approaches can be employed to enforce feasibility of the condition in (27). For example, in (Lyu et al., 2021) we provided an optimization of the parameter $\gamma$ over time to guarantee that the admissible control set is non-empty whenever a feasible solution exists. In addition, the authors of (Xiao et al., 2022) provided a method to find sufficient conditions, captured by a single constraint and enforced by an additional CBF, that guarantee feasibility of the original CBF control constraint; readers are referred to (Xiao et al., 2022) for further details. This concludes the proof. $\square$
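To make the construction concrete, the sketch below stacks one linear CBC row per feature point from (24)-(25) and solves the step-wise QP (17)-(18) with `cvxpy`; the helper names are our own, and `L_feats`, `L_o`, `L_or` are the interaction matrices from (21)-(23):

```python
import cvxpy as cp
import numpy as np

def cbc_rows(s, s_o, R_n, L_feats, L_o, L_or, gamma=1.0):
    """Rewrite h_dot + gamma * h >= 0 from (24)-(25) as M @ V >= n, with one
    row per feature point i (cf. the aggregated form in (27))."""
    M, n = [], []
    for s_i, L_si in zip(s, L_feats):
        M.append(2.0 * (s_i - s_o) @ (L_si - L_o) - 2.0 * R_n * L_or.ravel())
        n.append(-gamma * (np.sum((s_i - s_o) ** 2) - R_n ** 2))
    return np.array(M), np.array(n)

def cbc_filter(V_mpc, M, n, v_max=0.5):
    """Step-wise QP (17)-(18): minimally modify the MPC control subject to
    the CBC constraints and the velocity bound."""
    V = cp.Variable(V_mpc.size)
    cp.Problem(cp.Minimize(cp.sum_squares(V - V_mpc)),
               [M @ V >= n, cp.norm(V, 2) <= v_max]).solve()
    return V.value
```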

3.2 Probabilistic Control Barrier Certificates (PrCBC) for Occlusion Avoidance

In the presence of uncertainty, similar to (Luo et al., 2020a), we have the following sufficient condition for (16):

{\rm Pr}(\mathbf{V}_{c}\in\mathcal{B}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}}))\geq\sigma\;\Rightarrow\;{\rm Pr}(\mathbf{s}_{i}\in\mathcal{H}^{c}_{i,\mathrm{o}})\geq\sigma,\;\forall i\in\{1,2,3,4\} \quad (28)

Following (Luo et al., 2020a), we define our PrCBC for the chance-constrained occlusion-free condition as follows:

Probabilistic Control Barrier Certificates (PrCBC): Given a confidence level $\sigma\in(0,1)$, the admissible control space $\mathcal{S}^{\sigma}(\hat{\mathbf{s}},\hat{\mathbf{s}}_{\mathrm{o}})$ determined below enforces the chance-constrained condition in (16) at all times.

\mathcal{S}^{\sigma}(\hat{\mathbf{s}},\hat{\mathbf{s}}_{\mathrm{o}})=\{\mathbf{V}_{c}\in\mathcal{U}\,|\,\mathbf{V}_{c}^{\rm T}\mathbf{A}^{\sigma}_{i,\mathrm{o}}\mathbf{V}_{c}+\mathbf{b}^{\sigma}_{i,\mathrm{o}}\mathbf{V}_{c}+c_{i,\mathrm{o}}\leq 0,\;\mathbf{A}^{\sigma}_{i,\mathrm{o}}\in\mathbb{R}^{6\times 6},\,\mathbf{b}^{\sigma}_{i,\mathrm{o}}\in\mathbb{R}^{1\times 6},\,c_{i,\mathrm{o}}\in\mathbb{R},\;\forall i\in\{1,2,3,4\}\} \quad (29)

The analytical forms of $\mathbf{A}^{\sigma}_{i,\mathrm{o}}\in\mathbb{R}^{6\times 6}$, $\mathbf{b}^{\sigma}_{i,\mathrm{o}}\in\mathbb{R}^{1\times 6}$, and $c_{i,\mathrm{o}}\in\mathbb{R}$ are given in the latter part of this subsection.

Computation of PrCBC: Given the confidence level $\sigma\in(0,1)$, the chance constraints ${\rm Pr}(\mathbf{V}_{c}\in\mathcal{B}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}}))\geq\sigma$ can be transformed into deterministic quadratic constraints over the controller $\mathbf{V}_{c}$ in the form of (29). We denote by $e=\Phi^{-1}(\sigma)$ the solution that makes the joint cumulative distribution function (CDF) of the random variable $\hat{\mathbf{s}}_{i}-\hat{\mathbf{s}}_{\mathrm{o}}$ equal to $\sigma$, where $e$ represents the side length of the square that covers cumulative probability $\sigma$. Then the deterministic constraints below become a sufficient condition for ${\rm Pr}(\mathbf{V}_{c}\in\mathcal{B}(\mathbf{s}_{i},\mathbf{s}_{\mathrm{o}}))\geq\sigma$:

||\Delta\mathbf{s}||^{2}+\frac{2\,\Delta\mathbf{s}^{\rm T}(\Delta\mathbf{L}-4R_{n}\mathbf{L}_{\mathrm{or}})\mathbf{V}_{c}}{\gamma}\geq 2R_{n}^{2}+\frac{\mathbf{V}_{c}^{\rm T}\Delta\mathbf{L}^{\rm T}\Delta\mathbf{L}\,\mathbf{V}_{c}}{\gamma^{2}}+4e^{2},\;\forall i\in\{1,2,3,4\} \quad (30)

where $\Delta\mathbf{s}=\hat{\mathbf{s}}_{i}-\hat{\mathbf{s}}_{\mathrm{o}}$ and $\Delta\mathbf{L}=\mathbf{L}_{s_{i}}-\mathbf{L}_{\mathrm{o}}$. Therefore, we formally construct the PrCBC as the following deterministic quadratic constraints:

\mathcal{S}^{\sigma}(\hat{\mathbf{s}},\hat{\mathbf{s}}_{\mathrm{o}})=\Big\{\mathbf{V}_{c}\in\mathcal{U}\,\Big|\,\frac{\mathbf{V}_{c}^{\rm T}\Delta\mathbf{L}^{\rm T}\Delta\mathbf{L}\,\mathbf{V}_{c}}{\gamma^{2}}-\frac{2\,\Delta\mathbf{s}^{\rm T}(\Delta\mathbf{L}-4R_{n}\mathbf{L}_{\mathrm{or}})\mathbf{V}_{c}}{\gamma}+2R_{n}^{2}+4e^{2}-||\Delta\mathbf{s}||^{2}_{2}\leq 0,\;\forall i\in\{1,2,3,4\}\Big\} \quad (31)

With that, we have $\mathbf{A}^{\sigma}_{i,\mathrm{o}}=\frac{\Delta\mathbf{L}^{\rm T}\Delta\mathbf{L}}{\gamma^{2}}$, $\mathbf{b}^{\sigma}_{i,\mathrm{o}}=\frac{-2\Delta\mathbf{s}^{\rm T}(\Delta\mathbf{L}-4R_{n}\mathbf{L}_{\mathrm{or}})}{\gamma}$, and $c_{i,\mathrm{o}}=2R_{n}^{2}+4e^{2}-||\Delta\mathbf{s}||^{2}_{2}$. Due to the size limitation of the physical workspace in the visual servoing scenario, the quadratic control constraint may not always be feasible; e.g., when the moving obstacle is very large, it may be impossible to avoid occlusion. In this extreme case, one alternative is to hold the robot static and wait until the obstacle no longer occludes the feature points.
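The coefficients can be transcribed as follows. This is a sketch under two stated assumptions: the per-axis noise of $\Delta\mathbf{s}$ is isotropic and independent, so that $e=\Phi^{-1}(\sigma)$ has a closed form via `scipy.special.erfinv`, and the term $\Delta\mathbf{s}^{\rm T}(\Delta\mathbf{L}-4R_{n}\mathbf{L}_{\mathrm{or}})$ is read as $\Delta\mathbf{s}^{\rm T}\Delta\mathbf{L}-4R_{n}\mathbf{L}_{\mathrm{or}}$ so that the dimensions match:

```python
import numpy as np
from scipy.special import erfinv

def prcbc_coeffs(s_i_hat, s_o_hat, R_n, L_si, L_o, L_or, sigma, var_sum,
                 gamma=1.0):
    """Coefficients (A, b, c) of the quadratic PrCBC constraint
    V^T A V + b @ V + c <= 0 in (31) for one feature point.
    var_sum: per-axis variance of ds = s_i_hat - s_o_hat (Sigma_i + Sigma_o),
    assumed isotropic and independent across axes (our assumption)."""
    ds = s_i_hat - s_o_hat                  # noisy relative position
    dL = L_si - L_o                         # relative interaction matrix
    # e such that the square [-e, e]^2 carries probability sigma of the
    # Gaussian ds-noise: per axis P(|w| <= e) = erf(e / sqrt(2 * var_sum)),
    # hence sigma = erf(.)^2 and e = sqrt(2 * var_sum) * erfinv(sqrt(sigma)).
    e = np.sqrt(2.0 * var_sum) * erfinv(np.sqrt(sigma))
    A = dL.T @ dL / gamma**2
    b = -2.0 * (ds @ dL - 4.0 * R_n * L_or.ravel()) / gamma
    c = 2.0 * R_n**2 + 4.0 * e**2 - float(ds @ ds)
    return A, b, c
```

Note that since $\Delta\mathbf{L}^{\rm T}\Delta\mathbf{L}$ is positive semidefinite, each constraint in (31) is a convex quadratic constraint, so (19)-(20) becomes a QCQP that solvers such as IPOPT can handle.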

3.3 MPC Planner with Occlusion Avoidance

To combine the advantages of the Model Predictive Control policy for high-level planning with CBC/PrCBC for enforcing the occlusion-free condition, we integrate the MPC procedure with the proposed CBC/PrCBC to generate an optimized control sequence. Specifically, after executing the CBC/PrCBC to minimally modify the control $\mathbf{V}_{c}^{mpc}(t)$ to $\mathbf{V}_{c}^{*}(t)$ at time step $t$, the process is repeated at time step $t+1$: use policy $\pi$ from Section 2.2 to generate the control sequence $\mathbf{V}_{c}^{t+1:t+N|t+1}$, obtain the optimized control $\mathbf{V}_{c}^{mpc}(t+1)$, and compute the occlusion-free control $\mathbf{V}_{c}^{*}(t+1)$ to execute next.
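One possible realization of this replanning loop, tying together the earlier sketches (`observe`, `apply_twist`, `prcbc_filter`, and `s_star` are injected placeholders for the simulator interface and task setup, not names from the paper):

```python
import numpy as np

def run_occlusion_free_ibvs(observe, apply_twist, prcbc_filter, s_star,
                            T_max=500):
    """Sketch of the Section 3.3 loop. `observe` returns noisy measurements
    (s_hat, s_o_hat, R_n, depths) as in (15); `prcbc_filter` solves (19)-(20);
    `interaction_matrix` and `mpc_ibvs` are from the earlier sketches."""
    for t in range(T_max):
        s_hat, s_o_hat, R_n, depths = observe()
        e0 = (s_hat - s_star).reshape(-1)            # image feature error
        L = np.vstack([interaction_matrix(p, Z)      # stacked L_s, cf. (21)
                       for p, Z in zip(s_hat, depths)])
        V_mpc = mpc_ibvs(e0, L)[:, 0]                # policy pi, first control
        V_star = prcbc_filter(V_mpc, s_hat, s_o_hat, R_n, L)  # (19)-(20)
        apply_twist(V_star)                          # execute, replan at t+1
```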

4 Experiment Results and Discussion

Figure 2: The performance of CBC and PrCBC. (a) Initial time step for all experiments; (b) time step 11 without noise (CBC); (c) time step 247 without noise (CBC); (d) time step 11 with noise (CBC); (e) time step 11 with noise (PrCBC); (f) time step 400 with noise (PrCBC). The obstacle is represented by the black circle. The red and blue squares are the feature point locations at the initial and target configurations, respectively. The numbers in different colors mark the feature points in the camera view, and the colored curves indicate the trajectories of the corresponding feature points.
Figure 3: Quantitative results of CBC and PrCBC. (a) Minimum feature point-obstacle distance without noise (CBC); (b) minimum feature point-obstacle distance with noise (CBC); (c) minimum feature point-obstacle distance with noise (PrCBC); (d) errors between target and current locations of feature points (PrCBC).
Figure 4: Quantitative results summary of PrCBC from 5 different obstacle locations with ten random trials each. The Y-axis quantity $Dis$ is defined in (32).

In this section, we use Matlab as the simulation platform and present experimental results to validate the effectiveness of the proposed method. The Matlab Robotics Toolbox (Corke, 2017) is used to construct the 6-DOF robot simulation model, and the solver IPOPT (Wächter and Biegler, 2006) is used to generate the high-level planner.

4.1 Simulation Performance

To validate the performance of our method, we implement our algorithm under different experimental setups:

CBC without noise: According to the experimental setup shown in Fig. 2(a), the obstacle moves in the workspace from the initial location $[0.43,0.23,0.10]^{\rm T}$. In this experiment, we assume the camera can acquire the accurate coordinates of the feature points and obstacle center. With the designed CBC controller, the results are shown in Fig. 2(b) and Fig. 2(c): the feature points avoid occlusion by the obstacle and converge to the predefined target locations successfully.

CBC with noise: With the same initial condition as Fig. 2(a), we assume the pixel coordinates of the feature points and obstacle center are acquired by the camera with Gaussian noise $\mathbf{w}_{i},\mathbf{w}_{\mathrm{o}}\sim N(0,10)$. From Fig. 2(d), it can be observed that feature point #3 is occluded by the obstacle in the camera view.

PrCBC with noise: With the same initial condition and measurement noise as in Fig. 2(a), the confidence level is set to $\sigma=0.8$. As shown in Fig. 2(e), the feature points avoid occlusion by the obstacle in the camera view, and as shown in Fig. 2(f), the robot navigates the camera to the desired location where the four feature points converge to the predefined target locations in the camera view.

Hence, the CBC can enforce occlusion-free IBVS only when accurate feature point information is available. In comparison, the PrCBC guarantees chance-constrained probabilistic occlusion avoidance even in the presence of measurement noise.

4.2 Quantitative Results

Next, we present quantitative results from the simulation experiments. As shown in Fig. 3(a), without noise in the pixel coordinates, the CBC method performs well, always keeping the minimum feature point-obstacle distance above the occlusion-free safety distance. However, when noise is added, the CBC method cannot guarantee this, as shown in Fig. 3(b) (here we continue running the simulation until the obstacle moves out of the camera view). The PrCBC method, on the other hand, consistently maintains the minimum feature point-obstacle distance above the safety distance even in the presence of observation noise, as shown in Fig. 3(c). Fig. 3(d) shows the convergence of the feature point locations in the camera view, validating that our method accomplishes the IBVS task with occlusion-free performance.

We also performed 50 random trials (5 different initial locations, each with 10 random trials) under a confidence level of $\sigma=0.9$ to validate the effectiveness of the PrCBC controller in the presence of random camera measurement noise. We define $Dis=\min(Dis_{i}(t))$ to express the distance between the feature points and the obstacle edge in the image plane, with $Dis_{i}(t)$ given by:

Dis_{i}(t)=\sqrt{\left\|\mathbf{q}_{i}(t)-\mathbf{q}_{\mathrm{o}}(t)\right\|_{2}^{2}}-r(t) \quad (32)

where $r(t)=\frac{fR}{Z_{\mathrm{o}}(t)}\in\mathbb{R}$ is the obstacle radius in the image plane (pixel scale). Fig. 4 shows the mean $Dis$ with its corresponding variance, verifying that the feature points do not collide with the obstacle under the proposed PrCBC, indicating occlusion-free performance.
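For reference, (32) can be logged with a few lines (a sketch; pixel-scale inputs assumed):

```python
import numpy as np

def dis_metric(q_feats, q_o, f, R, Z_o):
    """Dis = min_i Dis_i(t) from (32): minimum pixel-plane distance between
    the feature points and the obstacle edge, with r = f * R / Z_o."""
    r = f * R / Z_o
    return min(float(np.linalg.norm(q_i - q_o)) - r for q_i in q_feats)
```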

5 Conclusion

In this paper, we present a control method to address the chance-constrained occlusion avoidance problem between feature points and an obstacle in Image-Based Visual Servoing (IBVS) tasks. By adopting Probabilistic Control Barrier Certificates (PrCBC), we transform the probabilistic occlusion-free conditions into deterministic control constraints, which formally guarantee performance with satisfying probability under measurement uncertainty. We then integrate the control constraints with a Model Predictive Control (MPC) policy to generate a sequence of optimized controls for high-level planning with enforced occlusion avoidance. The simulation results verify the effectiveness of the proposed method. Future work will further explore real-world applications of the proposed method using feature points such as Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) features.

References

  • Ames et al. (2019) Ames, A.D., Coogan, S., Egerstedt, M., Notomista, G., Sreenath, K., and Tabuada, P. (2019). Control barrier functions: Theory and applications. In 18th European Control Conference (ECC), 3420–3431. IEEE.
  • Capelli and Sabattini (2020) Capelli, B. and Sabattini, L. (2020). Connectivity maintenance: Global and optimized approach through control barrier functions. In IEEE International Conference on Robotics and Automation (ICRA), 5590–5596. IEEE.
  • Chaumette and Hutchinson (2006) Chaumette, F. and Hutchinson, S. (2006). Visual servo control. i. basic approaches. IEEE Robotics & Automation Magazine, 13(4), 82–90.
  • Chaumette et al. (2016) Chaumette, F., Hutchinson, S., and Corke, P. (2016). Visual servoing. In Springer Handbook of Robotics, 841–866. Springer.
  • Corke (2017) Corke, P.I. (2017). Robotics, Vision & Control: Fundamental Algorithms in MATLAB. Springer, second edition. ISBN 978-3-319-54413-7.
  • Corke and Khatib (2011) Corke, P.I. and Khatib, O. (2011). Robotics, vision and control: fundamental algorithms in MATLAB, volume 73. Springer.
  • De Luca et al. (2008) De Luca, A., Oriolo, G., and Robuffo Giordano, P. (2008). Feature depth observation for image-based visual servoing: Theory and experiments. The International Journal of Robotics Research, 27(10), 1093–1116.
  • Fleurmond and Cadenat (2016) Fleurmond, R. and Cadenat, V. (2016). Handling visual features losses during a coordinated vision-based task with a dual-arm robotic system. In 2016 European Control Conference (ECC), 684–689. IEEE.
  • Huang and Mok (2018) Huang, P.C. and Mok, A.K. (2018). A case study of cyber-physical system design: Autonomous pick-and-place robot. In 2018 IEEE 24th international conference on embedded and real-time computing systems and applications (RTCSA), 22–31. IEEE.
  • Kazemi et al. (2010) Kazemi, M., Gupta, K., and Mehrandezh, M. (2010). Path-planning for visual servoing: A review and issues. Visual Servoing via Advanced Numerical Methods, 189–207.
  • Kermorgant and Chaumette (2013) Kermorgant, O. and Chaumette, F. (2013). Dealing with constraints in sensor-based robot control. IEEE Transactions on Robotics, 30(1), 244–257.
  • Landi et al. (2019) Landi, C.T., Ferraguti, F., Costi, S., Bonfè, M., and Secchi, C. (2019). Safety barrier functions for human-robot interaction with industrial manipulators. In 2019 18th European Control Conference (ECC), 2565–2570. IEEE.
  • Li et al. (2020) Li, W., Chiu, P.W.Y., and Li, Z. (2020). An accelerated finite-time convergent neural network for visual servoing of a flexible surgical endoscope with physical and rcm constraints. IEEE transactions on neural networks and learning systems, 31(12), 5272–5284.
  • Luo et al. (2020a) Luo, W., Sun, W., and Kapoor, A. (2020a). Multi-robot collision avoidance under uncertainty with probabilistic safety barrier certificates. Advances in Neural Information Processing Systems, 33, 372–383.
  • Luo et al. (2020b) Luo, W., Yi, S., and Sycara, K. (2020b). Behavior mixing with minimum global and subgroup connectivity maintenance for large-scale multi-robot systems. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 9845–9851. IEEE.
  • Lyu et al. (2021) Lyu, Y., Luo, W., and Dolan, J.M. (2021). Probabilistic safety-assured adaptive merging control for autonomous vehicles. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 10764–10770. IEEE.
  • Marchand et al. (2005) Marchand, E., Spindler, F., and Chaumette, F. (2005). Visp for visual servoing: a generic software platform with a wide class of robot control skills. IEEE Robotics and Automation Magazine, 12(4), 40–52.
  • Mezouar and Chaumette (2002) Mezouar, Y. and Chaumette, F. (2002). Avoiding self-occlusions and preserving visibility by path planning in the image. Robotics and Autonomous Systems, 41(2-3), 77–87.
  • Nicolis et al. (2018) Nicolis, D., Palumbo, M., Zanchettin, A.M., and Rocco, P. (2018). Occlusion-free visual servoing for the shared autonomy teleoperation of dual-arm robots. IEEE Robotics and Automation Letters, 3(2), 796–803.
  • Saragih et al. (2019) Saragih, C.F.D., Kinasih, F.M.T.R., Machbub, C., Rusmin, P.H., and Rohman, A.S. (2019). Visual servo application using model predictive control (mpc) method on pan-tilt camera platform. In 2019 6th International Conference on Instrumentation, Control, and Automation (ICA), 1–7. IEEE.
  • Wächter and Biegler (2006) Wächter, A. and Biegler, L.T. (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical programming, 106(1), 25–57.
  • Xiao et al. (2022) Xiao, W., Belta, C.A., and Cassandras, C.G. (2022). Sufficient conditions for feasibility of optimal control problems using control barrier functions. Automatica, 135, 109960.
  • Xu et al. (2017) Xu, X., Waters, T., Pickem, D., Glotfelter, P., Egerstedt, M., Tabuada, P., Grizzle, J.W., and Ames, A.D. (2017). Realizing simultaneous lane keeping and adaptive speed regulation on accessible mobile robot testbeds. In 2017 IEEE Conference on Control Technology and Applications (CCTA), 1769–1775. IEEE.
  • Zheng et al. (2019) Zheng, D., Wang, H., Wang, J., Zhang, X., and Chen, W. (2019). Toward visibility guaranteed visual servoing control of quadrotor uavs. IEEE/ASME Transactions on Mechatronics, 24(3), 1087–1095.