
Situation-aware Autonomous Driving Decision Making
with Cooperative Perception on Demand

Wei Liu

Wei Liu is with the Department of Mechanical Engineering, National University of Singapore, Kent Ridge, Singapore (liu_wei@u.nus.edu). This research was supported by the Future Urban Mobility project of the Singapore-MIT Alliance for Research and Technology (SMART) Center, with funding from Singapore’s National Research Foundation.
Abstract

This paper investigates the impact of cooperative perception on autonomous driving decision making on urban roads. The extended perception range contributed by cooperative perception can be leveraged to address the implicit dependencies among vehicles, thereby improving the vehicle's decision making performance. Meanwhile, we acknowledge the inherent limitations of wireless communication and propose a Cooperative Perception on Demand (CPoD) strategy, in which cooperative perception is activated only when the extended perception range is necessary for proper situation awareness. The situation-aware decision making with CPoD is modeled as a Partially Observable Markov Decision Process (POMDP) and solved in an online manner. The evaluation results demonstrate that the proposed approach can function safely and efficiently for autonomous driving on urban roads.

I Introduction

As autonomous vehicles navigate urban roads, proper decision making requires comprehensive awareness of the surrounding environment, which includes both the road context and the other vehicles’ motion intentions. Knowledge of the road context helps narrow down the reasoning scope over the other traffic participants’ behaviors [1]. Similarly, understanding the semantic meaning of the other vehicles’ behaviors, i.e., their motion intentions, contributes to a more robust prediction of their future motion, which in turn enables a more accurate risk evaluation [2].

The difficulty of situation awareness, however, is aggravated by the autonomous vehicle's limited perception ability, which suffers from inevitable sensor noise and sensing occlusion. The resulting sensing limitation can dramatically weaken the autonomous vehicle’s ability to perceive the obstacles’ states and can hide the implicit behavior dependencies among the vehicles. As a preeminent candidate for extending the perception range without substantial additional cost, cooperative perception is the exchange of local sensing information with other vehicles or infrastructure via wireless communication [3]. The perception range can thereby be considerably extended, up to the sensing boundary of the connected vehicles, which is useful for better situation awareness.

While the benefits of cooperative perception have been well recognized, its application to autonomous driving still poses several challenges [4]. Firstly, a challenge arises from balancing information quantity and quality: there is still no comprehensive strategy or general rule for efficiently and robustly processing and representing the information shared by different participants. Secondly, the perception uncertainties may increase dramatically as the extended sensing range grows, due to perception processing errors, which hints that perception failures need to be explicitly considered. Last but not least, the applicability of cooperative perception is always constrained by the inherent communication limits, such as the limited bandwidth.

In this paper, we investigate the impact of cooperative perception on situation-aware decision making for autonomous driving on urban roads. Extending our previous work in [5], we explicitly consider a sensing occlusion problem and aim to employ cooperative perception to address it. Given that the communication between vehicles is always constrained, we propose a Cooperative Perception on Demand (CPoD) strategy: cooperative perception is activated only when the on-board sensing is not informative enough for proper situation awareness. To properly feature CPoD inside the situation-aware decision making module, the Partially Observable Markov Decision Process (POMDP) is utilized for its innate ability to balance exploration and exploitation. The evaluation results show that the proposed algorithm is efficient and safe enough to handle some common urban driving scenarios.

Compared to the previous work, the contributions of this study are as follows:

  • We investigate the impact of cooperative perception on the autonomous vehicle's decision making module.

  • The proposed CPoD strategy actively alleviates the constraints on vehicle communication.

The remainder of this paper is organized as follows. Section II introduces some related works and Section III provides the preliminaries on both cooperative perception and POMDP. The problem statement is discussed in Section IV. Thereafter, the detailed discussion of the situation-aware decision making with CPoD is presented in Section V. The evaluations of the proposed algorithm are elaborated in Section VI and this paper is concluded in Section VII.

II Related Work

Three threads of literature directly related to our study are identified. Firstly, the application of cooperative perception to autonomous driving is briefly discussed. The second is multi-agent active sensing using decision-theoretic techniques, and the last is how the POMDP has been employed for autonomous driving decision making.

Given the obvious benefits of cooperative perception, its extension to autonomous driving has attracted considerable research interest recently. With an extended perception range, the motion planning horizon can be extended to the boundary of the connected vehicles, which enables earlier avoidance of hidden obstacles [6]. Similarly, situation awareness can be improved as well: because each vehicle’s behavior is correlated with the others', the sensing information expanded through cooperative perception is useful for better reasoning about the vehicles’ behaviors [7].

Multi-agent active sensing using decision-theoretic techniques aims at properly coordinating sensors or mobile platforms to achieve a wider sensing coverage or to reduce the uncertainties in identifying object characteristics [8, 9]. While great achievements have been made, the communication constraint is occasionally overlooked [10]. Moreover, while these approaches can properly handle planning for perception, the converse direction, i.e., how the enhanced perception can contribute to the planning, has not been comprehensively investigated.

Regarding autonomous driving decision making, the most common approach is to manually tailor specific action sets for different situations using a Finite State Machine or similar frameworks such as behavior trees [11]. These approaches, however, lack a comprehensive understanding of the environment. The absence of full situational awareness can make driving decisions danger-prone, as evidenced by an incident in the DUC [12]. As a principled approach to balancing exploration and exploitation, the POMDP has been widely adopted for autonomous driving decision making recently [13, 14, 15]. POMDP applications in the literature vary from designing specific driving behaviors, such as lane changing and merging [13], to more general traffic negotiation [5].

As reviewed above, a general situation-aware decision making approach that integrates the above three components remains an interesting and challenging problem.

III Preliminaries

III-A Cooperative Perception

In contrast to cooperative driving, where the vehicles are centrally controlled or share their motion intentions directly, cooperative perception targets sharing local sensing information only, which is more applicable to the diverse traffic participants in typical urban traffic situations.

In the context of autonomous vehicle perception, the sensing information is typically projected into a local frame, which can be processed as a local observation consisting of a set of perceived obstacles or features representing the environment. Let $Z^{[i]}$ denote a local sensing map of vehicle $i$, and let $\varphi^{[i]}=\{i-N_{f},\cdots,i-1,i+1,\cdots,i+N_{l}\}$ be a string that defines the neighbors of vehicle $i$, where $N_{f}$ and $N_{l}$ denote the numbers of following and leading neighbors, respectively. Cooperative perception is then formulated as,

$$\mathbf{Z}^{[i]}=Z^{[i]}\bigcup_{j\in\varphi^{[i]}}Z^{[j]}\otimes Q^{[j,i]}, \qquad (1)$$

where $Q^{[j,i]}=[\tau^{[j,i]},\theta^{[j,i]}]$ denotes the relative pose of vehicle $j$ w.r.t. vehicle $i$, which consists of the translation $\tau^{[j,i]}$ and rotation $\theta^{[j,i]}$. Given the relative pose, the operator $\otimes$ computes the pose transformation. The operation $\bigcup$ is thereby called map merging, and the resulting $\mathbf{Z}^{[i]}$ denotes the extended sensing map of vehicle $i$ using cooperative perception. Since the agents participating in cooperative perception are designed to work in a decentralized manner, we adhere to the following rules in this study:

  • There is no central entity required for the operation.

  • There is no common communication facility, which means only local point-to-point communication between neighbors is considered.

  • No communication delay is considered.
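As a concrete illustration, the map merging of Eq. (1) can be sketched in Python with point-cloud style sensing maps; the point representation and the function names `transform` and `merge_maps` are illustrative assumptions rather than the paper's implementation.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]
Pose = Tuple[float, float, float]  # relative pose Q = (tx, ty, theta)

def transform(points: List[Point], pose: Pose) -> List[Point]:
    """The ⊗ operator of Eq. (1): rotate each point by theta,
    then translate by tau = (tx, ty)."""
    tx, ty, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def merge_maps(local_map: List[Point],
               neighbor_maps: List[Tuple[List[Point], Pose]]) -> List[Point]:
    """The ⋃ (map merging) operator: union of the ego map with every
    neighbor map expressed in the ego frame."""
    merged = list(local_map)
    for points, relative_pose in neighbor_maps:
        merged.extend(transform(points, relative_pose))
    return merged
```

For example, a neighbor 10 m ahead with the same heading contributes its point (1, 0) as (11, 0) in the ego frame.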

III-B Partially Observable Markov Decision Process

A POMDP is formally a tuple $\{\mathcal{S},\mathcal{A},\mathcal{Z},T,O,R,\gamma\}$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is a set of actions and $\mathcal{Z}$ denotes the observation space. The transition function $T(s^{\prime},s,a)=\mathrm{Pr}(s^{\prime}|s,a):\mathcal{S}\times\mathcal{A}\times\mathcal{S}$ models the probability of transiting to state $s^{\prime}\in\mathcal{S}$ when the agent takes action $a\in\mathcal{A}$ at state $s\in\mathcal{S}$. The observation function $O(z,s^{\prime},a)=\mathrm{Pr}(z|s^{\prime},a):\mathcal{S}\times\mathcal{A}\times\mathcal{Z}$, similarly, gives the probability of observing $z\in\mathcal{Z}$ when action $a\in\mathcal{A}$ is applied and the resulting state is $s^{\prime}\in\mathcal{S}$. The reward function $R(s,a):\mathcal{S}\times\mathcal{A}$ gives the reward obtained by performing action $a\in\mathcal{A}$ in state $s\in\mathcal{S}$, and $\gamma\in[0,1)$ is a discount factor.

The solution to the POMDP is an optimal policy $\pi^{*}$ that maximizes the expected accumulated reward $\mathbb{E}(\sum_{t=0}^{\infty}\gamma^{t}R(a_{t},s_{t}))$, where $s_{t}$ and $a_{t}$ denote the agent’s state and action at time $t$, respectively. The true state, however, is not fully observable to the agent, so the agent instead maintains a belief state $b\in\mathcal{B}$, i.e., a probability distribution over $\mathcal{S}$. The policy $\pi:\mathcal{B}\rightarrow\mathcal{A}$ therefore maps a belief $b\in\mathcal{B}$ to a prescribed action $a\in\mathcal{A}$.
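To make the belief-state bookkeeping concrete, a minimal exact Bayes-filter update over a discrete $T$ and $O$ can be sketched as follows; the dictionary-based state representation is an illustrative choice.

```python
def belief_update(b, a, z, states, T, O):
    """One exact belief update for a discrete POMDP:
    b'(s') ∝ O(z | s', a) * Σ_s T(s' | s, a) * b(s)."""
    new_b = {}
    for s_next in states:
        predicted = sum(T(s_next, s, a) * b[s] for s in states)
        new_b[s_next] = O(z, s_next, a) * predicted
    eta = sum(new_b.values())  # normalization constant
    return {s: p / eta for s, p in new_b.items()}
```

For a two-state example with a static transition model and a sensor that reports the true state with probability 0.9, a uniform prior sharpens to (0.9, 0.1) after one observation.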

IV Problem Statement

Refer to caption
Figure 1: Illustrative example of CPoD: The blue obstacle vehicle is blind to the red ego vehicle. Given the shared observation from the IOV, the ego vehicle can conduct a more comprehensive situation reasoning.

From here onwards, we refer to the autonomous vehicle as the ego vehicle and consider all the other vehicles as obstacle vehicles. Also, we define the Influential Obstacle Vehicle (IOV) as the obstacle vehicle that imposes the most significant impact on the ego vehicle’s driving decisions, such as the leading vehicle on a single-lane road, or the vehicles to be negotiated with at an intersection.

Given the influence of the IOV, perception sharing between the ego vehicle and the IOV becomes particularly essential, since a more informed decision can be made if the ego vehicle can see what the IOV sees. Taking Fig. 1 for instance, the ego vehicle is following the leading vehicle to conduct lane merging. Since the ego vehicle’s driving decision is heavily affected by the leading vehicle (identified as the IOV), inference over the leading vehicle’s motion intention becomes essential. Without cooperative perception, another oncoming obstacle vehicle (blue) is invisible to the ego vehicle. Given the on-board observation that the IOV is driving at a normal speed, the ego vehicle might draw a confident but danger-prone conclusion that the IOV will merge into the lane directly. On the other hand, if the IOV shares what it sees with the ego vehicle, the ego vehicle can properly acknowledge the dependencies between the IOV and the oncoming obstacle vehicle, such that a more reasonable inference can be made: the IOV has a fairly high chance of decelerating.

Obviously, perception sharing can contribute a greatly extended perception range and, more importantly, a comprehensive understanding of the situation evolution, but it also brings some limitations, especially the communication constraints. Therefore, properly identifying the situations in which perception sharing should be activated can be quite helpful. The difficulty of situation identification, however, is aggravated by the fact that the situation, including the obstacle vehicle behavior and the environment context, is not fully observable. In light of this, this paper proposes a Cooperative Perception on Demand (CPoD) strategy and seeks to feature CPoD inside the decision making module. Specifically, perception sharing is activated only when situation awareness becomes difficult, such as when the IOV’s motion intention is ambiguous or the collision risk is critical.

Formally speaking, let $Z^{[e]}$ and $Z^{[iov]}$ denote the sensing maps of the ego vehicle and the IOV respectively, both of which consist of a set of environment features. As a function of the CPoD decision $a_{\text{CPoD}}\in\{\textsc{Active},\textsc{Deactive}\}$, the ego vehicle’s extended sensing map $\mathbf{Z}^{[e]}$ is modeled as,

$$\mathbf{Z}^{[e]}(a_{\text{CPoD}})=\begin{cases}Z^{[e]}\bigcup Z^{[iov]}\otimes Q^{[iov,e]},&\textrm{if }a_{\text{CPoD}}=\textsc{Active},\\ Z^{[e]},&\textrm{if }a_{\text{CPoD}}=\textsc{Deactive}.\end{cases} \qquad (2)$$
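Treating a sensing map as a plain collection of features, Eq. (2) reduces to a simple switch; the list representation and the assumption that the IOV map is already transformed into the ego frame are illustrative simplifications.

```python
def extended_sensing_map(ego_map, iov_map_in_ego_frame, a_cpod):
    """Eq. (2): augment the ego map with the IOV's map (already
    expressed in the ego frame) only when CPoD is Active."""
    if a_cpod == "Active":
        return list(ego_map) + list(iov_map_in_ego_frame)
    if a_cpod == "Deactive":
        return list(ego_map)
    raise ValueError("a_cpod must be 'Active' or 'Deactive'")
```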

As such, once CPoD is activated, the ego vehicle’s perception range can be extended as far as the IOV can reach. Thereafter, this study seeks to solve a decision making problem that minimizes the expected cost as,

$$\pi^{*}=\arg\min_{a_{t}\in\mathcal{A}}\mathbb{E}_{s_{t}\in\mathcal{S}}\Big\{\sum_{t=0}^{\infty}\mathcal{L}(a_{t},s_{t})\mathrm{Pr}(s_{t}|\mathbf{Z}^{[e]}_{t}(a_{t}))\Big\}, \qquad (3)$$

where $s$ and $\mathcal{S}$ denote the situation state and state space respectively. The situation state $s$ is usually associated with certain uncertainties, which can include, to name a few, the road context, the obstacle vehicle’s motion intention, etc. The inference over the situation state is thereby driven by the sensing information $\mathbf{Z}^{[e]}$. Moreover, $a=[a_{\text{CPoD}},a_{\text{ACC}}]$ represents the ego vehicle’s action, which consists of both the CPoD decision $a_{\text{CPoD}}$ and the acceleration command $a_{\text{ACC}}$, and $\mathcal{A}$ denotes the corresponding action space. The cost function $\mathcal{L}(a,s):\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}^{+}$ aims to encourage driving efficiency while penalizing both the collision risk and the vehicle communication, thereby properly balancing driving safety and information gain.

It is worth mentioning that the sensor fusion and perception uncertainty modeling problems are out of the scope of this paper; the reader can refer to [4] and [16] for a more detailed discussion. In this context, we assume that a well-processed perception, especially the uncertainty modeling, is always available.

V Situation-aware Decision Making with CPoD

V-A Overview

Refer to caption
Figure 2: System framework for situation-aware decision making with CPoD.

The overall framework of the autonomous driving decision making with CPoD is depicted in Fig. 2. The autonomous vehicle is equipped with on-board sensors and V2V communication capability, where the V2V communication is enabled only when CPoD has been activated. Once the V2V communication is enabled, the cooperative perception module starts to process the sensing data provided by both the on-board sensors and the IOV; otherwise only the on-board sensing data is processed.

Given the augmented sensing information provided by cooperative perception, the decision making module comes into play. The decision making module is in charge of making two sub-decisions: 1) the decision about CPoD and 2) the decision about the ego vehicle’s acceleration. Both of these decisions are made upon the situation state belief provided by the belief tracker.

V-B POMDP Modeling

Hereafter, the POMDP model for situation-aware decision making with CPoD will be discussed. Generally speaking, the ego vehicle will receive the IOV’s sensing information once the CPoD is activated, thereafter the acceleration command will be chosen, which drives the ego vehicle’s state transition. Regarding the obstacle vehicles, their state transitions are driven by the inferred motion intention, the road context, and the implicit dependencies within the vehicles.

V-B1 Ego Vehicle State Transition Model

The ego vehicle state $s^{[e]}$ consists of the vehicle pose $[\mathrm{x}^{[e]},\mathrm{y}^{[e]},\theta^{[e]}]^{\mathrm{T}}$ and velocity $\mathrm{v}^{[e]}$. Given our previous discussion, the ego vehicle action space is discretized as,

$$\begin{aligned}\mathcal{A}&=\mathcal{A}_{\text{CPoD}}\times\mathcal{A}_{\text{ACC}},\\ \mathcal{A}_{\text{CPoD}}&=\{\textsc{Active},\textsc{Deactive}\},\\ \mathcal{A}_{\text{ACC}}&=\{\textsc{Accelerate},\textsc{Decelerate},\textsc{Constant}\},\end{aligned} \qquad (4)$$

which has been verified to be sufficient for an ego vehicle to achieve most tactical maneuvers, such as traffic negotiation at intersections, maintaining a safe distance from other obstacle vehicles, etc. [5]. The vehicle steering control, on the other hand, is accomplished by following a pre-specified reference route using a closed-loop path tracking algorithm. Given the acceleration command and steering angle, the ego vehicle state can be propagated forward for a fixed time duration $\Delta t$ using the kinematic car model in [17].
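As an illustration, the forward propagation over $\Delta t$ can be sketched with a standard kinematic bicycle model; the acceleration magnitudes, wheelbase, and time step below are illustrative assumptions, and the exact model of [17] may differ in detail.

```python
import math

# Assumed acceleration magnitudes (m/s^2) for the discretized action set
ACC = {"Accelerate": 1.0, "Decelerate": -1.0, "Constant": 0.0}

def step_ego(state, a_acc, steer, dt=0.5, wheelbase=2.7):
    """Propagate the ego state (x, y, theta, v) for dt seconds
    using a kinematic bicycle model."""
    x, y, theta, v = state
    a = ACC[a_acc]
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / wheelbase * math.tan(steer) * dt
    v = max(0.0, v + a * dt)  # no reverse driving
    return (x, y, theta, v)
```

For example, driving straight at 2 m/s with a Constant command for one second advances the vehicle 2 m along its heading.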

V-B2 Obstacle Vehicle State Transition Model

Due to the on-board sensing limitation, some obstacle vehicles may not be visible to the ego vehicle; we thus define $\varphi_{\text{vis}}$ as the set of obstacle vehicles that lie within the range of the extended sensing map $\mathbf{Z}^{[e]}$. Each obstacle vehicle is associated with a unique identification number, and the state of obstacle vehicle $i$ is modeled as $s^{[i]}=[\mathrm{x}^{[i]},\mathrm{y}^{[i]},\theta^{[i]},\mathrm{v}^{[i]},\iota^{[i]}]^{\mathrm{T}},i\in\varphi_{\text{vis}}$, where $\iota\in\mathcal{I}$ denotes the motion intention and $\mathcal{I}$ represents the motion intention space. Compared to the ego vehicle state, the inclusion of the motion intention here provides a semantic meaning of the obstacle vehicle behavior and helps properly model the obstacle vehicle’s state transition.

Rather than addressing the motion intention as hidden goals [15, 2, 14], the reaction-based motion intention is employed in this study [5], which is abstracted from human driving behaviors by generalizing over a corpus of vehicle behavior training data [18]. More specifically, given the vehicle behavior training data, the road context $\mathcal{C}$ is first learned to represent the traffic rules and the typical vehicle motion pattern; thereafter, the reaction is modeled as the deviation of the observed vehicle behavior from the generalized road context. Given the observed reaction, the motion intention space $\mathcal{I}$ is defined as $\mathcal{I}=\{Stopping, Hesitating, Normal, Aggressive\}$, where the specific meanings are illustrated in Table I.

TABLE I: Motion Intention Description
Motion Intention Description
Stopping Obstacle vehicle will commit a full stop for giving way or parking
Hesitating Obstacle vehicle will either decelerate or accelerate to adjust its driving behavior
Normal Obstacle vehicle’s behavior will comply with the generalized road context
Aggressive Obstacle vehicle’s behavior will be aggressive without any sense to negotiate

Provided the generalized road context $\mathcal{C}$ and the inferred motion intention $\iota^{[i]}$, the state transition of obstacle vehicle $i$ is modeled as,

$$\begin{aligned}\mathrm{Pr}(s^{[i]}_{t+\Delta t}|s^{[i]}_{t})&=\mathrm{Pr}(x^{[i]}_{t+\Delta t},\iota^{[i]}_{t+\Delta t}|x^{[i]}_{t},\iota^{[i]}_{t})\\ &=\sum_{c(x^{[i]}_{t})}\mathrm{Pr}(x^{[i]}_{t+\Delta t},\iota^{[i]}_{t+\Delta t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}))\mathrm{Pr}(c(x^{[i]}_{t})|x^{[i]}_{t}),\end{aligned} \qquad (5)$$

where $x^{[i]}=[\mathrm{x}^{[i]},\mathrm{y}^{[i]},\theta^{[i]},\mathrm{v}^{[i]}]^{\mathrm{T}}$ is defined as the vehicle metric state for notation clarity, and $c(x^{[i]}_{t})\in\mathcal{C}$ denotes the road context information corresponding to the vehicle state $x^{[i]}_{t}$ at time $t$. In detail, the road context $c(x^{[i]}_{t})$ features the possible driving directions according to the traffic rules, as well as the typical (or reference) vehicle speed. As such, the obstacle vehicle state transition is driven by the inferred motion intention and constrained by the road context.

While Eqn. (5) provides a reasonable model of the obstacle vehicle’s state transition, the dependencies among the vehicle behaviors are not considered. Therefore, we reformulate the state transition to be conditional on a joint metric state $X^{[i]}=x^{[e]}\bigcup_{\forall j\in(\varphi_{\text{vis}}\setminus i)}x^{[j]}$ as,

$$\begin{aligned}&\mathrm{Pr}(x^{[i]}_{t+\Delta t},\iota^{[i]}_{t+\Delta t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}))\\ &=\sum_{X^{[i]}_{t}}\mathrm{Pr}(x^{[i]}_{t+\Delta t},\iota^{[i]}_{t+\Delta t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}),X^{[i]}_{t})\mathrm{Pr}(X^{[i]}_{t}),\end{aligned} \qquad (6)$$

where the joint metric state $X^{[i]}_{t}$ is assumed to be conditionally independent of vehicle $i$’s state and the corresponding road context. The inclusion of $X^{[i]}_{t}$ in the state transition enables us to model the implicit vehicle dependencies, since each vehicle has to avoid potential collisions with any other vehicle. Due to sensing occlusion, the vehicle dependencies can only be considered for the obstacle vehicles that are not hidden from the ego vehicle; in other words, the size of the joint metric state $X^{[i]}_{t}$ is a function of the sensing range. Therefore, the larger the ego vehicle’s extended sensing range, the more comprehensively the vehicle behavior dependencies can be considered. This is where cooperative perception comes into effect and contributes.

Referring to Eqn. (6), we can marginalize the transition function over an implicit obstacle vehicle action $a_{\rm obst}=[v_{\rm obst},\phi_{\rm obst}]^{\rm T}$, comprising the speed $v_{\rm obst}$ and the steering angle $\phi_{\rm obst}$, to explicitly model the obstacle vehicle state transition as,

$$\begin{aligned}&\mathrm{Pr}(x^{[i]}_{t+\Delta t},\iota^{[i]}_{t+\Delta t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}),X^{[i]}_{t})\\ &=\sum_{{a_{\rm obst}}^{[i]}_{t}}\mathrm{Pr}(x^{[i]}_{t+\Delta t}|x^{[i]}_{t},{a_{\rm obst}}^{[i]}_{t})\mathrm{Pr}(\iota^{[i]}_{t+\Delta t}|\iota^{[i]}_{t})\\ &\quad\times\mathrm{Pr}({a_{\rm obst}}^{[i]}_{t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}),X^{[i]}_{t}),\end{aligned} \qquad (7)$$

where $\mathrm{Pr}(x^{[i]}_{t+\Delta t}|x^{[i]}_{t},{a_{\rm obst}}^{[i]}_{t})$ follows the same transition model as that of the ego vehicle, given the inferred obstacle vehicle action $a_{\rm obst}$, and the motion intention remains constant within the transition process $\mathrm{Pr}(\iota^{[i]}_{t+\Delta t}|\iota^{[i]}_{t})$.

Regarding the inference of the obstacle vehicle action $\mathrm{Pr}({a_{\rm obst}}^{[i]}_{t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}),X^{[i]}_{t})$, the obstacle vehicle speed can be properly modeled using the inferred motion intention and the implicit vehicle dependencies. The steering angle, however, is not characterized by the motion intention, so we use the road context to enable the steering angle prediction. As such, the obstacle vehicle action inference can be reformulated as,

$$\begin{aligned}&\mathrm{Pr}({a_{\rm obst}}^{[i]}_{t}|x^{[i]}_{t},\iota^{[i]}_{t},c(x^{[i]}_{t}),X^{[i]}_{t})\\ &=\mathrm{Pr}({v_{\rm obst}}^{[i]}_{t}|\iota^{[i]}_{t},X^{[i]}_{t})\mathrm{Pr}({\phi_{\rm obst}}^{[i]}_{t}|c(x^{[i]}_{t})).\end{aligned} \qquad (8)$$

In short, the speed inference $\mathrm{Pr}({v_{\rm obst}}^{[i]}_{t}|\iota^{[i]}_{t},X^{[i]}_{t})$ is accomplished by mapping the acceleration command to the inferred motion intention $\iota$ according to Table I; meanwhile, the obstacle vehicle can still have a certain chance of committing a deceleration if the collision risk with any other vehicle is high. The inference over the steering control is more straightforward: the steering angle is sampled from the possible driving directions according to the traffic rules featured by the road context $c(x^{[i]}_{t})$. The reader can refer to [5] for a more detailed discussion.
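A minimal sampler for the factorized action inference of Eq. (8) might look as follows; the intention-to-speed mapping mirrors the reference-speed multiples of Table II, while the 0.8 deceleration probability under critical collision risk is purely an illustrative assumption.

```python
import random

def sample_obstacle_action(intention, v_ref, allowed_steers, ttc_critical=False):
    """Sample an obstacle action [v, phi] per Eq. (8):
    speed from the inferred motion intention, steering from the
    driving directions permitted by the road context."""
    target = {"Stopping": 0.0,
              "Hesitating": v_ref / 2.0,
              "Normal": v_ref,
              "Aggressive": 1.5 * v_ref}[intention]
    # with high collision risk, a deceleration may still be committed
    if ttc_critical and random.random() < 0.8:
        target = 0.0
    phi = random.choice(allowed_steers)  # directions allowed by c(x_t)
    return target, phi
```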

V-B3 Observation Modeling

Thanks to recent research advances in vehicle detection and tracking, the vehicle metric state $x$ can be properly observed with Gaussian noise imposed; thus, we can simply generate the observations with a one-to-one mapping directly from the corresponding metric states. As such, the observation of any vehicle $i$ is modeled as a vector consisting of the values of the vehicle pose and velocity. The joint observation $z$ is thereby defined as $z=z^{[e]}\bigcup_{\forall i\in\varphi_{\text{vis}}}z^{[i]}$. In a sense, only the vehicles falling inside the extended sensing map $\mathbf{Z}^{[e]}$ are observed.

V-B4 Belief Tracking

Since the obstacle vehicle motion intention is only partially observable, it needs to be maintained as a belief state. Given the augmented observation, the belief tracker is dedicated to inferring and tracking the obstacle vehicle motion intentions using Bayes’ rule. The inference of obstacle vehicle $i$’s motion intention can thereby be formulated as,

$$b(\iota^{[i]}_{t+\Delta t})=\eta\,\mathrm{Pr}(\iota^{[i]}_{t+\Delta t}|\iota^{[i]}_{t})\mathrm{Pr}(z^{[i]}_{t+\Delta t}|\iota^{[i]}_{t+\Delta t})b(\iota^{[i]}_{t}), \qquad (9)$$

where η\eta is the normalization factor, and the motion intention remains constant within the transition process.

Regarding the observation function $\mathrm{Pr}(z^{[i]}|\iota^{[i]})$, we model it as a Gaussian with mean $\mathbf{m}(z^{[i]}\ominus c(z^{[i]}),\iota^{[i]})$ and covariance $\mathbf{\Sigma}(c(z^{[i]}))$, where the operator $\ominus$ calculates the deviation of the observed vehicle state $z^{[i]}$ from the corresponding road context $c(z^{[i]})$. Recalling our earlier discussion, we use this deviation as a hint to infer the motion intention. More specifically, the speed deviation is employed in this study, where the observed vehicle speed is given as $\mathrm{v}^{[i]}$, and the road context $c(z^{[i]})$ provides the reference speed $\mathrm{v}_{\rm ref}(z^{[i]})$ that is generalized from the vehicle behavior training data using a Gaussian Process [18]. Thereafter, the mean value $\mathbf{m}(z^{[i]}\ominus c(z^{[i]}),\iota^{[i]})$ can be reformulated as $\mathbf{m}(\mathrm{v}^{[i]}\ominus\mathrm{v}_{\rm ref}(x^{[i]}),\iota^{[i]})$ and is defined in Table II, which is proposed according to the motion intention definitions in Table I. Moreover, the covariance function is defined as $\mathbf{\Sigma}(c(z^{[i]}))=\mathrm{v}_{\rm ref}(z^{[i]})/\sigma$, where the scaling factor $\sigma$ adjusts the belief confidence.

TABLE II: Mean function w.r.t. Motion Intention
$\iota\in\mathcal{I}$ | $Stopping$ | $Hesitating$ | $Normal$ | $Aggressive$
$\mathbf{m}(\mathrm{v}\ominus\mathrm{v}_{\rm ref}(z),\iota)$ | $0.0$ | $\mathrm{v}_{\rm ref}(z)/2$ | $\mathrm{v}_{\rm ref}(z)$ | $3\mathrm{v}_{\rm ref}(z)/2$
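Putting Eq. (9) and Table II together, the intention belief update can be sketched as follows; the constant-intention transition model and the value of the scaling factor $\sigma$ are assumptions for illustration.

```python
import math

INTENTIONS = ["Stopping", "Hesitating", "Normal", "Aggressive"]

def likelihood(v_obs, v_ref, intention, sigma_scale=4.0):
    """Gaussian observation model Pr(z | iota): the mean follows
    Table II and the variance is v_ref / sigma (sigma_scale assumed)."""
    mean = {"Stopping": 0.0, "Hesitating": v_ref / 2.0,
            "Normal": v_ref, "Aggressive": 1.5 * v_ref}[intention]
    var = max(v_ref / sigma_scale, 1e-6)
    return math.exp(-0.5 * (v_obs - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def update_intention_belief(belief, v_obs, v_ref):
    """Eq. (9) with a constant-intention transition:
    b'(iota) ∝ Pr(z | iota) * b(iota)."""
    posterior = {i: likelihood(v_obs, v_ref, i) * belief[i] for i in INTENTIONS}
    eta = sum(posterior.values())  # normalization factor
    return {i: p / eta for i, p in posterior.items()}
```

Starting from a uniform belief, observing a speed equal to the reference speed concentrates the belief on the Normal intention.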

V-B5 Reward Function

The main objective of the ego vehicle is to arrive at the destination as quickly as possible while avoiding collisions with any obstacle vehicles. Within this process, the communication between the vehicles needs to be actively requested if the situation becomes tricky. Two sub-decisions have to be made by the ego vehicle, so the reward function $R(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ is decoupled as,

$$R(s,a)=R(s,a_{\text{ACC}})+R(s,a_{\text{CPoD}}). \qquad (10)$$

For the acceleration-based reward function $R(s,a_{\text{ACC}}):\mathcal{S}\times\mathcal{A}_{\text{ACC}}\rightarrow\mathbb{R}$, we want the vehicle to move safely, efficiently and smoothly; thus it is defined as $R(s,a_{\text{ACC}})=R_{\text{goal}}(s,a_{\text{ACC}})+R_{\text{crash}}(s,a_{\text{ACC}})+R_{\text{action}}(a_{\text{ACC}})+R_{\text{speed}}(s)$. The reward $R_{\text{goal}}(s,a_{\text{ACC}})$ provides a reward when the vehicle reaches the destination, and $R_{\text{crash}}(s,a_{\text{ACC}})$ outputs a high penalty if the ego vehicle is in collision. The action reward $R_{\text{action}}(a_{\text{ACC}})$ aims to smooth the vehicle velocity by providing a small penalty if $a_{\text{ACC}}\neq\textsc{Constant}$. The speed reward $R_{\text{speed}}(s)=k\mathrm{v}/\mathrm{v}_{\max}$ encourages high-speed travel to improve the driving efficiency, where $\mathrm{v}_{\max}$ is the ego vehicle’s maximum speed and $k$ is a scaling factor.

Regarding the CPoD-based reward function $R(s,a_{\text{CPoD}}):\mathcal{S}\times\mathcal{A}_{\text{CPoD}}\rightarrow\mathbb{R}$, we emphasize the cost of communication by introducing a constant penalty $R_{\text{comm.}}(a_{\text{CPoD}})$ every time cooperative perception is activated. Once situation awareness becomes tricky or the collision risk is high, cooperative perception should be activated to gain more sensing information for proper decision making. As such, we introduce $R_{\text{intention}}(s)$ to reward cooperative perception if the IOV’s motion intention $\iota^{[iov]}\neq Normal$, in which case the IOV’s behavior becomes harder to predict. Moreover, another reward $R_{\text{TTC}}(s,a_{\text{CPoD}})$ is imposed once the Time-To-Collision between the ego vehicle and the IOV is lower than a designed threshold. In summary, the CPoD-driven reward function is formulated as $R(s,a_{\text{CPoD}})=R_{\text{comm.}}(a_{\text{CPoD}})+R_{\text{intention}}(s)+R_{\text{TTC}}(s,a_{\text{CPoD}})$.
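The decoupled reward of Eq. (10) can be summarized in code; every constant below (goal bonus, crash penalty, communication cost, thresholds) is an illustrative assumption, and `s` is a hypothetical dictionary summarizing the situation state rather than the paper's actual state representation.

```python
def reward(s, a_acc, a_cpod, k=1.0, v_max=5.0):
    """Sketch of R(s, a) = R(s, a_ACC) + R(s, a_CPoD), Eq. (10)."""
    r = 0.0
    # acceleration-based terms: goal, crash, smoothness, speed
    if s.get("at_goal"):
        r += 100.0
    if s.get("in_collision"):
        r -= 1000.0
    if a_acc != "Constant":
        r -= 0.5                      # small smoothness penalty
    r += k * s["v"] / v_max           # encourage driving efficiency
    # CPoD-based terms, applied when cooperative perception is active
    if a_cpod == "Active":
        r -= 1.0                      # constant communication cost
        if s.get("iov_intention") != "Normal":
            r += 2.0                  # IOV behavior is hard to predict
        if s.get("ttc", float("inf")) < 3.0:
            r += 2.0                  # critical time-to-collision
    return r
```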

It is worth mentioning that we design the reward function $R(s,a_{\text{CPoD}})$ w.r.t. the vehicle state rather than the belief state. Although using the belief state or its entropy to evaluate information gain is more principled in some cases, defining the reward over the state ensures that the POMDP value function stays piecewise-linear and convex.

V-C POMDP Solver

Given the designed POMDP model, the online POMDP solver DESPOT is employed for its efficiency in handling large observation spaces [19]. As an online solver, DESPOT only searches the belief space reachable from the current belief state and interleaves the planning and execution phases. Moreover, rather than searching the whole belief tree, DESPOT samples a scenario set of constant size $Q$. As a consequence, a belief tree of height $H$ contains only $O(|\mathcal{A}|^{H}Q)$ nodes. Within DESPOT, the belief state is represented by random particles, and for each particle the obstacle vehicles' motion intentions are randomly sampled.
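The two ideas above, the sampled-scenario tree bound and the particle-based intention belief, can be sketched as follows. The belief dictionary format and function names are illustrative, not DESPOT's actual API.

```python
import random

INTENTIONS = ["Stopping", "Hesitating", "Normal", "Aggressive"]

def sample_particles(intention_belief, q):
    """Draw Q scenario particles; each particle fixes an obstacle vehicle's
    motion intention by sampling from the current belief distribution."""
    intents, probs = zip(*intention_belief.items())
    return [random.choices(intents, weights=probs)[0] for _ in range(q)]

def despot_node_bound(num_actions, height, q):
    # A DESPOT of height H over Q sampled scenarios has O(|A|^H * Q) nodes,
    # versus the O(|A|^H |Z|^H) of the full belief tree.
    return num_actions ** height * q
```

Because $Q$ is constant, the tree size no longer depends on the observation-space size $|\mathcal{Z}|$, which is what makes the large observation space tractable.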

VI Evaluation

In this section, the proposed algorithm is evaluated to validate its functionality.

VI-A Settings

Figure 3: Evaluation environment and scenario: (a) shows the evaluation environment, which is a typical urban road within the campus of National University of Singapore. (b) depicts the evaluation scenario, where the red points represent the ego vehicle’s laser scan and the IOV’s laser scan is shown by the green points.

Given the environment and the corresponding road context shown in Fig. 3(a), the Stage simulator is employed for the simulation evaluation, where Gaussian noise is purposely imposed on the vehicle pose and velocity measurements.

Implementation-wise, each vehicle runs an independent navigation system in ROS [20], where vehicle communication is accomplished by sharing a single ROS core. The proposed algorithm runs at 2 Hz for both the CPoD and vehicle acceleration control, and the Pure-Pursuit tracking algorithm is employed for the vehicle steering control [21].
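For reference, the standard Pure-Pursuit steering law can be sketched as below. This is the textbook bicycle-model formulation, not necessarily the exact implementation of [21]; the wheelbase value and function signature are assumptions.

```python
import math

def pure_pursuit_steering(x, y, yaw, gx, gy, wheelbase=2.0):
    """Standard Pure-Pursuit steering sketch. (gx, gy) is the lookahead
    point on the reference path; (x, y, yaw) is the vehicle pose."""
    # angle of the lookahead point expressed in the vehicle frame
    alpha = math.atan2(gy - y, gx - x) - yaw
    ld = math.hypot(gx - x, gy - y)      # lookahead distance
    # bicycle-model steering angle: delta = atan(2 L sin(alpha) / ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```

The steering command is recomputed each control cycle as the lookahead point slides along the path, which decouples lateral tracking from the 2 Hz POMDP-based longitudinal decisions.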

Figure 4: Navigation process of situation-aware decision making with CPoD at the T-junction. The actions are highlighted with colored text. For the acceleration decision: [Red: Decelerate, Green: Accelerate, Blue: Constant]. For the CPoD decision: [Red: Deactive, Green: Active]. The motion intention is represented as a cubic marker: [Sky Blue: Stopping, Brown: Hesitating, Yellow: Normal, Purple: Aggressive], where the cube's height is proportional to the corresponding belief value.

VI-B Results

The proposed algorithm has been successfully applied to several urban road driving scenarios within the evaluation environment. Here, we highlight the ability of our algorithm to address autonomous driving decision making at the T-junction (see Fig. 3(b)), where the evaluation scenario is the same as that discussed in Section IV.

As illustrated in Fig. 4(a), the ego vehicle is following the leading vehicle (identified as the IOV) to approach the T-junction for lane merging. Meanwhile, another obstacle vehicle approaches that is invisible to the ego vehicle, so its motion intention is quite ambiguous because no observation is available. Before the IOV and the obstacle vehicle begin to interact, the ego vehicle reasonably maintains a smooth speed to follow the IOV while deactivating the CPoD to save the "expensive" communication. This balance breaks down when the IOV decelerates in order to give way to the obstacle vehicle. Right after the IOV's motion intention is updated, the ego vehicle actively enables the CPoD to extend the sensing range and starts to infer the obstacle vehicle's motion intention, as shown in Fig. 4(b). Given the inferred situation evolution, especially the behavioral correlation between the IOV and the obstacle vehicle, the ego vehicle decides to decelerate well before the obstacle clearance becomes critical; this is thanks to the cooperative perception, which makes the vehicle behavior dependency analysis achievable. After the obstacle vehicle clears, the IOV starts to move forward for lane merging, and the ego vehicle also decides to speed up while keeping the CPoD deactivated (see Fig. 4(c)).

Figure 5: Situation evolution analysis: (a) shows the obstacle clearance and (b) represents the acceleration command along with the speeds of both IOV and ego vehicle. (c) depicts the CPoD decision with the IOV’s Normal belief, and (d) illustrates the obstacle vehicle’s motion intention belief.

A detailed illustration of the situation evolution can be found in Fig. 5, where Fig. 5(a) depicts the obstacle clearance. The acceleration decision, along with the speeds of both the ego vehicle and the IOV, is shown in Fig. 5(b), from which we can see that the acceleration decision is quite responsive to the obstacle clearance and the driving efficiency. Moreover, the CPoD decision and the IOV's motion intention are shown in Fig. 5(c), where only the Normal belief is plotted because it is directly related to the CPoD decision. An interesting observation is that the CPoD decision depends highly on the IOV's motion intention, which is exactly the expected behavior. To demonstrate the CPoD's impact on the obstacle vehicle, we also plot the obstacle vehicle's motion intention belief in Fig. 5(d): the belief is quickly updated when the CPoD is activated, and otherwise the motion intention stays ambiguous.

Acknowledging that the proposed approach is probabilistic in nature, we conducted 100 evaluation trials for this T-junction scenario. To investigate the impact of cooperative perception on decision making, we also implemented a variant with the CPoD constantly deactivated, named CPWO. Moreover, to address the concern that introducing the CPoD might make the decisions sub-optimal compared to the case where cooperative perception is always available, another variant, CPW, with the CPoD activated all the time, is implemented as well.

Figure 6: Navigation performance comparison w.r.t. BOC and MOC: The evaluation results are depicted in (a) and the corresponding averaged values are shown in (b).

To compare the decision making performance, two evaluation metrics are proposed: the Braking Obstacle Clearance (BOC) and the Minimal Obstacle Clearance (MOC). More specifically, when the IOV is stopping for the oncoming obstacle vehicle, the BOC is the distance between the ego vehicle and the IOV at the moment the ego vehicle decides to decelerate, while the MOC measures the minimal distance between the ego vehicle and the IOV during the whole navigation process. The results are depicted in Fig. 6, where the BOC and MOC are plotted in two separate dimensions in Fig. 6(a) and the corresponding mean values are shown in Fig. 6(b). By comparing the navigation performance of CPoD and CPWO, we can safely conclude that the inclusion of cooperative perception improves driving safety, because the ego vehicle is able to react earlier and maintain a larger safety gap with the leading vehicle. Moreover, the CPoD achieves almost the same performance as CPW, which suggests that modeling the CPoD inside the POMDP can properly identify the situations where cooperative perception is necessary, while the communication constraints are alleviated.
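The two metrics can be extracted from a per-trial navigation log as sketched below. The log format (a time-ordered list of ego-IOV distance and first-Decelerate flag pairs) is a hypothetical simplification for illustration.

```python
def compute_metrics(log):
    """Compute (BOC, MOC) from a navigation log.

    `log` is a hypothetical time-ordered list of tuples
    (ego_iov_distance, decelerate_commanded), recorded while the IOV
    is stopping for the obstacle vehicle. BOC is the ego-IOV distance
    at the first Decelerate command (None if never commanded); MOC is
    the minimal ego-IOV distance over the whole run.
    """
    boc = next((d for d, decel in log if decel), None)
    moc = min(d for d, _ in log)
    return boc, moc
```

Larger BOC means the ego vehicle reacted earlier; larger MOC means a larger safety gap was maintained, which is how the CPoD/CPW/CPWO comparison in Fig. 6 is read.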

Figure 7: Navigation process of situation-aware decision making with CPoD on a single-lane road (top-down view).

Given the promising results achieved for lane merging at the T-junction, we extend the evaluation to a single-lane road where the ego vehicle follows a leading vehicle that blocks its sight-view, as in Fig. 7(a). Similarly, the obstacle vehicle is not visible to the ego vehicle, such that the ego vehicle needs to judiciously activate the CPoD and choose the optimal acceleration command to drive safely and efficiently. The overall navigation result is demonstrated in Fig. 7, which shows similar performance to that of the T-junction scenario. Fig. 7(a) shows the evaluation scenario, where the front-most obstacle vehicle is invisible to the ego vehicle. When the front-most obstacle vehicle suddenly stops and the leading vehicle starts to decelerate as well, the CPoD is activated accordingly, as in Fig. 7(b). Thereafter, the ego vehicle safely decelerates while keeping the CPoD enabled, as in Fig. 7(c).

VII Conclusion

In this paper, we introduced the problem of situation-aware autonomous driving decision making with cooperative perception. The extended perception range contributed by cooperative perception is properly employed to address the implicit dependencies among the vehicles. Meanwhile, we acknowledge the limitations of wireless communication and propose a CPoD scheme. The situation-aware decision making problem, together with the CPoD, is unified in a POMDP model and solved in an online manner. The extensive evaluations show that the proposed algorithm can properly improve the driving performance while keeping the perception sharing on demand. Future work includes improving the identification of the IOV and extending the simulation evaluations to real-world experiments.

References

  • [1] S. Sivaraman, B. Morris, and M. Trivedi, “Observing on-road vehicle behavior: Issues, approaches, and perspectives,” in Intelligent Transportation Systems-(ITSC), 2013 16th International IEEE Conference on.   IEEE, 2013, pp. 1772–1777.
  • [2] T. Bandyopadhyay, C. Z. Jie, D. Hsu, M. H. Ang Jr, D. Rus, and E. Frazzoli, “Intention-aware pedestrian avoidance,” in Experimental Robotics.   Springer International Publishing, 2013, pp. 963–977.
  • [3] H. Li, “Cooperative perception: Application in the context of outdoor intelligent vehicle systems,” Ph.D. dissertation, Paris, ENMP, 2012.
  • [4] S.-W. Kim, B. Qin, Z. J. Chong, X. Shen, W. Liu, M. Ang, E. Frazzoli, and D. Rus, “Multivehicle cooperative driving using cooperative perception: Design and experimental validation,” IEEE Transactions on Intelligent Transportation Systems, 2014.
  • [5] W. Liu, S.-W. Kim, and M. H. Ang, “Situation-aware decision making for autonomous driving on urban road using online POMDP,” in Intelligent Vehicles (IV), 2015 IEEE Symposium on.   IEEE, 2015.
  • [6] W. Liu, S. Kim, Z. Chong, X. Shen, and M. Ang, “Motion planning using cooperative perception on urban road,” in Robotics, Automation and Mechatronics (RAM), 2013 6th IEEE Conference on.   IEEE, 2013, pp. 130–137.
  • [7] W. Liu, S.-W. Kim, K. Marczuk, and M. H. Ang, “Vehicle motion intention reasoning using cooperative perception on urban road,” in Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on.   IEEE, 2014, pp. 424–430.
  • [8] M. T. Spaan and P. U. Lima, “A decision-theoretic approach to dynamic sensor selection in camera networks.” in ICAPS, 2009.
  • [9] M. T. Spaan, “Cooperative active perception using POMDPs,” in AAAI 2008 workshop on advancements in POMDP solvers, 2008.
  • [10] M. Otte and N. Correll, “Any-com multi-robot path-planning with dynamic teams: Multi-robot coordination under communication constraints,” in Experimental Robotics.   Springer, 2014, pp. 743–757.
  • [11] D. Ferguson, T. M. Howard, and M. Likhachev, “Motion planning in urban environments,” Journal of Field Robotics, vol. 25, no. 11-12, pp. 939–960, 2008.
  • [12] L. Fletcher, S. Teller, E. Olson, D. Moore, Y. Kuwata, J. How, J. Leonard, I. Miller, M. Campbell, D. Huttenlocher, et al., “The mit–cornell collision and why it happened,” Journal of Field Robotics, vol. 25, no. 10, pp. 775–807, 2008.
  • [13] S. Ulbrich and M. Maurer, “Probabilistic online POMDP decision making for lane changes in fully automated driving,” in Intelligent Transportation Systems, 2013, pp. 2063–2070.
  • [14] H. Bai, S. Cai, N. Ye, D. Hsu, and W. S. Lee, “Intention-aware online POMDP planning for autonomous driving in a crowd,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on.   IEEE, 2015, pp. 454–460.
  • [15] C. Shaojun, “Online POMDP planning for vehicle navigation in densely populated area,” 2014.
  • [16] X. Shen, S. Pendleton, and M. H. Ang Jr., “Scalable cooperative localization with minimal sensor configuration,” in International Symposium on Distributed Autonomous Robotics System, 2014.
  • [17] S. M. LaValle, Planning algorithms.   Cambridge university press, 2006.
  • [18] W. Liu, S.-W. Kim, and M. H. Ang, “Probabilistic road context inference for autonomous vehicles,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on.   IEEE, 2015.
  • [19] A. Somani, N. Ye, D. Hsu, and W. S. Lee, “DESPOT: Online POMDP planning with regularization,” in Advances In Neural Information Processing Systems, 2013, pp. 1772–1780.
  • [20] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “Ros: an open-source robot operating system,” in ICRA Workshop on Open Source Software, 2009.
  • [21] Z. Chong, B. Qin, T. Bandyopadhyay, T. Wongpiromsarn, B. Rebsamen, P. Dai, E. Rankin, and M. H. Ang Jr, “Autonomy for mobility on demand,” Intelligent Autonomous Systems 12, pp. 671–682, 2013.