
Received May 14, 2019, accepted June 6, 2019, date of publication June 10, 2019, date of current version July 18, 2019. DOI: 10.1109/ACCESS.2019.2922113

The authors gratefully acknowledge the financial support of the National Natural Science Foundation of China (Grant No. 51605054), the Key Technical Innovation Projects of Chongqing Artificial Intelligent Technology (Grant No. cstc2017rgzn-zdyfX0039), the Chongqing Social Science Planning Project (No. 2018QNJJ16), and the Fundamental Research Funds for the Central Universities (No. 2019CDXYQC003).

Corresponding author: Ke Wang (e-mail: [email protected])

A Design of Cooperative Overtaking Based on Complex Lane Detection and Collision Risk Estimation

JUNLAN CHEN (1), KE WANG (2), HUANHUAN BAO (3), TAO CHEN (3)
(1) School of Economics & Management, Chongqing Normal University, Chongqing 401331, China
(2) State Key Laboratory of Mechanical Transmission, Chongqing University, Chongqing 400044, China
(3) China Automotive Engineering Research Institute Company, Ltd., Chongqing 401122, China
Abstract

Cooperative overtaking is believed to improve road safety and traffic efficiency through real-time information exchange between traffic participants, including road infrastructures and nearby vehicles. In this paper, we focus on the critical issues of modeling, computation, and analysis of cooperative overtaking, which plays a key role in road overtaking. In detail, to extend the awareness of the surrounding environment, the lane markings in front of the ego vehicle are detected and modeled with Bezier curves using an onboard camera, while the positions of nearby vehicles are obtained through a vehicle-to-vehicle communication scheme that ensures localization accuracy. Then, a Gaussian-based conflict potential field is proposed to guarantee overtaking safety, which can quantitatively estimate the oncoming collision danger. To support the proposed method, extensive experiments were conducted on a human-in-the-loop simulation platform. The results demonstrate that the proposed method achieves better performance, especially in unpredictable natural road circumstances.

Index Terms:
collision probability, cooperative overtaking, intelligent vehicle, lane detection, vehicle safety

I Introduction

When driving on the road, safety is always the major concern; a huge number of injuries and deaths reveal the global crisis on this topic [1]. Around the world, approximately 1.35 million people die as a result of road traffic crashes each year [2] and 50 million are injured. Vehicle overtaking is one of the important causes of these casualties [3]. Most of the related accidents are caused by an incomprehensive understanding of the nearby environment [4] and impulsive lane changing behaviors in the traffic stream [5].

Overtaking is generally affected by environment understanding and vehicular interactions. Hence, in some intelligent vehicle frameworks, sophisticated environment perception modules and vehicle-to-vehicle (V2V) wireless communication are used to enable automated cooperation among different vehicles on the road. With this extended situational awareness, cooperative overtaking gives drivers and virtual drivers a longer perception range, even beyond the field of view, which enables better driving decisions for overtaking. Despite these advantages, however, there have been comparatively few applications in the cooperative driving field. Right now, most works focus on the communication scheme named VANETs (Vehicular Ad-hoc Networks) [6], discussing the fundamental problems of the communication network over wireless links; they address only the low-level aspects of ad-hoc networks and standards. At the application level, the practical use of V2V systems in Advanced Driver Assistance Systems (ADAS) and Intelligent Vehicles (IV) is rarely addressed. How to guarantee a high safety benefit for the overtaking procedure is still a critical issue and a challenging task for intelligent vehicles.

In this paper, we exploit the cooperative overtaking problem by integrating a collision probability model into the driving decision procedure. We show that, by fusing information from vision sensors and V2V sensors, the risk estimation of the overtaking procedure becomes more robust and efficient.

Compared to the state-of-the-art works, the three main contributions of this paper are as follows:

(1) A novel BDI-based multi-vehicle collaboration framework is proposed, which uses different kinds of heterogeneous sensors to extend the awareness of the surrounding environment;

(2) Both a Bezier curve model and a hybrid Gaussian anisotropic filter are adopted in the lane marking detection and modeling algorithm to increase the accuracy and robustness of estimating the relative position of the ego vehicle with respect to the forward lanes;

(3) A Gaussian-based probability density function and conflict potential field are constructed to describe the uncertainty of the driving risk, which can quantitatively estimate the collision probability between nearby vehicles.

The remainder of this paper is organized as follows: the related works are given in Section 2, and the system architecture of the proposed approach is presented in Section 3. Then, the Bezier based lane detection and modeling method is given in Section 4. The collision probability prediction method is discussed in Section 5. At last, the proposed method is validated in Section 6, and the conclusions are given in Section 7.

II Related Works

Overtaking is one of the most complex maneuvers for intelligent vehicles, both in manual driving mode and automated driving mode [7]. In general, overtaking is composed of several consecutive maneuvers: a lane change, followed by traveling along a planned path in the adjacent lane parallel to the overtaken vehicle, and then a change back to the original lane. During the process, several kinds of sensor-based environment perception modules and longitudinal-lateral motion control modules are comprehensively involved. The two modules are not only associated but also interactive: environment understanding is the prerequisite, and the following control is the purpose [8]; both of them are discussed in this paper.

How to use environment information to make the overtaking procedure safe and smooth is a key problem [9]. To this end, camera-based driver assistance systems have been equipped on some intelligent vehicles so that the front lanes can be observed by the ego vehicle automatically [10]. However, because of regular damage, fracture, and pollution, road marking lines are sometimes not clear, so a reliable detection result cannot be effectively guaranteed, not to mention the noise, light unevenness, water, and stains on real roads. Parabolas [11], spline curves [12], and hyperbolas [13] have been chosen by researchers to model the road, and the lane model parameters are then estimated by means of maximum posterior probability estimation [14]. However, due to their time-varying nature, complexity, nonlinearity, and uncertainty, it is normally hard to obtain accurate mathematical lane models [15]. To increase the calculation speed, the RANSAC (RANdom SAmple Consensus) algorithm has been used in many studies to reject most of the outliers in the feature matching step [16].

Meanwhile, in order to estimate the relative positions of nearby vehicles, radar, Lidar, camera, and wireless V2V communication schemes have been explored by researchers [17]. As active sensors, thanks to their inherent characteristics, radar and Lidar can measure relative distance and velocity directly; however, their performance drops dramatically in fog, haze, and rain [18]. Passive sensors, such as cameras, can only be used in the daytime and lose their capability at night [19]. Meanwhile, V2V sensors are also comprehensively used in this area, including DSRC (Dedicated Short Range Communications) and the IEEE 802.11p protocol [20]. As a wireless short-to-medium-range communication method, DSRC permits high-rate data transmission, while IEEE 802.11p/Wireless Access promulgates a suite of physical and medium access control layer specifications to enable communications in vehicular networks, which enables automated cooperation between different vehicles and road infrastructures [21]. In a unified on-road vehicular network, each connected vehicle periodically transmits and receives position, perception, and safety-related messages by switching on a common control channel (CCH). Using this information, the vehicle can tune into one of the available service channels (SCHs) to exchange all the driving-related information [22]. Compared to radar, Lidar, and camera, the V2V communication scheme provides vehicle position information over a much longer range.

In order to plan an overtaking maneuver safely, the ego vehicle uses environment perception data and subject vehicle state data to check the feasibility of the maneuver and to design a collision-free and safe local reference trajectory [23]. Local trajectory planning can be defined as the real-time planning of the vehicle's transition from one feasible state to the next, satisfying the vehicle's kinematic limits based on vehicle dynamics and constrained by occupant comfort, lane boundaries, and traffic rules, while, at the same time, avoiding obstacles [24]. Four well-known techniques, namely potential fields, cell decomposition, interdisciplinary methods, and optimal control, are normally employed to construct the local trajectory planner [25].

Although various lane marking detection algorithms and plentiful localization strategies have been proposed and applied in the intelligent vehicle area, combinations of the two, or their joint application to overtaking risk estimation, are really few [26]. Among the previous studies, the overtaking assistance modules are usually based on the assumption that the vehicles are going straight at constant speed, which restricts their performance [27]. Besides, some researchers predefined a safety cell around each vehicle, with a warning aroused by the disturbance of nearby vehicles [28]; by adopting the safety cell method, the traffic flow rate is reduced seriously. From these works, we can find that the collision risk in complex traffic is still tough to estimate, which needs to be further discussed.

Figure 1: Coordinates of the rigid body motion model with the forward facing monocular camera and the onboard V2V sensor.

III Assumption, Coordinates, and Collaboration Framework

It is conspicuous that the proposed solution should avoid computationally demanding strategies and aim at a method able to operate in real time with accuracy. Therefore, the emphasis of this paper is placed on developing an efficient collision risk estimation scheme with the help of front lane detection and nearby vehicle positions.

III-A Assumption and Coordinates Definition

Considering a sensor system including a forward facing monocular camera and an onboard V2V sensor, we assume that the ego-motion model is a rigid body motion model, that the monocular camera can be modeled by the pinhole camera model, and that vehicles travel on structured roads, including highways and city roads. Then, using Zhang's method [29], the intrinsic and extrinsic parameters can be easily calibrated in advance. With the calibration matrix, the relative pose transformation matrix between the two sensors can be obtained. Hence, we can use a single coordinate system for both the V2V sensor and the forward facing monocular camera. The coordinate systems are defined in the following (see Figure 1 for illustration).

There are two coordinate systems used in our system, the V2V based global coordinate system $\{C_V\}$ and the forward facing monocular camera coordinate system $\{C_C\}$, which are defined as follows: (1) the $X_V$-$O_V$-$Y_V$ plane of $\{C_V\}$ is parallel to the horizontal plane; the $Z_V$-axis points opposite to gravity, the $Y_V$-axis points forward of the vehicle platform, and the $X_V$-axis is determined by the right-hand rule. (2) The camera coordinate system $\{C_C\}$ originates at the optical center of the camera; the $X_C$-axis points to the left, the $Y_C$-axis points downward, and the $Z_C$-axis points forward, coinciding with the camera principal axis.

III-B BDI based V2V collaboration framework

The communication mechanism satisfies the IEEE 802.11p standard, which is used as the information interaction channel. In this way, the effective communication distance is 200 meters, and the information is packaged in the basic safety message (BSM), including shared information and behavioral collaboration information. In the context of the wireless communication based vehicular network, the belief-desire-intention (BDI) framework is adopted to construct a hierarchical cooperation interaction model, which is shown in Figure 2.

In the framework, the belief module is mainly used to realize the environment perception task, especially road environment perception using the onboard camera and V2V sensors. With the belief module, the system can monitor the nearby environment and vehicle states in real time, including lane detection and the relative distances between nearby vehicles. The desire module is designed for environment risk evaluation and estimation, such as collision estimation, which is the core function of the vehicle cooperation system. Then, the intention module is used to achieve the task of behavioral cooperation, such as path planning, cruise control, and overtaking collaboration. The details are shown in Figure 3.

Figure 2: BDI based V2V collaboration framework.

Figure 3: BDI based V2V collaboration framework.

IV Bezier based lane detection and modeling

Lane detection and modeling is the key component of the belief module, which gives us a representation of the nearby environment. However, there is a lot of interference in the lane detection task, such as light unevenness, shadows of vehicles and buildings, water and stains, wear and tear, and belts on the roads. These obstructions yield great difficulties in understanding lane markings [8]. Here, for the purpose of detecting and modeling the front lanes, both a Bezier curve model and a hybrid Gaussian anisotropic filter are adopted to increase the accuracy and robustness of the proposed method. With the Bezier curve, different grades of roads can be flexibly modeled with the corresponding number of control points. Meanwhile, considering the anisotropy of road image preprocessing, along the road direction we need a smoothing filter to eliminate image defects such as breakage and contamination, while in the perpendicular direction we need an edge enhancement filter to strengthen the road features for the subsequent detection modules.

IV-A Image preprocessing and filtering

In order to improve robustness, two layers of ROI (regions of interest) are set to avoid noise interference from non-road areas and to improve the algorithm's real-time performance, as shown in Figure 4. In the high-level layer, a static ROI is set in the original image:

R=\left(k\times\text{ImW},\ l\times\text{ImH},\ vp_x+\Delta x,\ vp_y+\Delta y\right) \qquad (1)

where ImW and ImH are the width and height of the image, $k$ and $l$ are proportional adjustment coefficients, ($vp_x+\Delta x$, $vp_y+\Delta y$) are the coordinates of the center of the region of interest, and $\Delta x$, $\Delta y$ are the deviation adjustment coefficients.

When it comes to the low level, a dynamic ROI is set on the bird's-eye view image according to the current vehicle status and driving intention, as shown in Figure 4. For the purpose of extending the search area of nearby lanes, as soon as a lane changing behavior is detected from the steering signal, the width W of the dynamic ROI is increased and the deviation coefficient $a$ is decreased. Meanwhile, the height H of the dynamic ROI is determined by the current vehicle speed: when the speed is high, the speed coefficient $b$ and the regional height H are increased dynamically to enlarge the perception area in front of the ego vehicle.
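To make the two-layer ROI logic concrete, the following minimal Python sketch adjusts the dynamic ROI from the steering signal and vehicle speed. The function name, the widening factor, and the coefficient values are illustrative assumptions, not the implementation used in this paper.

```python
def dynamic_roi(base_w, base_h, speed_mps, lane_change, a=0.5, b=0.04):
    """Return (width W, height H, deviation a) of the dynamic ROI (assumed rule)."""
    w = base_w
    if lane_change:               # steering signal indicates a lane change:
        w = int(base_w * 1.5)     # widen the search area for nearby lanes
        a *= 0.5                  # and decrease the deviation coefficient
    h = int(base_h * (1.0 + b * speed_mps))   # faster -> look farther ahead
    return w, h, a

print(dynamic_roi(200, 120, speed_mps=25.0, lane_change=True))
```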

In the above image transformation step, under the pinhole camera model assumption, the transformation matrix from the vehicle coordinate system to the camera coordinate system can be calculated by the following equation:

T=\left[\begin{matrix}-\frac{h}{f_u}c_2 & \frac{h}{f_v}s_1 s_2 & \frac{h}{f_u}c_u c_2-\frac{h}{f_v}c_v s_1 s_2-h c_1 s_2 & 0\\ \frac{h}{f_u}s_2 & \frac{h}{f_v}s_1 c_1 & -\frac{h}{f_u}c_u c_2-\frac{h}{f_v}c_v s_1 c_2-h c_1 c_2 & 0\\ 0 & \frac{h}{f_v}c_1 & -\frac{h}{f_v}c_v c_1+h s_1 & 0\\ 0 & -\frac{1}{f_v}c_1 & \frac{1}{f_v}c_v c_1-s_1 & 0\end{matrix}\right] \qquad (2)

where $T$ is the transformation matrix from the vehicle coordinate system to the camera coordinate system, $f_u$ and $f_v$ are the horizontal and vertical focal lengths of the camera, $c_u$ and $c_v$ are the horizontal and vertical positions of the camera optical center, and $c_1=\cos(\theta_{pitch})$, $c_2=\cos(\theta_{yaw})$, $s_1=\sin(\theta_{pitch})$, $s_2=\sin(\theta_{yaw})$, where $\theta_{pitch}$ and $\theta_{yaw}$ are defined in Figure 1. Using the transformation matrix $T$, the inverse perspective transformation image can be easily obtained.
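As a sanity check of equation (2), the matrix can be transcribed directly into numpy as below. The camera parameter values (focal lengths, optical center, height, pitch, yaw) are illustrative assumptions, and the homogeneous-coordinate convention for mapping an image pixel to the ground plane is our reading of the equation rather than the authors' stated usage.

```python
import numpy as np

def ipm_matrix(fu, fv, cu, cv, h, pitch, yaw):
    # Direct transcription of the entries of equation (2)
    c1, s1 = np.cos(pitch), np.sin(pitch)
    c2, s2 = np.cos(yaw), np.sin(yaw)
    return np.array([
        [-h/fu*c2,  h/fv*s1*s2,  h/fu*cu*c2 - h/fv*cv*s1*s2 - h*c1*s2, 0.0],
        [ h/fu*s2,  h/fv*s1*c1, -h/fu*cu*c2 - h/fv*cv*s1*c2 - h*c1*c2, 0.0],
        [ 0.0,      h/fv*c1,    -h/fv*cv*c1 + h*s1,                    0.0],
        [ 0.0,     -1/fv*c1,     1/fv*cv*c1 - s1,                      0.0],
    ])

T = ipm_matrix(fu=800.0, fv=800.0, cu=320.0, cv=240.0,
               h=1.4, pitch=np.deg2rad(2.0), yaw=0.0)
p = T @ np.array([350.0, 300.0, 1.0, 1.0])  # homogeneous image point (u, v, 1, 1)
print(p[:2] / p[3])                          # assumed ground-plane coordinates
```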

A hybrid Gaussian anisotropic filter is also used to improve the robustness of the proposed method in the image preprocessing step. Taking the Gaussian function as the scale function, the hybrid Gaussian anisotropic filter is constructed from a low-pass smoothing Gaussian filter and a second-order Mexican hat wavelet high-pass filter, as in the following equation:

G^{\theta}=G_u^{0^{\circ}}\cos\theta+G_v^{90^{\circ}}\sin\theta \qquad (3)

where

G_v^{90^{\circ}}=\exp\left(-\frac{v^2}{2\sigma_v^2}\right) \qquad (4)

G_u^{0^{\circ}}=\frac{1}{\sigma_u^2}\left(1-\frac{u^2}{\sigma_u^2}\right)\exp\left(-\frac{u^2}{2\sigma_u^2}\right) \qquad (5)

In the above equations, equation (4) is a Gaussian low-pass filter and equation (5) is a second-order Mexican hat wavelet high-pass filter. $\theta$ is the filter direction angle, $\sigma_u^2$ depends on the width of the front lane, and $\sigma_v^2$ depends on the length of road in the dynamic ROI.
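A minimal sketch of how the hybrid kernel of equations (3)-(5) could be built and steered, assuming the 2D kernel is the outer product of the two 1D profiles; the kernel size and the sigma values are placeholders.

```python
import numpy as np
from scipy.ndimage import convolve

def hybrid_kernel(sigma_u, sigma_v, half=10):
    u = np.arange(-half, half + 1, dtype=float)
    v = np.arange(-half, half + 1, dtype=float)
    g_u = (1/sigma_u**2) * (1 - u**2/sigma_u**2) * np.exp(-u**2/(2*sigma_u**2))  # eq. (5)
    g_v = np.exp(-v**2 / (2 * sigma_v**2))                                       # eq. (4)
    return np.outer(g_v, g_u)   # smooth along v (road), enhance edges along u

def steered_response(img, theta, sigma_u, sigma_v):
    k = hybrid_kernel(sigma_u, sigma_v)
    # eq. (3): blend the axis-aligned responses by the direction angle theta
    return convolve(img, k) * np.cos(theta) + convolve(img, k.T) * np.sin(theta)

img = np.random.rand(120, 200)   # stand-in for the bird's-eye ROI
out = steered_response(img, theta=np.deg2rad(15), sigma_u=4.0, sigma_v=8.0)
```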

Figure 4: Image preprocessing and the setting of the two-layer ROI.

IV-B Bezier based uncertain deformation template

Commonly, driving roads can be modeled by different curve types, such as a straight line model, a quadratic curve model, or a higher order curve model [30]. The simple line model is widely used on highways, but it is hard to fit to complex roads. The quadratic curve has a unidirectional boundary curvature, resulting in poor model adaptability. A higher order curve model can successfully describe complex roads, but it has an unbearable computational complexity. Therefore, in this paper, the Bezier spline curve is adopted to construct an Uncertain Deformation Template (UDT) for complex lane detection, which can automatically choose the complexity of the model type.

The Bezier curve is constructed from the Bernstein basis functions, and the characteristics of the curve are determined only by the positions of its control points [31]. The definition of the Bezier curve is given by the following equation:

P(t)=\sum_{i=0}^{n}P_i\frac{n!}{i!\left(n-i\right)!}\left(1-t\right)^{n-i}t^i \qquad (0\le t\le 1) \qquad (6)

From the definition, we can easily find that Bezier equations of different degrees are obtained with different values of $n$. In detail, the linear Bezier curve $P(t)=(1-t)P_0+tP_1$ can be used to represent the straight road type of highways; the quadratic Bezier curve $P(t)=(1-t)^2P_0+2t(1-t)P_1+t^2P_2$, constructed from three control points, can be used to model curved roads; and the cubic Bezier curve $P(t)=(1-t)^3P_0+3t(1-t)^2P_1+3t^2(1-t)P_2+t^3P_3$ can be used to build an S-shaped path.
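All three templates can be produced by one Bernstein-basis evaluator of equation (6), as in the sketch below; the control point coordinates are illustrative.

```python
import numpy as np
from math import comb

def bezier(points, t):
    """Evaluate equation (6) for control points of shape (n+1, 2) at t in [0, 1]."""
    n = len(points) - 1
    t = np.asarray(t, dtype=float).reshape(-1, 1)
    basis = [comb(n, i) * (1 - t)**(n - i) * t**i for i in range(n + 1)]
    return sum(b * p for b, p in zip(basis, np.asarray(points, dtype=float)))

t = np.linspace(0.0, 1.0, 50)
straight = bezier([(0, 0), (0, 80)], t)                      # linear: highway
curve = bezier([(0, 0), (10, 40), (30, 80)], t)              # quadratic: bend
s_shape = bezier([(0, 0), (15, 30), (-15, 60), (0, 90)], t)  # cubic: S-shape
```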

Thereafter, using Bezier spline curves of different degrees, a UDT can be constructed, as shown in equation (7). The lane detection question is then transformed into the question of determining the parameters of the UDT.

L=\left[n, P_n, c, s\right] \qquad (2\le n\le 4) \qquad (7)

where $n$ is the order of the Bezier curve, $P_n$ are the curve control points determined by the order, $c$ is the color of the lane marking, and $s$ is the credibility evaluation coefficient of the current template, normalized to the interval [0, 1].

Figure 5: Data space setting of possible pixels belonging to lane markings.

IV-C Parameter solving of the UDT as a hypothesis and testing problem

To solve the hypothesis and testing problem, some previous works consider that every point in the image has a chance of belonging to the lane marking, so they make all image points participate in the calculation of the posterior probability density function, which consumes a lot of computing resources [32]. To reduce the calculation, a data set of possible lane marking pixels is built in three steps, as shown in Figure 5. First, the dynamic ROI area is segmented from the original image for further processing, including hybrid Gaussian anisotropic filtering. Then, the 2D pixels in the processed ROI are compressed toward the image top by summing up the intensities of the pixels in each image column. Further, in the resulting 1D intensity profile, a reasonable threshold is set to predetermine the possible pixels belonging to the lane marking. In this way, the data space of possible lane marking pixels is obtained, as shown in the following equation:

\Omega=\left\{P_1, P_2, \cdots, P_M\right\} \qquad (8)

where $\Omega$ is the sample space, $M$ is the number of samples, and $P_i$ is a possible lane marking pixel. This candidate picking method effectively avoids the negative impact of many road noises, such as light unevenness, road wear and tear, etc.
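The candidate picking can be sketched as below: the filtered ROI is collapsed into a 1D column-intensity profile and thresholded. The relative-threshold rule is an assumption; the paper only states that a reasonable threshold is set.

```python
import numpy as np

def candidate_columns(roi, rel_threshold=0.6):
    profile = roi.sum(axis=0)                   # compress the 2-D ROI to the image top
    threshold = rel_threshold * profile.max()   # assumed relative threshold
    return np.flatnonzero(profile > threshold)  # columns likely crossing a marking

roi = np.random.rand(120, 200)   # stand-in for the filtered bird's-eye ROI
omega = candidate_columns(roi)   # indices feeding the sample space of eq. (8)
```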

In the hypothesis step, the Random Sample Consensus (RANSAC) algorithm is adopted to obtain the parameter hypotheses of the current UDT. Considering the driving environment, most roads are straight, followed by curves, and complicated road types are few. So, the template parameters are estimated by increasing the template order parameter $n$ from 2 to 4, which means the parameter estimation process follows a simple-to-complex logic. In detail, first, N samples are randomly selected from the sample space to form a sample group, and this operation is repeated Q times to obtain Q sample groups. By fitting the sample points in each group, Q fitting curves are obtained. Then, the reliability of the Q fitting results is verified separately. If any of the Q fitting curves passes the consistency test, the search stops, and the curve with the highest credibility is taken as the road fitting curve; otherwise, N is increased to N+1 and the above operation is repeated. The value of N directly determines the level of the deformation template: when N is 2, the samples are fitted with a line; when N is 3, a parabola is used; and a least squares cubic fit is used when N is 4.
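A condensed sketch of this simple-to-complex search, with polynomial fits standing in for the line/parabola/cubic templates; the sampling budget Q, the passing threshold, and the credibility callback are assumptions for illustration.

```python
import numpy as np

def fit_template(samples, credibility, threshold=0.8, Q=50, seed=0):
    """samples: (M, 2) array of candidate pixels (x, y); returns (N, coeffs, score)."""
    rng = np.random.default_rng(seed)
    xs, ys = samples[:, 0], samples[:, 1]
    for N in (2, 3, 4):                              # template order, simple first
        best, best_score = None, -np.inf
        for _ in range(Q):                           # Q random minimal sample groups
            idx = rng.choice(len(xs), size=N, replace=False)
            coeffs = np.polyfit(ys[idx], xs[idx], deg=N - 1)  # line/parabola/cubic
            score = credibility(coeffs, samples)
            if score > best_score:
                best, best_score = coeffs, score
        if best_score >= threshold:                  # consistency test passed
            return N, best, best_score
    return None                                      # no credible template found
```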

During the process, the mathematical expression of the Bezier curve can be written in matrix form as shown below:

\left[\begin{matrix}Q_0\\ \vdots\\ Q_n\end{matrix}\right]_{(n+1)\times 1}=\left[\begin{matrix}t_0^n&\cdots&1\\ &\cdots&\\ t_n^n&\cdots&1\end{matrix}\right]_{(n+1)\times(n+1)}M_{(n+1)\times(n+1)}\left[\begin{matrix}P_0\\ \vdots\\ P_n\end{matrix}\right]_{(n+1)\times 1} \qquad (9)

This can be abbreviated as:

Q_n=TMP_n \qquad (10)

The solution to this equation is

P_n=(TM)^{-1}Q_n \qquad (11)
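For the cubic case, equation (11) can be checked numerically with the standard Bernstein coefficient matrix, as sketched below; the sample parameters $t_i$ are an assumed uniform grid.

```python
import numpy as np

M3 = np.array([[-1,  3, -3, 1],     # power-basis coefficients of the
               [ 3, -6,  3, 0],     # cubic Bernstein polynomials
               [-3,  3,  0, 0],
               [ 1,  0,  0, 0]], dtype=float)

def control_points(Q, ts=(0.0, 1/3, 2/3, 1.0)):
    T = np.array([[t**3, t**2, t, 1.0] for t in ts])  # rows [t^3, t^2, t, 1]
    return np.linalg.solve(T @ M3, Q)                 # P = (TM)^{-1} Q, eq. (11)

Q = np.array([[0, 0], [8, 30], [2, 60], [10, 90]], dtype=float)  # curve samples
P = control_points(Q)   # recovered Bezier control points
```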

In the credibility testing step, pixel consistency and curve likelihood are used to determine whether the current fitting parameters of the UDT can pass the consistency test. First, the pixel consistency examining module verifies the grayscale weights of the pixels lying on the hypothesized curve. If the pixel consistency coefficient $L(S)$ in the following equation is below the set threshold, the current UDT parameters fail the verification process:

L(S)=c\cdot\sum_{i=0}^{S}val_i \qquad (12)

where $c$ is the color compensation factor: when the road marking is white, $c=1$, and when it is yellow, $c=1.5$; $S$ is the number of positive pixels passing through the fitting curve, and $val_i$ is the grayscale weight of pixel $i$. Meanwhile, the curve likelihood coefficient evaluates the length and bending degree of the fitting curve of the current UDT, which should not deviate from the normal range. The curve likelihood coefficient $Q(S)$ can be calculated using the following equation:

Q(S)=k_1\frac{l}{v}+\frac{k_2}{N-2}\sum_{i=1}^{N-2}\cos(\pi-\theta_i) \qquad (13)

where $k_1$ is the length coefficient, $l$ is the distance between the two furthest sample points on the curve, $v$ is the image height, $k_2$ is the angle coefficient, $N$ is the number of sample points, and $\theta_i$ is the angle between adjacent sample points.

Then, using both the pixel consistency and curve likelihood coefficients, we can construct a reliability evaluation index $s$ of the UDT, which is shown in the following equation:

s=k_L\times L(S)+k_Q\times Q(S) \qquad (14)
Figure 6: A brief illustration of the overtaking procedure.

Here, $k_L$ and $k_Q$ are the weight proportionality coefficients of the pixel consistency coefficient $L(S)$ and the curve likelihood coefficient $Q(S)$. In this way, we can estimate the parameters of the UDT in real time using the RANSAC based hypothesis and testing method.
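A small sketch of the scoring in equations (13)-(14); the weights $k_1$, $k_2$, $k_L$, $k_Q$ and the sample geometry below are placeholder assumptions, and $L(S)$ is passed in as a precomputed pixel score.

```python
import numpy as np

def curve_likelihood(pts, img_height, k1=0.5, k2=0.5):
    """Equation (13) over the fitted curve's sample points (N >= 3)."""
    pts = np.asarray(pts, dtype=float)
    l = np.linalg.norm(pts[-1] - pts[0])       # span between the furthest samples
    segs = np.diff(pts, axis=0)
    cosang = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
              for a, b in zip(segs[:-1], segs[1:])]   # equals cos(pi - theta_i)
    return k1 * l / img_height + k2 * np.mean(cosang)

def credibility(L_S, Q_S, kL=0.5, kQ=0.5):     # equation (14)
    return kL * L_S + kQ * Q_S

pts = [(0, 0), (2, 30), (3, 60), (5, 90)]
s = credibility(L_S=0.7, Q_S=curve_likelihood(pts, img_height=120))
```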

V Gaussian-based conflict probability estimation

With the models of the lanes ahead and the relative positions and velocities of nearby vehicles received from V2V sensors, the local dynamic environment can be built successfully. Using the time to collision (TTC) method, we can determine the proper time for the overtaking behavior decision. It should be noted that the procedure should never be started if the overtaking lane ahead is occupied by another slower vehicle or there is not enough space for the ego vehicle. Based on the constant speed hypothesis, the TTC can be calculated by the following equation:

TTC=\frac{S_{ab}-L_a/2-L_b/2}{v_a-v_b} \qquad (v_a>v_b) \qquad (15)

where $S_{ab}$ is the initial distance between the ego vehicle A and the nearby vehicle B, $L_a$ and $L_b$ are the lengths of vehicles A and B, and $v_a$ and $v_b$ are the current speeds of vehicles A and B.
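Equation (15) as a guarded helper; the numbers in the example call (a 40 m gap, 4.2 m vehicles, 80 km/h vs 55 km/h) are illustrative.

```python
def time_to_collision(s_ab, l_a, l_b, v_a, v_b):
    """TTC of equation (15); defined only while A closes in on B (v_a > v_b)."""
    if v_a <= v_b:
        return float("inf")            # no closing speed, no predicted collision
    return (s_ab - l_a / 2.0 - l_b / 2.0) / (v_a - v_b)

print(time_to_collision(s_ab=40.0, l_a=4.2, l_b=4.2, v_a=80/3.6, v_b=55/3.6))
```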

During the overtaking process, a Gaussian-based conflict potential field is proposed to guarantee overtaking safety, which can be used to quantitatively estimate the oncoming collision danger. A brief illustration of the overtaking procedure is shown in Figure 6.

V-A Conflict Potential Field

During the overtaking process, the uncertainty of the collision risk can be represented by a probability density function, as shown in Figure 7. Conflict potential fields are established for each vehicle, and their distributions are assumed to satisfy a multivariate Gaussian distribution oriented from the reference center outward.

Figure 7: Potential fields at the overtaking stage.

Taking the overtaking vehicle (OV) A as an instance, the potential fields along the major principal axis and the minor principal axis are independent. The probability density function of the potential field of OV A can be denoted by a multivariate Gaussian distribution, as given by the following formula:

N(\mathbf{X}_A|\mathbf{\mu}_A,\mathbf{\Lambda}_A)=\frac{1}{(2\pi)^{D/2}\left|\mathbf{\Lambda}_A\right|^{1/2}}\exp\left(-\frac{1}{2}\Delta_A^2\right) \qquad (16)

where $N(\mathbf{X}_A|\mathbf{\mu}_A,\mathbf{\Lambda}_A)$ is the probability density distribution of the conflict potential field, $\mathbf{X}_A$ is the two-dimensional input variable, $\mathbf{\Lambda}_A$ is the covariance matrix, $|\mathbf{\Lambda}_A|$ is the determinant of $\mathbf{\Lambda}_A$, $D$ is the dimension of the input variables (in this paper $D=2$), $\mathbf{\mu}_A$ is the mean of the two-dimensional Gaussian distribution, and $\Delta_A$ is the Mahalanobis distance from $\mathbf{\mu}_A$ to $\mathbf{X}_A$, calculated by:

\Delta_A^2=(\mathbf{X}_A-\mathbf{\mu}_A)^{\text{T}}\mathbf{\Lambda}_A^{-1}(\mathbf{X}_A-\mathbf{\mu}_A) \qquad (17)

The distributions of the potential field along the major and minor principal axes are independent of each other. Then, the covariance matrix of the potential field can be written as:

\mathbf{\Lambda}_A=\left[\begin{matrix}\sigma_{Ax}^2&0\\ 0&\sigma_{Ay}^2\end{matrix}\right] \qquad (18)

Taking into account the impact of the relative speed, which obviously affects the collision risk and deforms the potential field, the standard deviation $\sigma_{Ax}$ of the covariance matrix is constructed from two parts: a basic value $\sigma_x$ and a compensation value built from the relative longitudinal velocity $\delta_v$. The standard deviation is given by:

\sigma_{Ax}=\sigma_x\pm r_a\delta_v \qquad (19)

where $r_a$ is the gain coefficient of the forward direction variance. Considering the impact of $\sigma_{Ax}$ on the standard deviation $\sigma_{Ay}$ of the lateral covariance, $\sigma_{Ay}$ can be reckoned by equation (20), assuming a linear relationship between $\sigma_{Ax}$ and $\sigma_{Ay}$:

\sigma_{Ay}=\min(r_c\sigma_{Ax},\ \bar{\sigma}_y) \qquad (20)

where $r_c$ is the gain coefficient of the lateral direction variance and $\bar{\sigma}_y$ is the saturation value limiting $\sigma_{Ay}$ to a reasonable range. Equations (19) and (20) imply that a relatively high velocity will not only increase the possibility of collision before passing the overtaken vehicle but also decrease the possibility of collision after passing. Similarly, the conflict potential fields of the overtaken vehicles can be constructed in accordance with the method above.
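Equations (18)-(20) can be collected into one helper that stretches the field along the travel direction by the relative speed; the base sigma, gains, and saturation value below are assumed, and the sign of the compensation term follows the before/after-passing distinction above.

```python
import numpy as np

def field_covariance(delta_v, sigma_x=2.0, ra=0.3, rc=0.4, sigma_y_max=1.8,
                     before_passing=True):
    sign = 1.0 if before_passing else -1.0
    sigma_ax = sigma_x + sign * ra * delta_v        # eq. (19)
    sigma_ay = min(rc * sigma_ax, sigma_y_max)      # eq. (20), saturated laterally
    return np.diag([sigma_ax**2, sigma_ay**2])      # eq. (18)

cov_front = field_covariance(delta_v=6.9, before_passing=True)
cov_rear = field_covariance(delta_v=6.9, before_passing=False)
```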

V-B Estimation of Conflict Probability

Based on the conflict potential fields constructed at the surpassing stage, the conflict probability can be estimated by integrating the probability density of the potential fields over the conflict area, as shown in Figure 6.

In order to simplify the calculation, the two conflict potential fields of the overtaking vehicle A and the leading vehicle B can be unified into the world coordinate system. The transformation matrix from the vehicle coordinate system to the world coordinate system is given by:

R=\left[\begin{matrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{matrix}\right] \qquad (21)

where $R$ is the transformation matrix and $\theta$ is the azimuth angle between the vehicle coordinate system and the world coordinate system. Therefore, the covariance matrices can be transformed from the vehicle coordinates to the world coordinates, as given by:

\mathbf{\Lambda}_A^W=R_A\mathbf{\Lambda}_A R_A^{\text{T}} \qquad (22)

\mathbf{\Lambda}_B^W=R_B\mathbf{\Lambda}_B R_B^{\text{T}} \qquad (23)

Considering the independence of the two potential field distributions, the joint error covariance matrix can be obtained according to the synthesis rule of multivariate Gaussian distributions, as follows:

\mathbf{\Lambda}=\mathbf{\Lambda}_A^W+\mathbf{\Lambda}_B^W \qquad (24)

In order to facilitate the integration over the conflict area, a conflict coordinate system is established by taking the center of the potential field of the leading vehicle B as the new origin, the major principal axis of the conflict ellipses as the new abscissa, and the minor principal axis as the new ordinate. Then, the center offset of the potential field of the overtaking vehicle A is given by:

\mathbf{\mu}_A=\left[\begin{matrix}x_r\\ y_r\end{matrix}\right] \qquad (25)

where $x_r$ is the center offset of the overtaking vehicle A along the abscissa of the conflict coordinate system and $y_r$ is the center offset along the ordinate. According to the characteristics of the multidimensional normal distribution, a linear combination of normal distributions still follows a normal distribution. Hence, the joint probability density function of the conflict is given by:

f(x,y)=(2\pi)^{-D/2}\left|\mathbf{\Lambda}\right|^{-1/2}\exp\left(-\frac{1}{2}(\mathbf{X}-\mathbf{\mu})^{\text{T}}\mathbf{\Lambda}^{-1}(\mathbf{X}-\mathbf{\mu})\right) \qquad (26)

Then, the probability of collision at time t can be obtained through the integration of conflict probability density over the conflict area:

S_{cp}=\iint_{S_c}f(x,y)\,dx\,dy \qquad (27)

where $S_{cp}$ is the estimated conflict probability, $f(x,y)$ is the conflict probability density function, and $S_c$ is the conflict area.
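Putting equations (21)-(27) together, the collision probability can be approximated by a grid quadrature of the joint Gaussian over a rectangular conflict area centered on vehicle B; all the geometry values in the example call are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def rot(theta):                                     # eq. (21)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def conflict_probability(cov_a, th_a, cov_b, th_b, mu, sc_len, sc_wid, n=200):
    cov = rot(th_a) @ cov_a @ rot(th_a).T + rot(th_b) @ cov_b @ rot(th_b).T  # (22)-(24)
    xs = np.linspace(-sc_len / 2, sc_len / 2, n)    # conflict area S_c, centered
    ys = np.linspace(-sc_wid / 2, sc_wid / 2, n)    # on leading vehicle B
    X, Y = np.meshgrid(xs, ys)
    pdf = multivariate_normal(mean=mu, cov=cov).pdf(np.dstack([X, Y]))  # eq. (26)
    return pdf.sum() * (xs[1] - xs[0]) * (ys[1] - ys[0])                # eq. (27)

p = conflict_probability(cov_a=np.diag([4.0, 1.0]), th_a=np.deg2rad(5.0),
                         cov_b=np.diag([3.0, 1.0]), th_b=0.0,
                         mu=[12.0, 3.0], sc_len=18.0, sc_wid=4.2)
print(p)   # estimated conflict probability S_cp at this instant
```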

VI Experiment Evaluation

Driver-in-the-loop tests were adopted for the experimental evaluation. The testing platform is shown in Figure 8 and contains four main components: the host machine (a DELL Precision T5600 workstation), the target machine (an Ubuntu based real-time PC), the driving simulation interface, and the information collection-control interface, which contains the driving control module, hydraulic braking module, road feel module, and warning module. Responsible for building the various simulation environments and algorithms, the host machine uses a 3.9 GHz CPU, 32 GB of RAM, and a 2 TB disk to guarantee efficiency.

Figure 8: The platform of the cooperative collision simulation system.

In the test, we assumed a vehicle dimension of 1.8 m × 4.2 m (width × length), a maximum acceleration of $\pm 2.7\ m/s^2$, and a maximum speed (vmax) of 35 m/s. V2V sensors are installed to evaluate the relative position and distance between nearby vehicles. The effective information transmission distance is about 200 m with a typical update frequency of 10 Hz. The lane width is set to 3.5 m. The message set conforms to the SAE J2735 standard.

VI-A Bezier based lane detection and modeling

Figure 9: The recognition process of the road images.

In this testing part, 5932 frames of typical road pictures were tested with the proposed lane detection module, and the results were statistically analyzed. The results show that the average recognition error rate of the algorithm is less than 6%, and the average detection time per frame is 50 ms, meeting the real-time requirements.

Figure 9 graphically demonstrates the recognition process of the road images. Figure 9(a) is the raw RGB image taken from the onboard forward facing camera, captured in the evening under good light conditions. Figure 9(b) is the result of the inverse perspective transformation. Figure 9(c) shows the RANSAC based parameter solving procedure of the UDT under the hypothesis and testing scheme, where the red line is the curve fitted from the currently selected random samples of the deformation template; at this time N=2 and the current template matching credibility is 0.16, while the blue line is the best historical matching result in this search process, with a matching degree of 1.14. Figure 9(d) is the final lane fitting model, transformed back from the inverse perspective image to the original image.

VI-B Performance Evaluation of Cooperative Overtaking

VI-B1 Effects of the Conflict Area

To look into the influence of the conflict area on the collision probability estimation, different sizes of conflict areas were tested in the experiment under the same overtaking scenarios, where the expected velocity of the leading vehicle is 55 km/h and that of the overtaking vehicle is 80 km/h. Figure 10 presents the results of the collision probability estimation and the computation time for different conflict areas, including $S_c$ = 17.5 m × 4.2 m, $S_c$ = 18 m × 4.2 m, $S_c$ = 18.5 m × 4.2 m, and $S_c$ = 19 m × 4.2 m.

As shown in Figure 10, the estimated collision probability did not increase remarkably with the enlargement of the conflict area. Moreover, the risk level of the overtaking maneuver was not significantly affected by the variation of the conflict area and was kept within a certain scope.

Figure 10: The effect of the conflict area on the collision probability.

VI-B2 Effects of the Collision Types

To investigate the effect of the collision type on the estimation of the collision probability, different collision types were tested in the experiment, where the relative collision velocity was set to 20 km/h and the conflict area was set to 17 m × 4 m. In the test, three typical collision modes, Rear-Rear collision (RRC), Side-by-Side collision (SRC), and Front-to-Rear collision (FRC), were tested, as well as the normal overtaking maneuver for comparison. The result in Figure 11 shows that the proposed method adapts well to different collision types. Facing an inevitable oncoming collision, the estimated collision probability curves under the different collision modes have a similar shape and trend, which is significantly useful for collision avoidance technology and autonomous driving systems. Obviously, if we set the overtaking prevention threshold to 0.3, more than 3 s can be saved before the collision happens; if fully utilized, collision accidents would be greatly reduced.

Figure 11: Collision probability estimation under different collision modes.

VI-C Case Evaluation and Discussion

Case 1: Straight road with vehicles moving at different relative distances

Figure 12: Collision probability estimation with different relative distances.

Figure 13: Collision probability estimation with different relative speeds.

In this case, a test environment of straight rural roads with a medium concentration of fog was established; visibility was only 120 m. The ego vehicle traveled at a slow speed, with a maximum of just 35 km/h. Three straight lanes and four vehicles were set up, including vehicle A (ego vehicle), vehicle B (forward leading vehicle), vehicle D (oncoming vehicle), and vehicle E (same direction vehicle), traveling at constant speeds of 25, 18, 28, and 22 km/h respectively. Meanwhile, some slippery ice was added onto the road surface to decrease the adhesion coefficient of the pavement. The driver started the overtaking action at different relative distances to the leading vehicle: 10 m, 20 m, and 25 m. The aim of this test was to verify the capability of the proposed method to produce a correct and stable estimation of the collision probability during the overtaking process. The test result is shown in Figure 12.

In the test, the ego vehicle started the overtaking action at t ≈ 2 s (before the reminding signal), t ≈ 3 s (after the reminding signal and before the warning signal), and t ≈ 5 s (after the warning signal) respectively, and returned to the original lane after surpassing the leading vehicle. The simulation result shows that it is very safe if the driver starts the overtaking action before the reminding signal, although the occupation time of the fast lane increases. On the other hand, as shown in Figure 12, if the driver starts the overtaking action after the warning signal, the collision risk increases rapidly to 0.75, portending a possible traffic accident. If the driver complies with the instruction of the proposed method and starts the overtaking action at the right time, the collision risk stays below 0.3, guaranteeing the safety of the whole overtaking process.

Case 2: Curved road with other vehicles moving at variable, relatively high speeds on a sunny day

In this case, a curved road with a minimum curvature radius of 150 m was established. Meanwhile, the weather was changed to a sunny day without lateral wind. Under this weather and road condition, the influence of variable relative speeds on the collision risk was tested to demonstrate the performance of the proposed method. According to the traffic rules, we set the maximum speed of the vehicles to 100 km/h and the minimum speed to 60 km/h, with an acceleration/deceleration of 3.8 m/s². The aim of this simulation is to verify the performance of the proposed method under different relative speeds: 7.2 km/h (low), 14.4 km/h (medium), and 21.6 km/h (high). The testing result is shown in Figure 13.

In the test, the ego vehicle cyclically detected the motion states of nearby vehicles every 100 ms, the same as in Case 1, except that the communication distance was extended to 200 m for highway use. As shown in Figure 13, the ego vehicle started the overtaking action at t ≈ 5 s with three different relative speeds and returned to the original lane safely. The simulation result shows that, if the vehicle starts the overtaking action at a low or medium relative speed, the collision risk stays below the safety threshold of 0.3, ensuring the safety of the overtaking procedure. However, if the driver starts the overtaking action at the high relative speed of 21.6 km/h, the collision risk increases rapidly to 0.45, which is above the safety threshold. Throughout the whole overtaking procedure, the proposed method calculated the estimated collision risk in a timely and correct manner.

VII Conclusions

A novel methodology of cooperative overtaking built on a BDI based multi-vehicle collaboration framework was proposed, which uses different kinds of heterogeneous sensors to extend the awareness of the surrounding environment. The lane markings in front of the ego vehicle were modeled with Bezier curves using the onboard camera, which can adapt to different road types, while the positions and velocities of nearby vehicles were obtained through the V2V communication scheme. In addition, a Gaussian-based conflict potential field was proposed to guarantee overtaking safety, which can be used to quantitatively estimate the oncoming collision danger. To support the proposed method, many experiments were conducted with human-in-the-loop tests. The results demonstrated that the proposed method achieves better performance, especially in unpredictable natural road circumstances. In the future, we will focus on implementing the proposed method in a real vehicle and testing its performance in real road tests.


References

  • [1] S. Dixit, S. Fallah, U. Montanaro, M. Dianati, A. Stevens, F. Mccullough, and A. Mouzakitis, “Trajectory planning and tracking for autonomous overtaking: State-of-the-art and future prospects,” Annual Reviews in Control, vol. 45, pp. 76–86, 2018.
  • [2] T. Pietrasik, “Road traffic injuries,” https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries, accessed December 7, 2018.
  • [3] T. Anjuman, S. Hasanat-E-Rabbi, C. K. A. Siddiqui, and M. M. Hoque, “Road traffic accident: A leading cause of the global burden of public health injuries and fatalities,” in Proc. Int. Conf. Mech. Eng., Dhaka, Bangladesh, pp. 29–31.
  • [4] E. Zochmann, M. Hofer, M. Lerch, S. Pratschner, L. Bernado, J. Blumenstein, S. Caban, S. Sangodoyin, H. Groll, T. Zemen, A. Prokes, M. Rupp, A. F. Molisch, and C. F. Mecklenbrauker, “Position-specific statistics of 60 GHz vehicular channels during overtaking,” IEEE Access, vol. 7, pp. 14216–14232, 2019.
  • [5] D. G. Yang, K. Jiang, D. Zhao, C. L. Yu, Z. Cao, S. C. Xie, Z. Y. Xiao, X. Y. Jiao, S. J. Wang, and K. Zhang, “Intelligent and connected vehicles: Current status and future perspectives,” Science China Technological Sciences, vol. 61, no. 10, pp. 1446–1471, 2018.
  • [6] A. Groza, B. Iancu, and A. Marginean, “A multi-agent approach towards cooperative overtaking in vehicular networks,” in Proc. 4th International Conference on Web Intelligence, Mining and Semantics, 2014.
  • [7] P. Petrov and F. Nashashibi, “Modeling and nonlinear adaptive control for autonomous vehicle overtaking,” IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 4, pp. 1643–1656, 2014.
  • [8] X. Z. Zhang and X. L. Zhu, “Autonomous path tracking control of intelligent electric vehicles based on lane detection and optimal preview method,” Expert Systems with Applications, vol. 121, pp. 38–48, 2019.
  • [9] S. D. Pendleton, H. Andersen, X. X. Du, X. T. Shen, M. Meghjani, Y. H. Eng, D. Rus, and M. H. Ang, “Perception, planning, control, and coordination for autonomous vehicles,” Machines, vol. 5, no. 1, 2017.
  • [10] K. Wang and Z. B. Xiong, “Visual enhancement method for intelligent vehicle’s safety based on brightness guide filtering algorithm thinking of the high tribological and attenuation effects,” Journal of the Balkan Tribological Association, vol. 22, no. 2A, pp. 2021–2031, 2016.
  • [11] S. P. Narote, P. N. Bhujbal, A. S. Narote, and D. M. Dhane, “A review of recent advances in lane detection and departure warning system,” Pattern Recognition, vol. 73, pp. 216–234, 2018.
  • [12] Y. Li, X. Lu, and T. Tang, “Lane detection using spline model for freeway aerial videos,” in Tenth International Conference on Digital Image Processing (ICDIP 2018), vol. 10806, International Society for Optics and Photonics, p. 108060X.
  • [13] B. De Brabandere, W. Van Gansbeke, D. Neven, M. Proesmans, and L. Van Gool, “End-to-end lane detection through differentiable least-squares fitting,” arXiv preprint arXiv:1902.00293, 2019.
  • [14] Y. Huang, Y. Li, X. Hu, and W. Ci, “Lane detection based on inverse perspective transformation and Kalman filter,” KSII Transactions on Internet and Information Systems, vol. 12, no. 2, 2018.
  • [15] K. Wang, Z. Huang, and Z. H. Zhong, “Simultaneous multi-vehicle detection and tracking framework with pavement constraints based on machine learning and particle filter algorithm,” Chinese Journal of Mechanical Engineering, vol. 27, no. 6, pp. 1169–1177, 2014.
  • [16] S. Agrawal, I. K. Deo, S. Haldar, G. R. K. Kiran, V. Lodhi, and D. Chakravarty, “Off-road lane detection using superpixel clustering and RANSAC curve fitting,” in 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), IEEE, pp. 1942–1946.
  • [17] M. Arzamendia, D. Gregor, D. G. Reina, and S. L. Toral, “An evolutionary approach to constrained path planning of an autonomous surface vehicle for maximizing the covered area of Ypacarai lake,” Soft Computing, vol. 23, no. 5, pp. 1723–1734, 2019.
  • [18] S. Noh, “Decision-making framework for autonomous driving at road intersections: Safeguarding against collision, overly conservative behavior, and violation vehicles,” IEEE Transactions on Industrial Electronics, vol. 66, no. 4, pp. 3275–3286, 2019.
  • [19] R. Rani, R. Kumar, and A. P. Singh, “A comparative study of object recognition techniques,” in 2016 7th International Conference on Intelligent Systems, Modelling and Simulation (ISMS), pp. 151–156, 2016.
  • [20] P. Gomes, C. Olaverri-Monreal, and M. Ferreira, “Making vehicles transparent through V2V video streaming,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 2, pp. 930–938, 2012.
  • [21] V. Vukadinovic, K. Bakowski, P. Marsch, I. D. Garcia, H. Xu, M. Sybis, P. Sroka, K. Wesolowski, D. Lister, and I. Thibault, “3GPP C-V2X and IEEE 802.11p for vehicle-to-vehicle communications in highway platooning scenarios,” Ad Hoc Networks, vol. 74, pp. 17–29, 2018.
  • [22] J. Heinovski, F. Klingler, F. Dressler, and C. Sommer, “A simulative analysis of the performance of IEEE 802.11p and ARIB STD-T109,” Computer Communications, vol. 122, pp. 84–92, 2018.
  • [23] L. Claussmann, A. Carvalho, and G. Schildbach, “A path planner for autonomous driving on highways using a human mimicry approach with binary decision diagrams,” in 2015 European Control Conference (ECC), pp. 2976–2982, 2015.
  • [24] Y. S. Son, W. Kim, S. H. Lee, and C. Chung, “Robust multirate control scheme with predictive virtual lanes for lane-keeping system of autonomous highway driving,” IEEE Transactions on Vehicular Technology, vol. 64, no. 8, pp. 3378–3391, 2015.
  • [25] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” Springer, 1986, pp. 396–404.
  • [26] S. Mori, “US defense innovation and artificial intelligence,” Asia-Pacific Review, vol. 25, no. 2, pp. 16–44, 2018.
  • [27] C. Mo, Y. Li, and L. Zheng, “Simulation and analysis on overtaking safety assistance system based on vehicle-to-vehicle communication,” Automotive Innovation, vol. 1, no. 2, pp. 158–166, 2018.
  • [28] F. Feng, S. Bao, R. C. Hampshire, and M. Delp, “Drivers overtaking bicyclists—an examination using naturalistic driving data,” Accident Analysis and Prevention, vol. 115, pp. 98–109, 2018.
  • [29] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, 2000.
  • [30] Y. Son, E. S. Lee, and D. Kum, “Robust multi-lane detection and tracking using adaptive threshold and lane classification,” Machine Vision and Applications, vol. 30, no. 1, pp. 111–124, 2019.
  • [31] R. T. Farouki, “The Bernstein polynomial basis: A centennial retrospective,” Computer Aided Geometric Design, vol. 29, no. 6, pp. 379–419, 2012.
  • [32] C. B. Wu, L. H. Wang, and K. C. Wang, “Ultra-low complexity block-based lane detection and departure warning system,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 2, pp. 582–593, 2019.
Ke Wang was born in Huaian, Jiangsu, China, in 1984. He received the B.S. and M.S. degrees in vehicle engineering from Hunan University, Hunan, China, in 2007 and 2009, and the Ph.D. degree in mechanical engineering from Hunan University in 2013. From 2014 to 2016, he was an Assistant Professor with the Automobile Engineering Department. From 2016 to 2017, he completed his postdoctoral research at the College of Engineering, University of Michigan, Ann Arbor, USA. Since 2017, he has been an Associate Professor with the State Key Laboratory of Mechanical Transmission, Chongqing University. He is the author of one book, more than 20 articles, and more than 15 inventions. His research interests are intelligent vehicles, environment perception, and AI.
Junlan Chen was born in Zhuzhou, Hunan, China, in 1985. She received the B.S. in Economics from Northwestern Polytechnical University, Shaanxi, China, in 2007, and the M.S. and Ph.D. degrees in Management from Hunan University, Hunan, China, in 2009 and 2013 respectively. From 2014 to 2016, she was an Assistant Professor with the School of Economics & Management, Chongqing Normal University. From 2016 to 2017, she completed her postdoctoral research at the Research and Innovation Center, Ford Motor Company, Dearborn, USA. To date, she is the author of more than 15 articles and more than 10 inventions. Her research interests are artificial intelligence, environment perception, and economics in the vehicle area.
Huanhuan Bao was born in Qufu, Shandong, China, in 1987. He received the B.S. degree in communication engineering from Ludong University, Shandong, China, in 2011, and the M.S. degree in vehicle engineering from Hunan University, Hunan, China, in 2014. From 2015 to 2017, he was an engineer at the wind tunnel of the China Automotive Engineering Research Institute. Since 2018, he has been the head of the Department of Science and Technology. He is the author of 3 articles and 5 inventions. His research interests are intelligent vehicles, hydrogen fuel cell vehicles, and wind tunnel testing.
Tao Chen received the Bachelor and Ph.D. degrees in Automotive Engineering from Tsinghua University. He is currently working as Deputy Director of the Intelligent Vehicle Testing & Evaluation Center, China Automotive Engineering Research Institute, where he leads V2X and automated vehicle testing and engineering services for the automotive industry. In his career since 2012, he has participated in several advanced research projects on Advanced Driver Assistance Systems (ADAS), V2X, and automated vehicle development. He has published 20 SCI/EI indexed papers and owns 4 patents on intelligent vehicle technologies.