Computer Stereo Vision for Autonomous Driving
Abstract
As an important component of autonomous systems, autonomous car perception has advanced significantly with recent progress in parallel computing architectures. With the use of compact but full-featured embedded supercomputers, computer stereo vision has been prevalently applied in autonomous cars for depth perception. The two key aspects of computer stereo vision are speed and accuracy. They are both desirable but conflicting properties, as algorithms with better disparity accuracy usually have higher computational complexity. Therefore, the main aim of developing a computer stereo vision algorithm for resource-limited hardware is to improve the trade-off between speed and accuracy. In this chapter, we introduce both the hardware and software aspects of computer stereo vision for autonomous car systems. Then, we discuss four autonomous car perception tasks, including 1) visual feature detection, description and matching, 2) 3D information acquisition, 3) object detection/recognition and 4) semantic image segmentation. The principles of computer stereo vision and parallel computing on multi-threading CPU and GPU architectures are then detailed.
1 Introduction
Autonomous car systems enable self-driving cars to navigate complicated environments without any human intervention. An example of an autonomous car system architecture is illustrated in Fig. 1. Its hardware (HW) mainly includes: 1) car sensors, such as cameras, LIDARs and Radars; and 2) the car chassis, including the throttle, brake and steering wheel. On the other hand, the software (SW) is comprised of four main functional modules fan2019key : 1) perception, 2) localization and mapping, 3) prediction and planning and 4) control. Computer stereo vision is an important component of the perception module. It enables self-driving cars to perceive the environment in 3D.

This chapter first introduces the HW/SW aspects of an autonomous car system. Then, four autonomous car perception tasks are discussed, including: 1) visual feature detection, description and matching, 2) 3D information acquisition, 3) object detection/recognition and 4) semantic image segmentation. Finally, the principles of computer stereo vision and parallel computing on multi-threading CPU and GPU are detailed.
2 Autonomous Car System
2.1 Hardware
Car Sensors
The most commonly used car sensors include: a) passive sensors, such as cameras; b) active sensors, such as LIDARs, Radars and ultrasonic transceivers; and c) other types of sensors, such as the global positioning system (GPS) and inertial measurement units (IMUs). When choosing them, we need to consider many different factors, such as sampling rate, field of view (FoV), accuracy, range, cost and overall system complexity (see autonomous-driving.org/2019/01/25/positioning-sensors-for-autonomous-vehicles).
Cameras capture 2D images by collecting light reflected from 3D objects. Images captured from different views can be utilized to reconstruct the 3D driving scene geometry. Most autonomous car perception tasks, such as visual semantic driving scene segmentation and object detection/recognition, are developed for images. In Sec. 3, we provide readers with a comprehensive overview of these tasks. The perspective (or pinhole) camera model and the mathematical principles of multi-view geometry are discussed in Sec. 4. The acquired image quality is always subject to environmental conditions, such as weather and illumination pitas2000digital . Therefore, fusing visual information with data from other sensors is typically required for robust autonomous car perception.
LIDARs illuminate a target with pulsed laser light and measure the source-target distance by analyzing the reflected laser pulses detection2013ranging . Owing to their ability to generate highly accurate 3D driving scene geometry models, LIDARs are generally mounted on autonomous cars for depth perception, and current industrial autonomous car localization and mapping systems are generally based on them. Furthermore, Radars can measure both the range and the radial velocity of an object by transmitting an electromagnetic wave and analyzing its reflections bureau2013radar . Radars are well established in the automotive industry and have been prevalently employed to enable intelligent vehicle advanced driver assistance system (ADAS) features, such as adaptive cruise control and autonomous emergency braking. Similar to Radars, ultrasonic transceivers calculate the source-object distance by measuring the time between transmitting an ultrasonic signal and receiving its echo westerveld2014silicon . Ultrasonic transceivers are commonly used for autonomous car localization and navigation.
In addition to the aforementioned passive and active sensors, GPS and IMU systems are commonly used to enhance autonomous car localization and mapping performance zheng2019low . GPS can provide both time and geolocation information for autonomous cars. However, its signals can become very weak when reception is blocked by obstacles, for instance in dense urban areas samama2008global . Hence, GPS and IMU information fusion is widely adopted to provide continuous autonomous car position and velocity information zheng2019low .
Car Chassis
Car chassis technologies, especially Drive-by-Wire (DbW), are required for building autonomous vehicles. DbW technology refers to the electronic systems that can replace traditional mechanical controls liu2019chassis . DbW systems can perform vehicle functions, which are traditionally achieved by mechanical linkages, through electrical or electro-mechanical systems. There are three main vehicle control systems that are commonly replaced with electronic controls: 1) throttle, 2) braking and 3) steering.
A Throttle-by-Wire (TbW) system accomplishes vehicle propulsion via an electronic throttle, without any cables from the accelerator pedal to the engine throttle valve. In electric vehicles, the TbW system controls the electric motors by sensing the accelerator pedal pressure (input) and sending signals to the power inverter modules. Compared to traditional hydraulic brakes, which provide braking effort by building hydraulic pressure in the brake lines, a Brake-by-Wire (BbW) system completely eliminates the need for hydraulics by using electric motors to actuate the calipers. Furthermore, in vehicles equipped with Steer-by-Wire (SbW) technology, there is no physical connection between the steering wheel and the tires. The wheels' direction is controlled through electric motor(s), which are actuated by electronic control units monitoring the steering wheel inputs.
In comparison to traditional throttle systems, electronic throttle systems are much lighter, hence greatly reducing modern car weight. In addition, they are easier to service and tune, as a technician can simply connect a laptop to perform the tuning. Moreover, an electronic control system allows more accurate control of the throttle opening, compared to a cable control that stretches over time. Furthermore, since the steering wheel can be bypassed as an input device, safety can be improved through computer-controlled interventions, such as adaptive cruise control and electronic stability control.
2.2 Software
The autonomous car perception module analyzes the raw data collected by the car sensors (see Sec. 2.1) and outputs its understanding of the environment. This process is similar to human visual cognition. We discuss the different autonomous car perception tasks in Sec. 3.
The perception module outputs are then used by the other modules. The localization and mapping module not only estimates the autonomous car's location, but also constructs and updates the 3D environment map bhuttaloop . This topic has become very popular since the concept of simultaneous localization and mapping (SLAM) was introduced in 1986 smith1986representation .
The prediction and planning module first analyzes the motion patterns of other traffic agents and predicts their future trajectories. Such prediction outputs are then used to determine possible safe navigation routes katrakazas2015real using different path planning techniques, such as Dijkstra's algorithm cormen2001section , A-star (or simply A*) delling2009engineering , etc.
Finally, the control module sends appropriate commands to the car controllers (see Sec. 2.1), based on the predicted trajectory and the estimated car state, so that the autonomous car can follow the planned trajectory as closely as possible. Traditional controllers, such as the proportional-integral-derivative (PID) willis1999proportional , linear-quadratic regulator (LQR) goodwin2001control and model predictive control (MPC) garcia1989model , are still the most commonly used ones in the autonomous car control module.
3 Autonomous Car Perception
The autonomous car perception module has four main functionalities:
1. visual feature detection, description and matching;
2. 3D information acquisition;
3. object detection/recognition;
4. semantic image segmentation.
Visual feature detectors and descriptors have become very popular research topics in the computer vision and robotics communities. They have been applied in many application domains hassaballah2016image , such as image classification liu2012discriminative , 3D scene reconstruction moreels2007evaluation , object recognition dollar2011pedestrian and visual tracking danelljan2016discriminative . The matched visual feature correspondences between two (or more) images can be utilized to establish image relationships hassaballah2016image . The most well-known visual features are scale-invariant feature transform (SIFT) lowe2004distinctive , speeded up robust feature (SURF) bay2006surf , oriented FAST and rotated BRIEF (ORB) rublee2011orb , binary robust invariant scalable keypoints (BRISK) leutenegger2011brisk , and so forth.
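To make this concrete, the following minimal C++ sketch (assuming OpenCV is available; the image file names are placeholders) detects ORB keypoints in two images, computes their binary descriptors and establishes correspondences with brute-force Hamming matching:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Load two views of the same scene (file names are placeholders).
    cv::Mat img1 = cv::imread("left.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Detect ORB keypoints and compute binary descriptors.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kps1, kps2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kps1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kps2, desc2);

    // Match descriptors with brute-force Hamming distance and cross-checking.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Visualize the correspondences.
    cv::Mat vis;
    cv::drawMatches(img1, kps1, img2, kps2, matches, vis);
    cv::imwrite("matches.png", vis);
    return 0;
}
```

The cross-check flag keeps only mutually best matches, which is a cheap way to suppress outliers before the geometric estimation steps discussed in the following sections.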

The digital images captured by cameras are essentially 2D fan2018real . In order to recover the 3D information of a given driving scene, images from multiple views are required hartley2003multiple . These images can be captured using either a single moving camera wang2020cot or an array of synchronized cameras, as shown in Fig. 2. The former approach is typically known as structure from motion (SfM) ullman1979interpretation or optical flow wang2020cot , while the latter is typically referred to as stereo vision or binocular vision (in the case of two cameras) fan2018real . SfM methods estimate both the camera poses and the 3D points of interest from images captured from multiple views, which are linked by a collection of visual features. They also leverage the bundle adjustment (BA) technique triggs1999bundle to refine the estimated camera poses and 3D point locations by minimizing a cost function known as the total re-projection error wang2020atg . Optical flow describes the motion of pixels between consecutive frames of a video sequence wang2020atg . It is also regarded as an effective tool for dynamic object detection wang2020cot . Stereo vision acquires depth information by finding the horizontal positional differences (disparities) of the visual feature correspondence pairs between two synchronously captured images. More details on computer stereo vision are given in Sec. 4.3.

Object detection/recognition refers to the recognition and localization of particular objects in images/videos zhao2019object . It can be used in various autonomous car perception subtasks, such as pedestrian detection wang2019deep , vehicle detection wang2020atg , traffic sign detection mogelmose2012vision , cyclist detection wu2017squeezedet , etc., as shown in Fig. 3. Object detection approaches can be classified as either computer vision-based or machine/deep learning-based. The former typically consists of three steps zhao2019object : 1) informative region selection (scanning the whole image by sliding windows with particular templates to produce candidate regions); 2) visual feature extraction, as discussed above; and 3) object classification (distinguishing a target object from all the other categories using a classifier). With recent advances in machine/deep learning, a large number of convolutional neural networks (CNNs) have been proposed to recognize objects from images/videos. Such CNN-based approaches have achieved very impressive results. The most popular ones include: regions with CNN features (R-CNN) girshick2014rich , fast R-CNN girshick2015fast , faster R-CNN ren2015faster , you only look once (YOLO) redmon2016you , YOLOv3 redmon2018yolov3 , YOLOv4 bochkovskiy2020yolov4 , etc.

Semantic image segmentation labels every pixel in the image with a given object class fan2020sne , such as lane marking, vehicle, collision-free space or pedestrian, as illustrated in Fig. 4. The state-of-the-art semantic image segmentation approaches are mainly categorized into two groups wang2020applying : 1) single-modal and 2) data-fusion. The former typically segments RGB images with an encoder-decoder CNN architecture fan2020learning . In recent years, many popular single-modal semantic image segmentation algorithms, such as the Fully Convolutional Network (FCN) long2015fully , U-Net ronneberger2015u , SegNet badrinarayanan2017segnet , DeepLabv3+ chen2018encoder , DenseASPP yang2018denseaspp , DUpsampling tian2019decoders , etc., have been proposed. Data-fusion semantic image segmentation approaches generally learn features from two different types of vision data fan2020we , such as RGB and depth images in FuseNet hazirbas2016fusenet , RGB and surface normal images fan2020three in SNE-RoadSeg fan2020sne , RGB and transformed disparity images fan2018ipl ; fan2019pothole ; fan2019road in AA-RTFNet fan2020we , or RGB and thermal images in MFNet ha2017mfnet . The learned feature maps are then fused to provide a better semantic prediction.
Please note: a given autonomous car perception application can always be solved by different types of techniques. For instance, lane marking detection can be formulated as a linear/quadratic/quartic pattern recognition problem and solved using straight line detection algorithms fan2016ist or dynamic programming algorithms ozgunalp2016multiple ; fan2018lane ; jiao2019using . On the other hand, it can also be formulated as a semantic image segmentation problem and solved with CNNs.
4 Computer Stereo Vision
4.1 Preliminaries
1. Skew-symmetric matrix
In linear algebra, a skew-symmetric matrix $\mathbf{A}$ satisfies the following property: its transpose is identical to its negative, i.e., $\mathbf{A}^\top = -\mathbf{A}$. In 3D computer vision, the skew-symmetric matrix $[\mathbf{t}]_\times$ of a vector $\mathbf{t} = [t_1, t_2, t_3]^\top$ can be written as hartley2003multiple :

$[\mathbf{t}]_\times = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0 \end{bmatrix}$    (1)

A skew-symmetric matrix has two important properties:

$[\mathbf{t}]_\times \mathbf{t} = \mathbf{0}, \qquad \mathbf{t}^\top [\mathbf{t}]_\times = \mathbf{0}^\top$    (2)

where $\mathbf{0}$ is a zero vector. Furthermore, the cross product of two vectors $\mathbf{t}$ and $\mathbf{s}$ can be formulated as a matrix multiplication process hartley2003multiple :

$\mathbf{t} \times \mathbf{s} = [\mathbf{t}]_\times \mathbf{s}$    (3)

These properties are generally used to simplify the equations related to the vector cross product.
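A small numerical check of properties (2) and (3), sketched here with the Eigen library (the use of Eigen is an assumption; any linear algebra library would do):

```cpp
#include <Eigen/Dense>
#include <iostream>

// Build the skew-symmetric matrix [t]x of a 3D vector t, as in (1).
Eigen::Matrix3d skew(const Eigen::Vector3d& t) {
    Eigen::Matrix3d T;
    T <<     0.0, -t.z(),  t.y(),
           t.z(),    0.0, -t.x(),
          -t.y(),  t.x(),    0.0;
    return T;
}

int main() {
    Eigen::Vector3d t(1.0, 2.0, 3.0), s(-2.0, 0.5, 4.0);
    // Cross product expressed as a matrix multiplication, cf. (3).
    Eigen::Vector3d viaMatrix = skew(t) * s;
    Eigen::Vector3d direct    = t.cross(s);
    std::cout << (viaMatrix - direct).norm() << std::endl;  // ~0
    // Property (2): [t]x t = 0.
    std::cout << (skew(t) * t).norm() << std::endl;          // ~0
    return 0;
}
```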
2. Lie group SO(3) and SE(3)
A 3D point $\mathbf{p}$ can be transformed into another 3D point $\mathbf{p}'$ using a rotation matrix $\mathbf{R}$ and a translation vector $\mathbf{t}$:

$\mathbf{p}' = \mathbf{R}\mathbf{p} + \mathbf{t}$    (4)

$\mathbf{R}$ satisfies matrix orthogonality:

$\mathbf{R}\mathbf{R}^\top = \mathbf{R}^\top\mathbf{R} = \mathbf{I}, \qquad \det(\mathbf{R}) = 1$    (5)

where $\mathbf{I}$ is an identity matrix and $\det(\mathbf{R})$ represents the determinant of $\mathbf{R}$. The group containing all rotation matrices is referred to as a special orthogonal group and is denoted as SO(3). $\tilde{\mathbf{p}} = [\mathbf{p}^\top, 1]^\top$ and $\tilde{\mathbf{p}}' = [\mathbf{p}'^\top, 1]^\top$, the homogeneous coordinates of $\mathbf{p}$ and $\mathbf{p}'$, can be used to describe rotation and translation jointly, as follows:

$\tilde{\mathbf{p}}' = \mathbf{P}\tilde{\mathbf{p}}$    (6)

where

$\mathbf{P} = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^\top & 1 \end{bmatrix}$    (7)

is a homogeneous transformation matrix (see seas.upenn.edu/~meam620/slides/kinematicsI.pdf). The group containing all homogeneous transformation matrices is referred to as a special Euclidean group and is denoted as SE(3).
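The homogeneous formulation (6)-(7) can be sketched with Eigen as follows; the rotation angle and translation are arbitrary toy values:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    // A rotation of 30 degrees about the z-axis and an arbitrary translation.
    Eigen::Matrix3d R = Eigen::AngleAxisd(M_PI / 6.0, Eigen::Vector3d::UnitZ()).toRotationMatrix();
    Eigen::Vector3d t(0.5, -1.0, 2.0);

    // Assemble the 4x4 homogeneous transformation P in SE(3), cf. (7).
    Eigen::Matrix4d P = Eigen::Matrix4d::Identity();
    P.block<3,3>(0,0) = R;
    P.block<3,1>(0,3) = t;

    // Transform a 3D point using its homogeneous coordinates, cf. (6).
    Eigen::Vector4d p_h(1.0, 2.0, 3.0, 1.0);
    Eigen::Vector4d p_h_prime = P * p_h;

    // Equivalent to R*p + t, cf. (4).
    std::cout << (p_h_prime.head<3>() - (R * p_h.head<3>() + t)).norm() << std::endl;  // ~0
    return 0;
}
```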
4.2 Multi-View Geometry
Perspective Camera Model
The perspective (or pinhole) camera model, as illustrated in Fig. 5, is the most common geometric camera model describing the relationship between a 3D point $\mathbf{p}^C = [x, y, z]^\top$ in the camera coordinate system (CCS) and its projection $\bar{\mathbf{p}} = [\bar{x}, \bar{y}]^\top$ on the image plane $\Pi$. $\mathbf{o}^C$ is the camera center. The distance between $\mathbf{o}^C$ and $\Pi$ is the camera focal length $f$. $\bar{x}$ and $\bar{y}$ are the normalized coordinates of the projection on $\Pi$. The optical axis is the ray originating from $\mathbf{o}^C$ and passing perpendicularly through $\Pi$. The relationship between $\mathbf{p}^C$ and $\bar{\mathbf{p}}$ is as follows trucco1998introductory :

$\bar{x} = f\dfrac{x}{z}, \qquad \bar{y} = f\dfrac{y}{z}$    (8)

Intrinsic Matrix
Since lens distortion does not exist in the perspective camera model, $\bar{\mathbf{p}}$ on the image plane can be transformed into a pixel $\mathbf{p} = [u, v]^\top$ in the image using fan2018real :

$u = u_o + s_x\bar{x}, \qquad v = v_o + s_y\bar{y}$    (9)

where $\mathbf{p}_o = [u_o, v_o]^\top$ is the principal point; $s_x$ and $s_y$ are the effective sizes measured (in pixels per millimeter) in the horizontal and vertical directions, respectively trucco1998introductory . To simplify the expression of the camera intrinsic matrix $\mathbf{K}$, two notations $f_x = fs_x$ and $f_y = fs_y$ are introduced. $u_o$, $v_o$, $f$, $s_x$ and $s_y$ hartley2003multiple are the five camera intrinsic parameters. Combining (8) and (9), a 3D point $\mathbf{p}^C$ in the CCS can be transformed into a pixel $\mathbf{p}$ in the image using fan2019cvprw :

$\tilde{\mathbf{p}} = \dfrac{1}{z}\mathbf{K}\mathbf{p}^C, \qquad \mathbf{K} = \begin{bmatrix} f_x & 0 & u_o \\ 0 & f_y & v_o \\ 0 & 0 & 1 \end{bmatrix}$    (10)

where $\tilde{\mathbf{p}} = [u, v, 1]^\top$ denotes the homogeneous coordinates of $\mathbf{p}$. Plugging (10) into (8) results in:

$\mathbf{p}^C = z\mathbf{K}^{-1}\tilde{\mathbf{p}}$    (11)

Therefore, an arbitrary 3D point lying on the ray which goes from $\mathbf{o}^C$ through $\mathbf{p}^C$ is always projected at the same pixel $\mathbf{p}$ on the image plane.
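A minimal sketch of the projection in (10) and the ray ambiguity in (11), using Eigen and illustrative intrinsic parameters:

```cpp
#include <Eigen/Dense>
#include <iostream>

// Project a 3D point in the camera coordinate system to pixel coordinates, cf. (10).
Eigen::Vector2d project(const Eigen::Vector3d& pC, const Eigen::Matrix3d& K) {
    Eigen::Vector3d p_tilde = K * pC / pC.z();  // homogeneous pixel coordinates [u, v, 1]
    return p_tilde.head<2>();
}

int main() {
    Eigen::Matrix3d K;
    K << 700,   0, 320,   // fx,  0, uo  (illustrative values)
           0, 700, 240,   //  0, fy, vo
           0,   0,   1;
    Eigen::Vector3d pC(0.5, -0.2, 10.0);
    std::cout << project(pC, K).transpose() << std::endl;
    // Any point on the same ray (here, 2*pC) projects to the same pixel, cf. (11).
    std::cout << project(2.0 * pC, K).transpose() << std::endl;
    return 0;
}
```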
Lens Distortion
In order to get better imaging results, a lens is usually installed in front of the camera trucco1998introductory . However, this introduces image distortions. The optical aberration caused by the installed lens typically deforms the physically straight lines provided by projective geometry to curves in the images zhang2000flexible , as shown in Fig. 6(a). We can observe in Fig. 6(b) that the bent checkerboard grids become straight when the lens distortion is corrected.

Lens distortion can be categorized into two main types: 1) radial distortion and 2) tangential distortion fan2018real . Radial distortion arises from the geometric shape of the lens, which bends incoming light rays more strongly towards the lens periphery, so that straight lines in the scene are no longer imaged as straight lines. Tangential distortion occurs because the lens is not perfectly parallel to the image plane trucco1998introductory . In practice, the image geometry is affected by radial distortion to a much greater extent than by tangential distortion, so the latter is sometimes neglected when correcting distorted images.
Radial distortion
Radial distortion mainly includes 1) barrel distortion, 2) pincushion distortion and 3) mustache distortion, as illustrated in Fig. 7.

It can be observed that a) radial distortions are symmetric about the image center and b) straight lines are no longer preserved. In barrel distortion, the image magnification decreases with the distance from the optical axis (lines curve outwards). In contrast, pincushion distortion pinches the image (lines curve inwards). Mustache distortion is a mixture of the above two distortion types: it starts out as barrel distortion close to the optical axis and gradually turns into pincushion distortion towards the image periphery. Barrel distortion is commonly seen in fish-eye lenses, which are used to produce wide-angle/panoramic images, while pincushion distortion is often associated with telephoto/zoom lenses. Radial distortion can be corrected using (see docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html):

$x_{\mathrm{corr}} = x_{\mathrm{dist}}(1 + k_1r^2 + k_2r^4 + k_3r^6), \qquad y_{\mathrm{corr}} = y_{\mathrm{dist}}(1 + k_1r^2 + k_2r^4 + k_3r^6)$    (12)

where the corrected point will be $\mathbf{p}_{\mathrm{corr}} = [x_{\mathrm{corr}}, y_{\mathrm{corr}}]^\top$; $r^2 = x_{\mathrm{dist}}^2 + y_{\mathrm{dist}}^2$; and $\mathbf{p}_{\mathrm{dist}} = [x_{\mathrm{dist}}, y_{\mathrm{dist}}]^\top$ (see docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html) can be obtained from the distorted image. $k_1$, $k_2$ and $k_3$ are three intrinsic parameters used for radial distortion correction. They can be estimated using a collection of images containing a planar checkerboard pattern.
Tangential distortion
Similar to radial distortion, tangential distortion can also be corrected using:

$x_{\mathrm{corr}} = x_{\mathrm{dist}} + \big(2p_1x_{\mathrm{dist}}y_{\mathrm{dist}} + p_2(r^2 + 2x_{\mathrm{dist}}^2)\big), \qquad y_{\mathrm{corr}} = y_{\mathrm{dist}} + \big(p_1(r^2 + 2y_{\mathrm{dist}}^2) + 2p_2x_{\mathrm{dist}}y_{\mathrm{dist}}\big)$    (13)

where $p_1$ and $p_2$ are two intrinsic parameters, which can also be estimated using a collection of images containing a planar checkerboard pattern.
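In practice, both distortion models are usually corrected in a single call. The sketch below assumes OpenCV, whose distortion vector is ordered [k1, k2, p1, p2, k3]; the intrinsic and distortion values shown are purely illustrative and would normally come from cv::calibrateCamera:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Distorted input image (file name is a placeholder).
    cv::Mat distorted = cv::imread("distorted.png");

    // Intrinsic matrix and distortion coefficients; values below are illustrative.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320,
                                             0, 700, 240,
                                             0,   0,   1);
    // OpenCV orders the coefficients as [k1, k2, p1, p2, k3]:
    // radial terms k1-k3, cf. (12), and tangential terms p1, p2, cf. (13).
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.30, 0.10, 0.001, -0.002, 0.0);

    cv::Mat undistorted;
    cv::undistort(distorted, undistorted, K, dist);
    cv::imwrite("undistorted.png", undistorted);
    return 0;
}
```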
Epipolar Geometry
The generic geometry of stereo vision is known as epipolar geometry. An example of the epipolar geometry is shown in Fig. 8.

$\Pi_L$ and $\Pi_R$ represent the left and right image planes, respectively. $\mathbf{o}^{C_L}$ and $\mathbf{o}^{C_R}$ denote the origins of the left camera coordinate system (LCCS) and the right camera coordinate system (RCCS), respectively. The 3D point $\mathbf{p}^W$ in the world coordinate system (WCS) is projected at $\mathbf{p}_L$ on $\Pi_L$ and at $\mathbf{p}_R$ on $\Pi_R$, respectively. $f_L$ and $f_R$ are the focal lengths of the left and right cameras, respectively. The representations of $\mathbf{p}^W$ in the LCCS and RCCS are $\mathbf{p}^{C_L} = [x_L, y_L, z_L]^\top$ and $\mathbf{p}^{C_R} = [x_R, y_R, z_R]^\top$, respectively. According to (4), $\mathbf{p}^{C_L}$ can be transformed into $\mathbf{p}^{C_R}$ using:

$\mathbf{p}^{C_R} = \mathbf{R}\mathbf{p}^{C_L} + \mathbf{t}$    (14)

where $\mathbf{R}$ is a rotation matrix and $\mathbf{t}$ is a translation vector. $\mathbf{e}_L$ and $\mathbf{e}_R$ denote the left and right epipoles, respectively. The epipolar plane is uniquely defined by $\mathbf{o}^{C_L}$, $\mathbf{o}^{C_R}$ and $\mathbf{p}^W$. It intersects $\Pi_L$ and $\Pi_R$, giving rise to two epipolar lines, as shown in Fig. 8. Using (11), $\mathbf{p}^{C_L}$ and $\mathbf{p}^{C_R}$ can be normalized using:

$\bar{\mathbf{p}}_L = \dfrac{\mathbf{p}^{C_L}}{z_L} = \mathbf{K}_L^{-1}\tilde{\mathbf{p}}_L, \qquad \bar{\mathbf{p}}_R = \dfrac{\mathbf{p}^{C_R}}{z_R} = \mathbf{K}_R^{-1}\tilde{\mathbf{p}}_R$    (15)

where $\mathbf{K}_L$ and $\mathbf{K}_R$ denote the intrinsic matrices of the left and right cameras, respectively. $\tilde{\mathbf{p}}_L$ and $\tilde{\mathbf{p}}_R$ are the homogeneous coordinates of the image pixels $\mathbf{p}_L$ and $\mathbf{p}_R$, respectively.
Essential Matrix
The essential matrix $\mathbf{E}$ was first introduced by Longuet-Higgins in 1981 longuet1981computer . A simple way of introducing the defining equation of $\mathbf{E}$ is to multiply both sides of (14) by $\mathbf{p}^{C_R\top}[\mathbf{t}]_\times$:

$\mathbf{p}^{C_R\top}[\mathbf{t}]_\times\mathbf{p}^{C_R} = \mathbf{p}^{C_R\top}[\mathbf{t}]_\times\big(\mathbf{R}\mathbf{p}^{C_L} + \mathbf{t}\big)$    (16)

According to (3), (16) can be rewritten as follows:

$\mathbf{p}^{C_R\top}\big(\mathbf{t}\times\mathbf{p}^{C_R}\big) = \mathbf{p}^{C_R\top}\big(\mathbf{t}\times\mathbf{R}\mathbf{p}^{C_L}\big) + \mathbf{p}^{C_R\top}\big(\mathbf{t}\times\mathbf{t}\big)$    (17)

Since $\mathbf{t}\times\mathbf{t} = \mathbf{0}$ and $\mathbf{t}\times\mathbf{p}^{C_R}$ is perpendicular to $\mathbf{p}^{C_R}$, (17) reduces to:

$\mathbf{p}^{C_R\top}[\mathbf{t}]_\times\mathbf{R}\,\mathbf{p}^{C_L} = 0$    (18)

The essential matrix $\mathbf{E}$ is defined by:

$\mathbf{E} = [\mathbf{t}]_\times\mathbf{R}$    (19)

Plugging (15) into (18) results in:

$\bar{\mathbf{p}}_R^\top\mathbf{E}\,\bar{\mathbf{p}}_L = 0$    (20)

which depicts the relationship between each pair of normalized points $\bar{\mathbf{p}}_L$ and $\bar{\mathbf{p}}_R$ lying on the same epipolar plane. It is important to note here that $\mathbf{E}$ has five degrees of freedom: both $\mathbf{R}$ and $\mathbf{t}$ have three degrees of freedom each, but the overall scale ambiguity causes the degrees of freedom to be reduced by one hartley2003multiple . Hence, in theory, $\mathbf{E}$ can be estimated with at least five pairs of $\bar{\mathbf{p}}_L$ and $\bar{\mathbf{p}}_R$. However, due to the non-linearity of $\mathbf{E}$, its estimation using five pairs of correspondences is always intractable. Therefore, $\mathbf{E}$ is commonly estimated with at least eight pairs of $\bar{\mathbf{p}}_L$ and $\bar{\mathbf{p}}_R$ trucco1998introductory , as discussed in Sec. 4.2.
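Assuming a set of matched pixel pairs (e.g., from the ORB matching sketch in Sec. 3) and a known intrinsic matrix, the essential matrix and the relative camera pose can be estimated with OpenCV as sketched below; RANSAC is used because real correspondences contain outliers:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Estimate the essential matrix from matched pixels of a calibrated camera pair
// and recover the relative pose (the translation is only known up to scale).
void estimatePose(const std::vector<cv::Point2f>& ptsL,
                  const std::vector<cv::Point2f>& ptsR,
                  const cv::Mat& K) {
    cv::Mat inlierMask;
    // RANSAC rejects the mismatches that inevitably survive feature matching.
    cv::Mat E = cv::findEssentialMat(ptsL, ptsR, K, cv::RANSAC, 0.999, 1.0, inlierMask);

    cv::Mat R, t;
    cv::recoverPose(E, ptsL, ptsR, K, R, t, inlierMask);
    std::cout << "R =\n" << R << "\nt (up to scale) =\n" << t << std::endl;
}
```

Note that the recovered translation is only defined up to scale, which reflects the scale ambiguity mentioned above.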
Fundamental Matrix
As introduced in Sec. 4.2, the essential matrix $\mathbf{E}$ creates a link between a given pair of corresponding 3D points in the LCCS and RCCS. When the intrinsic matrices of the two cameras in a stereo rig are unknown, the relationship between each pair of corresponding 2D image pixels $\mathbf{p}_L$ and $\mathbf{p}_R$ can be established using the fundamental matrix $\mathbf{F}$. It can be considered as a generalization of $\mathbf{E}$, where the assumption of calibrated cameras is removed hartley2003multiple . Applying (15) to (20) yields:

$\tilde{\mathbf{p}}_R^\top\mathbf{F}\,\tilde{\mathbf{p}}_L = 0$    (21)

where the fundamental matrix $\mathbf{F}$ is defined as:

$\mathbf{F} = \mathbf{K}_R^{-\top}\mathbf{E}\,\mathbf{K}_L^{-1}$    (22)

$\mathbf{F}$ has seven degrees of freedom: a 3×3 homogeneous matrix has eight independent ratios, as there are nine entries but the common scaling is not significant; however, $\mathbf{F}$ also satisfies the constraint $\det(\mathbf{F}) = 0$, which removes one further degree of freedom hartley2003multiple . The most commonly used algorithm to estimate $\mathbf{E}$ and $\mathbf{F}$ is the “eight-point algorithm” (EPA), which was introduced by Hartley in 1997 hartley1997defense . This algorithm is based on the scale invariance of $\mathbf{E}$ and $\mathbf{F}$: $k\mathbf{E}$ and $k\mathbf{F}$ ($k \neq 0$) satisfy the same epipolar constraints as $\mathbf{E}$ and $\mathbf{F}$. By setting one element of the matrix to 1, only eight unknown elements still need to be estimated, which can be done using at least eight correspondence pairs. If the intrinsic matrices $\mathbf{K}_L$ and $\mathbf{K}_R$ of the two cameras are known, the EPA only needs to be carried out once to estimate either $\mathbf{E}$ or $\mathbf{F}$, because the other one can easily be worked out using (22).
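A corresponding OpenCV sketch of the EPA for an uncalibrated image pair (at least eight correspondences are required):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate F with the eight-point algorithm; the points are raw pixel coordinates,
// so no intrinsic parameters are needed.
cv::Mat estimateF(const std::vector<cv::Point2f>& ptsL,
                  const std::vector<cv::Point2f>& ptsR) {
    return cv::findFundamentalMat(ptsL, ptsR, cv::FM_8POINT);
}
```

For noisy real-world correspondences, a robust variant such as cv::FM_RANSAC is usually preferred over the plain eight-point algorithm.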
Homography Matrix
An arbitrary 3D point $\mathbf{p}^{C_L}$ lying on a planar surface satisfies:

$\mathbf{n}^\top\mathbf{p}^{C_L} = \beta$    (23)

where $\mathbf{n}$ is the normal vector of the planar surface and $\beta$ is a constant. Its corresponding pixels $\mathbf{p}_L$ and $\mathbf{p}_R$ in the left and right images, respectively, can be linked by a homography matrix $\mathbf{H}$. The expression of the planar surface can be rearranged as follows:

$\dfrac{1}{\beta}\mathbf{n}^\top\mathbf{p}^{C_L} = 1$    (24)

Plugging (24) and (15) into (14) results in:

$z_R\mathbf{K}_R^{-1}\tilde{\mathbf{p}}_R = \Big(\mathbf{R} + \dfrac{1}{\beta}\mathbf{t}\mathbf{n}^\top\Big)z_L\mathbf{K}_L^{-1}\tilde{\mathbf{p}}_L$    (25)

Therefore, $\tilde{\mathbf{p}}_L$ and $\tilde{\mathbf{p}}_R$ can be linked, up to scale, using:

$\tilde{\mathbf{p}}_R \simeq \mathbf{H}\tilde{\mathbf{p}}_L, \qquad \mathbf{H} = \mathbf{K}_R\Big(\mathbf{R} + \dfrac{1}{\beta}\mathbf{t}\mathbf{n}^\top\Big)\mathbf{K}_L^{-1}$    (26)

The homography matrix $\mathbf{H}$ is generally used to distinguish obstacles from a planar surface fan2018road . For a well-calibrated stereo vision system, $\mathbf{K}_L$, $\mathbf{K}_R$, $\mathbf{R}$ as well as $\mathbf{t}$ are already known. Thus, $\mathbf{H}$ only relates to the plane parameters $\mathbf{n}$ and $\beta$, and it can be estimated with at least four pairs of correspondences $\mathbf{p}_L$ and $\mathbf{p}_R$ fan2018road .
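A hedged OpenCV sketch of this estimation step, assuming at least four correspondences that lie on the planar surface (e.g., the road):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the homography induced by a planar surface from at least four coplanar
// correspondences; RANSAC (reprojection threshold in pixels) rejects off-plane outliers.
cv::Mat estimatePlaneHomography(const std::vector<cv::Point2f>& ptsL,
                                const std::vector<cv::Point2f>& ptsR) {
    return cv::findHomography(ptsL, ptsR, cv::RANSAC, 3.0);
}
```

Pixels whose transfer error under the estimated H is large can then be flagged as potential obstacles rising out of the plane, in the spirit of fan2018road .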
4.3 Stereopsis
Stereo Rectification
3D scene geometry reconstruction with a pair of synchronized cameras is based on determining pairs of corresponding pixels between the left and right images. For an uncalibrated stereo rig, finding the correspondence pairs is a 2D search process (optical flow estimation), which is extremely computationally intensive. If the stereo rig is calibrated, only a 1D search along the epipolar lines needs to be performed. An image transformation process, referred to as stereo rectification, is therefore always performed beforehand to reduce the dimension of the correspondence pair search. Stereo rectification consists of four main steps trucco1998introductory :

1. Rotate the left camera by a rectifying rotation $\mathbf{R}_{\mathrm{rect}}$ so that the left image plane becomes parallel to the baseline vector $\mathbf{t}$;
2. Apply the same rotation to the right camera to recover the original epipolar geometry;
3. Rotate the right camera by $\mathbf{R}^{-1}$, the inverse of the rotation between the two cameras;
4. Adjust the left and right image scales by allocating an identical intrinsic matrix to both cameras.
After stereo rectification, the left and right images appear as if they were taken by a pair of parallel cameras with the same intrinsic parameters, as shown in Fig. 9, where $\Pi_L$ and $\Pi_R$ are the original image planes, and $\Pi_L'$ and $\Pi_R'$ are the rectified image planes. Also, each pair of conjugate epipolar lines becomes collinear and parallel to the horizontal image axis trucco1998introductory . Hence, determining the correspondence pairs is simplified to a 1D search problem.
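The four steps above are implemented in most vision libraries; a minimal OpenCV sketch is given below. The calibration file name and its keys (K1, D1, K2, D2, R, T) are hypothetical placeholders for the results of cv::stereoCalibrate:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Intrinsics (K), distortion (D) and the extrinsic rotation R / translation T
    // between the two cameras, loaded from a hypothetical calibration file.
    cv::Mat K1, D1, K2, D2, R, T;
    cv::FileStorage fs("stereo_calib.yml", cv::FileStorage::READ);
    fs["K1"] >> K1; fs["D1"] >> D1; fs["K2"] >> K2; fs["D2"] >> D2;
    fs["R"] >> R;   fs["T"] >> T;

    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    cv::Size size = left.size();

    // Compute the rectifying rotations (R1, R2), new projection matrices (P1, P2)
    // and the disparity-to-depth mapping Q.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, size, R, T, R1, R2, P1, P2, Q);

    // Build and apply the per-pixel remapping for both views.
    cv::Mat map1x, map1y, map2x, map2y, rectL, rectR;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, size, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, size, CV_32FC1, map2x, map2y);
    cv::remap(left,  rectL, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(right, rectR, map2x, map2y, cv::INTER_LINEAR);
    cv::imwrite("rect_left.png", rectL);
    cv::imwrite("rect_right.png", rectR);
    return 0;
}
```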
Stereo Vision System
A well-rectified stereo vision system is illustrated in Fig. 10,

which can be regarded as a special case of the epipolar geometry introduced in Sec. 4.2, where the left and right cameras are perfectly parallel to each other. The $x$-axes of the LCCS and RCCS are collinear. $\mathbf{o}^{C_L}$ and $\mathbf{o}^{C_R}$ are the left and right camera optical centers, respectively. The baseline of the stereo rig, $T_c$, is defined as the distance between $\mathbf{o}^{C_L}$ and $\mathbf{o}^{C_R}$. The intrinsic matrices of the left and right cameras are given by:

$\mathbf{K}_L = \mathbf{K}_R = \begin{bmatrix} f & 0 & u_o \\ 0 & f & v_o \\ 0 & 0 & 1 \end{bmatrix}$    (27)

respectively. Let $\mathbf{p}^W = [x^W, y^W, z^W]^\top$ be a 3D point of interest in the WCS. Its representations in the LCCS and RCCS are $\mathbf{p}^{C_L}$ and $\mathbf{p}^{C_R}$, respectively. Since the left and right cameras are considered to be exactly the same in a well-rectified stereo vision system, $s_x$ and $s_y$ in (9) are simply set to 1. $\mathbf{p}^W$ is projected on $\Pi_L$ and $\Pi_R$ at $\mathbf{p}_L = [u_L, v_L]^\top$ and $\mathbf{p}_R = [u_R, v_R]^\top$, respectively. $\mathbf{o}^W$, the origin of the WCS, is at the center of the line segment joining $\mathbf{o}^{C_L}$ and $\mathbf{o}^{C_R}$. The $z^W$-axis is parallel to the camera optical axes and perpendicular to $\Pi_L$ and $\Pi_R$. Therefore, an arbitrary point $\mathbf{p}^W$ in the WCS can be transformed into $\mathbf{p}^{C_L}$ and $\mathbf{p}^{C_R}$ using:

$\mathbf{p}^{C_L} = \mathbf{p}^W + \mathbf{t}_L, \qquad \mathbf{p}^{C_R} = \mathbf{p}^W + \mathbf{t}_R$    (28)

where $\mathbf{t}_L = [T_c/2, 0, 0]^\top$ and $\mathbf{t}_R = [-T_c/2, 0, 0]^\top$. Applying (11) and (15) to (28) results in the following expressions:

$\mathbf{p}^{C_L} = z^W\mathbf{K}_L^{-1}\tilde{\mathbf{p}}_L = \Big[x^W + \dfrac{T_c}{2},\ y^W,\ z^W\Big]^\top, \qquad \mathbf{p}^{C_R} = z^W\mathbf{K}_R^{-1}\tilde{\mathbf{p}}_R = \Big[x^W - \dfrac{T_c}{2},\ y^W,\ z^W\Big]^\top$    (29)

Applying (29) to (9) yields the following expressions:

$u_L = u_o + f\dfrac{x^W + T_c/2}{z^W}, \qquad u_R = u_o + f\dfrac{x^W - T_c/2}{z^W}, \qquad v_L = v_R = v_o + f\dfrac{y^W}{z^W}$    (30)

The relationship between the so-called disparity $d$ and the depth $z^W$ is as follows fan2018real :

$d = u_L - u_R = \dfrac{fT_c}{z^W}$    (31)

It can be observed that $z^W$ is inversely proportional to $d$. Therefore, for a distant 3D point $\mathbf{p}^W$, $\mathbf{p}_L$ and $\mathbf{p}_R$ are close to each other. On the other hand, when $\mathbf{p}^W$ lies near the stereo camera rig, the position difference between $\mathbf{p}_L$ and $\mathbf{p}_R$ is large. Therefore, disparity estimation can be regarded as a task of 1) finding the correspondence pairs $\mathbf{p}_L$ and $\mathbf{p}_R$, which lie on the same image row, in the left and right images and 2) producing two disparity images, one for the left view and one for the right view, as shown in Fig. 11.
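The inverse relationship in (31) amounts to a one-line computation; the focal length and baseline values below are purely illustrative:

```cpp
#include <initializer_list>
#include <iostream>

// Depth from disparity for a rectified stereo rig, cf. (31): z = f * Tc / d.
double depthFromDisparity(double d, double f, double Tc) {
    return (d > 0.0) ? f * Tc / d : -1.0;  // -1 marks an invalid (zero) disparity
}

int main() {
    const double f  = 700.0;  // focal length in pixels (illustrative)
    const double Tc = 0.5;    // baseline in meters (illustrative)
    for (double d : {5.0, 20.0, 80.0})
        std::cout << "d = " << d << " px -> z = " << depthFromDisparity(d, f, Tc) << " m\n";
    return 0;
}
```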

Disparity Estimation
The two key aspects of computer stereo vision are speed and accuracy tippetts2016review . Over the past two decades, a lot of research has been carried out to improve disparity estimation accuracy while reducing computational complexity. However, the stereo vision algorithms designed to achieve better disparity accuracy typically have higher computational complexity fan2018real . Hence, speed and accuracy are two desirable but conflicting properties. It is very challenging to achieve both of them simultaneously tippetts2016review .
In general, the main motivation of designing a stereo vision algorithm is to improve the trade-off between speed and accuracy. In most circumstances, a desirable trade-off entirely depends on the target application tippetts2016review . For instance, a real-time performance is required for stereo vision systems employed in autonomous driving, because other systems, such as data-fusion semantic driving scene segmentation, usually take up only a small portion of the processing time, and can be easily implemented in real-time if the 3D information is available fan2018real . Although stereo vision execution time can definitely be reduced with future HW advances, algorithm and SW improvements are also very important tippetts2016review .
State-of-the-art stereo vision algorithms can be classified as either computer vision-based or machine/deep learning-based. The former typically formulates disparity estimation as a local block matching problem or a global energy minimization problem fan2018road , while the latter basically considers disparity estimation as a regression problem luo2016efficient .
Computer vision-based stereo vision algorithms
Computer vision-based disparity estimation algorithms are categorized as: 1) local, 2) global and 3) semi-global scharstein2003high . Local algorithms simply select an image block from the left image and match it with a series of image blocks selected from the right image. Optimal disparity estimation corresponds to either the lowest difference costs or the highest correlation costs. Global algorithms translate disparity estimation into a probability maximization problem or an energy minimization problem mozerov2015accurate , which can be solved using Markov random field (MRF)-based optimization algorithms tappen2003comparison . Semi-global matching (SGM) hirschmuller2007stereo approximates MRF inferences by performing cost aggregation along all image directions, which greatly improves both disparity estimation accuracy and efficiency. Generally, a computer vision-based disparity estimation algorithm consists of four main steps: 1) cost computation, 2) cost aggregation, 3) disparity optimization and 4) disparity refinement vzbontar2016stereo .
1. Cost computation
Disparity $d$ is a random variable with a set of possible discrete states, each of them being associated with a matching cost $c$. The two most commonly used pixel-wise matching costs are the absolute difference (AD) cost $c_{\mathrm{AD}}$ and the squared difference (SD) cost $c_{\mathrm{SD}}$ vzbontar2016stereo . Since the left and right images are typically in gray-scale format, $c_{\mathrm{AD}}$ and $c_{\mathrm{SD}}$ can be computed using hirschmuller2008evaluation :

$c_{\mathrm{AD}}(\mathbf{p}, d) = \big|i_L(\mathbf{p}) - i_R(\mathbf{p} - \mathbf{d})\big|, \qquad c_{\mathrm{SD}}(\mathbf{p}, d) = \big(i_L(\mathbf{p}) - i_R(\mathbf{p} - \mathbf{d})\big)^2$    (32)

where $\mathbf{d} = [d, 0]^\top$, $i_L(\mathbf{p})$ denotes the intensity of pixel $\mathbf{p} = [u, v]^\top$ in the left image and $i_R(\mathbf{p} - \mathbf{d})$ represents the intensity of pixel $\mathbf{p} - \mathbf{d} = [u - d, v]^\top$ in the right image.
2. Cost aggregation
In order to minimize incorrect matches, pixel-wise difference costs are often aggregated over all pixels within a support region scharstein2003high :

$c_{\mathrm{agg}}(\mathbf{p}, d) = w(\mathbf{p}, d) \ast c(\mathbf{p}, d)$    (33)

where the center of the support region is at $\mathbf{p}$ and the corresponding disparity is $d$. $c_{\mathrm{agg}}$ denotes the aggregated cost, $w$ is a kernel that represents the support region, and $c$ represents a neighborhood system containing the pixel-wise matching costs of all pixels within the support region. $c_{\mathrm{agg}}$ can be obtained by performing a convolution between $w$ and $c$. A large support region can help reduce disparity optimization uncertainties, but it also increases the algorithm's execution time significantly.
Since the support regions are always rectangular blocks, these algorithms are also known as stereo block matching fan2018road . When the convolution process is a uniform box filtering (all the elements in $w$ are 1), the aggregations of $c_{\mathrm{AD}}$ and $c_{\mathrm{SD}}$ are referred to as the sum of absolute differences (SAD) and the sum of squared differences (SSD), respectively, which can be written as fan2018real :

$c_{\mathrm{SAD}}(\mathbf{p}, d) = \sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\big|i_L(\mathbf{q}) - i_R(\mathbf{q} - \mathbf{d})\big|, \qquad c_{\mathrm{SSD}}(\mathbf{p}, d) = \sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\big(i_L(\mathbf{q}) - i_R(\mathbf{q} - \mathbf{d})\big)^2$    (34)

where $\mathcal{N}_{\mathbf{p}}$ is the support region (or neighborhood system) of $\mathbf{p}$. Although the SAD and the SSD are computationally efficient, they are very sensitive to image intensity noise. In this regard, some other cost or similarity functions, such as the normalized cross-correlation (NCC), are more prevalently used for cost computation and aggregation. The cost function of the NCC is as follows fan2018road :

$c_{\mathrm{NCC}}(\mathbf{p}, d) = \dfrac{1}{n\sigma_L\sigma_R}\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\big(i_L(\mathbf{q}) - \mu_L\big)\big(i_R(\mathbf{q} - \mathbf{d}) - \mu_R\big)$    (35)

where

$\mu_L = \dfrac{1}{n}\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}i_L(\mathbf{q}), \quad \mu_R = \dfrac{1}{n}\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}i_R(\mathbf{q} - \mathbf{d}), \quad \sigma_L = \sqrt{\dfrac{1}{n}\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\big(i_L(\mathbf{q}) - \mu_L\big)^2}, \quad \sigma_R = \sqrt{\dfrac{1}{n}\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\big(i_R(\mathbf{q} - \mathbf{d}) - \mu_R\big)^2}$    (36)

$\mu_L$ and $\mu_R$ represent the means of the pixel intensities within the left and right image blocks, respectively. $\sigma_L$ and $\sigma_R$ denote the standard deviations of the left and right image blocks, respectively. $n$ represents the number of pixels within each image block. The NCC cost reflects the similarity between the given pair of left and right image blocks: a higher $c_{\mathrm{NCC}}$ corresponds to a better block matching.
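A straightforward (unoptimized) C++ sketch of the NCC cost in (35)-(36) for a single block pair of 8-bit gray-scale images; the caller is assumed to keep the block and the disparity shift within the image bounds:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// NCC matching cost between the block centered at (u, v) in the left image and the
// block shifted left by disparity d in the right image, cf. (35)-(36).
double nccCost(const cv::Mat& left, const cv::Mat& right,
               int u, int v, int d, int halfWin) {
    double sumL = 0, sumR = 0, sumLL = 0, sumRR = 0, sumLR = 0;
    const int n = (2 * halfWin + 1) * (2 * halfWin + 1);
    for (int dv = -halfWin; dv <= halfWin; ++dv) {
        for (int du = -halfWin; du <= halfWin; ++du) {
            const double iL = left.at<uchar>(v + dv, u + du);
            const double iR = right.at<uchar>(v + dv, u + du - d);
            sumL += iL; sumR += iR;
            sumLL += iL * iL; sumRR += iR * iR; sumLR += iL * iR;
        }
    }
    const double muL = sumL / n, muR = sumR / n;
    const double sigmaL = std::sqrt(sumLL / n - muL * muL);
    const double sigmaR = std::sqrt(sumRR / n - muR * muR);
    if (sigmaL < 1e-6 || sigmaR < 1e-6) return 0.0;  // textureless block
    return (sumLR / n - muL * muR) / (sigmaL * sigmaR);
}
```

In practice, the block means and standard deviations are often precomputed (e.g., with integral images) so that each block is not re-scanned for every disparity candidate.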
In addition to the cost aggregation via uniform box filtering, many adaptive cost aggregation strategies have been proposed to improve disparity accuracy. One of the most famous algorithms is fast bilateral stereo (FBS) yang2008stereo ; fan2018fbs , which uses a bilateral filter to aggregate the matching costs adaptively. A general expression of cost aggregation in FBS is as follows fan2018ist :
$c_{\mathrm{agg}}(\mathbf{p}, d) = \dfrac{\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\omega_d(\mathbf{q})\,\omega_r(\mathbf{q})\,c(\mathbf{q}, d)}{\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\omega_d(\mathbf{q})\,\omega_r(\mathbf{q})}$    (37)

where the weighting functions $\omega_d$ and $\omega_r$ are based on spatial distance and color similarity, respectively fan2018fbs . The costs within a rectangular block are aggregated adaptively to produce $c_{\mathrm{agg}}$.
3. Disparity Optimization
The local algorithms simply select the disparities that correspond to the lowest difference costs or the highest correlation costs as the best disparities in a Winner-Take-All (WTA) way.
Unlike the WTA strategy applied in the local algorithms, matching costs from neighboring pixels are also taken into account in the global algorithms, e.g., graph cuts (GC) boykov2001fast and belief propagation (BP) ihler2005loopy . The MRF is a commonly used graphical model in such algorithms. An example of the MRF model is depicted in Fig. 12. The graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ is a set of vertices $\mathcal{V}$ connected by edges $\mathcal{E}$. Two edges sharing one common vertex are called a pair of adjacent edges blake2011markov . Since the MRF is considered to be undirected, $(\mathbf{p}, \mathbf{q})$ and $(\mathbf{q}, \mathbf{p})$ refer to the same edge here. $\mathcal{N}_{\mathbf{p}}$ is the neighborhood system of a vertex $\mathbf{p}$.

For stereo vision problems, $\mathbf{D}$ is a pixel disparity image and each graph vertex (or node) at the site of a pixel $\mathbf{p}$ carries a disparity node value $d_{\mathbf{p}}$. Because the consideration of more candidates usually makes true disparity inference intractable, only the neighbors adjacent to $\mathbf{p}$ are considered for stereo matching tappen2003comparison , in a pairwise MRF fashion, as the disparity of a node tends to have a strong correlation with its vicinities, while it is linked only implicitly to any other random nodes in the disparity map. The joint MRF probability can be written as tappen2003comparison :

$P(\mathbf{y}, \mathbf{D}) = \prod_{\mathbf{p}}\Phi(y_{\mathbf{p}}, d_{\mathbf{p}})\prod_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\Psi(d_{\mathbf{p}}, d_{\mathbf{q}})$    (38)

where $\mathbf{y}$ represents image intensity differences, $\Phi(\cdot)$ expresses the compatibility between possible disparities and the corresponding image intensity differences, while $\Psi(\cdot)$ expresses the compatibility between $d_{\mathbf{p}}$ and its neighborhood system. Now, the aim of finding the best disparity is equivalent to maximizing $P(\mathbf{y}, \mathbf{D})$ in (38), by formulating it as an energy function fan2019cvprw :

$E(\mathbf{D}) = \sum_{\mathbf{p}}D(y_{\mathbf{p}}, d_{\mathbf{p}}) + \sum_{\mathbf{p}}\sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}V(d_{\mathbf{p}}, d_{\mathbf{q}})$    (39)

where $D(\cdot)$ and $V(\cdot)$ are two energy functions: $D(\cdot)$ corresponds to the matching cost and $V(\cdot)$ determines the aggregation from the neighbors. In the MRF model, the method to formulate an adaptive $V(\cdot)$ is important, because image intensity in discontinuous areas usually varies greatly from that of its neighbors fan2018real . However, the process of minimizing (39) results in high computational complexity, rendering real-time performance challenging. Therefore, SGM hirschmuller2007stereo breaks down (39) into:

$E(\mathbf{D}) = \sum_{\mathbf{p}}\Big(c(\mathbf{p}, d_{\mathbf{p}}) + \sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\lambda_1\,\delta\big(|d_{\mathbf{p}} - d_{\mathbf{q}}| = 1\big) + \sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}}\lambda_2\,\delta\big(|d_{\mathbf{p}} - d_{\mathbf{q}}| > 1\big)\Big)$    (40)

where $\mathbf{D}$ is the disparity image, $c(\mathbf{p}, d_{\mathbf{p}})$ is the matching cost, and $\mathbf{q}$ is a pixel in the neighborhood system $\mathcal{N}_{\mathbf{p}}$ of $\mathbf{p}$. $\lambda_1$ penalizes the neighboring pixels with small disparity differences, i.e., one pixel; $\lambda_2$ penalizes the neighboring pixels with large disparity differences, i.e., larger than one pixel. $\delta(\cdot)$ returns 1 if its argument is true and 0 otherwise.
4. Disparity Refinement
Disparity refinement usually involves several post-processing steps, such as the left-and-right disparity consistency check (LRDCC), subpixel enhancement and weighted median filtering scharstein2002taxonomy . The LRDCC can remove most of the occluded areas, which are only visible in one of the two images fan2018road . In addition, a disparity error larger than one pixel may result in a non-negligible 3D geometry reconstruction error fan2018road . Therefore, subpixel enhancement provides an easy way to increase disparity image resolution by simply interpolating the matching costs around the initial disparity scharstein2002taxonomy . Moreover, a median filter can be applied to the disparity image to fill holes and remove incorrect matches scharstein2002taxonomy . However, the above disparity refinement steps are not always necessary, and their sequential use depends entirely on the chosen algorithm and the application needs.
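The four steps above are bundled in OpenCV's semi-global block matcher; the sketch below shows a typical invocation on a rectified image pair (the parameter values are illustrative, not tuned for any particular dataset):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("rect_left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("rect_right.png", cv::IMREAD_GRAYSCALE);

    const int blockSize      = 7;    // matching block size (odd)
    const int numDisparities = 128;  // disparity search range (multiple of 16)

    // P1/P2 play the role of the small/large disparity-difference penalties in (40).
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
        /*minDisparity=*/0, numDisparities, blockSize,
        /*P1=*/8  * blockSize * blockSize,
        /*P2=*/32 * blockSize * blockSize,
        /*disp12MaxDiff=*/1,          // left-right disparity consistency check
        /*preFilterCap=*/63,
        /*uniquenessRatio=*/10,
        /*speckleWindowSize=*/100,    // small-blob (speckle) removal
        /*speckleRange=*/2);

    cv::Mat disp16, disp, vis;
    sgbm->compute(left, right, disp16);          // fixed-point disparity (scaled by 16)
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);  // subpixel disparity in pixels

    cv::normalize(disp16, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("disparity.png", vis);
    return 0;
}
```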
Machine/deep learning-based stereo vision algorithms
With recent advances in machine/deep learning, CNNs have been prevalently used for disparity estimation. For instance, Žbontar and LeCun zbontar2015computing utilized a CNN to compute patch-wise similarity scores, as shown in Fig. 13. It consists of a convolutional layer L1 and seven fully-connected layers L2-L8. The inputs to this CNN are two 9×9-pixel gray-scale image patches. L1 consists of 32 convolution kernels of size 5×5×1. L2 and L3 have 200 neurons each. After L3, the two 200-dimensional feature vectors are concatenated into a 400-dimensional vector and passed through the L4-L7 layers. The final layer L8 maps the output to two real numbers, which are then fed through a softmax function to produce a distribution over the two classes: a) good match and b) bad match. Finally, computer vision-based cost aggregation and disparity optimization/refinement techniques are utilized to produce the final disparity images. Although this method has achieved state-of-the-art accuracy, it is limited by the employed matching cost aggregation technique and can produce wrong predictions in occluded or texture-less/reflective regions zhang2019GANet .
In this regard, some researchers have leveraged CNNs to improve the computer vision-based cost aggregation step. SGM-Nets seki2017sgm is one of the most well-known methods of this type. Its main contribution is a CNN-based technique for predicting the SGM penalty parameters $\lambda_1$ and $\lambda_2$ in (40) hirschmuller2007stereo , as illustrated in Fig. 14. A gray-scale image patch and its normalized position are used as the CNN inputs. The network has: a) two convolution layers, each followed by a rectified linear unit (ReLU) layer; b) a concatenation layer for merging the two types of inputs; c) two fully connected (FC) layers of size 128 each, followed by a ReLU layer and an exponential linear unit (ELU); and d) a constant layer to keep the SGM penalty values positive. The costs can then be accumulated along four directions. The CNN output values correspond to the standard parameterization.

Recently, end-to-end deep CNNs have become very popular. For example, Mayer et al. mayer2016large created three large synthetic datasets (FlyingThings3D, Driving and Monkaa; see lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html) and proposed a CNN named DispNet for dense disparity estimation. Later on, Pang et al. pang2017cascade proposed a two-stage cascade CNN for disparity estimation: its first stage enhances DispNet mayer2016large by equipping it with extra up-convolution modules, and its second stage rectifies the disparity initialized by the first stage and generates residual signals across multiple scales. Furthermore, GC-Net kendall2017end incorporated feature extraction (cost computation), cost aggregation and disparity optimization/refinement into a single end-to-end CNN model, and it achieved state-of-the-art accuracy on the FlyingThings3D benchmark mayer2016large as well as the KITTI stereo 2012 and 2015 benchmarks geiger2012we ; menze2015joint ; menze2018object . In 2018, Chang et al. chang2018pyramid proposed the Pyramid Stereo Matching Network (PSMNet), consisting of two modules: a) spatial pyramid pooling and b) a 3D CNN. The former aggregates the context of different scales and locations, while the latter regularizes the cost volume. Unlike PSMNet chang2018pyramid , the guided aggregation net (GANet) zhang2019GANet replaces the widely used 3D CNN with two novel layers: a semi-global aggregation layer and a local guided aggregation layer, which help save a lot of memory and computational cost.
Although the aforementioned CNN-based disparity estimation methods have achieved compelling results, they usually have a huge number of learnable parameters, resulting in long processing times. Therefore, current state-of-the-art CNN-based disparity estimation algorithms have rarely been put to practical use in autonomous driving. We believe these methods will be applied in more real-world applications, with future advances in embedded computing HW.
Performance Evaluation
As discussed above, disparity estimation speed and accuracy are two key properties and they are always pitted against each other. Therefore, the performance evaluation of a given stereo vision algorithm usually involves both of these two properties tippetts2016review .
The following two metrics are commonly used to evaluate the accuracy of an estimated disparity image barron1994performance :
1. Root mean squared (RMS) error $e_{\mathrm{RMS}}$:

$e_{\mathrm{RMS}} = \sqrt{\dfrac{1}{N}\sum_{\mathbf{p}}\big(d_E(\mathbf{p}) - d_G(\mathbf{p})\big)^2}$    (41)

2. Percentage of error pixels (PEP) $e_{\mathrm{PEP}}$ (tolerance: $\varepsilon_d$ pixels):

$e_{\mathrm{PEP}} = \dfrac{1}{N}\sum_{\mathbf{p}}\delta\big(|d_E(\mathbf{p}) - d_G(\mathbf{p})| > \varepsilon_d\big)\times100\%$    (42)

where $d_E(\mathbf{p})$ and $d_G(\mathbf{p})$ represent the estimated and ground truth disparity images, respectively; $N$ denotes the total number of disparities used for evaluation; $\varepsilon_d$ represents the disparity evaluation tolerance.
Additionally, a general way to depict the efficiency of a disparity estimation algorithm is given in millions of disparity evaluations per second (Mde/s) tippetts2016review as follows:

$M_{\mathrm{de/s}} = \dfrac{u_{\max}\,v_{\max}\,d_{\max}}{t}\times10^{-6}$    (43)

where $u_{\max}$ and $v_{\max}$ denote the image width and height (in pixels), $d_{\max}$ is the maximum disparity, and $t$ is the processing time (in seconds).
However, the speed of a disparity estimation algorithm typically varies across different platforms, and it can be greatly boosted by exploiting the parallel computing architecture.
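A simple C++ sketch of the two accuracy metrics (41) and (42); it assumes CV_32F disparity images in which a ground-truth value of zero marks an unlabeled pixel:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <cstdio>

// RMS error (41) and percentage of error pixels (42) between an estimated and a
// ground-truth disparity image; pixels without ground truth (0) are skipped.
void evaluateDisparity(const cv::Mat& dEst, const cv::Mat& dGt, double tolerance) {
    double sumSq = 0.0;
    int n = 0, bad = 0;
    for (int v = 0; v < dGt.rows; ++v) {
        for (int u = 0; u < dGt.cols; ++u) {
            const float gt = dGt.at<float>(v, u);
            if (gt <= 0.0f) continue;  // invalid / unlabeled pixel
            const double err = std::abs(dEst.at<float>(v, u) - gt);
            sumSq += err * err;
            if (err > tolerance) ++bad;
            ++n;
        }
    }
    const double rms = std::sqrt(sumSq / std::max(n, 1));
    const double pep = 100.0 * bad / std::max(n, 1);
    std::printf("RMS = %.3f px, PEP (> %.1f px) = %.2f %%\n", rms, tolerance, pep);
}
```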
5 Heterogeneous Computing
Heterogeneous computing systems use multiple types of processors or cores. In the past, heterogeneous computing meant that different instruction-set architectures (ISAs) had to be handled differently, while modern heterogeneous system architecture (HSA) systems allow users to utilize multiple processor types. As illustrated in Fig. 15,

a typical HSA system consists of two different types of processors: 1) a multi-threading central processing unit (CPU) and 2) a graphics processing unit (GPU) mittal2015survey , which are connected by a peripheral component interconnect (PCI) express bus. The CPU's memory management unit (MMU) and the GPU's input/output memory management unit (IOMMU) comply with the HSA HW specifications. The CPU runs the operating system and performs traditional serial computing tasks, while the GPU performs 3D graphics rendering and CNN training.
5.1 Multi-Threading CPU
The application programming interface (API) Open Multi-Processing (OpenMP) is typically used to break a serial code into independent chunks for parallel processing. It supports multi-platform shared-memory multiprocessing programming in C/C++ and Fortran jin2011high . An explicit parallelism programming model, typically known as a fork-join model, is illustrated in Fig. 16,

where the compiler instructs a section of the serial code to run in parallel fan2019icvs . The master thread (serial execution on one core) forks a number of slave threads. The tasks are divided to run in parallel amongst the slave threads on multiple cores. Synchronization waits until all slave threads finish their allocated tasks fan2016ist . Finally, the slave threads join together at a subsequent point and resume sequential execution.
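A minimal fork-join sketch with OpenMP, parallelizing a per-row loop that stands in for a pixel-wise cost computation (the workload inside the loop is a placeholder):

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int rows = 480;
    std::vector<double> rowCost(rows, 0.0);

    // The master thread forks a team of slave threads here; the loop iterations are
    // divided amongst the cores and the threads join again at the end of the block.
    #pragma omp parallel for
    for (int v = 0; v < rows; ++v) {
        double c = 0.0;
        for (int u = 0; u < 640; ++u)
            c += static_cast<double>((u * v) % 255);  // stand-in for per-pixel matching costs
        rowCost[v] = c;
    }

    std::printf("processed %d rows with up to %d threads\n", rows, omp_get_max_threads());
    return 0;
}
```

Compiled with an OpenMP-enabled compiler (e.g., with -fopenmp), the loop iterations are distributed across the available CPU cores and joined before the final print statement.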
5.2 GPU
GPUs have been extensively used in computer vision and deep learning to accelerate computationally intensive but highly parallelizable workloads, such as image processing and CNN training. Compared with a CPU, which consists of a small number of cores optimized for sequential serial processing, a GPU has a highly parallel architecture composed of hundreds or thousands of lighter cores designed to handle multiple tasks concurrently.
A typical GPU architecture is shown in Fig. 17. It consists of multiple streaming multiprocessors (SMs), each containing a number of streaming processors (SPs). The single instruction multiple data (SIMD) architecture allows the SPs on the same SM to execute the same instruction but process different data at each clock cycle. The device has its own dynamic random access memory (DRAM), which consists of global memory, constant memory and texture memory. The DRAM can communicate with the host memory via the graphical/memory controller hub (GMCH) and the I/O controller hub (ICH), which are also known as the Intel northbridge and the Intel southbridge, respectively. Each SM has four types of on-chip memories: register, shared memory, constant cache and texture cache. Being on-chip memories, the constant cache and texture cache are utilized to speed up data fetching from the constant memory and texture memory, respectively. Since the shared memory is small, it is only used for the duration of processing a thread block. Registers are only visible to individual threads. The details of the different types of GPU memories are given in Table 1.

Memory | Location | Cached | Access | Scope |
register | on-chip | n/a | r/w | one thread |
shared | on-chip | n/a | r/w | all threads in a block |
global | off-chip | no | r/w | all threads + host |
constant | off-chip | yes | r | all threads + host |
texture | off-chip | yes | r | all threads + host |
In CUDA C programming, the threads are grouped into a set of 3D thread blocks which are then organized as a 3D grid. The kernels are defined on the host using the CUDA C programming language. Then, the host issues the commands that submit the kernels to devices for execution. Only one kernel can be executed at a given time. Once a thread block is distributed to an SM, the threads are divided into groups of 32 parallel threads which are executed by SPs. Each group of 32 parallel threads is known as a warp. Therefore, the size of a thread block is usually chosen as a multiple of 32 to ensure efficient data processing.
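A minimal CUDA C sketch of this thread hierarchy: a kernel that converts a disparity image into a depth map (cf. (31)), launched with a block size that is a multiple of the warp size so that warps are fully populated. The focal length and baseline values are illustrative, and the host-to-device copy of the input data is omitted:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each thread converts one disparity value into a depth value (z = f * Tc / d).
__global__ void disparityToDepth(const float* disp, float* depth, int n,
                                 float focal, float baseline) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (idx < n)
        depth[idx] = (disp[idx] > 0.0f) ? focal * baseline / disp[idx] : 0.0f;
}

int main() {
    const int n = 640 * 480;
    float *dDisp, *dDepth;
    cudaMalloc(&dDisp, n * sizeof(float));
    cudaMalloc(&dDepth, n * sizeof(float));
    // ... copy a disparity image into dDisp with cudaMemcpy ...

    const int threadsPerBlock = 256;  // a multiple of the warp size (32)
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    disparityToDepth<<<blocks, threadsPerBlock>>>(dDisp, dDepth, n, 700.0f, 0.5f);
    cudaDeviceSynchronize();

    cudaFree(dDisp);
    cudaFree(dDepth);
    return 0;
}
```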
6 Summary
In this chapter, we first introduced the autonomous car system, from both HW aspect (car sensors and car chassis) and SW aspect (perception, localization and mapping, prediction and planning, and control). Particularly, we introduced the autonomous car perception module, which has four main functionalities: 1) visual feature detection, description and matching, 2) 3D information acquisition, 3) object detection/recognition and 4) semantic image segmentation. Later on, we provided readers with the preliminaries for the epipolar geometry and introduced computer stereo vision from theory to algorithms. Finally, heterogeneous computing architecture, consisting of a multi-threading CPU and a GPU, was introduced.
Acknowledgments
This chapter has received partial funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871479 (AERIALCORE).
References
- (1) R. Fan, J. Jiao, H. Ye, Y. Yu, I. Pitas, and M. Liu, “Key ingredients of self-driving cars,” 2019.
- (2) I. Pitas, Digital image processing algorithms and applications. John Wiley & Sons, 2000.
- (3) NOAA, “LIDAR—light detection and ranging—is a remote sensing method used to examine the surface of the Earth,” 2013.
- (4) T. Bureau, “Radar definition,” Public Works and Government Services Canada, 2013.
- (5) W. J. Westerveld, “Silicon photonic micro-ring resonators to sense strain and ultrasound,” 2014.
- (6) L. Zheng, Y. Zhu, B. Xue, M. Liu, and R. Fan, “Low-cost gps-aided lidar state estimation and map building,” in 2019 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2019, pp. 1–6.
- (7) N. Samama, Global positioning: Technologies and performance. John Wiley & Sons, 2008, vol. 7.
- (8) S. Liu, “Chassis technologies for autonomous robots and vehicles,” 2019.
- (9) M. U. M. Bhutta, M. Kuse, R. Fan, Y. Liu, and M. Liu, “Loop-box: Multiagent direct slam triggered by single loop closure for large-scale mapping,” IEEE transactions on cybernetics, 2020.
- (10) R. C. Smith and P. Cheeseman, “On the representation and estimation of spatial uncertainty,” The international journal of Robotics Research, vol. 5, no. 4, pp. 56–68, 1986.
- (11) C. Katrakazas, M. Quddus, W.-H. Chen, and L. Deka, “Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions,” Transportation Research Part C: Emerging Technologies, vol. 60, pp. 416–442, 2015.
- (12) T. H. Cormen, “Section 24.3: Dijkstra’s algorithm,” Introduction to algorithms, pp. 595–601, 2001.
- (13) D. Delling, P. Sanders, D. Schultes, and D. Wagner, “Engineering route planning algorithms,” in Algorithmics of large and complex networks. Springer, 2009, pp. 117–139.
- (14) M. Willis, “Proportional-integral-derivative control,” Dept. of Chemical and Process Engineering University of Newcastle, 1999.
- (15) G. C. Goodwin, S. F. Graebe, M. E. Salgado et al., Control system design. Upper Saddle River, NJ: Prentice Hall, 2001.
- (16) C. E. Garcia, D. M. Prett, and M. Morari, “Model predictive control: theory and practice—a survey,” Automatica, vol. 25, no. 3, pp. 335–348, 1989.
- (17) M. Hassaballah, A. A. Abdelmgeid, and H. A. Alshazly, “Image features detection, description and matching,” in Image Feature Detectors and Descriptors. Springer, 2016, pp. 11–45.
- (18) S. Liu and X. Bai, “Discriminative features for image classification and retrieval,” Pattern Recognition Letters, vol. 33, no. 6, pp. 744–751, 2012.
- (19) P. Moreels and P. Perona, “Evaluation of features detectors and descriptors based on 3d objects,” International journal of computer vision, vol. 73, no. 3, pp. 263–284, 2007.
- (20) P. Dollar, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Trans. Pattern Anal. Mach.Intell., vol. 34, no. 4, pp. 743–761, 2011.
- (21) M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Discriminative scale space tracking,” IEEE Trans. Pattern Anal. Mach.Intell., vol. 39, no. 8, pp. 1561–1575, 2016.
- (22) D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp. 91–110, 2004.
- (23) H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded up robust features,” in ECCV. Springer, 2006, pp. 404–417.
- (24) E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” in 2011 International conference on computer vision. IEEE, 2011, pp. 2564–2571.
- (25) S. Leutenegger, M. Chli, and R. Y. Siegwart, “Brisk: Binary robust invariant scalable keypoints,” in 2011 International conference on computer vision. IEEE, 2011, pp. 2548–2555.
- (26) R. Fan, “Real-time computer stereo vision for automotive applications,” Ph.D. dissertation, University of Bristol, 2018.
- (27) R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge university press, 2003.
- (28) H. Wang, R. Fan, and M. Liu, “Cot-amflow: Adaptive modulation network with co-teaching strategy for unsupervised optical flow estimation,” 2020.
- (29) S. Ullman, “The interpretation of structure from motion,” Proceedings of the Royal Society of London. Series B. Biological Sciences, vol. 203, no. 1153, pp. 405–426, 1979.
- (30) B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment—a modern synthesis,” in International workshop on vision algorithms. Springer, 1999, pp. 298–372.
- (31) H. Wang, Y. Liu, H. Huang, Y. Pan, W. Yu, J. Jiang, D. Lyu, M. J. Bocus, M. Liu, I. Pitas et al., “Atg-pvd: Ticketing parking violations on a drone,” European Conference on Computer Vision (ECCV) Workshops, 2020.
- (32) Z.-Q. Zhao, P. Zheng, S.-t. Xu, and X. Wu, “Object detection with deep learning: A review,” IEEE transactions on neural networks and learning systems, vol. 30, no. 11, pp. 3212–3232, 2019.
- (33) D. Wang, C. Devin, Q.-Z. Cai, F. Yu, and T. Darrell, “Deep object-centric policies for autonomous driving,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 8853–8859.
- (34) A. Mogelmose, M. M. Trivedi, and T. B. Moeslund, “Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1484–1497, 2012.
- (35) B. Wu, F. Iandola, P. H. Jin, and K. Keutzer, “Squeezedet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 129–137.
- (36) R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.
- (37) R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1440–1448.
- (38) S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
- (39) J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.
- (40) J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” 2018.
- (41) A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “Yolov4: Optimal speed and accuracy of object detection,” 2020.
- (42) R. Fan, H. Wang, P. Cai, and M. Liu, “Sne-roadseg: Incorporating surface normal information into semantic segmentation for accurate freespace detection,” in European Conference on Computer Vision. Springer, 2020, pp. 340–356.
- (43) H. Wang, R. Fan, Y. Sun, and M. Liu, “Applying surface normal information in drivable area and road anomaly detection for ground mobile robots,” 2020.
- (44) R. Fan, H. Wang, P. Cai, J. Wu, M. J. Bocus, L. Qiao, and M. Liu, “Learning collision-free space detection from stereo images: Homography matrix brings better data augmentation,” 2020.
- (45) J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
- (46) O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
- (47) V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach.Intell., vol. 39, no. 12, pp. 2481–2495, 2017.
- (48) L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in ECCV, 2018, pp. 801–818.
- (49) M. Yang, K. Yu, C. Zhang, Z. Li, and K. Yang, “Denseaspp for semantic segmentation in street scenes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3684–3692.
- (50) Z. Tian, T. He, C. Shen, and Y. Yan, “Decoders matter for semantic segmentation: Data-dependent decoding enables flexible feature aggregation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3126–3135.
- (51) R. Fan, H. Wang, M. J. Bocus, and M. Liu, “We learn better road pothole detection: from attention aggregation to adversarial domain adaptation,” 2020.
- (52) C. Hazirbas, L. Ma, C. Domokos, and D. Cremers, “Fusenet: Incorporating depth into semantic segmentation via fusion-based cnn architecture,” in Asian conference on computer vision. Springer, 2016, pp. 213–228.
- (53) R. Fan, H. Wang, B. Xue, H. Huang, Y. Wang, M. Liu, and I. Pitas, “Three-filters-to-normal: An accurate and ultrafast surface normal estimator,” 2020.
- (54) R. Fan, M. J. Bocus, and N. Dahnoun, “A novel disparity transformation algorithm for road segmentation,” Information Processing Letters, vol. 140, pp. 18–24, 2018.
- (55) R. Fan, U. Ozgunalp, B. Hosking, M. Liu, and I. Pitas, “Pothole detection based on disparity transformation and road surface modeling,” IEEE Transactions on Image Processing, vol. 29, pp. 897–908, 2019.
- (56) R. Fan and M. Liu, “Road damage detection based on unsupervised disparity map segmentation,” IEEE Transactions on Intelligent Transportation Systems, 2019.
- (57) Q. Ha, K. Watanabe, T. Karasawa, Y. Ushiku, and T. Harada, “Mfnet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 5108–5115.
- (58) R. Fan, V. Prokhorov, and N. Dahnoun, “Faster-than-real-time linear lane detection implementation using soc dsp tms320c6678,” in 2016 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2016, pp. 306–311.
- (59) U. Ozgunalp, R. Fan, X. Ai, and N. Dahnoun, “Multiple lane detection algorithm based on novel dense vanishing point estimation,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 3, pp. 621–632, 2016.
- (60) R. Fan and N. Dahnoun, “Real-time stereo vision-based lane detection system,” Measurement Science and Technology, vol. 29, no. 7, p. 074005, 2018.
- (61) J. Jiao, R. Fan, H. Ma, and M. Liu, “Using DP towards a shortest path problem-related application,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 8669–8675.
- (62) E. Trucco and A. Verri, Introductory techniques for 3-D computer vision. Englewood Cliffs: Prentice Hall, 1998, vol. 201.
- (63) R. Fan, J. Jiao, J. Pan, H. Huang, S. Shen, and M. Liu, “Real-time dense stereo embedded in a UAV for road inspection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019, pp. 535–543.
- (64) Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, 2000.
- (65) H. C. Longuet-Higgins, “A computer algorithm for reconstructing a scene from two projections,” Nature, vol. 293, no. 5828, pp. 133–135, 1981.
- (66) R. I. Hartley, “In defense of the eight-point algorithm,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 6, pp. 580–593, 1997.
- (67) R. Fan, X. Ai, and N. Dahnoun, “Road surface 3D reconstruction based on dense subpixel disparity map estimation,” IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 3025–3035, 2018.
- (68) B. Tippetts, D. J. Lee, K. Lillywhite, and J. Archibald, “Review of stereo vision algorithms and their suitability for resource-limited systems,” Journal of Real-Time Image Processing, vol. 11, no. 1, pp. 5–25, 2016.
- (69) W. Luo, A. G. Schwing, and R. Urtasun, “Efficient deep learning for stereo matching,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5695–5703.
- (70) D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using structured light,” in Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1. IEEE, 2003, pp. I–I.
- (71) M. G. Mozerov and J. van de Weijer, “Accurate stereo matching by two-step energy minimization,” IEEE Transactions on Image Processing, vol. 24, no. 3, pp. 1153–1163, 2015.
- (72) M. F. Tappen and W. T. Freeman, “Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters,” in Proceedings of the IEEE International Conference on Computer Vision. IEEE, 2003, p. 900.
- (73) H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 2, pp. 328–341, 2007.
- (74) J. Žbontar and Y. LeCun, “Stereo matching by training a convolutional neural network to compare image patches,” Journal of Machine Learning Research, vol. 17, no. 1, pp. 2287–2318, 2016.
- (75) H. Hirschmuller and D. Scharstein, “Evaluation of stereo matching costs on images with radiometric differences,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 9, pp. 1582–1599, 2008.
- (76) Q. Yang, L. Wang, R. Yang, H. Stewénius, and D. Nistér, “Stereo matching with color-weighted correlation, hierarchical belief propagation, and occlusion handling,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 3, pp. 492–504, 2008.
- (77) R. Fan, Y. Liu, M. J. Bocus, L. Wang, and M. Liu, “Real-time subpixel fast bilateral stereo,” in 2018 IEEE International Conference on Information and Automation (ICIA). IEEE, 2018, pp. 1058–1065.
- (78) R. Fan, Y. Liu, X. Yang, M. J. Bocus, N. Dahnoun, and S. Tancock, “Real-time stereo vision for road surface 3-d reconstruction,” in 2018 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2018, pp. 1–6.
- (79) Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 11, pp. 1222–1239, 2001.
- (80) A. T. Ihler, A. S. Willsky et al., “Loopy belief propagation: Convergence and effects of message errors,” Journal of Machine Learning Research, vol. 6, pp. 905–936, 2005.
- (81) A. Blake, P. Kohli, and C. Rother, Markov random fields for vision and image processing. MIT Press, 2011.
- (82) D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” International Journal of Computer Vision, vol. 47, no. 1-3, pp. 7–42, 2002.
- (83) J. Žbontar and Y. LeCun, “Computing the stereo matching cost with a convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1592–1599.
- (84) F. Zhang, V. Prisacariu, R. Yang, and P. H. Torr, “GA-Net: Guided aggregation net for end-to-end stereo matching,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 185–194.
- (85) A. Seki and M. Pollefeys, “SGM-Nets: Semi-global matching with neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 231–240.
- (86) N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4040–4048.
- (87) J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, “Cascade residual learning: A two-stage convolutional neural network for stereo matching,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017, pp. 887–895.
- (88) A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, “End-to-end learning of geometry and context for deep stereo regression,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 66–75.
- (89) A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 3354–3361.
- (90) M. Menze, C. Heipke, and A. Geiger, “Joint 3D estimation of vehicles and scene flow,” ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, vol. 2, 2015.
- (91) M. Menze, C. Heipke, and A. Geiger, “Object scene flow,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 140, pp. 60–76, 2018.
- (92) J.-R. Chang and Y.-S. Chen, “Pyramid stereo matching network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5410–5418.
- (93) J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,” International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.
- (94) S. Mittal and J. S. Vetter, “A survey of CPU-GPU heterogeneous computing techniques,” ACM Computing Surveys (CSUR), vol. 47, no. 4, pp. 1–35, 2015.
- (95) H. Jin, D. Jespersen, P. Mehrotra, R. Biswas, L. Huang, and B. Chapman, “High performance computing using MPI and OpenMP on multi-core parallel systems,” Parallel Computing, vol. 37, no. 9, pp. 562–575, 2011.
- (96) R. Fan, S. Duanmu, Y. Liu, Y. Zhu, J. Jiao, M. J. Bocus, Y. Yu, L. Wang, and M. Liu, “Real-time binocular vision implementation on an SoC TMS320C6678 DSP,” in International Conference on Computer Vision Systems. Springer, 2019, pp. 13–23.
- (97) R. Fan and N. Dahnoun, “Real-time implementation of stereo vision based on optimised normalised cross-correlation and propagated search range on a GPU,” in 2017 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2017, pp. 1–6.