Explaining RADAR features for detecting spoofing attacks
in Connected Autonomous Vehicles
Abstract
Connected autonomous vehicles (CAVs) are anticipated to have built-in AI systems for defending against cyberattacks. Machine learning (ML) models form the basis of many such AI systems. These models are notorious for acting like black boxes, transforming inputs into solutions with great accuracy but without explanations to support their decisions. Explanations are needed to communicate model performance, make decisions transparent, and establish trust in the models with stakeholders. Explanations can also indicate when humans must take control, for instance, when the ML model makes low-confidence decisions or offers multiple or ambiguous alternatives. Explanations also provide evidence for post-incident forensic analysis. Research on applying explainable ML to security problems is limited, and more so concerning CAVs. This paper surfaces a critical yet under-researched problem of sensor data uncertainty in training ML attack detection models, especially on highly mobile and risk-averse platforms such as autonomous vehicles. We present a model that explains certainty and uncertainty in sensor input, a characteristic missing from data collection. We hypothesize that, without explainable input data quality, model explanation is inaccurate for a given system. Through experimental evaluation, we estimate uncertainty and mass functions for features in radar sensor data and incorporate them into the training model. The mass function allows the classifier to accurately categorize all spoofed inputs, even though they carry an incorrect class label.
Introduction
Machine learning (ML) applications are driving (?) innovation in connected and autonomous vehicles (CAVs) (?). To enable "self-awareness," CAVs are fitted with a range of perception sensors (lidar, radar, ultrasonic, camera), control sensors, communication systems (cellular, wifi, Bluetooth), control systems (velocity, steering, and braking), and ML-trained semi- and full-autonomy services. Controller area network (CAN) buses share critical sensory information with the onboard intelligent processing units (?) or other networks, forming the backbone of self-driving car perception. These components require a continuous assessment of potential threats to safety to allow the CAVs to navigate the road. ML models used for perception, anomaly detection, and emergency maneuvers in CAVs rely heavily upon sensor data. Research has demonstrated how these models are vulnerable to spoofing and adversarial examples resulting from small-magnitude perturbations in the input data (?). Therefore, understanding inputs (e.g., sensor data) is a critical step toward creating resilient infrastructure within which smart agents like CAVs and human actors can co-exist, thereby reducing risks to life and property (?; ?; ?).
Challenges. A unique challenge of using ML-based attack detection models results from their black-box processing characteristics. ML models generate decisions that are opaque and challenging to understand, even to the experts who designed them. For example, experts cannot directly correlate the weights assigned to sensor signals with the actual decisions. Understanding these correlations is vital to creating effective cyber threat detection and response systems, since experts need to generate models in which the training (known attack and non-attack) and test (unknown attack) data may vary considerably from each other.
Motivation. Experts need to understand when a poor decision on the part of a CAV is the result of an action by an adversary (an attack), a fault in the system (programming, design, implementation, or quality control error), or the impact of some other problem. Unfortunately, ML model opacity prevents experts from explaining these decisions swiftly. The challenge increases in a CAV since it is subject to frequent environmental changes, potential sensor performance degradation, and behavioral modification resulting from cyber attacks. Our goal is to help experts understand and improve security-related CAV ML decisions.
While some decisions require certain data, other decisions require integrating uncertain data with certain data to justify potential actions. The integration of uncertain data allows decision-makers to legitimize their choices in particular circumstances. When uncertain, additional knowledge can help decision-makers increase their confidence level and decrease their margin of error to an extent that would not be possible using only data that could be confirmed with certainty. In this paper, we define explainability as a mechanism for reasoning about partial knowledge of uncertain information and describing that reasoning to specific stakeholders.
Proposed Work. Explanation combines artifacts such as models, sensors, and contextual information to describe security "events." We present an empirical evaluation and generation of explanations that allow CAVs to better perceive attacks in the face of uncertain information. We use the Dempster-Shafer Theory (DST) of evidential reasoning (?) to capture the uncertainty in sensor data, such as delay in the validity of measured environmental data. DST is a generalization of probability theory and has previously been employed in map building for CAVs (?) and in CAV sensor fusion (?).
Contributions. In this paper, we make the following contributions:
1. We propose an explanation methodology that captures the fidelity of input sensor data using the Dempster-Shafer Theory (DST) of belief functions. Using this evidence, we hypothesize that security experts can reliably train attack detection models for an autonomous vehicle, a platform enabled by a network of sensors to determine a safe and secure vehicle trajectory.
2. Explanations describe data reliability in determining the presence of an obstacle as well as a spoofing attack by an adversary. We experimentally demonstrate a proof-of-concept implementation of the proposed explanation methodology on Radio Detection And Ranging (RADAR) sensor data. Specifically, we demonstrate through experimental evaluation that quantifying the uncertainty in detecting an obstacle and considering this evidence to train ML-based attack models can produce more reliable attack predictions.
3. We identify a set of challenges for providing explanations for CAV security, ranging from CAV-specific physical limitations to the vulnerability of explanations themselves and limitations in the ability to provide meaningful explanations.
Background and Related Work
Sensor spoofing attacks in CAVs. Sensor spoofing consists of manipulating a sensor's perception of the environment or its output data to generate or simulate erroneous measurements. These attacks are dangerous because they can cause unexpected changes in CAV automated driving operations, resulting in loss of steering control, sudden brake activation (?; ?; ?), or other problems. Sensor manipulation can also compromise the detection of obstacles, road lanes, traffic lights, and signs, resulting in severe consequences for the safety of drivers, other vehicles, and pedestrians (?; ?; ?; ?). The literature describes two main types of sensor spoofing attacks.
In the first, the attacker accesses the vehicle Controller Area Network (CAN) bus to manipulate messages sent from the sensors to the car Engine Control Units (ECUs) (?; ?). In the second, the attacker injects external signals (e.g., sound, light, electromagnetic interference) that alter the signals captured by the sensors, remotely manipulating vehicle behavior (?; ?; ?).
CAN bus vulnerabilities. The CAN bus has become a common target for spoofing attacks because it is a widely used standard for in-vehicle network communications. The protocol allows the car ECUs to broadcast information and receive data from the CAV sensors in the form of CAN packets. Each packet contains vehicle status signals and sensor data, including vehicle velocity, acceleration, steering wheel angle, steering signal, and brake status (?). The standard CAN bus has no authentication or encryption mechanism in place. Thus, ECUs cannot ensure that the packets they receive come from a legitimate source or have not been altered, which allows an adversary with access to the bus to eavesdrop on the communications and inject malicious data.
Physical attacks on sensors. By design, sensors are sensitive to external physical stimuli, even stimuli not intended for measurement. An attacker can generate malicious physical signals that are transduced into altered measurements produced by the sensors. For example, attackers can inject modulated electromagnetic interference into a radar to make it perceive fake obstacles, or use a magnetic field to corrupt anti-lock braking system sensor measurements (?; ?; ?; ?).
Spoofing detection. Current data-driven spoofing detection techniques for CAVs use ML algorithms to analyze sensor data on in-vehicle communication networks. These intrusion detection systems (IDS) use large amounts of data collected from heterogeneous CAV sensors (single or multiple) and associated detection labels for training and optimization in a supervised or semi-supervised fashion. In 2016, Kang et al. (?) proposed a restricted Boltzmann machine (RBM) to separate normal from altered CAN packets. Taylor et al. (?) developed a supervised long short-term memory (LSTM) model to predict the next packet value for a given input sequence. More recently, Zhou et al. (?) developed an ML model that learns parameters with shared weights to improve detection. In contrast, Song et al. have proposed an IDS based on a deep convolutional neural network able to learn vehicle network traffic patterns without hand-designed features (?).
Theory of Evidence. The theory of evidence, also known as Dempster-Shafer Theory (DST), is a general framework for reasoning with uncertainty; it applies Dempster's combination rule to combine information from different sources. DST is widely applied to tasks in the field of autonomous driving (?), such as environment perception, object tracking, classification, and decision-making. DST creates a more accurate representation of the environment by combining evidential data, for example by fusing information from different sensors (e.g., car sensor data) (?). For high-level decision-making, Claussmann et al. deploy Dempster's rule of combination to obtain a risk value for hypotheses and trajectories (?). Magnier et al. use the theory of evidence for the classification of lidar sensor data (?). As far as we know, this is the first paper on the application of DST to sensor data for explainable security.
RADAR sensors. Radar supports adaptive cruise control and advanced driver-assistance systems (ADAS) for collision avoidance and pedestrian detection, and complements camera and lidar systems (?; ?). Radar sensors emit electromagnetic waves and receive the reflection to measure the time of flight. These sensors are frequency-modulated continuous-wave (FMCW) radars: the sensor transmits a chirp, and the time delay of the received chirp determines the distance to the reflecting object. The phase difference determines the velocity based on the Doppler effect. We consider the radar sensor as the spoofing attack target, as in previous work (?; ?).
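As a rough numerical illustration of the FMCW relationships described above, the sketch below converts a chirp's round-trip delay into range and a Doppler frequency shift into radial velocity. The 77 GHz carrier frequency and the example delay and Doppler values are assumptions chosen for illustration, not parameters of any particular radar or of the dataset used later in this paper.

```python
# Illustrative only: range and radial velocity from FMCW radar measurements.
# The carrier frequency and example inputs are assumed values, not taken from the paper.

C = 3.0e8          # speed of light (m/s)
F_CARRIER = 77e9   # typical automotive radar carrier frequency (Hz), assumed

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance to the reflector from the chirp's round-trip time of flight."""
    return C * round_trip_delay_s / 2.0

def velocity_from_doppler(doppler_shift_hz: float) -> float:
    """Radial velocity of the reflector from the Doppler shift of the received chirp."""
    return C * doppler_shift_hz / (2.0 * F_CARRIER)

print(range_from_delay(4e-7))         # 400 ns round trip  -> 60.0 m
print(velocity_from_doppler(5133.0))  # ~5.1 kHz Doppler   -> ~10 m/s
```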
Sensor uncertainty. A CAV may need to assign different weights to different sensor inputs as sensor inputs become more or less trusted (?), or produce measurements of different quality or accuracy level (?). Changing environmental conditions, sensor degradation or obstruction, and spoofing attacks are potential causes of sensor uncertainty.
Stakeholders. The stakeholders (?) for our explanations are the machine learning experts and engineers who design and manage decision-making algorithms for CAV perception, planning, and security. While passengers, vehicle operators, and third parties are potential future stakeholders, researchers should first address many of the challenges with explanations listed below.
Challenges with Explanations
Our stakeholders need explanations that address sensor uncertainty to help them improve security and decision-making models. However, beyond this, CAVs present a unique set of challenges for explanation generation that relate to security, hard and soft timing constraints, explanation efficacy, and data availability. Note that many of these challenges generalize beyond explanations for CAVs. We elaborate on these challenges below:
1. Providing meaningful feedback to ML models and stakeholders: How does the ML model know that it is making a wrong decision? Is there a way of providing feedback to the stakeholder regarding the ML model's decision? What kind of meaningful information do various types of stakeholders need in order to be able to react to bad ML model decisions effectively?
2. Negative effect of erroneous explanations: Incorrect, confusing, or misleading explanations may lead to poor future actions or reduce the efficacy of ML models that incorporate the explanation data as input. An explanation need not be intentionally incorrect or deceptive to cause problems. It may simply be wrong but still trusted by the stakeholders that rely upon it.
3. Attacks on explanations: Explanations can be attacked by adding, deleting, or altering parts of the explanation. Similarly, attacks on the model generating the explanation can lead to wrong and potentially confusing explanations. These attacks potentially put stakeholders who depend upon these explanations at risk. They can also put other connected entities at risk by stealthily incorporating attacks into ML model training datasets.
4. Misleading explanations: Explanations may cause intentional misinterpretation. An explanation can mislead, deceive, hide potentially malicious activities, obfuscate, delay actions, prevent analysis, or impact legal outcomes by using information that, while correct, is likely to be misinterpreted by stakeholders in a manner of an attacker's choosing.
5. Missing validation for edge-case explanations: Excessive trust may be placed in explanations that have not been validated against edge cases. When an expert in a field provides a seemingly accurate explanation most or all of the time, others are likely to trust that expert's explanations based on their reputation. If the expert provides an incorrect or incomprehensible explanation, their reputation can be damaged, diminishing the trust stakeholders place in the expert's future explanations. An ML model that provides correct explanations for the vast majority of scenarios is likely to earn the trust of its stakeholders. However, this does not mean that the model can explain every edge case correctly. These edge cases could result in explanations that are wrong, misleading, or conceal attacks.
6. Real-time processing requirements: Generating explanations in real time for automated driving decisions has high processing requirements. CAV prediction models cannot postpone feedback concerning numerous decisions. For example, if a vehicle encounters an obstacle, it must make decisions within milliseconds to guarantee collision avoidance. A fraction of explanations may take longer to generate than the permitted time scales. Likewise, the explanation generation requirements for stakeholders may function on time scales of different orders of magnitude. Thus, generating explanations could cause some decisions to violate real-time processing requirements.
7. Varying temporal requirements: The time requirements for security events in progress, or forensic analysis, may be violated by the time required to generate explanations. Explanations that take a long time to generate or analyze could violate the soft time requirements of attack detection and post-attack analysis. The longer the delay between an event and its analysis, the more the value of that analysis may diminish. Likewise, the detection of a security event may become less valuable if that detection takes place after an attack has already been completed, rather than while the attack is at a stage where a defender could stop or mitigate the attack.
8. Insufficient data: There may be insufficient data to provide a meaningful explanation. Without sufficient information, it may not be possible to generate an explanation that is unambiguous or has a high probability of correctness.
Explaining Uncertainty
We propose a novel approach for modeling attacks on connected autonomous vehicle sensors. Our approach encompasses the certainty and reliability of various sensor signals prior to their use in attack models. Following the DST of belief functions, our approach combines both the detected sensor signals and the degree of belief in the sensors. The resultant belief considers the available evidence from all sensors. Following DST, we allocate probability mass to sets or intervals.
In this paper, we demonstrate a proof-of-concept for our approach, as applied to radar sensor spoofing attacks, using the DST classification algorithm (?). DST is relevant in situations where non-random uncertainty is present and only a subset of the sensor data contains states that are present with certainty. It also has the distinct advantage of not requiring prior knowledge, making it particularly suitable for classifying previously unseen information. Therefore, DST can be used to classify CAV sensor signals, which are uncertain and without precedent. Each sensor data item is classified by combining evidence using Dempster's Rule of Combination (DRC). In other words, DRC combines probabilities of independent items of evidence from sensor data (?). The output label is a class with normalized probabilities according to an underlying Dempster-Shafer mass function. An individual mass function is computed for each feature of the sensor data. The "hidden" mass function provides a more informative description of the classifier output than class probabilities and can be used for decision-making.

Class Definition. We assume the scenario of automatic obstacle tracking, as in CAV perception models (?). We define two classes to which the observed radar signals, correlated with the perceived obstacle status, are assigned: stationary {s} and moving {m}. The set containing the two classes is called Θ and represents the frame of discernment:

$$\Theta = \{s, m\} \qquad (1)$$

The power set P(Θ) is the set of all possible subsets of Θ, including the empty set ∅. For our problem, P(Θ) is defined as:

$$P(\Theta) = \{\emptyset, \{s\}, \{m\}, \{s, m\}\} \qquad (2)$$
We postulate that the quality of evidence (signals) collected by the radar is generated randomly from a Gaussian distribution with mean μ and variance σ² calculated for the specific feature (see Eq. 3). We assign a "mass" value to each element of the power set P(Θ). The "mass" is associated with certainty in evidence collection and is derived from the radar signals.
Feature extraction and modeling. The feature model stores the description of the characteristics mapped to the two classes, stationary s and moving m. Our dataset contains features of an object in front of the radar: timestamp, density, reflection, and velocity. The features from the dataset are described in Table 1. In supervised learning, the dataset is labeled by setting the value of a flag in the radar signal. The flag uses two values to represent either data altered in a spoofing attack or otherwise normal values for binary classification. The problem with this or a similar approach is that a malicious spoofed pattern is detected only if it matches the attack detection model. A further assumption is that the data features will always be captured by the sensors under the same physical and environmental conditions as during model training. Especially for attack detection, this assumption can be overly stringent. Our approach is instead to assign the "likelihood" of observing feature values such that, collectively, they can determine whether the captured data is "certain" (e.g., expected behavior) or not (?).
This likelihood can be modeled using a normal distribution for every feature in the radar dataset as follows:

$$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) \qquad (3)$$
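A minimal sketch of Eq. 3 in Python: the likelihood of a feature value is scored with a normal density whose mean and standard deviation are assumed to have been estimated from labeled training data for each class. The function name and signature are ours, not part of the paper's implementation.

```python
import math

def gaussian_likelihood(x: float, mu: float, sigma: float) -> float:
    """Eq. 3: likelihood of observing feature value x given the class-specific
    mean mu and standard deviation sigma of a normal distribution."""
    coeff = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```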
For individual classes (stationary or moving), we have the following function, which sums the likelihoods over all features:

$$f_S = \sum_{i=1}^{n} f(x_i \mid \mu_{S,i}, \sigma_{S,i}^2) \qquad (4)$$

with S ∈ P(Θ) and n the number of features. An example for S = {s, m} regarding the radar signals is:

$$f_{\{s,m\}} = f_{\{s\}} + f_{\{m\}} \qquad (5)$$
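Continuing the sketch, the evidence for an element of the power set is obtained by summing the likelihoods of the singleton classes it contains, in the spirit of Eqs. 4-5; here it is computed per feature, since the mass functions in the next section are built feature by feature. The per-class means and standard deviations below are illustrative placeholders, not statistics from the paper's dataset.

```python
# Illustrative per-class (mean, std) statistics for two radar features; in practice
# these would be estimated from labeled data for the stationary {s} and moving {m} classes.
CLASS_STATS = {
    "s": {"velocity": (0.0, 1.0),  "reflection": (40.0, 5.0)},
    "m": {"velocity": (30.0, 8.0), "reflection": (25.0, 5.0)},
}

def feature_evidence(feature: str, value: float, element: frozenset) -> float:
    """Evidence for a power-set element from one feature: the sum of the Gaussian
    likelihoods of the singleton classes in the element (e.g., f_{s,m} = f_s + f_m)."""
    return sum(gaussian_likelihood(value, *CLASS_STATS[c][feature]) for c in element)
```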
Classification
In the formalism of the DST, a "mass value" is a value between 0 and 1 assigned to each element of the power set P(Θ). DST does not prescribe the method of creating these values. In our approach, these values correlate with the characteristics of the data.
The mass values for the elements of P(Θ) are computed for each feature. Dempster's combination rule then reduces the per-feature mass values to one set. The lower and upper bounds of probability are computed from the resulting set of mass values. These bounds, computed for each element of P(Θ), represent the final result. In DST's formalism, they are called "belief" and "plausibility."
Computing Mass Values
To apply the Dempster-Shafer Theory, a mass m for each element of P(Θ) is required. We use the following equation to assign the mass to a given element S ∈ P(Θ):

$$m(S) = \frac{f_S}{\sum_{A \in P(\Theta),\, A \neq \emptyset} f_A} \qquad (6)$$

In Eq. 6, the feature value of one class S is divided by the sum of the feature values of all possible classes of the power set. Therefore, we achieve a normalization where all mass values sum up to one:

$$\sum_{S \in P(\Theta)} m(S) = 1 \qquad (7)$$
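A sketch of Eqs. 6-7, reusing the evidence function above: for one feature, the evidence of each non-empty element of P(Θ) is divided by the total evidence, so the resulting masses sum to one. The element names mirror the frame of discernment defined earlier.

```python
# Non-empty elements of the power set P(Theta) for the frame {s, m}.
POWER_SET = [frozenset({"s"}), frozenset({"m"}), frozenset({"s", "m"})]

def mass_function(feature: str, value: float) -> dict:
    """Eq. 6: normalize the per-element evidence of one feature into mass values
    that sum to one over the non-empty elements of P(Theta) (Eq. 7)."""
    evidence = {S: feature_evidence(feature, value, S) for S in POWER_SET}
    total = sum(evidence.values())
    return {S: e / total for S, e in evidence.items()}
```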
Dempster’s Rule of Combination
The mass value function provides a mass value for each class of the power set and for each feature. For example, m_vel({s}) denotes the mass value for the class stationary derived from the velocity feature. Likewise, m_vel and m_refl denote the mass functions for the velocity and reflection intensity features, respectively. The following example shows how to combine mass values from different features into a combined mass value m_vel,refl. We use Dempster's rule of combination to combine each feature's mass values sequentially.

$$m_{1,2}(A) = \frac{1}{1 - K} \sum_{B \cap C = A,\; A \neq \emptyset} m_1(B)\, m_2(C) \qquad (8)$$

where

$$K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C) \qquad (9)$$

Eq. 8 is the standard equation for Dempster's rule of combination; here the two mass functions m_1 and m_2 correspond to the two features being combined (e.g., m_vel and m_refl). In brief, Eq. 8 sums all products of mass values supporting a class and multiplies this sum by the normalization factor 1/(1 − K), with K from Eq. 9 measuring the conflict between the two features. The result is a combined, normalized mass value for each class.
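A sketch of Dempster's rule (Eqs. 8-9): the mass functions of two features are combined by multiplying the masses of intersecting elements and renormalizing by 1 − K, where K accumulates the mass products whose intersection is empty (the conflict).

```python
def combine_dempster(m1: dict, m2: dict) -> dict:
    """Eqs. 8-9: combine two mass functions defined over the same power set."""
    combined = {S: 0.0 for S in m1}
    conflict = 0.0  # K in Eq. 9
    for B, mass_b in m1.items():
        for C, mass_c in m2.items():
            intersection = B & C
            if intersection:
                combined[intersection] += mass_b * mass_c
            else:
                conflict += mass_b * mass_c
    normalizer = 1.0 - conflict  # assumes the two sources are not in total conflict
    return {S: v / normalizer for S, v in combined.items()}
```

For example, `combine_dempster(mass_function("velocity", v), mass_function("reflection", r))` yields the combined masses m_vel,refl described above.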
Calculating Belief and Plausibility
DST uses the mass values to compute a belief (bel) and a plausibility (pl). The belief in a class A is the sum of all mass values supporting A, i.e., the mass values of all non-empty elements that are subsets of A:

$$bel(A) = \sum_{B \subseteq A,\, B \neq \emptyset} m(B) \qquad (10)$$

The plausibility accounts for all elements related to A. It is obtained by adding all mass values of elements that intersect A:

$$pl(A) = \sum_{B \cap A \neq \emptyset} m(B) \qquad (11)$$

The belief represents a lower bound and the plausibility an upper bound on the probability of a hypothesis. Both values lie in the interval between 0 and 1, and the difference between them represents the uncertainty. Figure 2 visualizes this relationship. A distinctive benefit of using the theory of evidence is that the result is comprehensive: instead of presenting one score value for each class, our approach provides a lower and an upper bound on the belief in each hypothesis of the power set of the classes.
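A sketch of Eqs. 10-11 on top of the combined masses: belief sums the masses of all non-empty subsets of a hypothesis A, plausibility sums the masses of all elements that intersect A, and their difference is the reported uncertainty.

```python
def belief(masses: dict, A: frozenset) -> float:
    """Eq. 10: sum of the masses of all non-empty subsets of A."""
    return sum(v for S, v in masses.items() if S and S <= A)

def plausibility(masses: dict, A: frozenset) -> float:
    """Eq. 11: sum of the masses of all elements that intersect A."""
    return sum(v for S, v in masses.items() if S & A)

def uncertainty(masses: dict, A: frozenset) -> float:
    """Width of the belief-plausibility interval for hypothesis A."""
    return plausibility(masses, A) - belief(masses, A)
```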

Table 1: Description of the features in the radar dataset.

Feature | Description
---|---
Timestamp | recorded time in seconds
Density | width of the obstruction ahead of the AV
Reflection | intensity of the reflection from the obstruction
Velocity | velocity of the object in km/h
Evaluation
Attack Model Assumption. In our scenario, we consider a sensor spoofing attack where the attacker can manipulate the radar sensor readings to change the perceived velocity of the obstacle (e.g., the victim car perceives a stationary vehicle in its trajectory when in reality the vehicle is moving far away), as in previous work (?). The spoofing is achievable by external signal injection (e.g., the victim vehicle is behind the attacker's vehicle, which carries a modified radar system facing backward) or by tampering with the victim car to inject malicious data into the vehicle network (?).
Data. We generate a dataset that combines an existing simulated dataset with our own simulated dataset to match the frame of discernment. The format resembles radar sensor readings and is human-readable (CSV), with the features described in Table 1. While class membership was preassigned, we used DST to evaluate class membership (stationary or moving) for each column (feature) in the dataset. Using Eq. 2, belief, plausibility, and uncertainty are computed for each element of the power set P(Θ); see Table 2. We provide individual values for each class (frame of discernment). We obtain 96% accuracy when the combined velocity and reflection features are used to assign class membership. The accuracy drops below 90% when distance is used as the feature to assign class membership.
Table 2: Belief, plausibility, and uncertainty for each element of the frame of discernment.

Class | Belief | Plausibility | Uncertainty
---|---|---|---
s | 0.99773 | 0 | 0.00226
m | 0 | 0.99773 | 0.00226
Detecting spoofing attacks. We evaluate our method on a set of inputs spoofed by an attacker. We exclude detection of sensor errors or anomalies caused by unknown factors to keep the scenario simple and focus on capturing the "uncertainty" in the signal. The class with the highest belief value determines the prediction and is accepted as the result corresponding to the input features.
Analysis. Using the theory of evidence, the number of classes increases to the size of the power set P(Θ), which helps us deal with scenarios where the evidence does not allow a clear mapping of an input signal to one predefined class. In other words, when the CAV is under an adversarial attack, a change in a feature value does not flip the prediction to the wrong class; instead, it changes the class membership from {s} or {m} to {s, m}. We postulate that this change in class membership is an improvement over an otherwise incorrect prediction that could change the trajectory of the CAV. Using our method, we provide more information for forensic purposes that security experts can use to analyze the cause of the anomalous behavior.
DST requires no prior knowledge of previously unseen information to detect anomalies. It can also express the value of ignorance (neither stationary nor moving). We utilize the mass assigned to different features to determine whether an attacker has spoofed a data packet. Emulating an attacker in the experiment, we inject eleven spoofed packets into the dataset and flip the class membership: for a stationary object, we assign the class "moving". The DST method can detect all the spoofed packets as stationary, where the mass of the velocity feature is higher for moving than for stationary. See the results in Table 4 and the detection sketch following Table 4.
Class | s | m | s, m |
---|---|---|---|
1 | 0.347 | 0.347 | 0.306 |
2 | 0.3575 | 0.3575 | 0.2849 |
3 | 0 | 0.625 | 0.375 |
4 | 0.312 | 0.312 | 0.376 |
5 | 0 | 0.475 | 0.525 |
Table 4: Results for the eleven spoofed packets.

Packet | s | m | {s, m}
---|---|---|---
1 | 0.64 | 0 | 0.36 |
2 | 0.6976 | 0.3023 | 1.1847e-41 |
3 | 0.65 | 0 | 0.35 |
4 | 0.8682 | 0.1317 | 5.161e-42 |
5 | 0.6599 | 0 | 0.3400 |
6 | 0.9509 | 0.0490 | 1.921e-42 |
7 | 0.655 | 0 | 0.345 |
8 | 0.9825 | 0.0174 | 6.850e-43 |
9 | 0.64 | 0 | 0.36 |
10 | 0.9936 | 0.00636 | 2.494e-43 |
11 | 0.645 | 0 | 0.355 |
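Putting the pieces above together, the detection step can be sketched as follows: for each packet, the velocity and reflection mass functions are combined with Dempster's rule, the singleton class with the highest belief is taken as the prediction, and a mismatch with the packet's claimed label marks the packet as potentially spoofed. The example packet values and the threshold-free decision rule are illustrative assumptions, not the exact experimental configuration.

```python
def classify_packet(velocity: float, reflection: float) -> frozenset:
    """Combine the per-feature masses (Dempster's rule) and return the
    singleton class with the highest belief."""
    masses = combine_dempster(
        mass_function("velocity", velocity),
        mass_function("reflection", reflection),
    )
    singletons = [frozenset({"s"}), frozenset({"m"})]
    return max(singletons, key=lambda A: belief(masses, A))

def is_spoofed(packet: dict) -> bool:
    """Flag a packet whose claimed class label disagrees with the evidence-based class."""
    predicted = classify_packet(packet["velocity"], packet["reflection"])
    return predicted != frozenset({packet["label"]})

# A stationary radar return whose label was flipped to "moving" by the attacker.
print(is_spoofed({"velocity": 0.5, "reflection": 41.0, "label": "m"}))  # True
```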

Conclusion and Future Work
Machine learning has become a fundamental tool for securing automated systems. However, this technology remains a black box that takes inputs and generates predictions or classifications without explaining why. Explanations are crucial for humans to decide whether an ML model's decision can be trusted. These decisions become paramount particularly for high-stakes systems that attackers can compromise. This paper investigates how explainability is essential for the security of critical automated systems such as CAVs. We discuss the principles of and unique challenges to model explainability for security, along with a concrete use case in the context of sensor spoofing attacks on autonomous vehicles. In our future work, we will extend our explanation framework to operate under complex real-time constraints, such as obstacle avoidance in CAVs.
Acknowledgement
The authors would like to thank the anonymous reviewers and Praveen Chandrasekaran (RIT) for reviewing this paper and providing their invaluable feedback.
References
- [Barreno et al. 2010] Barreno, M.; Nelson, B.; Joseph, A. D.; and Tygar, J. D. 2010. The security of machine learning. Machine Learning 81(2):121–148.
- [Cao et al. 2019] Cao, Y.; Xiao, C.; Cyr, B.; Zhou, Y.; Park, W.; Rampazzi, S.; Chen, Q. A.; Fu, K.; and Mao, Z. M. 2019. Adversarial sensor attack on lidar-based perception in autonomous driving. In Proceedings of the 2019 ACM SIGSAC conference on computer and communications security, 2267–2281.
- [Chavez-Garcia and Aycard 2015] Chavez-Garcia, R. O., and Aycard, O. 2015. Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems 17(2):525–534.
- [Checkoway et al. 2011] Checkoway, S.; McCoy, D.; Kantor, B.; Anderson, D.; Shacham, H.; Savage, S.; Koscher, K.; Czeskis, A.; Roesner, F.; Kohno, T.; et al. 2011. Comprehensive experimental analyses of automotive attack surfaces. In USENIX Security Symposium, volume 4, 2021. San Francisco.
- [Chen et al. 2014] Chen, Q.; Whitbrook, A.; Aickelin, U.; and Roadknight, C. 2014. Data classification using the dempster–shafer method. Journal of Experimental & Theoretical Artificial Intelligence 26(4):493–517.
- [Claussmann et al. 2018] Claussmann, L.; O’Brien, M.; Glaser, S.; Najjaran, H.; and Gruyer, D. 2018. Multi-criteria decision making for autonomous vehicles using fuzzy dempster-shafer reasoning. In 2018 IEEE Intelligent Vehicles Symposium (IV), 2195–2202. IEEE.
- [Denoeux 2008] Denoeux, T. 2008. A k-nearest neighbor classification rule based on dempster-shafer theory. In Classic works of the Dempster-Shafer theory of belief functions. Springer. 737–760.
- [Eykholt et al. 2018] Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Xiao, C.; Prakash, A.; Kohno, T.; and Song, D. 2018. Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1625–1634.
- [Halla-aho, Nigussie, and Isoaho 2021] Halla-aho, L.; Nigussie, E.; and Isoaho, J. 2021. Conceptual design of a trust model for perceptual sensor data of autonomous vehicles. Procedia Computer Science 184:156–163.
- [He et al. 2020] He, Q.; Meng, X.; Qu, R.; and Xi, R. 2020. Machine learning-based detection for cyber security attacks on connected and autonomous vehicles. Mathematics 8(8):1311.
- [Kang and Kang 2016] Kang, M.-J., and Kang, J.-W. 2016. Intrusion detection system using deep neural network for in-vehicle network security. PloS one 11(6):e0155781.
- [Klein 2004] Klein, L. A. 2004. Sensor and data fusion: a tool for information assessment and decision making, volume 138. SPIE press.
- [Komissarov and Wool 2021] Komissarov, R., and Wool, A. 2021. Spoofing attacks against vehicular fmcw radar. arXiv preprint arXiv:2104.13318.
- [Kusenbach, Luettel, and Wuensche 2020] Kusenbach, M.; Luettel, T.; and Wuensche, H.-J. 2020. Fast object classification for autonomous driving using shape and motion information applying the dempster-shafer theory. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 1–6. IEEE.
- [Magnier, Gruyer, and Godelle 2017] Magnier, V.; Gruyer, D.; and Godelle, J. 2017. Automotive lidar objects detection and classification algorithm using the belief theory. In 2017 IEEE Intelligent Vehicles Symposium (IV), 746–751. IEEE.
- [Matei, Baras, and Jiang 2009] Matei, I.; Baras, J. S.; and Jiang, T. 2009. A composite trust model and its application to collaborative distributed information fusion. In 2009 12th International Conference on Information Fusion, 1950–1957. IEEE.
- [McDaniel, Papernot, and Celik 2016] McDaniel, P.; Papernot, N.; and Celik, Z. B. 2016. Machine learning in adversarial settings. IEEE Security & Privacy 14(3):68–72.
- [Miller and Valasek 2015] Miller, C., and Valasek, C. 2015. Remote exploitation of an unaltered passenger vehicle. Black Hat USA 2015(S 91).
- [Nassi et al. 2019] Nassi, D.; Ben-Netanel, R.; Elovici, Y.; and Nassi, B. 2019. Mobilbye: attacking adas with camera spoofing. arXiv preprint arXiv:1906.09765.
- [Nassi et al. 2020] Nassi, B.; Mirsky, Y.; Nassi, D.; Ben-Netanel, R.; Drokin, O.; and Elovici, Y. 2020. Phantom of the adas: Securing advanced driver-assistance systems from split-second phantom attacks. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 293–308.
- [Nie, Liu, and Du 2017] Nie, S.; Liu, L.; and Du, Y. 2017. Free-fall: Hacking tesla from wireless to can bus. Briefing, Black Hat USA 25:1–16.
- [Pagac, Nebot, and Durrant-Whyte 1998] Pagac, D.; Nebot, E. M.; and Durrant-Whyte, H. 1998. An evidential approach to map-building for autonomous vehicles. IEEE Transactions on Robotics and Automation 14(4):623–629.
- [Papernot et al. 2016] Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z. B.; and Swami, A. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P), 372–387. IEEE.
- [Peng et al. 2020] Peng, Z.; Yang, J.; Chen, T.-H.; and Ma, L. 2020. A first look at the integration of machine learning models in complex autonomous driving systems: a case study on apollo. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 1240–1250.
- [Preece et al. 2018] Preece, A.; Harborne, D.; Braines, D.; Tomsett, R.; and Chakraborty, S. 2018. Stakeholders in explainable ai. arXiv preprint arXiv:1810.00184.
- [Qayyum et al. 2020] Qayyum, A.; Usama, M.; Qadir, J.; and Al-Fuqaha, A. 2020. Securing connected & autonomous vehicles: Challenges posed by adversarial machine learning and the way forward. IEEE Communications Surveys & Tutorials 22(2):998–1026.
- [Shafer 1976] Shafer, G. 1976. A Mathematical Theory of Evidence. Princeton University Press.
- [Shoukry et al. 2013] Shoukry, Y.; Martin, P.; Tabuada, P.; and Srivastava, M. 2013. Non-invasive spoofing attacks for anti-lock braking systems. In International Conference on Cryptographic Hardware and Embedded Systems, 55–72. Springer.
- [Sitawarin et al. 2018] Sitawarin, C.; Bhagoji, A. N.; Mosenia, A.; Chiang, M.; and Mittal, P. 2018. Darts: Deceiving autonomous cars with toxic signs. arXiv preprint arXiv:1802.06430.
- [Song, Woo, and Kim 2020] Song, H. M.; Woo, J.; and Kim, H. K. 2020. In-vehicle network intrusion detection using deep convolutional neural network. Vehicular Communications 21:100198.
- [Taylor, Leblanc, and Japkowicz 2016] Taylor, A.; Leblanc, S.; and Japkowicz, N. 2016. Anomaly detection in automobile control network data with long short-term memory networks. In 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 130–139. IEEE.
- [Thomopoulos 1994] Thomopoulos, S. C. 1994. Sensor selectivity and intelligent data fusion. In Proceedings of 1994 IEEE International Conference on MFI’94. Multisensor Fusion and Integration for Intelligent Systems, 529–537. IEEE.
- [Wen, Chen, and Lin 2020] Wen, H.; Chen, Q. A.; and Lin, Z. 2020. Plug-n-pwned: Comprehensive vulnerability analysis of obd-ii dongles as a new over-the-air attack surface in automotive iot. In 29th USENIX Security Symposium (USENIX Security 20), 949–965.
- [Winner et al. 2016] Winner, H.; Hakuli, S.; Lotz, F.; and Singer, C. 2016. Automotive radar. Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort 325–403.
- [Woo, Jo, and Lee 2015] Woo, S.; Jo, H. J.; and Lee, D. H. 2015. A practical wireless attack on the connected car and security protocol for in-vehicle can. IEEE Transactions on Intelligent Transportation Systems 16(2):993–1006.
- [Xing et al. 2019] Xing, Y.; Lv, C.; Wang, H.; Wang, H.; Ai, Y.; Cao, D.; Velenis, E.; and Wang, F.-Y. 2019. Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges. IEEE Transactions on Vehicular Technology 68(5):4377–4390.
- [Yan, Xu, and Liu 2016] Yan, C.; Xu, W.; and Liu, J. 2016. Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicle. Def Con 24(8):109.
- [Yang, Duan, and Tehranipoor 2020] Yang, Y.; Duan, Z.; and Tehranipoor, M. 2020. Identify a spoofing attack on an in-vehicle can bus based on the deep features of an ecu fingerprint signal. Smart Cities 3(1):17–30.
- [Zhou, Li, and Shen 2019] Zhou, A.; Li, Z.; and Shen, Y. 2019. Anomaly detection of can bus messages using a deep neural network for autonomous vehicles. Applied Sciences 9(15):3174.