A Scenario-Based Development Framework for Autonomous Driving
Abstract
This article summarizes the research progress of scenario-based testing and development technology for autonomous vehicles. We systematically analyzed previous research works and proposed the definition of scenario, the elements of the scenario ontology, the data source of the scenario, the processing method of the scenario data, and scenario-based V-Model. Moreover, we summarized the automated test scenario construction method by random scenario generation and dangerous scenario generation.
Index Terms:
Autonomous Driving, Scenario Ontology, Virtual Testing, Scenario Generation.
1 Scenario in Autonomous Driving
The word "scenario" entered English via Italian from the Latin "scaena" (stage), originally denoting the outline of a stage play; today it refers to a specific situation in life. With the development of technology, the concept of scenarios has gradually been applied in industrial development and testing processes.
1.1 Scenario Definition
Scenario-based testing was first applied to the development of software systems, where "scenarios" were used to describe how a system is used, its usage requirements, and its operating environment in order to build more feasible systems [5, 17, 8]. Since then, many fields have defined the term in their own disciplines, such as climate change [37] and the energy industry [6].
However, in the field of autonomous driving, "scenario" has not yet been clearly defined. Since Schieben et al. [42] applied the concept of scenarios to automated driving tests, many scholars have put forward their own understanding of the term. Elrofai et al. [10] defined a scenario as the continuous evolution of the dynamic environment around the vehicle under test within a specific time range, including the behavior of the test vehicle in that environment. Koskimies [25] defined a scenario as an informal description of a series of events that occur when the system performs a specific task, and noted that object-oriented modeling methods can be used to describe it. In its autonomous driving research report, RAND proposed that scenarios are combinations of elements used to detect and verify the behavioral capabilities of autonomous driving systems in specific driving environments. The PEGASUS project proposes the concepts of functional, logical, and concrete scenarios according to the differing needs for scenarios in the concept, system development, and test phases of autonomous driving product development [34]. Chinese academician Zheng Nanning of Xi'an Jiaotong University defines a scenario as "a specific situation or scene of a traffic occasion at a specific time and in a specific space. It can be defined as a set of entities that can give a rich description of the current environment with perceptual data." [53]. These definitions agree on the core elements: they all include road environment elements, other traffic participants, and the vehicle's driving task, and these elements persist for a certain period of time and have dynamic characteristics.
Therefore, an autonomous driving scenario can be understood as follows: a scenario is a dynamic description of the components of the autonomous vehicle and its driving environment over a period of time, where the relationship among these components is determined by the functions of the autonomous vehicle under inspection. In short, a scenario can be regarded as the combination of the driving situation and the driving scene of an autonomous vehicle.
Autonomous driving scenarios are infinitely rich, extremely complex, difficult to predict, and inexhaustible. Therefore, the scenarios used for development and testing should be quantifiable (the features of each scenario element can be quantified), reproducible (the scenario can be reproduced with current technical infrastructure and test software), and high-fidelity (the scenario presents or reflects the real world to a sufficient extent).
1.2 Scenario Ontology
Determining the ontology of scenario elements is the cornerstone of scenario-based techniques. However, researchers still dispute the types and content of this ontology.

Commonly used open-source schemas such as OpenDRIVE and OpenSCENARIO specify their road elements and dynamic traffic elements in detail [31, 23]. Ulbrich et al. [46] proposed that the elements of a scene should include the test vehicle, traffic environment elements, driving task information, and specific driving behaviors; the autonomous vehicle itself is part of the test scene. Geyer et al. [15] regard the scene as the pre-defined driving environment, driving tasks, and static and dynamic elements of the automated driving test, with the test vehicle itself excluded. Korbinian et al. [18] divided scene elements into three categories: the environmental part (weather, light, wind speed, etc.), the static part (lane lines, trees, obstacles), and the dynamic part (traffic participants, pedestrians). In the latest RAND report, scene elements are divided into five layers: the road information layer (lane lines, intersection shape, number of lanes, etc.), the road infrastructure layer (traffic signs, trees, guardrails, etc.), temporary changes to the road information and infrastructure layers (road maintenance, fallen trees, moving obstacles, etc.), dynamic targets (pedestrians, traffic participants), and environmental conditions (light, weather); the test vehicle is not included. Matthaei et al. [32] discussed whether weather and light should be included as scene factors. Zhu et al. [54] categorized scenarios into the test vehicle and the traffic environment. Erwin et al. [7] argue that in the early stage of system development, a scene only needs basic information about the road and the other traffic participants.
During testing, the test vehicle itself has a significant impact on the surrounding scene elements, especially other traffic participants; the interaction between the test vehicle and the surrounding driving environment forms a closed loop. At the same time, the properties of the test vehicle have a key influence on the behavioral decisions of the automated driving system. For example, the acceleration performance of the vehicle plays a decisive role in executing an overtaking decision. Therefore, the test vehicle should be treated as part of the scenario, and together with the surrounding driving environment it constitutes the whole scenario.
Based on this concept, we integrate the above-mentioned research and propose the scenario ontology shown in Fig. 1.
In this ontology, the scenario elements fall into two categories: basic information of the vehicle and traffic environment elements. The basic information of the vehicle comprises three parts: basic elements of the test vehicle, target information, and driving behavior. The traffic environment elements include weather and light, static road information, dynamic road information, and traffic participant information.
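To make the ontology concrete, the following minimal Python sketch encodes the two top-level categories as data classes. All field names and types are illustrative assumptions rather than a normative schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EgoVehicle:
    """Basic information of the test vehicle (illustrative fields)."""
    position: tuple      # (x, y) in metres, map frame
    speed: float         # m/s
    driving_task: str    # e.g. "lane_keeping", "overtaking"
    behavior: str        # current driving-behavior label

@dataclass
class TrafficParticipant:
    kind: str            # "car", "pedestrian", "cyclist", ...
    position: tuple
    speed: float

@dataclass
class Environment:
    """Traffic environment elements of the ontology."""
    weather: str         # e.g. "rain", "clear"
    light: str           # e.g. "day", "night"
    static_road: dict    # lane lines, intersection shape, ...
    dynamic_road: dict   # road works, moving obstacles, ...
    participants: List[TrafficParticipant] = field(default_factory=list)

@dataclass
class Scenario:
    """A scenario: ego-vehicle information plus environment over a time span."""
    ego: EgoVehicle
    environment: Environment
    duration_s: float

# Example: a simple rainy-day cut-in scenario lasting 12 seconds.
cut_in = Scenario(
    ego=EgoVehicle(position=(0.0, 0.0), speed=27.0,
                   driving_task="lane_keeping", behavior="following"),
    environment=Environment(weather="rain", light="day",
                            static_road={"lanes": 3}, dynamic_road={},
                            participants=[TrafficParticipant("car", (35.0, 3.5), 22.0)]),
    duration_s=12.0,
)
```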
1.3 Scenario Data
It is necessary to collect a large amount of scenario data and establish a scenario library. For example, PEGASUS and KITTI in Germany, the NHTSA autonomous driving test architecture project in the United States, BDD100K from the University of California, Berkeley, China's "Kunlun Project", and Baidu ApolloScape are all committed to providing more practical scenario data for autonomous driving research and testing [14].
The data sources mainly include three parts: real data, simulation data and expert experience data. The specific content is shown in Figure 2.

1.3.1 Real Data
The real data sources mainly include natural driving data, accident data, closed-field test driving data, and open-road test driving data.
Natural driving data is scenario data collected during normal driving by installing a multi-sensor collection platform (radar, camera, high-precision inertial navigation, etc.) on a conventional car. Typical collection conditions include highways, urban roads, and parking lots. The key to collecting natural driving scene data is to ensure time and space synchronization between the sensors. Time synchronization aligns the data collection cycles of different sensors; currently, a unified clock source such as GPS, COMPASS, GLONASS, or GALILEO is used to achieve nanosecond-level synchronization between sensor data [33]. For sensor data of different frequencies, methods such as median sampling and spline interpolation can be used to achieve time synchronization [44].
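As a hedged illustration of the resampling step, the sketch below uses linear interpolation (a simple stand-in for the spline interpolation mentioned above) to bring a lower-rate sensor channel onto a common clock; all signals and rates are made up.

```python
import numpy as np

def resample_to_common_clock(t_target, t_sensor, values):
    """Linearly interpolate one sensor channel onto a common, GNSS-disciplined
    timebase. Timestamps are in seconds; values is a 1-D signal."""
    return np.interp(t_target, t_sensor, values)

# Illustrative use: a 10 Hz radar range signal resampled onto a shared 50 Hz clock.
t_common = np.arange(0.0, 2.0, 1 / 50)
t_radar = np.arange(0.0, 2.0, 1 / 10)
radar_range = 50.0 - 5.0 * t_radar              # target closing at 5 m/s (toy data)
radar_on_common = resample_to_common_clock(t_common, t_radar, radar_range)
```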
Accident data is scenario data refined from existing road traffic accident big data. Many countries and organizations have established traffic accident databases, such as China's CIDAS, Germany's GIDAS, the US NHTSA's GES, and the EU's ASSESS database. Automated driving tests can make full use of these databases to construct test scenarios based on traffic accidents and traffic violations.
1.3.2 Simulation Data
Simulation data refers to test data obtained by virtually operating the autonomous vehicle in a simulation environment. The simulation environment can be generated by importing real scenes or by modeling the vehicle driving environment, which mainly includes road scene modeling, traffic environment modeling, weather modeling, and electromagnetic environment modeling. The key to traffic environment modeling is to generate correct traffic flow information and the behavior of surrounding traffic vehicles; at present, cellular automata are mostly used. Weather modeling and electromagnetic environment modeling aim to reproduce the weather conditions and electromagnetic interference of the real environment, such as light intensity, humidity, temperature, shadowing effects on electromagnetic signals, and Doppler frequency shift.
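The cellular-automaton approach can be illustrated with the classic Nagel-Schreckenberg traffic model; the sketch below is a minimal single-lane version with assumed parameters, not the traffic model of any particular simulator.

```python
import numpy as np

def nagel_schreckenberg_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=None):
    """One update of the Nagel-Schreckenberg cellular automaton.
    pos: array of occupied cell indices; vel: integer speeds (cells/step)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len    # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)                  # 1) accelerate
    vel = np.minimum(vel, gaps)                       # 2) brake to avoid collision
    vel = np.where(rng.random(len(vel)) < p_slow,     # 3) random slowdown
                   np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % road_len                      # 4) move (circular road)
    return pos, vel

# Illustrative run: 20 vehicles on a 100-cell ring road for 100 steps.
rng = np.random.default_rng(0)
pos = np.sort(rng.choice(100, size=20, replace=False))
vel = np.zeros(20, dtype=int)
for _ in range(100):
    pos, vel = nagel_schreckenberg_step(pos, vel, road_len=100, rng=rng)
```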
1.3.3 Expert Experience Data
Expert experience data refers to scene element information derived from the experience and knowledge accumulated in previous tests. At present, there are more than 80 autonomous driving test laws and regulations in countries around the world. Taking the Autonomous Emergency Braking (AEB) function as an example, Euro NCAP divides the AEB test into three types: AEB City, AEB Inter-Urban, and AEB Pedestrian [39], each with its corresponding test scenarios.
1.4 Scenario Data Processing
The key to scene data processing is the deconstruction and reconstruction of scene elements.
The German PEGASUS project proposes seven steps for scene data processing [40]: generate a general environment description, check the data format, generate additional information, analyze the correlation between scenes, analyze the probability of scene occurrence, cluster the logical scene data and calculate frequency distributions, and generate specific test scenes from the resulting logical scenes. Baidu proposed a three-step scene clustering method consisting of scene classification rule definition, scene labeling (element decomposition and quantification), and label clustering.
According to the existing typical scene data processing methods, this article summarizes and proposes the scene data processing flow shown in Figure 3.

1.4.1 Data Preprocessing
Sensor data from different channels are multimodal, and the raw data contain a large amount of invalid and misaligned data. Therefore, sensor data cleaning is a prerequisite for constructing the scenario library.
Cleaning the collected scene data mainly involves removing redundancy, deleting missing data, and repairing data. Data repair can be done manually by completing key information or automatically according to statistical values of the data. The cleaning process should maintain data integrity, support user-customized cleaning rules, and minimize the cleaning cost [11, 12]. Taking data repair as an example, the cleaning cost is measured by the reconstruction error, defined as:
$$\Delta(R) = \sum_{x \in D} \operatorname{dist}\big(x, R(x)\big) \tag{1}$$
where $R$ denotes any reconstruction (repair) method applied to the records $x$ of the dataset $D$, and $\operatorname{dist}(\cdot,\cdot)$ is a distance function, for which the Damerau-Levenshtein distance is usually used. The cleaned data are then organized into a usable scene dataset.
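The sketch below shows one plausible way to compute the reconstruction error of Eq. (1) for string-valued records, using the optimal-string-alignment variant of the Damerau-Levenshtein distance; the records are toy examples.

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal-string-alignment variant of the Damerau-Levenshtein distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def reconstruction_error(original, repaired):
    """Reconstruction error of Eq. (1): summed edit distance between the
    original records and their repaired versions."""
    return sum(osa_distance(x, r) for x, r in zip(original, repaired))

# Toy records: one field untouched, one field repaired ("wether" -> "weather").
print(reconstruction_error(["lane=3", "wether=rain"], ["lane=3", "weather=rain"]))
```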
1.4.2 Data Enrichment
Cleaned data are then enriched internally and externally. Internally, additional information can be derived directly from the data, such as time-to-collision (TTC), time headway, and time-to-brake (TTB) [19]. Externally, key information in the data is annotated; annotators can be human or algorithmic (auto-annotation), with commonly used algorithms including supervised and semi-supervised methods [4, 48, 52, 35].
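A minimal sketch of the internal enrichment step, deriving TTC and time headway from a cleaned sample; the field names and values are illustrative assumptions.

```python
def ttc(gap_m, ego_speed_mps, lead_speed_mps):
    """Time-to-collision: gap divided by closing speed; infinite if the gap is opening."""
    closing = ego_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else float("inf")

def time_headway(gap_m, ego_speed_mps):
    """Time headway: gap divided by ego speed."""
    return gap_m / ego_speed_mps if ego_speed_mps > 0 else float("inf")

# Illustrative enrichment of one cleaned sample (numbers are made up).
sample = {"gap_m": 25.0, "ego_speed_mps": 20.0, "lead_speed_mps": 15.0}
sample["ttc_s"] = ttc(sample["gap_m"], sample["ego_speed_mps"], sample["lead_speed_mps"])
sample["thw_s"] = time_headway(sample["gap_m"], sample["ego_speed_mps"])
print(sample)   # ttc_s = 5.0 s, thw_s = 1.25 s
```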
1.4.3 Scenario Clustering
Annotated scenarios are clustered based on the ontology. Scenes that meet the classification criteria are clustered into the corresponding scene elements, and the parameter space of the scene elements is clarified. Commonly used algorithms include K-Means clustering, hierarchical clustering, Gaussian mixture models, and deep-learning-based clustering on embeddings such as t-SNE [20].
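As an illustration of the clustering step, the sketch below runs K-Means on toy scenario feature vectors, assuming scikit-learn is available; the feature layout and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one annotated scenario described by quantified ontology features,
# e.g. [ego speed, TTC, lane count, rain intensity] (synthetic toy data).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([30, 4.0, 3, 0.0], 0.5, size=(50, 4)),   # free highway driving
    rng.normal([10, 1.5, 2, 0.8], 0.5, size=(50, 4)),   # dense urban rain
])

X = StandardScaler().fit_transform(features)             # put features on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                                # rough cluster sizes
```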
1.4.4 Scenario Density Estimation
Based on the clustered scenarios above, the kernel density functions of the ontology scenarios are calculated to facilitate the random generation of specific scenarios in Section 3.1. Suppose $x_1, x_2, \dots, x_n$ are independent and identically distributed scenario samples with probability density function $f$. The kernel density estimator is defined as:
$$\hat{f}_h(x) = \frac{1}{n}\sum_{i=1}^{n} K_h(x - x_i) \tag{2}$$
where
$$K_h(x) = \frac{1}{h} K\!\left(\frac{x}{h}\right) \tag{3}$$
In this estimator, $K$ is the kernel function, which is non-negative and integrates to 1; $h > 0$ is the smoothing factor (bandwidth), typically chosen by minimizing the mean integrated squared error; and $K_h$ is the smoothed (scaled) kernel function. With these density functions, test cases for specific scenarios can be manually picked or randomly generated according to $\hat{f}_h$.
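A minimal sketch of estimating Eq. (2) for one scenario parameter and drawing new samples from it, using a Gaussian kernel via SciPy; the speed data are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy 1-D example: estimate the density of ego speed within one logical
# scenario cluster, then draw new concrete values from it.
rng = np.random.default_rng(0)
observed_speeds = rng.normal(30.0, 3.0, size=500)      # m/s, made-up data

kde = gaussian_kde(observed_speeds)        # Gaussian kernel, bandwidth by Scott's rule
print(kde.evaluate([25.0, 30.0, 35.0]))    # density estimates at a few speeds
new_speeds = kde.resample(10)              # random concrete-scenario parameters
```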
2 Scenario-based V-Model

As the level of driving automation increases, test scenarios become infinitely rich, extremely complex, unpredictable, and inexhaustible, and covering all situations through road testing is no longer possible. A scenario-based V-Model testing framework is shown in Figure 4. It includes virtual testing, such as software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing, and real-road testing, such as closed-field testing and open-road testing [26, 16, 3].
Car companies and research institutions are gradually moving toward scenario-based virtual testing because of its abundant scenarios, fast computation, high test efficiency, low resource consumption, good repeatability, and ease of integration into all stages of vehicle development. The scenario properties of virtual testing, closed-field testing, and open-road testing are summarized in Table I below.
Table I. Number of scenarios and how they are used, compared across virtual testing, closed-field testing, and open-road testing.
3 Automatic Scenario Generation
As shown in Figure 3, when there are not enough recorded scenarios for SIL testing, scenarios have to be generated by humans or machines. Human experts can generate highly customized test scenarios, but the drawbacks are obvious: it is expensive and does not scale. The goal of this section is to automatically generate a large number of test scenarios in a short time according to the test requirements. Generation methods mostly fall into two categories: random scenario generation and dangerous scenario generation.
3.1 Random Scenario Generation
Based on the probability density of various scenes given by Eq. (2), specific scenes can be randomly generated in the virtual environment. The generation methods mainly fall into three categories: 1) random sampling, represented by Monte Carlo sampling and rapidly-exploring random trees; 2) importance-based sampling, such as importance analysis of scene elements; and 3) machine-learning-based methods.
3.1.1 Random Sampling
Yang et al. [51] and Lee [27] extracted data fragments from collision-warning and adaptive-cruise field tests, then used Monte Carlo simulation to generate test scenarios for active braking. Olivares et al. [38] used Markov chain Monte Carlo methods to reconstruct road information by analyzing road map data. Fellner et al. [13] applied the Rapidly-exploring Random Tree (RRT) method from path planning to scene generation; the generated test cases can consider more than 2,300 scene elements. Li et al. [30] proposed a road scene reconstruction method based on road image sequences, which uses a superpixel Markov random field algorithm to detect the road area and realize random modeling of the road scenario. Elias et al. [41] proposed a scene generation method based on a backtracking algorithm, which can randomly generate dynamic and static scene elements.
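For illustration, the sketch below draws concrete cut-in scenario parameters by plain Monte Carlo sampling; the parameter set and distributions are assumptions, not those of the cited studies.

```python
import numpy as np

def sample_cut_in_scenarios(n, rng=None):
    """Monte Carlo sampling of concrete cut-in scenarios from assumed
    parameter distributions (all values are illustrative)."""
    rng = rng or np.random.default_rng()
    return {
        "ego_speed_mps": rng.normal(27.0, 3.0, n),
        "cut_in_gap_m":  rng.uniform(5.0, 40.0, n),
        "rel_speed_mps": rng.normal(-2.0, 1.5, n),   # negative: lead is slower
        "rain":          rng.random(n) < 0.2,
    }

scenarios = sample_cut_in_scenarios(1000, np.random.default_rng(42))
```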
3.1.2 Importance Based Sampling
Importance-based sampling [50] usually involves the following steps: first, analyze the scene elements, clarify them, and discretize the continuous ones; then determine the importance score of each scene element through information entropy and hierarchy analysis; next, normalize the importance scores of the different elements to obtain their relative importance parameters; finally, generate test cases through combinatorial test scenarios.
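A minimal sketch of the entropy-based importance scoring and the combinatorial generation step, with made-up discretized elements and level frequencies.

```python
import numpy as np
from itertools import product

def shannon_entropy(level_probs):
    """Shannon entropy of one discretized scene element's level distribution."""
    p = np.asarray(level_probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Discretized scene elements with observed level frequencies (toy numbers).
elements = {
    "weather":    {"clear": 0.7, "rain": 0.25, "fog": 0.05},
    "lead_speed": {"slow": 0.3, "medium": 0.5, "fast": 0.2},
    "lane_count": {"2": 0.5, "3": 0.4, "4": 0.1},
}

entropy = {k: shannon_entropy(list(v.values())) for k, v in elements.items()}
total = sum(entropy.values())
importance = {k: h / total for k, h in entropy.items()}   # normalized weights
print(importance)

# Exhaustive combination of element levels as candidate test scenarios.
test_cases = list(product(*[v.keys() for v in elements.values()]))
```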
3.1.3 Machine Learning Based Sampling
Schilling et al. [43] approached the problem by varying the properties of scene elements, such as white balance, lighting changes, and motion blur. Alexander et al. [24] infer the behavior of surrounding traffic participants from collected data and use neural networks to learn the behavior of surrounding vehicles to generate dynamic scenes. Li et al. [21] divided the driving space around the ego vehicle into eight areas and generated scenarios through combinations of the relative positions and speeds of the ego vehicle and the surrounding traffic vehicles. Vishnukumar et al. [47] proposed applying deep learning to the test and validation process: after the initial necessary test scenarios are given, random test scenarios are automatically generated through learning algorithms.
3.2 Dangerous Scenario Generation
Compared with building test scenarios in the real world, generating test cases in a virtual environment greatly reduces time and resource consumption. However, because accidents occur with low probability under natural conditions, purely random generation may still require a prohibitively large amount of computation. Putting more weight on dangerous scene generation alleviates this problem.
First of all, dangerous scenes need to be defined and classified, and many projects have studied dangerous driving scenes. SeMiFOT divides driving risk into four levels [1]. The US NHTSA classifies collisions into 37 categories [36]. Aparicio et al. [2] summarized the types of conflicts between cars and between cars and pedestrians. Winkle et al. [49] analyzed accident data from 2004 to 2014 in which the line of sight was blocked under different weather conditions, and analyzed accident severity.
The above definitions of dangerous scenes are narrow: most of them only analyze the types of danger without defining specific parameters of the scene elements. Tang et al. [45] define the attribute parameters of accident scenes and propose a method for drawing urban traffic accident scenes. Sven et al. [19] used specific parameters such as TTB, expected braking deceleration, TTC, traffic flow, speed fluctuation, average speed, and acceleration change to find dangerous scenes in massive driving data. Elrofai et al. [10] judged whether lane-changing behavior is present by monitoring the speed and yaw rate of the vehicle during driving; when the yaw rate continuously exceeds a threshold for a period of time, it is judged to be a valuable steering event. Huang et al. [22] proposed a method based on importance sampling to accelerate the generation of the defined dangerous scenes. The core idea is to introduce a new probability density function that increases the probability of producing dangerous scenes, thereby reducing the number of tests. When the random scene generation method is used, with the natural probability density function of the scene variables denoted $f(x)$, the minimum number of tests is
$$N \ge \left(\frac{z_{1-\alpha/2}}{\beta}\right)^{2} \frac{1 - P(\varepsilon)}{P(\varepsilon)} \tag{4}$$
where $P(\varepsilon)$ is the probability of a dangerous scenario under $f(x)$, $\beta$ is the required relative precision, and $z_{1-\alpha/2}$ is obtained from the inverse cumulative distribution function of the standard normal distribution at confidence level $1-\alpha$.
When importance sampling is used to generate dangerous scenes, with the modified probability density function denoted $f^{*}(x)$, the minimum number of tests is
$$N^{*} \ge \left(\frac{z_{1-\alpha/2}}{\beta}\right)^{2} \frac{\mathbb{E}_{f^{*}}\!\left[I_{\varepsilon}(x)\,L^{2}(x)\right] - P(\varepsilon)^{2}}{P(\varepsilon)^{2}} \tag{5}$$
where $I_{\varepsilon}(x)$ is the indicator function of the dangerous event, $L(x) = f(x)/f^{*}(x)$ is the likelihood ratio used in importance sampling, and $P^{*}(\varepsilon) = \mathbb{E}_{f^{*}}[I_{\varepsilon}(x)]$ is the probability of the dangerous scene occurring after the probability density function is changed to $f^{*}(x)$.
Verification of the method on typical scenarios such as cut-in and AEB shows that testing is about 7,000 times faster than plain Monte Carlo simulation.
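The idea of shifting the sampling density can be illustrated with a toy rare-event example: a cut-in is "dangerous" when TTC falls below 1 s. Both the natural distribution f and the proposal f* below are assumptions for illustration, not the distributions used in [22].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Crude Monte Carlo under f: gap ~ N(30, 8) m, closing speed ~ N(5, 2) m/s.
gap, dv = rng.normal(30, 8, n), rng.normal(5, 2, n)
p_crude = np.mean((dv > 0) & (gap < dv * 1.0))            # TTC < 1 s is rare

# Importance sampling: shift the closing speed to f*: dv ~ N(15, 4) m/s,
# so dangerous cases occur far more often, then reweight by L = f / f*.
gap_s, dv_s = rng.normal(30, 8, n), rng.normal(15, 4, n)
danger = (dv_s > 0) & (gap_s < dv_s * 1.0)
L = normal_pdf(dv_s, 5, 2) / normal_pdf(dv_s, 15, 4)      # likelihood ratio on dv only
p_is = np.mean(danger * L)

print(f"crude MC: {p_crude:.2e}   importance sampling: {p_is:.2e}")
```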
3.3 Technical Challenges
There are three technical challenges for automatic test scenario generation: authenticity, granularity, and measurement.
3.3.1 Authenticity
To ensure the authenticity of the scene during virtual testing, a reference measurement system (RMS) should be established for the virtual scene test [28]. The RMS is mainly used to compare the difference between the generated virtual test scene and the real world, and its accuracy needs to be higher than that of the sensors on the autonomous vehicle. If the deviation of the scene elements detected by the RMS is below a certain threshold, the generated virtual test environment can be considered usable for testing the automated driving function. Taking the lane-keeping function as an example, the necessary environmental element information includes road shape, lane line position, lane line shape, and lighting conditions. In this case, the main component of the RMS is an image acquisition device whose resolution and sensitivity are better than those of the sensors used on autonomous vehicles. The RMS image acquisition device is then placed on the HIL test bench described above for detection. If the detected road color features, lane line gray values, lane line edge shapes, and other characteristics are similar to the real world, the fidelity of the generated virtual scene meets the requirements.
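A hedged sketch of the kind of comparison an RMS pipeline might perform, here reduced to an L1 distance between gray-value histograms; the images, threshold, and feature choice are all assumptions.

```python
import numpy as np

def gray_histogram(img, bins=32):
    """Normalized gray-value histogram of a uint8 image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / hist.sum()

def histogram_difference(real_img, virtual_img):
    """Simple L1 difference between gray-value histograms, standing in for
    the RMS comparison of lane-line gray values described above."""
    return float(np.abs(gray_histogram(real_img) - gray_histogram(virtual_img)).sum())

# Illustrative check with synthetic images (random data, not real captures).
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
virtual = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
THRESHOLD = 0.1   # acceptance threshold, an assumed value
print("fidelity OK" if histogram_difference(real, virtual) < THRESHOLD else "re-tune renderer")
```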
3.3.2 Granularity
The granularity of scene elements needs to be adapted to the state of technology. Take the size of raindrop particles as an example: raindrop size strongly affects the radar echo, and the smaller the raindrops, the weaker the reflection of microwaves. When the raindrop diameter is below a certain threshold, the radar detection results, and hence the decisions of the entire automated driving system, remain almost unchanged. At that point, blindly pursuing simulation realism, for example by further reducing the raindrop particle size, only increases the amount of computation and puts a heavy burden on the simulation system. Therefore, the fidelity of the simulation environment needs to take into account the technical level of the sensors currently used and the available computing power.
3.3.3 Measurement
Collision is often used as the measure of success in virtual tests. To increase virtual test coverage, Tong et al. [9] proposed specifying key performance indicators (KPIs) to describe the performance of autonomous vehicles. Taking the adaptive cruise system as an example, the KPIs describing adaptive cruise performance in a virtual test include safety (the ability to avoid collisions), comfort (vehicle acceleration and deceleration), naturalness (similarity to human driving), and economy (fuel consumption); different KPIs can be set for evaluation depending on the automated driving function. Some scholars have also proposed using the Turing test as a measure. Li et al. [29] proposed a driver-in-the-loop parallel intelligent test model, which uses the principle of the Turing test to evaluate an autonomous vehicle's understanding of scene elements and its driving decision-making capabilities in complex scenarios.
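A minimal sketch of KPI scoring for a simulated adaptive-cruise run; the weights, thresholds, and the economy proxy are illustrative assumptions, not values from [9].

```python
import numpy as np

def evaluate_kpis(time_s, speed_mps, min_ttc_s):
    """Toy KPI scores for one simulated adaptive-cruise run."""
    accel = np.gradient(speed_mps, time_s)
    return {
        "safety":  1.0 if min_ttc_s > 2.0 else min_ttc_s / 2.0,          # collision margin
        "comfort": float(np.mean(np.abs(accel) < 2.0)),                  # share of gentle accel
        "economy": float(1.0 / (1.0 + np.mean(speed_mps ** 2) / 1000.0)),  # crude proxy
    }

# Illustrative run: 20 s of simulated following behind a slowing lead vehicle.
t = np.linspace(0, 20, 201)
v = 25.0 - 5.0 * np.clip((t - 5) / 10, 0, 1)   # smooth 25 -> 20 m/s
print(evaluate_kpis(t, v, min_ttc_s=3.2))
```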
Disclaimers
Draft for open concept instruction. Algorithms are partial and figures are subject to change.
References
- [1] C. Ahlstrom, T. Victor, C. Wege, and E. Steinmetz. Processing of eye/head-tracking data in large-scale naturalistic driving data sets. IEEE transactions on intelligent transportation systems, 13(2):553–564, 2011.
- [2] A. Aparicio, M. Lesemann, and H. Eriksson. Status of test methods for autonomous emergency braking systems-results from the active test project. Technical report, SAE Technical Paper, 2013.
- [3] G. Bagschik, T. Menzel, and M. Maurer. Ontology based scene creation for the development of automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1813–1820. IEEE, 2018.
- [4] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7(Nov):2399–2434, 2006.
- [5] J. Carroll. Scenario-Based Design: Envisioning Work and Technology in System Development. Wiley, 1995.
- [6] D. Cayan, M. Tyree, M. Dettinger, H. Hidalgo, T. Das, E. Maurer, P. Bromirski, and R. Flick. Climate change scenarios and sea level rise estimates for the california 2009 climate change scenarios assessment. 2009.
- [7] E. de Gelder and J.-P. Paardekooper. Assessment of automated driving systems using real-life scenarios. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 589–594. IEEE, 2017.
- [8] A. Disessa. A principled design for an integrated computational environment. Human-Computer Interaction, 1:1–47, 03 1985.
- [9] T. Duy Son, L. Awatsu, J. Hubrechts, A. Bhave, and H. Van der Auweraer. A simulation-based testing and validation framework for adas development. 11 2017.
- [10] H. Elrofai, D. Worm, and O. O. den Camp. Scenario identification for validation of automated driving functions. In Advanced Microsystems for Automotive Applications 2016, pages 153–163. Springer, 2016.
- [11] W. Fan, F. Geerts, X. Jia, and A. Kementsietsidis. Conditional functional dependencies for capturing data inconsistencies. ACM Transactions on Database Systems (TODS), 33(2):1–48, 2008.
- [12] W. Fan, J. Li, S. Ma, N. Tang, and W. Yu. Towards certain fixes with editing rules and master data. The VLDB journal, 21(2):213–238, 2012.
- [13] A. Fellner, W. Krenn, R. Schlick, T. Tarrach, and G. Weissenbacher. Model-based, mutation-driven test-case generation via heuristic-guided branching search. ACM Transactions on Embedded Computing Systems (TECS), 18(1):1–28, 2019.
- [14] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354–3361. IEEE, 2012.
- [15] S. Geyer, M. Baltzer, B. Franz, S. Hakuli, M. Kauer, M. Kienle, S. Kwee-Meier, T. Weigerber, K. Bengler, R. Bruder, F. Flemisch, and H. Winner. Concept and development of a unified ontology for generating test and use-case catalogues for assisted and automated vehicle guidance. Intelligent Transport Systems, IET, 8:183–189, 05 2014.
- [16] D. González, J. Pérez, V. Milanés, and F. Nashashibi. A review of motion planning techniques for automated vehicles. IEEE Transactions on Intelligent Transportation Systems, 17(4):1135–1145, 2015.
- [17] J. Gould, S. Boies, S. Levy, J. Richards, and J. Schoonard. The 1984 olympic message system: A test of behavioral principles of system design. Commun. ACM, 30:758–769, 09 1987.
- [18] K. Groh, T. Kuehbeck, B. Fleischmann, M. Schiementz, and C. Chibelushi. Towards a scenario-based assessment method for highly automated driving functions. 2017.
- [19] S. Hallerbach, Y. Xia, U. Eberle, and F. Koester. Simulation-based identification of critical scenarios for cooperative and automated vehicles. SAE International Journal of Connected and Automated Vehicles, 1(2018-01-1066):93–106, 2018.
- [20] G. E. Hinton and S. T. Roweis. Stochastic neighbor embedding. In Advances in neural information processing systems, pages 857–864, 2003.
- [21] L. Huang, Q. Xia, F. Xie, H.-L. Xiu, and H. Shu. Study on the test scenarios of level 2 automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 49–54. IEEE, 2018.
- [22] Z. Huang, D. Zhao, H. Lam, D. J. LeBlanc, and H. Peng. Evaluation of automated vehicles in the frontal cut-in scenario—an enhanced approach using piecewise mixture models. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 197–202. IEEE, 2017.
- [23] J.-M. Jullien, C. Martel, L. Vignollet, and M. Wentland. Openscenario: a flexible integrated environment to develop educational activities based on pedagogical scenarios. In 2009 Ninth IEEE International Conference on Advanced Learning Technologies, pages 509–513. IEEE, 2009.
- [24] A. Koenig, M. Gutbrod, S. Hohmann, and J. Ludwig. Bridging the gap between open loop tests and statistical validation for highly automated driving. SAE International journal of transportation safety, 5(1):81–87, 2017.
- [25] K. Koskimies, T. Systa, J. Tuomi, and T. Mannisto. Automated support for modeling oo software. IEEE software, 15(1):87–94, 1998.
- [26] R. Lattarulo, J. Pérez, and M. Dendaluce. A complete framework for developing and testing automated driving controllers. IFAC-PapersOnLine, 50(1):258–263, 2017.
- [27] K. Lee. Longitudinal driver model and collision warning and avoidance algorithms based on human driving databases. PhD thesis, University of Michigan, 2004.
- [28] A. Leitner and S. Metzner. Challenges for reproducing real-life test runs in simulation for validating automated driving functions. 135, 07 2018.
- [29] L. Li, X. Wang, K. Wang, Y. Lin, J. Xin, L. Chen, L. Xu, B. Tian, Y. Ai, J. Wang, et al. Parallel testing of vehicle intelligence via virtual-real interaction. 2019.
- [30] Y. Li, Y. Liu, J. Zhu, S. Ma, Z. Niu, and R. Guo. Spatiotemporal road scene reconstruction using superpixel-based markov random field. Information Sciences, 507, 08 2019.
- [31] M. M. Loper and M. J. Black. Opendr: An approximate differentiable renderer. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, Computer Vision – ECCV 2014, pages 154–169, Cham, 2014. Springer International Publishing.
- [32] R. Matthaei, G. Bagschik, and M. Maurer. Map-relative localization in lane-level maps for adas and autonomous driving. In 2014 IEEE Intelligent Vehicles Symposium Proceedings, pages 49–55. IEEE, 2014.
- [33] A. I. McInnes. Model-checking the flooding time synchronization protocol. In 2009 IEEE International Conference on Control and Automation, pages 422–429. IEEE, 2009.
- [34] T. Menzel, G. Bagschik, and M. Maurer. Scenarios for development, test and validation of automated vehicles, 2018.
- [35] V. N. Murthy, S. Maji, and R. Manmatha. Automatic image annotation using deep learning representations. In Proceedings of the 5th ACM on International Conference on Multimedia Retrieval, pages 603–606, 2015.
- [36] W. G. Najm, J. D. Smith, M. Yanagisawa, et al. Pre-crash scenario typology for crash avoidance research. Technical report, United States. National Highway Traffic Safety Administration, 2007.
- [37] L. P. Olander, H. K. Gibbs, M. Steininger, J. J. Swenson, and B. C. Murray. Reference scenarios for deforestation and forest degradation in support of redd: a review of data and methods. Environmental Research Letters, 3(2):025011, 2008.
- [38] S. P. Olivares, N. Rebernik, A. Eichberger, and E. Stadlober. Virtual stochastic testing of advanced driver assistance systems. In Advanced Microsystems for Automotive Applications 2015, pages 25–35. Springer, 2016.
- [39] M.-K. Park, S.-Y. Lee, C.-K. Kwon, and S.-W. Kim. Design of pedestrian target selection with funnel map for pedestrian aeb system. IEEE Transactions on Vehicular Technology, 66(5):3597–3609, 2016.
- [40] A. Pütz, A. Zlocki, J. Bock, and L. Eckstein. System validation of highly automated vehicles with a database of relevant traffic scenarios. 2017.
- [41] E. Rocklage, H. Kraft, A. Karatas, and J. Seewig. Automated scenario generation for regression testing of autonomous vehicles. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pages 476–483. IEEE, 2017.
- [42] A. Schieben, M. Heesen, J. Schindler, J. Kelsch, and F. Flemisch. The theater-system technique: agile designing and testing of system behavior and interaction, applied to highly automated vehicles. pages 43–46, 09 2009.
- [43] R. Schilling and T. Schultz. Validation of automated driving functions. In Simulation and Testing for Vehicle Technology, pages 377–381. Springer, 2016.
- [44] F. Sivrikaya and B. Yener. Time synchronization in sensor networks: a survey. IEEE network, 18(4):45–50, 2004.
- [45] Y. Tang and L. Wang. Development of scenes drawing system for urban road accidents. In 2011 IEEE International Conference on Mechatronics and Automation, pages 1152–1157. IEEE, 2011.
- [46] S. Ulbrich, T. Menzel, A. Reschka, F. Schuldt, and M. Maurer. Defining and substantiating the terms scene, situation, and scenario for automated driving. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pages 982–988. IEEE, 2015.
- [47] H. J. Vishnukumar, B. Butting, C. Müller, and E. Sax. Machine learning and deep neural network — artificial intelligence core for lab and real-world test and validation for adas and autonomous vehicles: Ai for efficient and quality test and validation. 2017 Intelligent Systems Conference (IntelliSys), pages 714–721, 2017.
- [48] W. Wang and D. Zhao. Extracting traffic primitives directly from naturalistically logged data for self-driving applications. IEEE Robotics and Automation Letters, 3(2):1223–1229, 2018.
- [49] T. Winkle, C. Erbsmehl, and K. Bengler. Area-wide real-world test scenarios of poor visibility for safe development of automated vehicles. European Transport Research Review, 10(2):1–15, 2018.
- [50] Q. Xia, J. Duan, F. Gao, T. Chen, and C. Yang. Automatic generation method of test scenario for adas based on complexity. Technical report, SAE Technical Paper, 2017.
- [51] H.-H. Yang and H. Peng. Development and evaluation of collision warning/collision avoidance algorithms using an errable driver model. Vehicle system dynamics, 48(S1):525–535, 2010.
- [52] J. Yang et al. Automatically labeling video data using multi-class active learning. In Proceedings Ninth IEEE international conference on computer vision, pages 516–523. IEEE, 2003.
- [53] N. Zheng. Achieving fully autonomous driving still faces difficult challenges, Dec. 2017.
- [54] B. Zhu, P.-X. Zhang, et al. Review of scenario-based virtual validation methods for automated vehicles. China Journal of Highway and Transport, 32(6):1, 2019.