Object detection is a mature technique, converging toward the detection performance of human vision. This paper presents a method to further close the remaining gap in detection capability by investigating visual factors that impair the detectability of objects. Since some of these factors are hard or impossible to measure in real sensor data, a detector is trained on synthetic data, where perfect measurements and ground truth are available at large scale. The resulting detector is then used to calibrate an empirical weighting loss, which weights samples of real training data according to their detection-impairing factors. The method is applied to the task of pedestrian detection in traffic scenes. The effectiveness of the empirical detection impairment weighting (DIW) loss is demonstrated on a detector trained on the CityPersons dataset, reaching a new state-of-the-art detection performance on this benchmark and improving on the previous best by .
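As an illustration of such a weighting loss (a minimal sketch, not the authors' calibrated formulation), a per-sample detection loss can be reweighted by impairment factors; the factor-to-weight mapping and coefficients below are hypothetical placeholders for the empirical calibration obtained from synthetic data.

```python
import torch

def diw_weight(occlusion, contrast, w_occ=1.5, w_con=1.0):
    """Map detection-impairing factors of a sample to a loss weight.

    Hypothetical stand-in for the empirical calibration: heavily occluded or
    low-contrast pedestrians receive larger weights so training focuses on
    the samples the detector is most likely to miss.
    """
    return 1.0 + w_occ * occlusion + w_con * (1.0 - contrast)

def weighted_detection_loss(per_sample_loss, occlusion, contrast):
    """Weight an ordinary per-sample detection loss by the impairment factors."""
    weights = diw_weight(occlusion, contrast)
    return (weights * per_sample_loss).sum() / weights.sum()

# Toy example: three pedestrians with different occlusion/contrast levels.
loss = weighted_detection_loss(
    per_sample_loss=torch.tensor([0.7, 0.4, 0.9]),
    occlusion=torch.tensor([0.8, 0.1, 0.5]),  # fraction of the person occluded
    contrast=torch.tensor([0.2, 0.9, 0.6]),   # normalised local contrast
)
print(loss)
```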
We introduce the Synthetic Pedestrian Dataset (SynPeDS), which was designed to support a systematic safety analysis for pedestrian detection tasks in urban scenes. The dataset was generated synthetically with a real-time and a physically based rendering pipeline and provides camera frames and, in part, associated LiDAR point clouds. It contains ground truth for semantic segmentation, instance segmentation, 2D and 3D bounding boxes and, in part, pose information and body-part segmentation. In particular, it comes with a large amount of meta-information for in-depth performance and safety analysis, e.g., addressing semantic properties of the pedestrians and their environment in the frames. Some scenarios were specifically designed to systematically cover certain safety-relevant or performance-reducing dimensions of the input space, defined in the project KI Absicherung. The dataset does not claim to be complete or free of bias, but rather to support coverage and data-distribution studies.
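Purely as an illustrative sketch (the actual SynPeDS format and field names are not given in the abstract), a per-frame record combining the listed ground-truth types with pedestrian-level meta-information might be modelled and filtered like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PedestrianAnnotation:
    # All field names are hypothetical; they mirror the ground-truth and
    # meta-information types listed above.
    instance_id: int
    bbox_2d: tuple                 # (x, y, w, h) in pixels
    bbox_3d: Optional[tuple]       # (x, y, z, l, w, h, yaw), if available
    occlusion: float               # fraction of the body occluded
    distance_m: float              # distance to the ego vehicle
    body_part_mask: Optional[str]  # path to body-part segmentation, if provided

@dataclass
class Frame:
    camera_path: str
    lidar_path: Optional[str]      # only part of the frames carry LiDAR
    semantic_seg_path: str
    instance_seg_path: str
    pedestrians: list = field(default_factory=list)

def safety_relevant(frame: Frame, max_distance=30.0, min_occlusion=0.5):
    """Select heavily occluded, close-range pedestrians for a safety analysis."""
    return [p for p in frame.pedestrians
            if p.distance_m <= max_distance and p.occlusion >= min_occlusion]
```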
Recent advances in state-of-the-art camera-based AI mechanisms for automated driving have driven great progress in the deployment and widespread use of this technology in recent years. However, vehicles with automated driving capabilities are usually equipped with a wide range of sensors that complement the perception capacity of camera-based AI algorithms. For this reason, this paper examines the degree of readiness of one of the most widely used open-source AI models for Level 2 automated driving. To this end, a set of simulated common driving scenarios was used to evaluate its predictions. The results clearly indicate that the current capacity of this camera-based DNN model is not sufficient for it to serve as the only source of information in the environment perception of a Level 2 automated vehicle; further progress in context awareness is therefore needed before its sole use in the perception stage can be considered.
Advancements in deep neural networks have made them a prominent approach for most complex computer vision tasks. A key aspect for the deployment of deep neural networks in several applications, such as automotive and medical, has been their ability to estimate their uncertainty. A recent leading approach uses Dirichlet distributions to model the uncertainty, which enables real-time estimation of uncertainty. Intermediate layer variational inference has also been a promising approach to enable real-time estimation of uncertainty, beating state-of-the-art approaches. In this work, we combine both approaches to improve the reliability of uncertainty estimation whilst maintaining the real-time capability. Our experiments on the Cityscapes dataset for the task of semantic segmentation show a significant boost in the deep neural network's uncertainty estimation capability, whilst also improving its segmentation performance.
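For orientation only (a sketch in the common evidential style, not necessarily the authors' architecture), per-pixel Dirichlet-based uncertainty can be derived from class evidence in a single forward pass; the logits and class count below are placeholders.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """Expected class probabilities and vacuity from Dirichlet evidence.

    Evidence is obtained from the logits via softplus, the Dirichlet
    concentration is alpha = evidence + 1, the expected probabilities are
    alpha / S with S = sum(alpha), and the vacuity K / S is high wherever
    little evidence supports any class.
    """
    evidence = F.softplus(logits)                # (B, K, H, W), non-negative
    alpha = evidence + 1.0
    strength = alpha.sum(dim=1, keepdim=True)    # Dirichlet strength S
    probs = alpha / strength                     # expected class probabilities
    vacuity = logits.shape[1] / strength.squeeze(1)
    return probs, vacuity

# Toy example: a 19-class Cityscapes-style output for a 4x4 patch.
logits = torch.randn(1, 19, 4, 4)
probs, uncertainty = dirichlet_uncertainty(logits)
print(probs.shape, uncertainty.shape)
```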
Security for automated driving systems brings new challenges that are not typically considered in automotive cybersecurity of conventional non-automated systems. The game changers behind these challenges are missing driver supervision, increased actuator control over vehicle movement, and greater reliance on external input.
Based on our practical experience and lessons learned during the development of an SAE Level 3 automated driving system, we review the different challenges and discuss directions towards solving them. Some of the challenges can be addressed by applying well-known security mechanisms more rigorously. Some must be handled by collaboration across disciplines, such as security and safety, or security and function development. Others were identified as open challenges, and we point out where more research is required to develop new solutions and to examine whether a technical solution is possible at all.
Some current and next-generation security solutions employ machine learning and related technologies. Due to the nature of these applications, correct use of machine learning can be critical. One area of particular interest in this regard is the use of appropriate data for training and evaluation. In this work, we investigate different characteristics of datasets for security applications and propose a number of qualitative and quantitative metrics which can be evaluated with limited domain knowledge. We illustrate the need for such metrics by analyzing a number of datasets for anomaly and intrusion detection in automotive systems, covering both internal vehicle networks and vehicle-to-vehicle (V2V) communication. We demonstrate how the proposed metrics can be used to identify the strengths and weaknesses of these datasets.
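To make the idea concrete (the abstract does not list the actual metrics, so the indicators and field names below are assumptions), simple quantitative measures such as class balance, duplicate ratio, and per-feature entropy could be computed over a labelled CAN-message dataset as follows:

```python
import math
from collections import Counter

def class_balance(labels):
    """Fraction of samples per class; heavily skewed data biases evaluation."""
    counts = Counter(labels)
    return {cls: n / len(labels) for cls, n in counts.items()}

def duplicate_ratio(records):
    """Share of exactly repeated records; high values inflate test scores."""
    return 1.0 - len(set(records)) / len(records)

def feature_entropy(values):
    """Shannon entropy of a discrete feature (e.g., CAN identifiers)."""
    counts = Counter(values)
    total = len(values)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Toy CAN-style dataset: (identifier, payload) tuples with attack labels.
records = [("0x130", "00ff"), ("0x130", "00ff"), ("0x2a0", "1a2b"), ("0x7df", "0902")]
labels = ["benign", "benign", "benign", "attack"]
print(class_balance(labels), duplicate_ratio(records), feature_entropy([r[0] for r in records]))
```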
Flush-based cache attacks seriously threaten the security of automotive systems based on the ARM Cortex-A9 MPCore. Most of the proposed defense schemes have limited detection capabilities or cannot detect malicious attacks fast enough. Permanently reducing the resolution of all time APIs in the system is not feasible because high-resolution time APIs are required for the normal operation of various applications. In this paper, we propose two more secure, collaborative APIs: SafeFlush and SafeTime. In addition to the basic function of flushing a cache line, SafeFlush can also detect and handle a suspected flush-based cache attack process. More importantly, SafeFlush collaborates with SafeTime to effectively resist all flush-based cache attacks: SafeTime adaptively reduces its resolution for a short time based on a signal sent by SafeFlush, which makes attacks fail. The attack experiments show that the success rate of Flush+Reload and flush-based Spectre attacks can be reduced to less than 1% when the SafeFlush and SafeTime APIs are used. Performance experiments show that the access latency of SafeTime is 14.5% higher than that of the original API when based on the global timer and 18% higher when based on PMCCNTR. The time consumption of SafeFlush is about 25.2% longer than that of the original cache-flush API. Since SafeFlush and SafeTime are far more secure than the original APIs, their performance loss is acceptable.
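The collaboration between the two APIs can be modelled conceptually as follows; this is a simplified Python model of the control flow described above, not the actual on-target implementation, and the threshold and window values are invented.

```python
import time

class SafeTimeModel:
    """Conceptual SafeTime: a timer whose resolution can be coarsened briefly."""
    def __init__(self):
        self.coarse_until = 0.0  # while now < coarse_until, return coarse time

    def degrade(self, seconds=0.05):
        """Called by SafeFlush when a suspected attack is detected."""
        self.coarse_until = time.monotonic() + seconds

    def read(self):
        now = time.monotonic()
        # Coarse timestamps deprive Flush+Reload of its fine-grained timing.
        return round(now, 3) if now < self.coarse_until else now

class SafeFlushModel:
    """Conceptual SafeFlush: cache-line flushes plus attack-rate monitoring."""
    def __init__(self, safetime, max_flushes_per_window=100, window_s=0.01):
        self.safetime = safetime
        self.max_flushes = max_flushes_per_window
        self.window_s = window_s
        self.window_start = time.monotonic()
        self.count = 0

    def flush(self, address):
        # A real implementation would issue the cache-line flush for `address` here.
        now = time.monotonic()
        if now - self.window_start > self.window_s:
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > self.max_flushes:  # suspiciously frequent flushes
            self.safetime.degrade()        # signal SafeTime to coarsen its resolution
```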
Several advanced driver assistance systems require a reliable steering wheel angle as input for their control-loop cycle. This research investigates attacks on the steering wheel angle sensor with electromagnetic fields. We present measurement data showing that an angle of 160° or larger can be injected into the steering wheel angle sensor using a Helmholtz coil. Further, this paper describes the reaction of the steering system to this manipulation. The most interesting observation was that the receiving system made full use of an injected steering wheel angle if the injection started before the vehicle was started. Additionally, we describe the failure detection in the electronic stability control and discuss existing compensating measures as well as possible mitigation measures.
DoIP, which is defined in ISO 13400, is a transport protocol stack for diagnostic data. Diagnostic data is a potential attack vector on vehicles, so secure transmission must be guaranteed to protect sensitive data and the vehicle. Previous work analyzed a draft version and earlier versions of the DoIP protocol without TLS. No formal analysis exists for the DoIP protocol.
The goal of this work is to investigate the DoIP protocol for design flaws that may lead to security vulnerabilities and possible attacks exploiting them. For this purpose, we deductively analyze the DoIP protocol in a first step and subsequently confirm our conclusions formally. For the formal analysis, we use the prover Tamarin. Based on the results, we propose countermeasures to improve DoIP's security. We show that the DoIP protocol cannot be considered secure, mainly because the security mechanisms TLS and client authentication are not mandatory in the DoIP protocol. We propose measures to mitigate the vulnerabilities that we confirm remain after activating TLS. These require only a minor redesign of the protocol.
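As an illustration of the direction of these countermeasures (mandatory TLS with client authentication; a sketch, not the concrete protocol redesign), a DoIP endpoint can be wrapped so that messages such as routing activation requests are only accepted over a mutually authenticated TLS channel; the certificate paths are placeholders and payload handling is omitted.

```python
import socket
import ssl

# Require mutual TLS before any DoIP payload is processed.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED                          # client authentication is mandatory
context.load_cert_chain("gateway_cert.pem", "gateway_key.pem")   # placeholder certificate paths
context.load_verify_locations("tester_ca.pem")                   # CA issuing tester certificates

def serve_doip(host="0.0.0.0", port=13400):
    """Accept one connection and hand it to the DoIP handler only after the handshake."""
    with socket.create_server((host, port)) as server:
        raw_conn, _ = server.accept()
        with context.wrap_socket(raw_conn, server_side=True) as tls_conn:
            # The tester is now authenticated; DoIP messages such as routing
            # activation requests would be parsed and answered from here on.
            header = tls_conn.recv(8)  # the generic DoIP header is 8 bytes long
            print("received DoIP header:", header.hex())
```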
Cars are rapidly becoming connected with their environment, allowing all kinds of mobility services based on the data from various sensors in the car. Data privacy is in many cases only ensured by legislation, i.e., the European General Data Protection Regulation (GDPR), but not technically enforced. Therefore, we present a system model for enforcing purpose limitation based on data tagging and attribute-based encryption. By encrypting sensitive data such that only services serving a certain purpose can decrypt it, we ensure access control based on the purpose of a service. In this paper, we present and discuss our system model with the aim of improving the technical enforcement of GDPR principles.
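To illustrate purpose-based data tagging (a simplified model without real attribute-based encryption, and not the system model itself), data items can carry purpose tags that are matched against a service's declared purposes before decryption is allowed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedData:
    payload: bytes
    allowed_purposes: frozenset   # purposes under which the data may be used

@dataclass(frozen=True)
class Service:
    name: str
    purposes: frozenset           # attributes a CP-ABE key would encode

def may_decrypt(data: TaggedData, service: Service) -> bool:
    """Stand-in for the ABE decryption check: succeeds only if the service's
    attributes satisfy the purpose policy attached to the data."""
    return bool(data.allowed_purposes & service.purposes)

gps_trace = TaggedData(b"<gps samples>", frozenset({"navigation", "road-safety"}))
ad_service = Service("personalised-ads", frozenset({"marketing"}))
nav_service = Service("route-planner", frozenset({"navigation"}))

print(may_decrypt(gps_trace, ad_service))   # False: purpose limitation enforced
print(may_decrypt(gps_trace, nav_service))  # True
```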
Ride-sharing is a popular mode of transportation that reduces traffic and the cost of a trip. The emergence of autonomous vehicles makes ride-sharing more popular because these vehicles do not require a driver's effort. Hence, to find a suitable ride-share, the service provider is not restricted to a driver's trip, which makes autonomous cars more flexible in matching passengers. Passengers who want to participate in car-sharing send their trip data to a ride-sharing service provider. However, a passenger's trip data contains sensitive information about the passenger's locations. Multiple studies show that a person's location data can reveal personal information about them, e.g., their health condition, home, work, hobbies, and financial situation. In this paper, we propose a lightweight privacy-preserving ride-sharing protocol for autonomous cars. Contrary to previous works on this topic, our protocol does not rely on any extra party to guarantee privacy and security. Our protocol consists of two main phases: i) privacy-preserving group forming, and ii) privacy-preserving fair pick-up point selection. In addition to ride-sharing, the two phases of our protocol can also be applied to other use cases. We have implemented our protocol for a realistic ride-sharing scenario in which 1000 passengers simultaneously request a ride-share. Our evaluation results show that the time and communication costs of our protocol make it feasible for real-world applications.
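Purely as an illustrative sketch (the protocol's actual cryptographic group forming and fair pick-up-point selection are not described in the abstract), passengers could be grouped by comparing salted hashes of coarse location cells so that the provider never sees raw coordinates; the grid size and hashing scheme are assumptions.

```python
import hashlib

CELL_SIZE = 0.01  # assumed coarsening: roughly 1 km grid cells in degrees

def cell_token(lat: float, lon: float, salt: bytes) -> str:
    """Hash the passenger's coarse grid cell; only equal cells yield equal tokens."""
    cell = (round(lat / CELL_SIZE), round(lon / CELL_SIZE))
    return hashlib.sha256(salt + repr(cell).encode()).hexdigest()

def form_groups(requests, salt=b"per-epoch-shared-salt"):
    """Group passengers whose pick-up areas fall into the same grid cell.

    The provider only sees opaque tokens, not coordinates; passengers in the
    same group would then agree on a fair pick-up point among themselves.
    """
    groups = {}
    for passenger_id, lat, lon in requests:
        groups.setdefault(cell_token(lat, lon, salt), []).append(passenger_id)
    return list(groups.values())

requests = [("p1", 52.5203, 13.4051), ("p2", 52.5199, 13.4100), ("p3", 48.1371, 11.5754)]
print(form_groups(requests))  # p1 and p2 share a cell; p3 forms its own group
```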
Combined safety and cybersecurity testing is critical for assessing the reliability and optimisation of autonomous driving (AD) algorithms. However, safety and cybersecurity testing is often conducted in isolation, leading to a lack of evaluation of the complex system-of-systems interactions which impact the reliability and optimisation of the AD algorithm. At the same time, practical limitations of testing include resource usage and time. This paper proposes a methodology for combined safety and cybersecurity testing and applies it to a real-world Autonomous Vehicle (AV) shuttle using a digital twin, software-in-the-loop (SiL) simulation and a real-world AV test environment. The results of the safety and cybersecurity tests and feedback from the AD algorithm designers demonstrate that the methodology is useful for assessing the reliability and optimisation of an AD algorithm in the development phase. Furthermore, the observed system-of-systems interactions reveal key relationships, such as that between speed and attack parameters, which can be used to optimise testing.
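As a simple illustration of how such relationships could drive test selection (not the methodology itself; the parameter names, ranges, and risk heuristic are invented), a combined test matrix can be generated by sweeping shuttle speed against attack parameters and prioritising the combinations expected to interact most strongly:

```python
from itertools import product

# Hypothetical sweep dimensions for a SiL test campaign.
speeds_kmh = [10, 20, 30]
attack_types = ["gnss_spoofing", "lidar_jamming"]
attack_intensities = [0.25, 0.5, 1.0]

def build_test_matrix():
    """Cross safety-relevant shuttle speeds with cybersecurity attack parameters."""
    return [
        {"speed_kmh": v, "attack": a, "intensity": i}
        for v, a, i in product(speeds_kmh, attack_types, attack_intensities)
    ]

def prioritise(cases):
    """Rank cases by an assumed risk heuristic: higher speed and stronger attack first."""
    return sorted(cases, key=lambda c: c["speed_kmh"] * c["intensity"], reverse=True)

for case in prioritise(build_test_matrix())[:3]:
    print(case)
```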