Despite the remarkable progress of deep learning in stereo matching, a gap in accuracy remains between real-time models, which are suitable for practical applications, and slower state-of-the-art models. This paper presents an iterative multi-scale coarse-to-fine refinement (iCFR) framework to bridge this gap: it can adopt any stereo matching network and make it faster, more efficient, and scalable while keeping comparable accuracy. To reduce the computational cost of matching, we use multi-scale warped features to estimate disparity residuals and push the disparity search range in the cost volume to a minimum. Finally, we apply a refinement network to recover the loss of precision that is inherent in multi-scale approaches. We test our iCFR framework by adopting the matching networks from the state-of-the-art GANet and AANet. The result is 49× faster inference than GANet-deep and 4× lower memory consumption, with comparable error. Our best-performing network, which we call FRSNet, scales up to an input resolution of 6K on a GTX 1080Ti, with inference time still below one second and accuracy comparable to AANet+. It outperforms all real-time stereo methods and achieves competitive accuracy on the KITTI benchmark.
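As a rough illustration of the coarse-to-fine residual idea sketched in this abstract (not the authors' exact iCFR implementation), the following PyTorch snippet warps right-view features with a coarse disparity estimate and builds a small correlation cost volume over a few residual disparities only; the function names, residual range, and correlation-style matching cost are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def warp_right_to_left(feat_right, disp):
        # Warp right-view features to the left view using a disparity map.
        # feat_right: (N, C, H, W); disp: (N, 1, H, W) disparity in pixels.
        n, _, h, w = feat_right.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat_right.device, dtype=feat_right.dtype),
            torch.arange(w, device=feat_right.device, dtype=feat_right.dtype),
            indexing="ij",  # requires PyTorch >= 1.10
        )
        xs = xs.unsqueeze(0) - disp.squeeze(1)   # shift sampling positions by the disparity
        ys = ys.unsqueeze(0).expand_as(xs)
        # Normalize sampling coordinates to [-1, 1] as expected by grid_sample.
        grid = torch.stack((2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1)
        return F.grid_sample(feat_right, grid, mode="bilinear",
                             padding_mode="zeros", align_corners=True)

    def residual_cost_volume(feat_left, feat_right, coarse_disp, max_residual=4):
        # Correlation cost volume over a small residual search range around the
        # coarse disparity, instead of over the full disparity range.
        costs = []
        for r in range(-max_residual, max_residual + 1):
            warped = warp_right_to_left(feat_right, coarse_disp + r)
            costs.append((feat_left * warped).mean(dim=1, keepdim=True))
        return torch.cat(costs, dim=1)           # (N, 2*max_residual + 1, H, W)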
The automation of driving systems relies on proof of the correct functioning of perception. Arguing the safety of deep neural networks (DNNs) must involve quantifiable evidence. Currently, the application of DNNs suffers from their incomprehensible behavior, and it is still an open question whether post-hoc methods mitigate the safety concerns of trained DNNs. Our work proposes a method for inherently interpretable, concept-based pedestrian detection (CPD). CPD explicitly structures the latent space with concept vectors that learn features for body parts as predefined concepts. The distance-based clustering and separation of latent representations build an interpretable reasoning process. Hence, CPD predicts a body-part segmentation based on the distances of latent representations to concept vectors. A non-interpretable 2D bounding box prediction for pedestrians complements the segmentation. The proposed CPD generates additional information that can be of great value in a safety argumentation of a DNN for pedestrian detection. We report competitive performance for the task of pedestrian detection. Finally, CPD enables concept-based tests that quantify evidence of safe perception in automated driving systems.
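A minimal sketch of the distance-based reasoning step described above, assuming a latent feature map and a set of learned concept vectors (one per body part); the interface and tensor shapes are illustrative and not the authors' actual CPD architecture.

    import torch

    def concept_segmentation(latent, concept_vectors):
        # latent:          (N, C, H, W) latent feature map
        # concept_vectors: (K, C) one learned vector per predefined body-part concept
        n, c, h, w = latent.shape
        flat = latent.permute(0, 2, 3, 1).reshape(n, h * w, c)                     # (N, HW, C)
        dists = torch.cdist(flat, concept_vectors.unsqueeze(0).expand(n, -1, -1))  # (N, HW, K)
        dists = dists.permute(0, 2, 1).reshape(n, -1, h, w)                        # (N, K, H, W)
        segmentation = dists.argmin(dim=1)   # nearest concept per spatial position
        return dists, segmentation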
Deep neural networks have become the prominent approach for many computer vision tasks and excel at solving many critical problems. However, estimating the uncertainty of a network's predictions is still an open research question; various approaches add value to a deep neural network by providing more information about the predictions it generates. Uncertainty estimation is deemed an important enabler for future automated driving systems, as its output may be needed to plan the vehicle's next maneuver based on the uncertainty estimates of its perception module. In this paper, we propose a new approach that adds intermediate multivariate layers within a deep neural network, aiming to provide much faster uncertainty estimates than the top two state-of-the-art approaches, MC Dropout and Deep Ensembles. A thorough comparison between the proposed approach and the two state-of-the-art approaches is presented to evaluate the new technique, assessing its speed, performance, and calibration. Results show that the proposed uncertainty estimation method is significantly faster, with potential for real-time applications, while exhibiting performance comparable to the state-of-the-art approaches.
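For reference, the sketch below shows the MC Dropout baseline that this abstract compares against (not the proposed intermediate-layer approach): dropout is kept stochastic at test time and the spread over several forward passes serves as the uncertainty estimate.

    import torch
    import torch.nn as nn

    def mc_dropout_predict(model, x, n_samples=20):
        # MC Dropout: keep only the dropout layers stochastic at test time and
        # aggregate several forward passes; the sample variance is the uncertainty.
        model.eval()
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.train()                   # dropout stays active, batch norm stays in eval mode
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])  # (S, N, ...)
        return samples.mean(dim=0), samples.var(dim=0)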
Synthetic, i.e., computer-generated imagery (CGI), data is a key component for training and validating deep-learning-based perceptive functions, as it can simulate rare cases, avoids privacy issues, and allows easy generation of huge datasets with pixel-accurate ground-truth data. Recent simulation and rendering engines already simulate a wealth of realistic optical effects, but they are mainly focused on the human perception system. Perceptive functions, however, require realistic images whose sensor artifacts match, as closely as possible, those of the sensor with which the training data was recorded.
In this paper, we propose a method to improve data synthesis by introducing a more realistic sensor model that implements a number of sensor and lens artifacts. We further propose a Wasserstein distance (earth mover's distance, EMD) based domain divergence measure and use it as a minimization criterion to adapt the parameters of our sensor artifact simulation from synthetic to real images. With the optimized sensor parameters applied to the synthetic training images, the mIoU of a semantic segmentation network (DeeplabV3+) trained solely on synthetic images increases from 40.36% to 47.63%.
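A minimal sketch of an EMD-based domain divergence in the spirit of the measure above, assuming per-channel pixel-intensity distributions as the compared feature; the actual features and optimization loop used in the paper may differ, and apply_sensor_model below is a hypothetical placeholder.

    import numpy as np
    from scipy.stats import wasserstein_distance

    def domain_divergence(synthetic_images, real_images):
        # Earth mover's distance between the per-channel intensity distributions of
        # two image sets (images as H x W x C arrays); a simple proxy for the domain gap.
        synth = np.concatenate([img.reshape(-1, img.shape[-1]) for img in synthetic_images])
        real = np.concatenate([img.reshape(-1, img.shape[-1]) for img in real_images])
        return float(np.mean([wasserstein_distance(synth[:, ch], real[:, ch])
                              for ch in range(synth.shape[1])]))

    # The sensor parameters can then be tuned by minimizing this divergence, e.g. with a
    # grid search over a hypothetical apply_sensor_model(params, images) placeholder:
    # best = min(candidates, key=lambda p: domain_divergence(apply_sensor_model(p, synth), real))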
Risk-based security models have seen a steady rise in popularity over the last decades, and several security risk assessment models have been proposed for the automotive industry. The new UN vehicle regulation 155 on cybersecurity provisions for vehicle type approval, as part of the 1958 agreement on vehicle harmonization, mandates the use of risk assessment to mitigate cybersecurity risks and is expected to be adopted into national laws in 54 countries within 1 to 3 years. This new legislation will also apply to autonomous vehicles. The automotive cybersecurity engineering standard ISO/SAE 21434 is seen as a way to fulfill the new UN legislation, so we can expect quick and wide industry adoption. One risk assessment model that has gained some popularity and is in active use in several companies is the HEAVENS model, but since ISO/SAE 21434 introduces additional requirements on the risk assessment process, the original HEAVENS model does not fulfill the standard.
In this paper, we investigate the gap between the HEAVENS risk assessment model and ISO/SAE 21434, and we identify and propose 12 model updates to HEAVENS to close this gap. We also discuss identified weaknesses of the HEAVENS risk assessment model and propose 5 additional model updates to overcome them. Based on these 17 identified model updates, we propose HEAVENS 2.0, a new risk assessment model derived from HEAVENS that is fully compliant with ISO/SAE 21434.
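For illustration, HEAVENS-style models combine an impact rating and a threat (attack feasibility) rating into a risk or security level via a lookup matrix; the sketch below shows this mechanism with placeholder scales and matrix values that are not the normative HEAVENS 2.0 or ISO/SAE 21434 tables.

    # Placeholder rating scales and lookup matrix, for illustration only.
    FEASIBILITY = ["very low", "low", "medium", "high"]
    IMPACT = ["negligible", "moderate", "major", "severe"]

    RISK_MATRIX = [
        # negligible moderate major severe
        [1, 1, 2, 2],   # very low feasibility
        [1, 2, 3, 3],   # low
        [2, 3, 4, 4],   # medium
        [2, 3, 4, 5],   # high
    ]

    def risk_level(feasibility: str, impact: str) -> int:
        # Matrix-based combination of a feasibility rating and an impact rating into a risk level.
        return RISK_MATRIX[FEASIBILITY.index(feasibility)][IMPACT.index(impact)]

    print(risk_level("medium", "severe"))   # -> 4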
Security is a cross-cutting issue in the automotive development process. The nature of cross-cutting issues demands constant coordination between different stakeholders. Changes in vehicle functionalities lead to recurring security analysis steps, raising the complexity of progress tracking. While those process steps are typically performed at the function level, the vehicle architecture has to be verified as a composite, too. This is mostly done late in the development process by testing. Thus, architectural mismatches between the security demands of different functionalities are often revealed too late.
Starting from the definition of integrity as a system property of the information flow, we present the link from the MoRA approach to the architectural modeling and analysis approach. Verification of the no-command-up policy is transferred to the temporal logic TLA+, allowing early and fast architecture verification.
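The no-command-up policy can be read as an integrity rule analogous to Biba's no-write-up: a component must never send commands to a component with a higher integrity level. The sketch below checks this statically over an illustrative architecture graph in Python; component names and levels are assumptions, and the paper's actual verification is performed in TLA+ rather than in code like this.

    # Static "no command-up" check over an architecture model: a command edge is only
    # allowed if the sender's integrity level is at least as high as the receiver's.
    # Component names and levels are illustrative assumptions.
    integrity_level = {"gateway": 3, "brake_ecu": 3, "telematics": 2, "infotainment": 1}

    command_edges = [
        ("gateway", "brake_ecu"),        # allowed: 3 -> 3
        ("infotainment", "gateway"),     # violation: level 1 commands level 3
    ]

    violations = [(src, dst) for src, dst in command_edges
                  if integrity_level[src] < integrity_level[dst]]
    print(violations)                    # -> [('infotainment', 'gateway')]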
Vehicles are becoming interconnected and autonomous while collecting, sharing, and processing large amounts of personal and private data. When developing a service that relies on such data, ensuring privacy-preserving data sharing and processing is one of the main challenges. Often several entities are involved in these steps, and the interested parties are manifold. To ensure data privacy, a variety of de-identification techniques exist, each with unique peculiarities that must be considered. In this paper, we show, using the example of a location-based service for weather prediction for an energy grid operator, how the different de-identification techniques can be evaluated. With this, we aim to provide a better understanding of state-of-the-art de-identification techniques and the pitfalls to consider during implementation. Finally, we find that the optimal technique for a specific service depends highly on the scenario specifications and requirements.
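As an illustration of two common de-identification techniques for location data of the kind evaluated here, the sketch below shows spatial generalization (snapping coordinates to a coarse grid) and noise-based perturbation (additive Laplace noise, a simplified stand-in for geo-indistinguishability mechanisms); cell sizes and noise scales are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng()

    def generalize_location(lat, lon, cell_deg=0.05):
        # Spatial generalization: snap coordinates to a coarse grid so that all vehicles
        # within one cell report the same location (cell size is an illustrative choice).
        return round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg

    def perturb_location(lat, lon, scale_deg=0.01):
        # Noise-based perturbation: add independent Laplace noise to each coordinate
        # (a simplified stand-in for geo-indistinguishability mechanisms).
        return lat + rng.laplace(0.0, scale_deg), lon + rng.laplace(0.0, scale_deg)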
Deep neural networks are at the heart of safety-critical applications such as autonomous driving. Distributional shift is a typical problem in predictive modeling: the feature distribution of inputs and outputs varies between the training and test stages. When used on data different from the training distribution, neural networks provide little or no performance guarantees on such out-of-distribution (OOD) inputs. Monitoring distributional shift can help assess the reliability of neural network predictions with the purpose of predicting potential safety-critical contexts. In this work, we evaluate state-of-the-art OOD detection methods on autonomous driving camera data, while also demonstrating the influence of OOD data on the prediction reliability of neural networks. We evaluate three different OOD detection methods. As a baseline, we employ a variational autoencoder (VAE) trained on data similar to that of the perception network (depth estimation) and use a reconstruction-error-based out-of-distribution measure. As a second approach, we evaluate Likelihood Regret, which has been shown to be an efficient likelihood-based OOD measure for VAEs. As a third approach, we evaluate another recently introduced method based on generative modelling, termed SSD, which uses self-supervised representation learning followed by distance-based detection in the feature space to calculate the outlier score. We compare all three methods and evaluate them concurrently with the error of a depth estimation network. Results show that while the reconstruction-error-based OOD metric is not able to differentiate between in-distribution and out-of-distribution data across all scenarios, both the Likelihood Regret OOD metric and the SSD outlier score perform fairly well in OOD detection. Their scores are also highly correlated with the perception error, rendering them promising candidates for a reliability monitor in an autonomous driving system.
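A minimal sketch of the reconstruction-error baseline described above, assuming a trained VAE whose forward pass returns the reconstruction (the interface is a placeholder); the score is thresholded on in-distribution validation data.

    import torch

    @torch.no_grad()
    def reconstruction_ood_score(vae, x):
        # Per-sample mean squared error between the input and its VAE reconstruction;
        # a higher score indicates a more likely out-of-distribution input.
        out = vae(x)
        recon = out[0] if isinstance(out, (tuple, list)) else out   # placeholder interface
        return ((x - recon) ** 2).flatten(1).mean(dim=1)            # (N,)

    def fit_threshold(in_distribution_scores, quantile=0.95):
        # Threshold chosen on in-distribution validation scores; inputs whose score
        # exceeds it are flagged as out-of-distribution.
        return torch.quantile(in_distribution_scores, quantile)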
For the public charging of Electric Vehicles (EVs), ISO 15118 specifies the Plug and Charge (PnC) approach. Unfortunately, PnC requires a rather complex Public Key Infrastructure (PKI) and completely lacks privacy-preserving measures. This work shows that other approaches from the literature, as well as a newly introduced approach, outperform PnC. All thirteen approaches were rated with a purpose-built evaluation methodology regarding their security, usability, offered features, and interoperability. For the best approaches, recommendations for a more detailed analysis and comparison are given. These promising approaches should be further analyzed and specified in more detail in order to mitigate current security flaws and offer the user a convenient and easy charging experience.
The increasing complexity of the e-mobility infrastructure leads to an increasing risk of security threats, which may negatively affect connected infrastructures such as the power grid. The grid is one of the most important critical infrastructures, making it a valuable target for cyber attacks. This situation gives rise to the potential of e-mobility-based attacks on the grid, e.g., causing large-scale blackouts through a sudden increase in charging demand. In this paper, we propose a framework for simulating and analyzing the impact of e-mobility-based attacks on grid resilience. We derive e-mobility-specific attacks, based on an analysis of adversaries and threats, and combine these attacks in our framework with models for the grid and e-mobility as well as simulation-based outage analysis. In different case studies, the effects of e-mobility-based attacks on grid resilience are evaluated. The results show, for example, the increased vulnerability during peak load hours, which enables attacks even at low levels of e-mobility compromise; the increased impact of combined attack strategies; and the time from attack to outage, which may decrease to sub-second ranges for high levels of e-mobility growth and compromise. We further discuss potential protection mechanisms for different resilience objectives, including approaches for detection, prevention, and response. This work thus provides the basis for comprehensive resilience research regarding the interconnection of e-mobility and the grid.
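A back-of-the-envelope sketch of the peak-hour scenario discussed above: if an attacker triggers simultaneous charging across a compromised share of the EV fleet, the added load can consume a large part of the grid's remaining headroom. All numbers are illustrative assumptions, not results from the paper.

    # Illustrative magnitudes only; the paper's framework uses detailed grid and
    # e-mobility models plus simulation-based outage analysis instead of this.
    fleet_size = 500_000          # EVs connected to the regional grid
    compromised_share = 0.05      # fraction of EVs the attacker controls
    charger_power_mw = 0.011      # 11 kW per charge point, in MW

    attack_load_mw = fleet_size * compromised_share * charger_power_mw

    peak_load_mw = 9_500          # regional load during peak hours
    capacity_mw = 10_000          # available generation/import capacity
    headroom_mw = capacity_mw - peak_load_mw

    print(f"attack adds {attack_load_mw:.0f} MW vs. {headroom_mw} MW headroom")
    # -> attack adds 275 MW vs. 500 MW headroom: over half the margin at 5% compromise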