On Safety Assurance Case for Deep Learning Based Image Classification in Highly Automated Driving

Himanshu Agarwal (1,2,a), Rafal Dorociak (1,b) and Achim Rettberg (2,3)
1 Hella GmbH & Co. KGaA, Lippstadt, Germany
a himanshu.agarwal@hella.com
b rafal.dorociak@hella.com
2 Department of Computing Science, Carl von Ossietzky University Oldenburg, Germany
3 University of Applied Sciences Hamm-Lippstadt, Lippstadt, Germany
achim.rettberg@iess.org

ABSTRACT


Assessing the overall accuracy of a deep learning classifier is not a sufficient criterion to argue for the safety of classification-based functions in highly automated driving. The causes of deviation from the intended functionality must also be rigorously assessed. In the context of functions related to image classification, one such cause is the failure to account, during implementation, for the classifier's vulnerability to misclassification due to high similarity between the target classes. In this paper, we emphasize that when developing the safety assurance case for such functions, the argumentation over the appropriate implementation of the functionality must also address the vulnerability to misclassification due to class similarities. Using the traffic sign classification function as our case study, we propose to aid the development of its argumentation by: (a) conducting a systematic investigation of the similarity between the target classes, (b) assigning a corresponding classifier vulnerability rating to every possible misclassification, and (c) ensuring that claims against misclassifications that induce higher risk (scored on the basis of vulnerability and severity) are supported with more compelling sub-goals and evidence than claims against misclassifications that induce lower risk.

Keywords: Deep Learning, Neural Networks, Vulnerability, Intended Functionality, Safety Case, Functional Safety.
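As a minimal illustration of steps (a)-(c) above, the following Python sketch shows one possible way to combine a pairwise class-similarity score with a severity rating into a per-misclassification risk score; the class names, similarity values, severity values, and rating thresholds are hypothetical placeholders, not figures from the paper.

```python
# Hypothetical sketch: (a) pairwise class similarity, (b) vulnerability rating
# per possible misclassification, (c) risk = vulnerability x severity, used to
# decide which safety-case claims need the most compelling sub-goals/evidence.

from itertools import permutations

# Illustrative traffic-sign classes and similarity scores in [0, 1]
# (e.g. from visual feature overlap or confusion on a validation set).
similarity = {
    ("speed_limit_30", "speed_limit_80"): 0.85,
    ("speed_limit_30", "stop"): 0.10,
    ("speed_limit_80", "stop"): 0.12,
}

# Illustrative severity ratings (1 = low .. 4 = high) for mistaking the
# first class for the second; all values are placeholders.
severity = {
    ("speed_limit_30", "speed_limit_80"): 3,
    ("speed_limit_80", "speed_limit_30"): 4,
    ("speed_limit_30", "stop"): 1,
    ("stop", "speed_limit_30"): 4,
    ("speed_limit_80", "stop"): 1,
    ("stop", "speed_limit_80"): 4,
}

def vulnerability(true_cls, predicted_cls):
    """Map class similarity to a discrete vulnerability rating (1..4)."""
    s = similarity.get((true_cls, predicted_cls)) or \
        similarity.get((predicted_cls, true_cls), 0.0)
    if s >= 0.75:
        return 4
    if s >= 0.50:
        return 3
    if s >= 0.25:
        return 2
    return 1

classes = ["speed_limit_30", "speed_limit_80", "stop"]

# Risk score for every possible misclassification (ordered class pair).
risk = {
    (t, p): vulnerability(t, p) * severity[(t, p)]
    for t, p in permutations(classes, 2)
}

# Higher-risk misclassifications should be backed by stronger argumentation.
for (t, p), r in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    print(f"misclassify {t} as {p}: risk {r}")
```

In this sketch the printed ranking is only a prioritisation aid: the pairs with the highest scores are the ones whose corresponding safety-case claims would be supported with the most compelling sub-goals and evidence.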


