Continuous Safety Verification of Neural Networks

Chih-Hong Cheng¹ and Rongjie Yan²,³
¹DENSO AUTOMOTIVE Deutschland GmbH, Eching, Germany
c.cheng@eu.denso.com
²State Key Laboratory of Computer Science, ISCAS, Beijing, China
³University of Chinese Academy of Sciences, Beijing, China
yrj@ios.ac.cn

ABSTRACT


Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually improving a DNN-based perception function can invalidate previously established safety verification results. This can occur either through newly encountered examples inside the Operational Design Domain (i.e., input domain enlargement) or through subsequent fine-tuning of the DNN's parameters. This paper considers approaches to transfer results established for the previous DNN safety verification problem to the modified problem setting. By considering the reuse of state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that require formally analyzing only a small part of the DNN in the new problem. The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
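
To make the result-transfer idea concrete, the sketch below illustrates one kind of Lipschitz-based sufficient condition of the flavor the abstract describes. It is not the paper's actual method: it is a minimal Python sketch assuming a feed-forward ReLU network in which only the last linear layer is fine-tuned; all function names and the margin/bound parameters are hypothetical.

```python
import numpy as np

def spectral_norm(W):
    """Operator 2-norm (largest singular value) of a weight matrix."""
    return float(np.linalg.norm(W, ord=2))

def lipschitz_upper_bound(weight_matrices):
    """Naive Lipschitz upper bound for a feed-forward ReLU network:
    the product of the layers' spectral norms (ReLU is 1-Lipschitz)."""
    return float(np.prod([spectral_norm(W) for W in weight_matrices]))

def last_layer_verdict_transfers(W_last_old, W_last_new,
                                 penult_norm_bound, verified_margin):
    """Hypothetical sufficient condition: if only the last (linear) layer
    was fine-tuned, the output can shift by at most
    ||W_new - W_old||_2 * ||h(x)||_2 for penultimate activation h(x).
    If that shift stays below the previously verified safety margin,
    the old verification verdict still holds for the new network."""
    output_shift = spectral_norm(W_last_new - W_last_old) * penult_norm_bound
    return output_shift < verified_margin

# Toy usage: a 2-layer network whose last layer was slightly fine-tuned.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(2, 8))
W2_new = W2 + 0.01 * rng.normal(size=W2.shape)
print("Lipschitz bound:", lipschitz_upper_bound([W1, W2]))
print("verdict transfers:", last_layer_verdict_transfers(
    W2, W2_new, penult_norm_bound=5.0, verified_margin=1.0))
```

The same pattern covers input domain enlargement: if a newly encountered input lies within distance d of a verified one and the network's Lipschitz bound L satisfies L·d < verified margin, the old verdict transfers without re-analyzing the whole network.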

Keywords: DNN, Safety, Formal Verification, Continuous Engineering.
