Bounding Perception Neural Network Uncertainty for Safe Control of Autonomous Systems

Zhilu Wang1,a, Chao Huang1,b, Yixuan Wang1,c, Clara Hobbs2,e, Samarjit Chakraborty2,f and Qi Zhu1,d
1Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL
a zhilu.wang@northwestern.edu
b chao.huang@northwestern.edu
c yixuanwang2024@northwestern.edu
d qzhu@northwestern.edu
2Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC
e cghobbs@cs.unc.edu
f samarjit@cs.unc.edu

ABSTRACT


Future autonomous systems will rely on advanced sensors and deep neural networks to perceive the environment, and will then use the perceived information for planning, control, adaptation, and general decision making. However, due to the inherent uncertainties of the dynamic environment and the lack of methodologies for predicting neural network behavior, the perception modules of autonomous systems often cannot provide deterministic guarantees and may sometimes lead the system into unsafe states (as evidenced by a number of high-profile accidents involving experimental autonomous vehicles). This has significantly impeded the broader application of machine learning techniques, particularly those based on deep neural networks, in safety-critical systems. In this paper, we discuss these challenges, define open research problems, and introduce our recent work on developing formal methods for quantitatively bounding the output uncertainty of perception neural networks with respect to input perturbations, and on leveraging such bounds to formally ensure the safety of system control. Unlike most existing works that focus on either the perception module or the control module alone, our approach provides a holistic end-to-end framework that bounds the perception uncertainty and addresses its impact on control.


