Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding

Zhilu Wang1,a, Chao Huang2 and Qi Zhu1,b
1Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA
azhilu.wang@u.northwestern.edu
bqzhu@northwestern.edu
2Department of Computer Science, University of Liverpool, Liverpool, UK
chao.huang2@liverpool.ac.uk

ABSTRACT


The robustness of deep neural networks has received significant interest recently, especially for models deployed in safety-critical systems, as it is important to analyze how sensitive the model output is to input perturbations. While most previous works focused on the local robustness property around an input sample, studies of the global robustness property, which bounds the maximum output change under perturbations over the entire input space, are still lacking. In this work, we formulate global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem, and present an efficient approach to address it. Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side with extra interleaving dependencies added between them, and an over-approximation algorithm leveraging relaxation and refinement techniques to reduce complexity. Experiments demonstrate the runtime efficiency of our approach compared with previous global robustness certification methods, as well as the tightness of our over-approximation. A case study on closed-loop control safety verification demonstrates the importance and practicality of our approach for certifying the global robustness of neural networks in safety-critical systems.
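To make the global robustness notion above concrete, the following is a minimal sketch of a much coarser alternative bound, not the paper's MILP encoding: since ReLU is 1-Lipschitz, the product of the layers' induced infinity-norms bounds the output change for any input in the entire input space. The two-layer network weights and perturbation bound here are hypothetical, chosen only for illustration.

```python
import random

# Hypothetical 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = [[0.5, -1.0], [2.0, 0.3]]
W2 = [[1.0, -0.5]]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, u) for u in v]

def forward(x):
    return matvec(W2, relu(matvec(W1, x)))

def inf_norm(W):
    # Induced infinity-norm of a matrix: maximum absolute row sum.
    return max(sum(abs(v) for v in row) for row in W)

# ReLU is 1-Lipschitz, so for ANY x, x' in the input space:
#   |f(x') - f(x)|_inf <= ||W2||_inf * ||W1||_inf * |x' - x|_inf.
# This is a global bound, but typically far looser than a MILP encoding,
# which reasons jointly about the two copies' activation patterns.
delta = 0.1  # input perturbation bound
bound = inf_norm(W2) * inf_norm(W1) * delta

# Empirical sanity check: sampled output changes never exceed the bound.
random.seed(0)
worst = 0.0
for _ in range(1000):
    x = [random.uniform(-5.0, 5.0) for _ in range(2)]
    xp = [v + random.uniform(-delta, delta) for v in x]
    worst = max(worst, abs(forward(xp)[0] - forward(x)[0]))

print(f"Lipschitz bound: {bound:.3f}, worst sampled change: {worst:.3f}")
```

The gap between `bound` and the worst sampled change illustrates why tighter certification methods, such as the MILP formulation with the interleaving twin-network encoding, are needed in practice.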
