Mind the Scaling Factors: Resilience Analysis of Quantized Adversarially Robust CNNs

Nael Fasfous1, Lukas Frickenstein2, Michael Neumeier1, Manoj Rohit Vemparala1, Alexander Frickenstein2, Emanuele Valpreda3, Maurizio Martina3 and Walter Stechele1
1Department of Electrical and Computer Engineering, Technical University of Munich, Germany
2Autonomous Driving, BMW Group, Germany
3Department of Electronics and Telecommunications, Politecnico di Torino, Italy

ABSTRACT


As more deep learning algorithms enter safety-critical application domains, the importance of analyzing their resilience against hardware faults cannot be overstated. Most existing works focus on bit-flips in memory, fewer focus on compute errors, and almost none study the effect of hardware faults on adversarially trained convolutional neural networks (CNNs). In this work, we show that adversarially trained CNNs are more susceptible to failure due to hardware errors than vanilla-trained models. We identify large differences in the quantization scaling factors between CNNs that are resilient to hardware faults and those that are not. As adversarially trained CNNs learn robustness against input attack perturbations, their internal weight and activation distributions open a backdoor for injecting large-magnitude hardware faults. We propose a simple weight decay remedy for adversarially trained models to maintain adversarial robustness and hardware resilience in the same CNN. We improve the fault resilience of an adversarially trained ResNet56 by 25% in large-scale bit-flip benchmarks on activation data, while also slightly improving accuracy and adversarial robustness.
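
The link between quantization scaling factors and fault severity can be illustrated with a minimal sketch (not taken from the paper): under symmetric uniform quantization q = round(x / s), a flipped high-order bit changes the stored integer by a fixed amount, so the resulting real-valued error after dequantization grows linearly with the scaling factor s. The function names, scale values, and bit position below are illustrative assumptions only.

    # Minimal sketch (illustrative, not the paper's fault-injection framework):
    # a larger quantization scaling factor turns the same bit-flip into a
    # larger real-valued activation error.
    import numpy as np

    def quantize(x, scale, n_bits=8):
        """Symmetric uniform quantization to signed n_bits integers."""
        qmin, qmax = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
        return np.clip(np.round(x / scale), qmin, qmax).astype(np.int8)

    def inject_bit_flip(q, bit):
        """Flip one bit position in the two's-complement representation."""
        return (q.view(np.uint8) ^ np.uint8(1 << bit)).view(np.int8)

    rng = np.random.default_rng(0)
    acts = rng.normal(0.0, 1.0, size=1024).astype(np.float32)

    # Two hypothetical activation scaling factors: a small one (narrow
    # distribution) vs. a large one (wide, adversarially trained distribution).
    for name, scale in [("small scale", 0.02), ("large scale", 0.2)]:
        q = quantize(acts, scale)
        q_faulty = inject_bit_flip(q, bit=6)  # flip a high-order bit
        err = np.abs(q_faulty.astype(np.float32) - q.astype(np.float32)) * scale
        print(f"{name}: mean |error| after dequantization = {err.mean():.3f}")

Running this prints a dequantized error roughly ten times larger for the larger scaling factor, which mirrors the abstract's observation that wider weight and activation distributions amplify the impact of injected hardware faults.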
