Estimating Vulnerability of All Model Parameters in DNN with a Small Number of Fault Injections

Yangchao Zhang1, Hiroaki Itsuji2, Takumi Uezono2, Tadanobu Toba2 and Masanori Hashimoto3
1Dept. Information Systems Engineering, Osaka University
2Center for Technology Innovation - Production Engineering and MONOZUKURI, R&D Group, Hitachi, Ltd
3Dept. Communications and Computer Engineering, Kyoto University
hashimoto@i.kyoto-u.ac.jp

ABSTRACT


The reliability of deep neural networks (DNNs) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as autonomous driving. Transient errors in memory, such as radiation-induced soft errors, may propagate through the inference computation and produce unexpected outputs, which can trigger catastrophic system failures. As a first step toward tackling this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in a DNN. We significantly reduce the number of bit locations subjected to fault injection and develop a flow that incrementally collects the training data, i.e., the fault injection results, to improve VM accuracy. Experimental results show that the VM can estimate the vulnerabilities of all DNN model parameters with only 1/3490 of the computations required by traditional fault injection-based vulnerability estimation.
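To make the fault-injection setting concrete, the sketch below (not the authors' code; the model, layer, indices, and helper names are illustrative assumptions) shows a single-bit-flip injection into one DNN weight and measures how far the faulty output deviates from the fault-free output, which is the kind of per-bit experiment a VM would be trained on.

```python
# Minimal sketch of bit-flip fault injection into a DNN parameter (PyTorch).
# All names (flip_bit, inject_and_measure, the toy model) are hypothetical.
import numpy as np
import torch
import torch.nn as nn

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value (bit 31 = sign, bits 30-23 = exponent)."""
    as_int = np.float32(value).view(np.uint32)
    flipped = np.uint32(as_int ^ (np.uint32(1) << np.uint32(bit)))
    return float(flipped.view(np.float32))

def inject_and_measure(model: nn.Module, layer: nn.Module,
                       index: tuple, bit: int, x: torch.Tensor) -> float:
    """Return the max output deviation caused by flipping `bit` of one weight."""
    with torch.no_grad():
        golden = model(x).clone()            # fault-free reference output
        original = layer.weight[index].item()
        layer.weight[index] = flip_bit(original, bit)   # inject the fault
        faulty = model(x)
        layer.weight[index] = original                  # restore the weight
    return (faulty - golden).abs().max().item()

# Example: flip an exponent bit of one weight in a toy model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(1, 8)
deviation = inject_and_measure(model, model[0], (0, 0), bit=30, x=x)
print(f"output deviation: {deviation:.4f}")
```

Exhaustively repeating such injections over every bit of every parameter is what makes traditional vulnerability estimation expensive, which motivates estimating the vulnerabilities from a much smaller, incrementally collected set of injection results.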

Keywords: Deep Neural Network, Network Vulnerability, Fault Injection, Bit Flip, Machine Learning.
