Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks

Karthikeyan Nagarajan1,a, Junde Li1,b, Sina Sayyah Ensan1,c, Mohammad Nasim Imtiaz Khan2, Sachhidh Kannan3 and Swaroop Ghosh1,d
1School of EECS, Pennsylvania State University, University Park, PA, USA
akxn287@psu.edu
bjul1512@psu.edu
csayyah@psu.edu
dszg212@psu.edu
2Intel Corporation, Folsom, CA, USA
mohammad.nasim.imtiaz.khan@intel.com
3Ampere Computing, Portland, OR, USA
sachhidh@amperecomputing.com

ABSTRACT


Spiking Neural Networks (SNNs) are quickly gaining traction as a viable alternative to Deep Neural Networks (DNNs). Compared to DNNs, SNNs are computationally more powerful and provide superior energy efficiency. While exciting at first glance, SNNs contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to neuron threshold voltage changes) that adversaries can exploit. We investigate global fault injection attacks using external power supplies and laser-induced local power glitches to corrupt critical training parameters such as spike amplitude and the neuron's membrane threshold potential in SNNs built from common analog neurons. We also evaluate the impact of power-based attacks on individual SNN layers for attack coverage ranging from 0% (i.e., no attack) to 100% (i.e., the whole layer under attack). We examine the impact of these attacks on digit classification tasks and find that, in the worst-case scenario, classification accuracy is reduced by 85.65%. We also propose defenses, e.g., a robust current driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components that reduces/recovers the adversarial accuracy degradation at the cost of negligible area and 25% power overhead. We also present a dummy-neuron-based voltage fault injection detection system with ∼1% power and area overhead.
