Counteract Side-Channel Analysis of Neural Networks by Shuffling

Manuel Brosch¹, Matthias Probst¹ and Georg Sigl¹,²
¹Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
²Fraunhofer Institute for Applied and Integrated Security (AISEC), Munich, Germany
manuel.brosch@tum.de, matthias.probst@tum.de, sigl@tum.de

ABSTRACT


Machine learning is becoming an essential part of almost every electronic device. Implementations of neural networks are mostly optimized for computational performance or memory footprint. Nevertheless, security is also important in order to keep the network secret and to protect the intellectual property associated with it. Especially since neural network implementations have been demonstrated to be vulnerable to side-channel analysis, effective and computationally cheap countermeasures are in demand.

In this work, we apply a shuffling countermeasure to a microcontroller implementation of a neural network to prevent side-channel analysis. The countermeasure is effective while its computational overhead remains low. We investigate the extensions necessary to implement the countermeasure, and analyze in theory how shuffling increases the effort required for an attack. In addition, we demonstrate this increase in attacker effort through experiments on real side-channel measurements. Based on the mechanism of shuffling and our experimental results, we conclude that an attack on a commonly used neural network protected by shuffling is no longer feasible in a reasonable amount of time.
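To illustrate the idea of shuffling described above, the following is a minimal sketch (not the authors' implementation): the neurons of a fully connected layer are evaluated in a freshly drawn random order on every inference, so the side-channel leakage of any given neuron appears at a different point in time in each trace. All names and the layer structure are illustrative assumptions.

```python
import random

def shuffled_dense_layer(x, W, b, rng=random):
    """Hypothetical sketch: compute y = W @ x + b with the neuron
    (row) order randomly permuted on each call, so an attacker cannot
    align the leakage of one neuron across traces."""
    order = list(range(len(W)))
    rng.shuffle(order)          # fresh random permutation per inference
    y = [0.0] * len(W)
    for i in order:             # evaluate neurons in shuffled order
        y[i] = b[i] + sum(w * xi for w, xi in zip(W[i], x))
    return y                    # result is independent of the permutation
```

Note that the numerical result is identical to an unshuffled evaluation; only the temporal order of the computations, and hence the alignment of the side-channel trace, changes from run to run.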

Keywords: Neural Networks, Side-Channel Analysis, Countermeasure, Shuffling.
