Margin-Maximization in Binarized Neural Networks for Optimizing Bit Error Tolerance

Sebastian Buschjäger, Jian-Jia Chen, Kuan-Hsun Chen, Mario Günzel, Christian Hakert, Katharina Morik, Rodion Novkin, Lukas Pfahler and Mikail Yayla
These authors contributed equally.
Technical University of Dortmund
sebastian.buschjaeger@tu-dortmund.de, jian-jia.chen@tu-dortmund.de, kuan-hsun.chen@tu-dortmund.de, mario.guenzel@tu-dortmund.de, christian.hakert@tu-dortmund.de, katharina.morik@tu-dortmund.de, rodion.novkin@tu-dortmund.de, lukas.pfahler@tu-dortmund.de, mikail.yayla@tu-dortmund.de

ABSTRACT


To overcome the memory wall in neural network (NN) inference systems, recent studies have proposed to use approximate memory, in which the supply voltage and access latency parameters are tuned for lower energy consumption and faster access at the cost of reliability. To tolerate the resulting bit errors, state-of-the-art approaches inject bit flips into the NNs during training, which incurs high overhead and does not scale well to large NNs and high bit error rates. In this work, we focus on binarized NNs (BNNs), whose simpler structure allows a better exploration of bit error tolerance metrics based on margins. We provide formal proofs to quantify the maximum number of bit flips that can be tolerated. Combining the proposed margin-based metrics with the well-known hinge loss for maximum margin classification in support vector machines (SVMs), we construct a modified hinge loss (MHL) to train BNNs for bit error tolerance without any bit flip injections. Our experimental results indicate that the MHL enables BNNs to tolerate higher bit error rates than bit flip training and therefore allows the requirements on the approximate memories used for BNNs to be lowered further.
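As a rough illustration of the idea only (not the exact formulation from the paper), the PyTorch sketch below combines a standard classification loss with a hinge-shaped penalty that pushes the pre-activations of binarized layers away from the sign threshold, so that more bit flips are needed to change a neuron's output. The margin `tau`, the weighting `lam`, and the layer interface are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def margin_hinge_penalty(pre_act, tau=4.0):
    """Hinge-style penalty on the pre-activations of a binarized layer.

    A binarized neuron outputs sign(pre_act); its decision flips only if
    enough bit errors push the pre-activation across zero. Penalizing
    pre-activations whose magnitude falls below the margin `tau` therefore
    encourages bit error tolerance. `tau` is an illustrative assumption.
    """
    # relu(tau - |pre_act|): zero once the margin tau is reached (hinge shape)
    return torch.clamp(tau - pre_act.abs(), min=0.0).mean()

def training_loss(logits, targets, hidden_pre_acts, tau=4.0, lam=0.1):
    """Classification loss plus hinge margin penalties on the pre-activations
    of each binarized layer (passed as the list `hidden_pre_acts`)."""
    ce = F.cross_entropy(logits, targets)
    margin_terms = sum(margin_hinge_penalty(h, tau) for h in hidden_pre_acts)
    return ce + lam * margin_terms
```

In this sketch the margin is enforced during ordinary training, so no bit flip injections are needed; the actual MHL and margin metrics are defined in the full paper.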

Keywords: Neural Networks, Error Tolerance, Approximate Memory.
