Understanding the Design of IBM Neurosynaptic System and Its Tradeoffs: A User Perspective

Hsin-Pai Cheng, Wei Wen, Chunpeng Wu, Sicheng Li, Hai (Helen) Li, and Yiran Chen
Electrical and Computer Engineering Department, University of Pittsburgh, Pittsburgh, PA.
hsc38@pitt.edu, wew57@pitt.edu, chw127@pitt.edu, sil27@pitt.edu, hal66@pitt.edu, yic52@pitt.edu

ABSTRACT


As a large-scale commercial spiking-based neuromorphic computing platform, the IBM TrueNorth processor has received tremendous attention from the community. However, one known issue in the TrueNorth design is the limited precision of its synaptic weights. The current workaround is to run multiple copies of the neural network such that the average value of each synaptic weight across the copies is close to the corresponding weight in the original network. We theoretically analyze the impact of the TrueNorth chip's low data precision on inference accuracy, core occupation, and performance, and present a probability-biased learning method that enhances inference accuracy by reducing the random variance of each computation copy. Our experimental results show that the proposed techniques considerably improve the computation accuracy of the TrueNorth platform and reduce the incurred hardware and performance overheads. Among all the tested methods, L1TEA regularization achieved the best result, with up to 2.74% accuracy improvement when deploying the MNIST application onto the TrueNorth platform. In May 2016, the IBM TrueNorth team implemented convolutional neural networks (CNNs) on the TrueNorth processor and, coincidentally, used a similar method, namely trinary weights {-1, 0, 1}, achieving near state-of-the-art accuracy on eight standard datasets. In addition, to further evaluate TrueNorth performance on CNNs, we tested similar deep convolutional networks on TrueNorth, a GPU, and an FPGA. Among these, the GPU delivers the highest throughput, but in terms of energy efficiency the TrueNorth processor is the clear winner, at more than 6,000 frames/sec/Watt.
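To illustrate the multiple-copy workaround described above, the following is a minimal sketch (not the authors' implementation): it stochastically draws several trinary {-1, 0, 1} copies of a real-valued weight matrix so that the per-element mean over the copies approximates the original weights. The function name, the number of copies, and the assumption that weights lie in [-1, 1] are illustrative choices, not taken from the paper.

```python
import numpy as np

def stochastic_trinary_copies(weights, num_copies, rng=None):
    """Draw trinary {-1, 0, +1} weight copies whose per-element mean
    approximates the original real-valued weights (assumed in [-1, 1])."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.clip(weights, -1.0, 1.0)
    # P(copy = sign(w)) = |w|, P(copy = 0) = 1 - |w|, so E[copy] = w.
    copies = []
    for _ in range(num_copies):
        fire = rng.random(w.shape) < np.abs(w)
        copies.append(np.where(fire, np.sign(w), 0.0))
    return np.stack(copies)

# Example: the mean over copies converges to the original weights;
# each copy's deviation is the random variance the learning method targets.
w = np.array([[0.3, -0.7], [0.05, 0.9]])
copies = stochastic_trinary_copies(w, num_copies=64)
print(copies.mean(axis=0))  # close to w; variance shrinks as copies grow
```

Each additional copy reduces the variance of the averaged result but occupies more cores, which is the accuracy/occupation/performance tradeoff the analysis examines.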


