Energy-Efficient Hybrid Stochastic-Binary Neural Networks for Near-Sensor Computing

Vincent T. Lee1,a, Armin Alaghi1,b, John P. Hayes2, Visvesh Sathe3 and Luis Ceze1,c
1Department of Computer Science and Engineering, University of Washington, Seattle, WA, 98195.
avlee2@cs.washington.edu
barmin@cs.washington.edu
cluisceze@cs.washington.edu
2Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109.
jhayes@eecs.umich.edu
3Department of Electrical Engineering, University of Washington, Seattle, WA, 98195.
sathe@uw.edu

ABSTRACT

Recent advances in neural networks (NNs) have shown unprecedented success at transforming large, unstructured data streams into compact higher-level semantic information for tasks such as handwriting recognition, image classification, and speech recognition. Ideally, systems would employ near-sensor computation to execute these tasks at sensor endpoints to maximize data reduction and minimize data movement. However, near-sensor computing presents its own set of challenges, such as operating power constraints, energy budgets, and communication bandwidth capacities. In this paper, we propose a stochastic-binary hybrid design that splits the computation between the stochastic and binary domains for near-sensor NN applications. In addition, our design uses a new stochastic adder and multiplier that are significantly more accurate than existing adders and multipliers. We also show that retraining the binary portion of the NN computation can compensate for precision losses introduced by shorter stochastic bit-streams, allowing faster run times at minimal accuracy loss. Our evaluation shows that our hybrid stochastic-binary design can achieve 9.8× energy efficiency savings, with application-level accuracies within 0.05% of conventional all-binary designs.
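For readers unfamiliar with stochastic computing, the following Python sketch illustrates the basic idea behind stochastic multiplication: values in [0, 1] are encoded as random bit-streams whose fraction of 1s equals the value, and a bitwise AND of two independent streams yields a stream encoding their product. This is a generic textbook illustration, not the paper's proposed multiplier; the function names and stream length are illustrative only, and the encoding error shrinks with longer streams, which is the precision/run-time trade-off the abstract refers to.

```python
import random

def to_bitstream(p, n, rng):
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(x_stream, y_stream):
    """Bitwise AND of two independent unipolar streams encodes the product
    of their values, since P(a=1 and b=1) = P(a=1) * P(b=1)."""
    return [a & b for a, b in zip(x_stream, y_stream)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 4096  # stream length; longer streams give lower encoding error
x = to_bitstream(0.5, n, rng)
y = to_bitstream(0.6, n, rng)
z = sc_multiply(x, y)
# decode(z) approximates 0.5 * 0.6 = 0.3, with sampling noise on the
# order of sqrt(0.3 * 0.7 / n)
```

Note that a single AND gate replaces a full binary multiplier, which is the source of stochastic computing's area and energy advantage; the cost is the long bit-stream needed for acceptable precision, which the paper's hybrid design and retraining technique mitigate.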

Keywords: Neural networks, Stochastic computing.
