A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding

Daniel Gerlinghoff1, Zhehui Wang1, Xiaozhe Gu2, Rick Siow Mong Goh1 and Tao Luo1
1Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
2Future Network of Intelligence Institute, Chinese University of Hong Kong, Shenzhen, China

ABSTRACT


Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing and their closer resemblance to biological processes in the human nervous system. However, large SNN models require very long spike trains (up to 1000 time steps) to reach an accuracy comparable to their artificial neural network (ANN) counterparts, which offsets their efficiency and inhibits their application to low-power systems for real-world use cases. To alleviate this problem, emerging neural encoding schemes have been proposed that shorten the spike train while maintaining high accuracy. However, current SNN accelerators cannot efficiently support these emerging encoding schemes. In this work, we present a novel hardware architecture that efficiently supports SNNs with emerging neural encoding. Our implementation features energy- and area-efficient processing units with increased parallelism and reduced memory accesses. We verified the accelerator on an FPGA and achieved 25% and 90% improvements over previous work in power consumption and latency, respectively. At the same time, the high area efficiency allows us to scale to large neural network models. To the best of our knowledge, this is the first work to deploy the large neural network model VGG on physical FPGA-based neuromorphic hardware.
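To illustrate why encoding matters for spike-train length, the sketch below contrasts conventional rate coding, where precision grows only with the number of time steps, against a weighted temporal coding in which each time step carries a power-of-two weight. This is a generic illustration of the trade-off, not the specific encoding scheme or hardware of this paper; the function names and the stochastic rate encoder are our own assumptions for the example.

```python
import numpy as np

def rate_encode(x, T, seed=0):
    """Rate coding: a value x in [0, 1] becomes a Bernoulli spike train
    of length T. Decoding averages the spikes, so resolving x to within
    1/n requires on the order of n**2 time steps (hence trains of ~1000)."""
    rng = np.random.default_rng(seed)
    return (rng.random(T) < x).astype(int)

def weighted_encode(x, bits):
    """Weighted temporal coding (illustrative): each of `bits` time steps
    carries a power-of-two weight, so `bits` steps resolve 2**bits levels.
    8 steps already give the precision rate coding needs ~1000 steps for."""
    q = int(round(x * (2 ** bits - 1)))          # quantize to 2**bits levels
    return [(q >> (bits - 1 - i)) & 1 for i in range(bits)]

def weighted_decode(spikes):
    """Recover the value from a weighted spike train."""
    q = 0
    for s in spikes:
        q = (q << 1) | s                          # accumulate weighted bits
    return q / (2 ** len(spikes) - 1)
```

An 8-step weighted train recovers a value to within 1/255, whereas a rate-coded train of the same length can only distinguish 9 spike counts; this gap is what emerging encoding schemes exploit to cut latency.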

Keywords: Spiking Neural Network, FPGA Accelerator, Neural Encoding.
