3.5 Low-power brain-inspired computing for embedded systems


Date: Tuesday 28 March 2017
Time: 14:30 - 16:00
Location / Room: 3C

Chair:
Johanna Sepulveda, TU Munich, DE

Co-Chair:
Andrea Bartolini, Università di Bologna - ETH Zurich, IT

Neural networks are a promising technique for bringing brain-inspired computing to embedded platforms. Energy efficiency is a primary concern in this computing domain. This session combines low-power design techniques, such as approximate computing and compression, with state-of-the-art hardware architectures.

Time   Label  Presentation Title / Authors

14:30  3.5.1  APPROXIMATE COMPUTING FOR SPIKING NEURAL NETWORKS
Speaker:
Sanchari Sen, Purdue University, US
Authors:
Sanchari Sen, Swagath Venkataramani and Anand Raghunathan, Purdue University, US
Abstract
Spiking Neural Networks (SNNs) are widely regarded as the third generation of artificial neural networks, and are expected to drive new classes of recognition, data analytics and computer vision applications. However, large-scale SNNs (e.g., of the scale of the human visual cortex) are highly compute- and data-intensive, requiring new approaches to improve their efficiency. Complementary to prior efforts that focus on parallel software and the design of specialized hardware, we propose AxSNN, the first effort to apply approximate computing to improve the computational efficiency of evaluating SNNs. In SNNs, the inputs and outputs of neurons are encoded as a time series of spikes. A spike at a neuron's output triggers updates to the potentials (internal states) of the neurons to which it is connected. AxSNN determines spike-triggered neuron updates that can be skipped with little or no impact on output quality and selectively skips them to improve both compute and memory energy. Neurons that can be approximated are identified using various static and dynamic parameters such as the average spiking rates and current potentials of neurons, and the weights of synaptic connections. Such a neuron is placed into one of many approximation modes, wherein the neuron is sensitive only to a subset of its inputs and sends spikes only to a subset of its outputs. A controller periodically updates the approximation modes of the neurons in the network to achieve energy savings with minimal loss in quality. We apply AxSNN to both hardware and software implementations of SNNs. For hardware evaluation, we designed SNNAP, a Spiking Neural Network Approximate Processor that embodies the proposed approximation strategy, and synthesized it in 45 nm technology. The software implementation of AxSNN was evaluated on a 2.7 GHz Intel Xeon server with 128 GB of memory. Across a suite of six image recognition benchmarks, AxSNN achieves a 1.4-5.5X reduction in scalar operations for network evaluation, which translates to 1.2-3.62X and 1.26-3.9X improvements in hardware and software energy, respectively, with no loss in application quality. Progressively higher energy savings are achieved with modest reductions in output quality.
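
As a rough illustration of the update-skipping idea in the abstract above (and not the authors' AxSNN or SNNAP implementation), the Python sketch below shows a leaky integrate-and-fire layer in which neurons placed in an approximation mode ignore weak synapses, so some spike-triggered updates are skipped. The class name, thresholds, and mode-selection rule are invented for the example.

```python
import numpy as np

# Illustrative sketch only (not the authors' AxSNN/SNNAP code): a leaky
# integrate-and-fire layer in which neurons flagged as "approximated"
# ignore weak synapses, so some spike-triggered updates are skipped.
class ApproxSNNLayer:
    def __init__(self, weights, v_thresh=1.0, leak=0.99, w_cutoff=0.05):
        self.w = weights                        # (n_inputs, n_neurons)
        self.v = np.zeros(weights.shape[1])     # membrane potentials
        self.v_thresh = v_thresh
        self.leak = leak
        self.w_cutoff = w_cutoff
        # Per-neuron approximation mode: True -> only strong synapses count.
        self.approx = np.zeros(weights.shape[1], dtype=bool)

    def update_modes(self, spike_rates, rate_cutoff=0.02):
        # Stand-in for AxSNN's periodic mode selection: mark low-activity
        # neurons for approximation (the real criteria also use potentials
        # and synaptic weights).
        self.approx = spike_rates < rate_cutoff

    def step(self, in_spikes):
        # in_spikes: boolean vector of presynaptic spikes at this timestep.
        for i in np.flatnonzero(in_spikes):
            w_i = self.w[i]
            # Approximated neurons skip updates from weak synapses.
            mask = ~self.approx | (np.abs(w_i) >= self.w_cutoff)
            self.v[mask] += w_i[mask]
        out_spikes = self.v >= self.v_thresh
        self.v[out_spikes] = 0.0                # reset fired neurons
        self.v *= self.leak                     # leak the rest
        return out_spikes

rng = np.random.default_rng(0)
layer = ApproxSNNLayer(rng.normal(scale=0.2, size=(100, 32)))
layer.update_modes(spike_rates=rng.random(32) * 0.05)
out = layer.step(rng.random(100) < 0.1)
```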

Download Paper (PDF; Only available from the DATE venue WiFi)
15:00  3.5.2  ADAPTIVE WEIGHT COMPRESSION FOR MEMORY-EFFICIENT NEURAL NETWORKS
Speaker:
Jong Hwan Ko, Georgia Institute of Technology, US
Authors:
Jong Hwan Ko, Duckhwan Kim, Taesik Na, Jaeha Kung and Saibal Mukhopadhyay, Georgia Institute of Technology, US
Abstract
Neural networks generally require significant memory capacity and bandwidth to store and access a large number of synaptic weights. This paper presents an application of JPEG image encoding to compress the weights by exploiting the spatial locality and smoothness of the weight matrix. To minimize the loss of accuracy due to JPEG encoding, we propose to adaptively control the quantization factor of the JPEG algorithm depending on the error sensitivity (gradient) of each weight. With the adaptive compression technique, weight blocks with higher sensitivity are compressed less for higher accuracy. The adaptive compression reduces the memory requirement, which in turn results in higher performance and lower energy for neural network hardware. Simulation of inference hardware for a multilayer perceptron on the MNIST dataset shows up to 42X compression with less than 1% loss in recognition accuracy, resulting in 3X higher effective memory bandwidth and ~19X lower system energy.
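
The sketch below illustrates the general idea of adaptively quantized, JPEG-style compression of weight blocks; it is not the paper's implementation. The quantization-step schedule, block handling, and constants are assumptions made for the example, with SciPy's dctn/idctn standing in for the JPEG transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Rough sketch of the idea (not the paper's implementation): compress an
# 8x8 weight block JPEG-style, choosing the quantization step from the
# block's error sensitivity (e.g., mean absolute gradient). All constants
# below are illustrative.
def compress_block(block, sensitivity, base_step=0.02, max_step=0.2):
    # More sensitive blocks get a finer step, i.e., less compression loss.
    step = base_step + (max_step - base_step) * (1.0 - min(sensitivity, 1.0))
    coeffs = dctn(block, norm="ortho")          # 2-D DCT, as in JPEG
    quantized = np.round(coeffs / step)         # lossy quantization
    return quantized.astype(np.int16), step     # store small ints + step

def decompress_block(quantized, step):
    return idctn(quantized.astype(np.float64) * step, norm="ortho")

# Usage: tile the weight matrix into 8x8 blocks and keep a per-block
# sensitivity computed from training gradients.
rng = np.random.default_rng(0)
w_block = rng.normal(scale=0.1, size=(8, 8))
q, step = compress_block(w_block, sensitivity=0.3)
w_rec = decompress_block(q, step)
print("max reconstruction error:", np.abs(w_rec - w_block).max())
```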

Download Paper (PDF; Only available from the DATE venue WiFi)
15:30  3.5.3  REAL-TIME ANOMALY DETECTION FOR STREAMING DATA USING BURST CODE ON A NEUROSYNAPTIC PROCESSOR
Speaker:
Qinru Qiu, Syracuse University, US
Authors:
Qiuwen Chen and Qinru Qiu, Syracuse University, US
Abstract
Real-time anomaly detection for streaming data is a desirable feature for mobile devices or unmanned systems. The key challenge is how to deliver the required performance under stringent power constraints. To address the tension between performance and power consumption, brain-inspired hardware, such as the IBM Neurosynaptic System, has been developed to enable low-power implementation of large-scale neural models. Meanwhile, inspired by the operation and massively parallel structure of the human brain, carefully structured inference models have been demonstrated to give superior detection quality compared to many traditional models while facilitating neuromorphic implementation. Implementing inference-based anomaly detection on the neurosynaptic processor is not straightforward due to hardware limitations. This work presents a design flow and component library that flexibly map a learned detection network to the TrueNorth architecture. Instead of the traditional rate code, a burst code is adopted in the design, which represents a numerical value using the phase of a burst of spike trains. This not only reduces hardware complexity, but also increases result accuracy. A Corelet library, NeoInfer-TN, is developed for basic operations in burst code, and two-phase pipelines are constructed from the library components. The design can be configured for different tradeoffs between detection accuracy and throughput/energy. We evaluate the system using intrusion detection data streams. The results show a higher detection rate than several conventional approaches and real-time performance, with only 50 mW power consumption. Overall, it achieves 10^8 operations per watt-second.
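
The following toy example illustrates what a burst (phase) code looks like in software: a numerical value is encoded by the onset time of a short spike burst within a fixed window. It is not the NeoInfer-TN Corelet library or a TrueNorth mapping; the window length, burst width, and anomaly rule are invented for the sketch.

```python
import numpy as np

# Toy illustration of the burst (phase) code described above: a value is
# represented by *when* a short burst of spikes starts inside a fixed time
# window, rather than by an average firing rate. Window length, burst width,
# and the anomaly rule are arbitrary choices for this sketch.
WINDOW = 64    # ticks per encoding window
BURST = 4      # spikes per burst

def encode_phase(value):
    """Map a value in [0, 1] to a spike train whose burst starts earlier
    for larger values (the phase encodes the magnitude)."""
    start = int(round((1.0 - float(np.clip(value, 0.0, 1.0))) * (WINDOW - BURST)))
    train = np.zeros(WINDOW, dtype=np.uint8)
    train[start:start + BURST] = 1
    return train

def decode_phase(train):
    start = int(np.argmax(train))               # position of first spike
    return 1.0 - start / (WINDOW - BURST)

def anomaly_score(observed, predicted_train):
    # Distance between the phase-decoded prediction of some learned model
    # and the observed streaming value; large scores flag anomalies.
    return abs(observed - decode_phase(predicted_train))

print(decode_phase(encode_phase(0.75)))   # ~0.75, up to quantization error
```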

Download Paper (PDF; Only available from the DATE venue WiFi)
15:45  3.5.4  FAST, LOW POWER EVALUATION OF ELEMENTARY FUNCTIONS USING RADIAL BASIS FUNCTION NETWORKS
Speaker:
Parami Wijesinghe, Purdue University, US
Authors:
Parami Wijesinghe, Chamika Liyanagedera and Kaushik Roy, Purdue University, US
Abstract
Fast and efficient implementation of elementary functions such as sin(), cos(), and log() is of ample importance in a large class of applications. State-of-the-art methods for function evaluation involve either expensive calculations such as multiplications, a large number of iterations, or large lookup tables (LUTs). A higher number of iterations leads to higher latency, whereas large LUTs contribute to delay, higher area requirements, and higher power consumption owing to data fetching and leakage. We propose a hardware architecture for evaluating mathematical functions, consisting of a small LUT and a simple Radial Basis Function Network (RBFN), a type of Artificial Neural Network (ANN). Our proposed method evaluates trigonometric, hyperbolic, exponential, logarithmic, and square-root functions. This technique finds utility in applications where the highest priority is on performance and power consumption. In contrast to traditional ANNs, our approach does not involve multiplication when determining the post-synaptic states of the network. Owing to the simplicity of the approach, we were able to attain more than 2.5x power benefits and more than 1.4x performance benefits when compared with traditional approaches, under the same accuracy conditions.
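
As a small illustration of the RBFN idea (not the paper's multiplier-free hardware architecture), the sketch below fits a handful of Gaussian basis functions to sin() over [0, π/2] and evaluates the function as a weighted sum of those activations; the small table of centers and weights plays the role of the LUT. The center count, basis width, and fitting procedure are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of approximating an elementary function with a radial basis
# function network: a small table of centers and weights plus a Gaussian
# basis. The paper's hardware avoids multiplications; this NumPy version
# does not and only illustrates the RBFN approximation itself.
def fit_rbfn(func, lo, hi, n_centers=16, sigma=None):
    centers = np.linspace(lo, hi, n_centers)
    sigma = sigma or (centers[1] - centers[0])
    xs = np.linspace(lo, hi, 256)
    # Gaussian activation (design) matrix, then least-squares output weights.
    phi = np.exp(-((xs[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))
    weights, *_ = np.linalg.lstsq(phi, func(xs), rcond=None)
    return centers, weights, sigma     # this triple plays the role of the LUT

def eval_rbfn(x, centers, weights, sigma):
    phi = np.exp(-((x - centers) ** 2) / (2 * sigma**2))
    return phi @ weights

centers, weights, sigma = fit_rbfn(np.sin, 0.0, np.pi / 2)
print(eval_rbfn(0.6, centers, weights, sigma), np.sin(0.6))
```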

Download Paper (PDF; Only available from the DATE venue WiFi)
16:00  End of session
Coffee Break in Exhibition Area
