8.1 Special Day on "Embedded AI": Neuromorphic chips and systems


Date: Wednesday 11 March 2020
Time: 17:00 - 18:30
Location / Room: Amphithéâtre Jean Prouve

Chair:
Wei Lu, University of Michigan, US

Co-Chair:
Bernabe Linares-Barranco, CSIC, ES

Within the broader field of AI, there is a subfield that focuses on exploiting neuroscience knowledge to build intelligent hardware systems: neuromorphic engineering. This session presents examples of research in this subfield.

Time | Label | Presentation Title / Authors
17:00 | 8.1.1 | SPINNAKER2: A PLATFORM FOR BIO-INSPIRED ARTIFICIAL INTELLIGENCE AND BRAIN SIMULATION
Authors:
Bernhard Vogginger, Christian Mayr, Sebastian Höppner, Johannes Partzsch and Steve Furber, TU Dresden, DE
Abstract
SpiNNaker is an ARM-based processor platform optimized for the simulation of spiking neural networks. This brief describes the roadmap from the current SpiNNaker1 system, a one-million-core machine in 130nm CMOS, to SpiNNaker2, a ten-million-core machine in 22nm FDSOI. Apart from pure scaling, we will take advantage of specific technology features, such as runtime adaptive body biasing, to deliver cutting-edge power consumption. Power management of the cores allows a wide range of workload adaptivity, i.e. processor power scales with the complexity and activity of the spiking network. Additional numerical accelerators will enhance the utility of SpiNNaker2 for the simulation of spiking neural networks as well as for executing conventional deep neural networks. The interplay between these two domains will provide a wide field for bio-inspired algorithm exploration on SpiNNaker2, bringing machine learning and neuromorphics closer together. Apart from the platform's traditional usage as a neuroscience exploration tool, the extended functionality opens up new application areas such as automotive AI, the tactile internet, Industry 4.0 and biomedical processing.
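As a purely illustrative sketch (not SpiNNaker's API or programming model), the following Python fragment shows the kind of event-driven leaky integrate-and-fire (LIF) workload such a platform simulates, where the work per time step tracks spike activity rather than raw network size; this is the property behind the workload-adaptive power scaling mentioned above. All parameter values are assumptions.

import numpy as np

def lif_step(v, spikes_in, weights, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    # v: membrane potentials (n_post,); spikes_in: indices of presynaptic
    # neurons that fired this step; weights: synaptic matrix (n_pre, n_post).
    v = v * np.exp(-dt / tau)                   # leak toward rest
    if len(spikes_in) > 0:                      # event-driven: cost tracks spike activity
        v = v + weights[spikes_in].sum(axis=0)
    spikes_out = np.where(v >= v_th)[0]         # neurons crossing threshold fire
    v[spikes_out] = v_reset                     # reset the neurons that fired
    return v, spikes_out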
17:30 | 8.1.2 | AN ON-CHIP LEARNING ACCELERATOR FOR SPIKING NEURAL NETWORKS USING STT-RAM CROSSBAR ARRAYS
Authors:
Shruti R. Kulkarni, Shihui Yin, Jae-sun Seo and Bipin Rajendran, New Jersey Institute of Technology, US
Abstract
In this work, we present a scheme for implementing learning on a digital non-volatile memory (NVM) based hardware accelerator for Spiking Neural Networks (SNNs). Our design estimates across three prominent non-volatile memories - Phase Change Memory (PCM), Resistive RAM (RRAM), and Spin-Transfer Torque RAM (STT-RAM) - show that STT-RAM arrays enable at least 2× higher throughput compared to the other two memory technologies. We discuss the design and the signal communication framework through the STT-RAM crossbar array for training and inference in SNNs. Each STT-RAM cell in the array stores a single bit. Our neurosynaptic computational core consists of the memory crossbar array, its read/write peripheral circuitry, and the digital logic for the spiking neurons, weight-update computations, spike router, and decoder for incoming spike packets. Our STT-RAM based design shows ∼20× higher performance per unit watt per unit area compared to a conventional SRAM based design, making it a promising learning platform for realizing systems with significant area and power limitations.
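The abstract does not detail the digital periphery, so the following is only a hedged sketch of one common way single-bit-per-cell crossbars support multi-bit synaptic weights: bit-slicing the weights across several binary arrays and recombining the partial sums with a digital shift-and-add. The precision N_BITS and all shapes are illustrative assumptions, not the paper's design.

import numpy as np

N_BITS = 4  # assumed weight precision

def bit_slice(weights):
    # Split non-negative integer weights (n_pre, n_post) into N_BITS binary crossbars.
    return [((weights >> b) & 1).astype(np.int32) for b in range(N_BITS)]

def crossbar_accumulate(spike_vector, crossbars):
    # spike_vector: binary (n_pre,) input spikes for this time step.
    # Each binary crossbar yields a partial sum; the digital periphery
    # shifts and adds them to recover the full-precision dot product.
    spikes = spike_vector.astype(np.int32)
    total = np.zeros(crossbars[0].shape[1], dtype=np.int32)
    for b, xbar in enumerate(crossbars):
        total += (spikes @ xbar) << b
    return total

For a binary spike vector s and a non-negative integer weight matrix W with values below 2**N_BITS, crossbar_accumulate(s, bit_slice(W)) returns the same result as s @ W, which is the synaptic accumulation a spiking neuron layer needs per time step.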

18:00 | 8.1.3 | OVERCOMING CHALLENGES FOR ACHIEVING HIGH IN-SITU TRAINING ACCURACY WITH EMERGING MEMORIES
Speaker:
Shimeng Yu, Georgia Tech, US
Authors:
Shanshi Huang, Xiaoyu Sun, Xiaochen Peng, Hongwu Jiang and Shimeng Yu, Georgia Tech, US
Abstract
Embedded artificial intelligence (AI) benefits from adaptive learning capability when deployed in the field; thus, in-situ on-chip training is required. Emerging non-volatile memories (eNVMs) are of great interest as analog synapses for on-chip deep neural network (DNN) acceleration due to their multilevel programmability. However, the asymmetry/nonlinearity in conductance tuning remains a grand challenge for achieving high in-situ training accuracy. In addition, the analog-to-digital converter (ADC) at the edge of the memory array introduces a further challenge: quantization error for in-memory computing. In this work, we gain new insights and overcome these challenges through algorithm-hardware co-optimization. We incorporate these hardware non-ideal effects into the DNN propagation and weight-update steps. We evaluate a VGG-like network on the CIFAR-10 dataset and show that the asymmetry of the conductance tuning is no longer a limiting factor for in-situ training accuracy if adaptive "momentum" is exploited in the weight update rule. Even considering ADC quantization error, in-situ training accuracy can approach the software baseline. Our results show much relaxed requirements that enable a variety of eNVMs for DNN acceleration on embedded AI platforms.
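The abstract does not give the exact update rule, so the following Python sketch only illustrates the general idea of an adaptive "momentum" buffer combined with an asymmetric, nonlinear device model: gradient contributions accumulate digitally, and the eNVM cell is programmed only when the accumulated update justifies a pulse, which averages out the mismatch between potentiation and depression. All device parameters and thresholds are illustrative assumptions.

import numpy as np

def device_step(g, direction, g_min=0.0, g_max=1.0, step_p=0.05, step_d=0.05):
    # Asymmetric, nonlinear conductance change for one programming pulse:
    # the update shrinks as g approaches its bound, with different factors
    # for potentiation (step_p) and depression (step_d). Values are assumed.
    if direction > 0:
        return g + step_p * (g_max - g)   # potentiation pulse
    return g - step_d * (g - g_min)       # depression pulse

def momentum_update(g, m, grad, lr=0.1, beta=0.9, pulse_threshold=0.02):
    # Accumulate gradients in a digital momentum buffer; program the eNVM
    # cell only when the accumulated update is large enough for one pulse.
    m = beta * m + lr * grad
    if abs(m) >= pulse_threshold:
        g = device_step(g, direction=-np.sign(m))  # move opposite to the gradient
        m = 0.0                                    # clear buffer after programming
    return g, m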

18:30 | End of session