PNeuro: a scalable energy‐efficient programmable hardware accelerator for neural networks

A. Carbon¹ᵃ, J.-M. Philippe¹, O. Bichler¹, R. Schmit¹, B. Tain¹, D. Briand¹, N. Ventroux¹, M. Paindavoine²ᵇ and O. Brousse²
¹CEA, LIST, Gif-sur-Yvette, France
ᵃalexandre.carbon@cea.fr
²GlobalSensing Technologies, 14 rue Pierre de Coubertin, Dijon, France
ᵇmichel.paindavoine@gsensing.eu

ABSTRACT
Artificial intelligence, and especially machine learning, has recently gained significant interest from industry. Indeed, new generations of neural networks built from a large number of successive computing layers enable a wide range of new applications and services, from smart sensors to data centers. These Deep Neural Networks (DNNs) can interpret signals to recognize objects or situations and drive decision processes. However, their integration into embedded systems remains challenging due to their high computing requirements. This paper presents PNeuro, a scalable, energy-efficient hardware accelerator for the inference phase of DNN processing chains. Simple programmable processing elements organized in SIMD clusters perform all the operations needed by DNNs (convolutions, pooling, non-linear functions, etc.). A 28 nm FD-SOI prototype achieves an energy efficiency of 700 GMACs/s/W at 800 MHz. These results open important perspectives for the development of smart, energy-efficient solutions based on Deep Neural Networks.
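The abstract lists the core operations a DNN inference accelerator must support: convolutions (chains of multiply-accumulates, the MACs counted in the GMACs/s/W figure), pooling, and non-linear activation functions. The sketch below is purely illustrative and not taken from the paper: a minimal NumPy model of one convolutional stage, with function names and the toy input chosen here for demonstration.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in most DNN frameworks).
    Each output value is a sum of multiply-accumulate (MAC) operations."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling (spatial down-sampling)."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def relu(x):
    """Rectified linear unit, a common non-linear activation."""
    return np.maximum(x, 0.0)

# One convolutional stage: convolution -> non-linearity -> pooling.
x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 input feature map
k = np.ones((3, 3)) / 9.0                      # 3x3 averaging kernel
y = max_pool(relu(conv2d(x, k)))
print(y.shape)  # (2, 2)
```

In a SIMD accelerator such as the one described, the inner multiply-accumulate loop is what the parallel processing elements execute, which is why efficiency is reported in MACs per second per watt.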

Keywords: Neural networks, Neural network hardware, Computer architectures, FPGA, ASIC, Low power.
