doi: 10.7873/DATE.2015.0414


RNA: A Reconfigurable Architecture for Hardware Neural Acceleration


Fengbin Tu, Shouyi Yin, Peng Ouyang, Leibo Liu and Shaojun Wei

Tsinghua National Laboratory for Information Science and Technology, Institute of Microelectronics, Tsinghua University, Beijing, P.R. China.

tfb13@mails.tsinghua.edu.cn

ABSTRACT

As energy efficiency has become a major concern in digital system design, one promising solution is to pair the core processor with a multi-purpose accelerator targeting high-performance applications. Many modern applications can be approximated by multi-layer perceptron (MLP) models with little quality loss. However, many current MLP accelerators suffer from drawbacks such as an imbalance between performance and flexibility. In this paper, we propose a scheduling framework that guides the mapping of MLPs onto limited hardware resources with high performance. The framework addresses the main constraints of hardware neural acceleration. Building on this framework, we implement a reconfigurable neural architecture (RNA) whose computing pattern can be reconfigured for different MLP topologies. The RNA achieves performance comparable to application-specific accelerators and greater flexibility than other hardware MLP implementations.
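For context, the sketch below shows the computation an MLP accelerator maps onto hardware: a per-layer multiply-accumulate followed by an activation, repeated across layers. The layer sizes, the sigmoid activation, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal MLP forward pass. Each layer computes a = f(W @ a + b); this
# multiply-accumulate plus activation is the pattern MLP accelerators target.
# Topology (8-16-4) and sigmoid activation are illustrative only.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, weights, biases):
    """Propagate input x through successive fully connected layers."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Hypothetical 8-16-4 network with random parameters.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]
biases = [rng.standard_normal(16), rng.standard_normal(4)]
output = mlp_forward(rng.standard_normal(8), weights, biases)
print(output.shape)  # (4,)
```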
