NN Compactor: Minimizing Memory and Logic Resources for Small Neural Networks

Seongmin Hong 1,a, Inho Lee 1,b and Yongjun Park 2
1 Hongik University, Seoul, Korea
a seongminhong@mail.hongik.ac.kr
b inholee@mail.hongik.ac.kr
2 Hanyang University, Seoul, Korea
yongjunpark@hanyang.ac.kr

ABSTRACT


Specialized neural accelerators are an appealing hardware platform for machine learning systems because they provide both high performance and energy efficiency. Although various neural accelerators have recently been introduced, they are difficult to adapt to embedded platforms because current neural accelerators require high memory capacity and bandwidth for the fast preparation of synaptic weights. Embedded platforms are often unable to meet these memory requirements because of their limited resources. In FPGA-based Internet of Things (IoT) systems, the problem becomes even worse because computation units generated from logic blocks cannot be fully utilized due to the small size of block memory. To overcome this problem, we propose a novel dual-track quantization technique that reduces synaptic weight bit-width based on value magnitude while minimizing accuracy loss. In this value-adaptive technique, large- and small-valued weights are quantized differently. In this paper, we present a fully automatic framework called NN Compactor that generates a compact neural accelerator by minimizing the memory requirements of synaptic weights through dual-track quantization and minimizing the logic requirements of processing units (PUs), with minimal loss of recognition accuracy. For three widely used datasets, MNIST, CNAE-9, and Forest, experimental results demonstrate that our compact neural accelerator achieves an average performance improvement of 6.4× over a baseline embedded system while using minimal resources and incurring minimal accuracy loss.
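The abstract does not specify the exact dual-track scheme, only that weights are split by magnitude and each group is quantized differently. The sketch below is a minimal illustration of that idea, not the paper's method: it assumes a hypothetical magnitude threshold tau and the (assumed) bit widths bits_small and bits_large, applying a fine uniform quantizer to small-magnitude weights and a coarser one to the rest.

import numpy as np

def dual_track_quantize(weights, tau=0.1, bits_small=4, bits_large=8):
    """Illustrative magnitude-based dual-track quantization.

    Weights with |w| < tau go on a fine-grained small-value track;
    the rest use a coarser large-value track. The threshold and bit
    widths are assumptions for illustration, not the paper's values.
    """
    w = np.asarray(weights, dtype=np.float64)
    out = np.empty_like(w)

    small = np.abs(w) < tau
    # Small-value track: uniform quantization over [-tau, tau].
    step_s = 2.0 * tau / (2 ** bits_small)
    out[small] = np.round(w[small] / step_s) * step_s

    # Large-value track: uniform quantization over the observed range.
    large = ~small
    if large.any():
        max_mag = np.abs(w[large]).max()
        step_l = 2.0 * max_mag / (2 ** bits_large)
        out[large] = np.round(w[large] / step_l) * step_l
    return out

Quantizing each track against its own range is what lets the small-value track spend its (fewer) bits on a narrow interval, which is one plausible way a value-adaptive scheme can shrink weight storage without a large accuracy penalty.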

Keywords: Neural networks, Accelerator, Automation, Quantization.


