SNA: A Siamese Network Accelerator to Exploit the Model-Level Parallelism of Hybrid Network Structure

Xingbin Wang1,2, Boyan Zhao1, Rui Hou1,2 and Dan Meng1
1State Key Laboratory of Information Security, Institute of Information Engineering, CAS, Beijing, China
2School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China

ABSTRACT


The Siamese network is a compute-intensive learning model with growing applicability in a wide range of domains. However, state-of-the-art deep neural network (DNN) accelerators do not work efficiently for Siamese networks, as their designs do not account for the algorithmic properties of these networks. In this paper, we propose a Siamese network accelerator called SNA, the first Simultaneous Multi-Threading (SMT) hardware architecture to perform Siamese network inference with high performance and energy efficiency. We devise an adaptive inter-model computing resource partitioning scheme and a flexible on-chip buffer management mechanism based on the model-parallelism and SMT design philosophies. Our architecture is implemented in Verilog and synthesized in a 65nm technology using Synopsys design tools. We also evaluate it with several typical Siamese networks. Compared to the state-of-the-art accelerator, the SNA architecture offers, on average, a 2.1x speedup and a 1.48x energy reduction.
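For context, the model-level parallelism the title refers to arises because a Siamese network evaluates two identical, weight-shared branches on two independent inputs before a joining layer, so the two branch computations can proceed concurrently, much like two SMT threads. Below is a minimal, hypothetical PyTorch sketch of such a network; the layer shapes and the absolute-difference similarity head are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Twin-branch network: both inputs pass through the SAME
    weight-shared feature extractor, so the two forward passes are
    independent units of work (the model-level parallelism an
    accelerator can exploit). Layer sizes here are illustrative."""
    def __init__(self):
        super().__init__()
        # Shared feature extractor: one copy of the weights serves both branches.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Joining head: fuses the two embeddings into a similarity score.
        self.head = nn.Linear(64, 1)

    def forward(self, x1, x2):
        # The two branch evaluations share weights but not activations,
        # so they form two independent "threads" of computation.
        e1 = self.features(x1)
        e2 = self.features(x2)
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))

net = SiameseNet()
score = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(score.shape)  # torch.Size([1, 1])

Because self.features holds a single copy of the weights, a hardware accelerator can fetch them once and time-share its compute resources between the two branch "threads"; this is the kind of opportunity the SMT-style resource partitioning described in the abstract targets.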


