BNNsplit: Binarized Neural Networks for embedded distributed FPGA-based computing systems
Giorgia Fiscaletti, Marco Speziali, Luca Stornaiuolo, Marco D. Santambrogio and Donatella Sciuto
Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Milan, Italy
giorgia.fiscaletti@mail.polimi.it
marco.speziali@mail.polimi.it
luca.stornaiuolo@polimi.it
marco.santambrogio@polimi.it
donatella.sciuto@polimi.it
ABSTRACT
In the past few years, Convolutional Neural Networks (CNNs) have seen massive improvement, outperforming other visual recognition algorithms. Since they play an increasingly important role in fields such as face recognition, augmented reality, and autonomous driving, there is a growing need for fast and efficient systems to perform the redundant and heavy computations of CNNs. This trend has led researchers towards heterogeneous systems equipped with hardware accelerators, such as GPUs and FPGAs. The vast majority of CNNs are implemented with floating-point parameters and operations, but research has shown that high classification accuracy can also be obtained by reducing the floating-point activations and weights to binary values. This setting is well suited to FPGAs, which are known to stand out in terms of performance when dealing with binary operations, as demonstrated by FINN, the state-of-the-art framework for building Binarized Neural Network (BNN) accelerators on FPGAs. In this paper, we propose a framework that extends FINN to a distributed scenario, enabling BNN implementation on embedded multi-FPGA systems.
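To make concrete why binary activations and weights map so well onto FPGA logic, the following is a minimal sketch (not FINN's actual API; all names are illustrative) of the standard XNOR-popcount trick: once weights and activations are constrained to {-1, +1} and packed into bit masks, a dot product reduces to an XNOR followed by a population count, both of which are extremely cheap in hardware.

```python
# Illustrative sketch of a binarized dot product, as commonly used in BNNs.
# Vectors over {-1, +1} are encoded as bit masks (+1 -> 1, -1 -> 0); the
# floating-point multiply-accumulate is replaced by XNOR + popcount.
# Function names here are assumptions for illustration, not from FINN.

def encode(vals):
    """Pack a {-1, +1} vector into an integer bit mask (+1 -> 1, -1 -> 0)."""
    mask = 0
    for i, v in enumerate(vals):
        if v == +1:
            mask |= 1 << i
    return mask

def binarized_dot(a_bits, w_bits, n):
    """Dot product of two n-element {-1, +1} vectors from their bit masks."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # bit is 1 where signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agreements minus disagreements

a = [+1, -1, +1, +1]
w = [+1, +1, +1, -1]
print(binarized_dot(encode(a), encode(w), len(a)))  # -> 0, same as sum(x*y)
```

On an FPGA, the XNOR and popcount above become a handful of LUTs per lane, which is the efficiency advantage the abstract refers to.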
Keywords: Binarized Neural Networks, BNN, PYNQ, Embedded, Distributed