TAS: Ternarized Neural Architecture Search for Resource-Constrained Edge Devices

Mohammad Loni^a, Hamid Mousavi, Mohammad Riazati^b, Masoud Daneshtalab^c and Mikael Sjödin^d
School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden
^a mohammad.loni@mdh.se
^b mohammad.riazati@mdh.se
^c masoud.daneshtalab@mdh.se
^d mikael.sjodin@mdh.se

ABSTRACT


Ternary Neural Networks (TNNs) compress network weights and activations into a 2-bit representation, resulting in remarkable network compression and energy efficiency. However, there remains a significant accuracy gap between TNNs and their full-precision counterparts. Recent advances in Neural Architecture Search (NAS) promise automated optimization for various deep learning tasks. Unfortunately, this area is unexplored for optimizing TNNs. This paper proposes TAS, a framework that drastically reduces the accuracy gap between TNNs and their full-precision counterparts by integrating quantization into the network design. We observed that directly applying NAS to the ternary domain degrades accuracy, since the search settings are tailored to full-precision networks. To address this problem, we propose (i) a new cell template for ternary networks with maximum gradient propagation; and (ii) a novel learnable quantizer that adaptively relaxes the ternarization mechanism based on the distribution of the weights and activations. Experimental results reveal that TAS delivers 2.64% higher accuracy and ≈2.8× memory saving over competing methods with the same bit-width resolution on the CIFAR-10 dataset. These results suggest that TAS is an effective method that paves the way for the efficient design of the next generation of quantized neural networks.
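
The abstract does not detail the internals of the proposed quantizer. As a rough illustration of what a learnable ternary quantizer can look like, the minimal PyTorch sketch below follows the well-known trained-ternary-quantization pattern: full-precision values are mapped to {-w_n, 0, +w_p}, the scales w_p and w_n are trainable parameters, the threshold fraction t is a hypothetical hyperparameter, and a straight-through estimator carries gradients back to the latent full-precision tensor. This is an assumption-based sketch, not TAS's actual mechanism.

import torch
import torch.nn as nn

class LearnableTernaryQuantizer(nn.Module):
    """Illustrative (hypothetical) ternary quantizer: maps a tensor to
    {-w_n, 0, +w_p} with learnable scales and a data-dependent threshold."""

    def __init__(self, t: float = 0.05):
        super().__init__()
        self.w_p = nn.Parameter(torch.tensor(1.0))   # learnable positive scale
        self.w_n = nn.Parameter(torch.tensor(1.0))   # learnable negative scale
        self.t = t  # relative threshold fraction (assumed hyperparameter)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.t * x.abs().max()               # threshold from the distribution
        pos = (x > delta).float()                    # entries quantized to +w_p
        neg = (x < -delta).float()                   # entries quantized to -w_n
        q = self.w_p * pos - self.w_n * neg          # ternary-valued output
        # Straight-through estimator: the value equals q, while the latent
        # full-precision tensor x still receives an identity gradient.
        return q + (x - x.detach())

# Usage: ternarize a convolution's weights before the forward pass.
quant = LearnableTernaryQuantizer()
w = torch.randn(16, 3, 3, 3, requires_grad=True)
w_t = quant(w)                                       # values in {-w_n, 0, +w_p}
loss = w_t.pow(2).sum()
loss.backward()                                      # gradients reach w, w_p, w_n

Because the quantized output depends linearly on w_p and w_n, both scales receive gradients directly, which lets the quantization levels adapt to the weight distribution during training.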

Keywords: Quantization, Ternary Neural Network, Neural Architecture Search, Embedded Systems.


