Posit Arithmetic for the Training and Deployment of Generative Adversarial Networks

Nhut-Minh Ho1, Duy-Thanh Nguyen2, Himeshi De Silva1, John L. Gustafson1, Weng-Fai Wong1 and Ik Joon Chang2
1National University of Singapore
2Kyung Hee University

ABSTRACT

This paper proposes a set of methods that enables low-precision posit™ arithmetic to be used successfully for training generative adversarial networks (GANs) with minimal quality loss. We show that ultra-low-precision posits, as small as 6 bits, can produce high-quality output in the generation phase after training. We also evaluate floating-point (float) formats and compare them to 8-bit posits in the context of GAN training. Our scaling and adaptive calibration techniques yield 8-bit posit training quality that surpasses 8-bit floats and matches 16-bit floats. Hardware simulation results indicate that our methods are more energy efficient than both 16-bit and 8-bit float training systems.
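For readers unfamiliar with the format, the Python sketch below decodes an n-bit posit with es exponent bits and rounds tensors to the nearest representable posit value. It is illustrative background only, not the paper's implementation: the helper names (posit_to_float, posit_value_table, quantize) are ours, es = 1 for the 6-bit example is an assumed configuration, and the table-based rounding uses nearest-by-distance as a simplification of the posit standard's round-to-nearest-even rule.

    import numpy as np

    def posit_to_float(bits, n, es):
        """Decode an n-bit posit with es exponent bits (standard posit layout)."""
        mask = (1 << n) - 1
        bits &= mask
        if bits == 0:
            return 0.0
        if bits == 1 << (n - 1):
            return float("nan")                  # NaR ("not a real")
        sign = bits >> (n - 1)
        if sign:
            bits = (-bits) & mask                # negative posits: two's complement
        s = format(bits & ((1 << (n - 1)) - 1), "0{}b".format(n - 1))
        run = len(s) - len(s.lstrip(s[0]))       # regime: run of identical bits
        k = run - 1 if s[0] == "1" else -run
        s = s[run + 1:]                          # drop regime and its terminator
        exp_bits, frac_bits = s[:es], s[es:]
        e = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
        f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
        v = (1.0 + f) * 2.0 ** (k * (1 << es) + e)
        return -v if sign else v

    def posit_value_table(n, es):
        """All finite values representable in posit(n, es), sorted ascending."""
        vals = [posit_to_float(p, n, es) for p in range(1 << n)]
        return np.array(sorted(v for v in vals if v == v))   # drop NaR

    def quantize(x, table):
        """Round each entry of x to the nearest posit value (simplified:
        nearest-by-distance rather than round-to-nearest-even)."""
        x = np.asarray(x, dtype=np.float64)
        idx = np.abs(x.ravel()[:, None] - table[None, :]).argmin(axis=1)
        return table[idx].reshape(x.shape)

    # Example: quantize activations to 6-bit posits (es = 1 is an
    # illustrative choice, not necessarily the paper's configuration).
    table6 = posit_value_table(6, 1)
    acts = np.array([0.037, -1.25, 3.9])
    print(quantize(acts, table6))

With only 2^6 = 64 bit patterns, the table-lookup approach is cheap and makes the tapered precision of posits easy to inspect: values near 1.0 are spaced densely, while very large and very small magnitudes are spaced coarsely.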

Keywords: Posit Arithmetic, GAN, Neural Networks.


