Efficient Tensor Cores support in TVM for Low-Latency Deep Learning
Wei Sun, Savvas Sioutas, Sander Stuijk, Andrew Nelson and Henk Corporaal
Eindhoven University of Technology
w.sun@tue.nl
s.sioutas@tue.nl
s.stuijk@tue.nl
a.t.nelson@tue.nl
h.corporaal@tue.nl
ABSTRACT
Deep learning algorithms are gaining popularity in autonomous systems. These systems typically have stringent latency constraints that are challenging to meet given the high computational demands of these algorithms. Nvidia introduced Tensor Cores (TCs) to speed up some of the most commonly used operations in deep learning algorithms. Compilers (e.g., TVM) and libraries (e.g., cuDNN) focus on the efficient usage of TCs when performing batch processing. Latency-sensitive applications, however, cannot exploit large batch processing. This paper presents an extension to the TVM compiler that generates low-latency TC implementations, in particular for batch size 1. Experimental results show that our solution reduces latency on average by 14% compared to the cuDNN library on a desktop RTX 2070 GPU, and by 49% on an embedded Jetson Xavier GPU.