Reconciling QoS and Concurrency in NVIDIA GPUs via Warp-Level Scheduling

Jayati Singh1, Ignacio Sañudo Olmedo2,a, Nicola Capodieci2,b, Andrea Marongiu2,c and Marco Caccamo3
1University of Illinois Urbana-Champaign, United States
jayati@illinois.edu
2University of Modena and Reggio Emilia, Italy
aIgnacioSañudo.Olmedo@unimore.it
bNicola.Capodieci@unimore.it
cAndrea.Marongiu@unimore.it
3Technical University of Munich, Germany
mcaccamo@tum.de

ABSTRACT


The widespread deployment of NVIDIA GPUs in latency-sensitive systems today requires predictable GPU multitasking, which cannot be trivially achieved. The NVIDIA CUDA API allows programmers to easily exploit the processing power provided by these massively parallel accelerators and is one of the major reasons behind their ubiquity. However, NVIDIA GPUs and the CUDA programming model favor throughput over latency and timing predictability. Hence, providing real-time and quality-of-service (QoS) properties to GPU applications presents an interesting research challenge. Such a challenge is paramount when considering simultaneous multikernel (SMK) scenarios, wherein kernels are executed concurrently within each streaming multiprocessor (SM). In this work, we explore QoS-based fine-grained multitasking in SMK via job arbitration at the lowest level of the GPU scheduling hierarchy, i.e., between warps. We present QoS-aware warp scheduling (QAWS) and evaluate it against the state-of-the-art, kernel-agnostic policies seen in NVIDIA hardware today. Since the NVIDIA ecosystem lacks a mechanism to specify and enforce kernel priority at the warp granularity, we implement and evaluate our proposed warp scheduling policy on GPGPU-Sim. QAWS not only improves the response time of the higher-priority tasks but also achieves throughput comparable to or better than that of the state-of-the-art policies.
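To make the idea of warp-level job arbitration concrete, the sketch below shows a minimal, hypothetical priority-aware warp arbiter in C++ (the language of GPGPU-Sim, where the paper's policy is implemented). It is an illustration of the general technique only, not the paper's actual QAWS algorithm: among warps that are ready to issue, it picks the one carrying the highest per-kernel QoS priority, breaking ties in round-robin order. The `Warp` struct, its `priority` field, and `select_warp` are all assumed names invented for this example.

```cpp
// Illustrative sketch only -- NOT the paper's QAWS policy. A warp arbiter
// that prefers warps from higher-priority kernels, with round-robin
// tie-breaking among warps of equal priority.
#include <cassert>
#include <cstddef>
#include <vector>

struct Warp {
    int priority;  // hypothetical per-kernel QoS priority; higher = more urgent
    bool ready;    // operands available, no outstanding stall this cycle
};

// Returns the index of the warp to issue next, or -1 if no warp is ready.
// `last` is the slot issued on the previous cycle; scanning starts just
// after it, so equal-priority warps are served round-robin.
int select_warp(const std::vector<Warp>& warps, int last) {
    int best = -1;
    const std::size_t n = warps.size();
    for (std::size_t k = 1; k <= n; ++k) {
        const std::size_t i = (static_cast<std::size_t>(last) + k) % n;
        if (!warps[i].ready) continue;
        // Strict '>' keeps the first ready warp found in round-robin order
        // when priorities tie.
        if (best < 0 || warps[i].priority > warps[best].priority)
            best = static_cast<int>(i);
    }
    return best;
}
```

A kernel-agnostic baseline such as loose round-robin is the special case in which every warp carries the same priority, so only the tie-breaking order matters.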
