9.5 Attacking Memory and I/O Bottlenecks


Date: Thursday 28 March 2019
Time: 08:30 - 10:00
Location / Room: Room 5

Chair:
Leonidas Kosmidis, Barcelona Supercomputing Center, ES

Co-Chair:
Cristina Silvano, Politecnico di Milano, IT

Focusing on the memory hierarchy and on memory and I/O bottlenecks, this session presents new techniques to better exploit GPU cache memory through adaptive compression, to improve GPU cache utilization by identifying infrequently accessed blocks, and to build a smart SSD-based I/O caching system. The papers in this session showcase new opportunities for alternative cache solutions.

Time | Label | Presentation Title / Authors
08:30 | 9.5.1 | SLC: MEMORY ACCESS GRANULARITY AWARE SELECTIVE LOSSY COMPRESSION FOR GPUS
Speaker:
Sohan Lal, Technical University of Berlin, DE
Authors:
Sohan Lal, Jan Lucas and Ben Juurlink, Technical University of Berlin, DE
Abstract
Memory compression is a promising approach for reducing memory bandwidth requirements and increasing performance; however, memory compression techniques often result in a low effective compression ratio due to the large memory access granularity (MAG) exhibited by GPUs. Our analysis of the distribution of compressed blocks shows that a significant percentage of blocks are compressed to a size that is only a few bytes above a multiple of the MAG, yet a whole burst is fetched from memory. These few extra bytes significantly reduce the compression ratio and the performance gain that could otherwise result from a higher raw compression ratio. To increase the effective compression ratio, we propose a novel MAG-aware Selective Lossy Compression (SLC) technique for GPUs. The key idea of SLC is that when lossless compression yields a compressed size that is a few bytes above a multiple of the MAG, we approximate these extra bytes so that the compressed size becomes a multiple of the MAG. This way, SLC mostly retains the quality of lossless compression and occasionally trades a small amount of accuracy for higher performance. We show a speedup of up to 35% normalized to a state-of-the-art lossless compression technique, with a low loss in accuracy. Furthermore, average energy consumption and energy-delay product are reduced by 8.3% and 17.5%, respectively.
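A minimal sketch of the MAG-aware rounding decision described above, written for illustration only; the granularity, byte threshold, and function names are assumptions, not the paper's parameters:

MAG = 32          # assumed memory access granularity in bytes
THRESHOLD = 4     # assumed maximum number of bytes we are willing to approximate away

def fetch_size(compressed_bytes: int, mag: int = MAG) -> int:
    """Bytes actually transferred: whole bursts of size `mag`."""
    return -(-compressed_bytes // mag) * mag   # ceiling to a multiple of MAG

def slc_decision(lossless_size: int):
    """Decide between lossless and selective lossy compression for one block."""
    overshoot = lossless_size % MAG
    if 0 < overshoot <= THRESHOLD:
        # Only a few bytes spill into the next burst: approximate them away so
        # the block fits into one fewer burst (never below a single burst).
        return "lossy", max(MAG, (lossless_size // MAG) * MAG)
    # Otherwise keep the block lossless and pay for the full ceiling.
    return "lossless", fetch_size(lossless_size)

if __name__ == "__main__":
    for size in (30, 34, 50, 64, 66):
        mode, fetched = slc_decision(size)
        print(f"lossless size {size:3d} B -> {mode:8s}, {fetched} B fetched")

For example, with a 32-byte granularity a block compressed to 34 bytes would normally cost two bursts; approximating the two extra bytes lets it fit into one.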

09:00 | 9.5.2 | LOSCACHE: LEVERAGING LOCALITY SIMILARITY TO BUILD ENERGY-EFFICIENT GPU L2 CACHE
Speaker:
Jingweijia Tan, Jilin University, CN
Authors:
Jingweijia Tan1, Kaige Yan2, Shuaiwen Leon Song3 and Xin Fu4
1Jilin University, CN; 2College of Communication Engineering, Jilin University, CN; 3Pacific Northwest National Laboratory, US; 4University of Houston, US
Abstract
This paper presents a novel energy-efficient cache design for massively parallel, throughput-oriented architectures such as GPUs. Unlike the L1 data cache on modern GPUs, the L2 cache shared by all streaming multiprocessors is not the primary performance bottleneck, but it does consume a large amount of chip energy. We observe that the L2 cache is significantly underutilized, spending 95.6% of its time storing useless data. If such "dead time" in L2 is identified and reduced, L2's energy efficiency can be drastically improved. Fortunately, we discover that the SIMT programming model of GPUs provides a unique feature among threads: instruction-level data locality similarity, which can be used to accurately predict data re-reference counts at the L2 cache block level. We propose a simple design that leverages this locality similarity to build an energy-efficient GPU L2 cache, named LoSCache. Specifically, LoSCache uses the data locality information from a small group of CTAs to dynamically predict the L2-level data re-reference counts of the remaining CTAs. After that, specific L2 cache lines can be powered off if they are predicted to be "dead" after a certain number of accesses. Experimental results on a wide range of applications demonstrate that our proposed design can significantly reduce L2 cache energy, by an average of 64%, with only 0.5% performance loss.
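A minimal sketch of the two-phase idea stated in the abstract (sample a few CTAs, then predict per-load-PC re-reference counts for the rest); all class and field names below are illustrative assumptions rather than the paper's implementation:

from collections import defaultdict

class LoSCachePredictor:
    def __init__(self):
        self.sampled = defaultdict(list)   # PC -> re-reference counts seen during sampling
        self.predicted = {}                # PC -> predicted re-reference count

    # Phase 1: run a small group of CTAs and record, for every block a load PC
    # fills into L2, how many times that block is re-referenced before eviction.
    def record_sample(self, pc: int, rereferences: int):
        self.sampled[pc].append(rereferences)

    def finish_sampling(self):
        # Assumption: counts are similar across threads of the same instruction
        # (SIMT locality similarity), so the per-PC average is a usable predictor.
        self.predicted = {pc: round(sum(c) / len(c)) for pc, c in self.sampled.items()}

    # Phase 2: for the remaining CTAs, a line filled by `pc` is predicted dead
    # (and could be power-gated) once it has been re-referenced that many times.
    def is_dead(self, pc: int, rereferences_so_far: int) -> bool:
        target = self.predicted.get(pc)
        return target is not None and rereferences_so_far >= target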

09:30 | 9.5.3 | LBICA: A LOAD BALANCER FOR I/O CACHE ARCHITECTURES
Speaker:
Reza Salkhordeh, Sharif University of Technology, IR
Authors:
Saba Ahmadian, Reza Salkhordeh and Hossein Asadi, Sharif University of Technology, IR
Abstract
In recent years, enterprise Solid-State Drives (SSDs) have been used in the caching layer of high-performance servers to close the growing performance gap between processing units and the storage subsystem. SSD-based I/O caching is typically not effective in workloads with burst accesses, in which the caching layer itself becomes the performance bottleneck because of the large number of accesses. Existing I/O cache architectures mainly focus on maximizing the cache hit ratio while neglecting the average queueing time of accesses. Previous studies suggested bypassing the cache when burst accesses are identified. These schemes, however, are not applicable to a general cache configuration and also result in significant performance degradation on burst accesses. In this paper, we propose a novel I/O cache load-balancing scheme (LBICA) with adaptive write-policy management to prevent the I/O cache from becoming a performance bottleneck under burst accesses. Unlike previous schemes, which disable the I/O cache or bypass requests to the disk subsystem during bursts, our proposal selectively reduces the number of waiting accesses in the SSD queue and balances the load between the I/O cache and the disk subsystem while providing maximum performance. The proposed scheme characterizes the workload based on the type of in-queue requests and assigns an effective cache write policy. We aim to bypass the accesses which 1) are served faster by the disk subsystem or 2) cannot be merged with other accesses in the I/O cache queue. In doing so, the selected requests are served by the disk layer, preventing the I/O cache from being overloaded. Our evaluations on a physical system show that LBICA reduces the load on the I/O cache by 48% and improves the performance of burst workloads by 30% compared to the latest state-of-the-art load-balancing scheme.
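A hedged sketch of the bypass decision described above; the queue-depth threshold, the "served faster by disk" heuristic, and the request fields are assumptions made for illustration, not LBICA's actual parameters:

from dataclasses import dataclass
from typing import List

@dataclass
class Request:
    offset: int          # byte offset of the access
    size: int            # bytes
    sequential: bool     # part of a sequential stream?

BURST_QUEUE_DEPTH = 64   # assumed queue length beyond which the SSD cache is overloaded

def disk_is_faster(req: Request) -> bool:
    # Assumption: large sequential requests are served well by the disk subsystem.
    return req.sequential and req.size >= 256 * 1024

def can_merge(req: Request, queued: List[Request]) -> bool:
    # Assumption: a request is mergeable if it is contiguous with a queued one.
    return any(q.offset + q.size == req.offset or req.offset + req.size == q.offset
               for q in queued)

def route(req: Request, ssd_queue: List[Request]) -> str:
    """Return 'ssd' or 'disk' for an incoming request."""
    if len(ssd_queue) < BURST_QUEUE_DEPTH:
        return "ssd"                           # no burst: cache as usual
    if disk_is_faster(req) or not can_merge(req, ssd_queue):
        return "disk"                          # selectively offload during the burst
    return "ssd"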

10:00 | IP4-14, 356 | CARS: A MULTI-LAYER CONFLICT-AWARE REQUEST SCHEDULER FOR NVME SSDS
Speaker:
Tianming Yang, Huanghuai University, CN
Authors:
Tianming Yang1, Ping Huang2, Weiying Zhang3, Haitao Wu1 and Longxin Lin4
1Huanghuai University, CN; 2Temple University, US; 3Northeastern University, CN; 4Jinan University, CN
Abstract
NVMe SSDs are nowadays widely deployed in various computing platforms, especially in data centers supporting modern latency-sensitive applications, due to their high performance and low power consumption. NVMe SSDs improve on SATA- and SAS-interfaced SSDs by providing a large number of device I/O queues at the host side, and applications can directly manage these queues to concurrently issue requests to the device. However, the currently deployed request scheduling approach is oblivious to the states of the various device-internal components and may therefore make suboptimal decisions due to resource contention at different layers inside the SSD. In this work, we propose a Conflict-Aware Request Scheduling policy named CARS for NVMe SSDs to maximally leverage the rich parallelism available in modern NVMe SSDs. The central idea is to check the possible conflicts a fetched request might incur before dispatching it. If there is a conflict, the scheduler refrains from issuing the request and moves on to check a request in the next submission queue. In doing so, our scheduler can evenly distribute requests among the parallel idle components in the flash chips, improving performance. Our evaluations show that our scheduler can reduce the slowdown metric by up to 46% relative to the de facto round-robin scheduling policy for a variety of patterned workloads.
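An illustrative sketch of a conflict-aware fetch loop over NVMe submission queues (SQs); the channel/chip mapping and busy-set bookkeeping are assumptions made for this example, not the paper's exact design:

from collections import deque
from typing import List, Optional, Set, Tuple

NUM_CHANNELS, CHIPS_PER_CHANNEL = 8, 4

def target_of(lpn: int) -> Tuple[int, int]:
    """Assumed static striping of logical pages over channels and chips."""
    return lpn % NUM_CHANNELS, (lpn // NUM_CHANNELS) % CHIPS_PER_CHANNEL

def fetch_next(sqs: List[deque], busy: Set[Tuple[int, int]]) -> Optional[int]:
    """Walk the SQs in order; skip a queue whose head request targets a
    (channel, chip) that is already busy and try the next SQ instead."""
    for sq in sqs:
        if not sq:
            continue
        lpn = sq[0]
        target = target_of(lpn)
        if target in busy:
            continue            # conflict: leave the request queued, check next SQ
        sq.popleft()
        busy.add(target)        # dispatched; the component is busy until completion
        return lpn
    return None                 # all candidates conflict, or all queues are empty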

10:01 | IP4-15, 878 | QUEUE BASED MEMORY MANAGEMENT UNIT FOR HETEROGENEOUS MPSOCS
Speaker:
Robert Wittig, TU Dresden, DE
Authors:
Robert Wittig, Mattis Hasler, Emil Matus and Gerhard Fettweis, TU Dresden, DE
Abstract
Sharing tightly coupled memory in a multi-processor system-on-chip is a promising approach to improve the programming flexibility as well as to ease the constraints imposed by area and power. However, it poses a challenge in terms of access latency. In this paper, we present a queue based memory management unit which combines the low latency access of shared tightly coupled memory with the flexibility of a traditional memory management unit. Our passive conflict detection approach significantly reduces the critical path compared to previously proposed methods while preserving the flexibility associated with dynamic memory allocation and heterogeneous data widths.
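A very rough reading of the abstract's idea, sketched under the assumption that "passive conflict detection" means requests to the same memory bank simply serialize in that bank's queue instead of being resolved by arbitration logic on the critical path; the bank count and address mapping are invented for the example:

from collections import deque

NUM_BANKS = 8
bank_queues = [deque() for _ in range(NUM_BANKS)]

def enqueue(address: int, data=None):
    """Steer a request to its bank queue; conflicting requests wait in order,
    off the critical path of non-conflicting accesses."""
    bank = (address // 4) % NUM_BANKS      # assumed word-interleaved mapping
    bank_queues[bank].append((address, data))

def cycle():
    """Each cycle, every bank independently services at most one request."""
    return [q.popleft() for q in bank_queues if q]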

10:00 | End of session
Coffee Break in Exhibition Area


