Unifying Temporal and Spatial Locality for Cache Management inside SSDs

Zhibing Sha1, Zhigang Cai1,a, François Trahay2, Jianwei Liao1 and Dong Yin3,b
1College of Computer and Information Science, Southwest University, Chongqing, China
aczg@swu.edu.cn
2Telecom SudParis, Institut Polytechnique de Paris, France
3School of Computer Science and Technology, Huaihua University, Huaihua, China
byindong2050@126.com

ABSTRACT


To ensure better I/O performance of solid-state drives (SSDs), a dynamic random access memory (DRAM) is commonly equipped as a cache to absorb writes and overwrites, instead of directly flushing them onto the underlying SSD cells. This paper focuses on the management of the small cache inside SSDs. First, we propose to unify the temporal and spatial locality of user applications by employing the visibility graph technique, to direct cache management. Next, we propose to support batch adjustment of adjacent or nearby (hot) cached data pages by referring to the connections in the visibility graph of all cached pages. Finally, we propose to evict buffered data pages in batches, to maximize the internal flushing parallelism of SSD devices without worsening I/O congestion. Trace-driven simulation experiments show that our proposal improves cache hits by more than 2.8% and reduces the overall I/O latency by 20.2% on average, in contrast to conventional cache schemes inside SSDs.
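To make the visibility-graph idea mentioned above concrete, the sketch below builds a natural visibility graph over a handful of cached pages. It is only an illustration under assumed conventions, not the paper's actual construction: here each cached page is treated as a point whose x-coordinate is its logical page number (the spatial dimension) and whose y-coordinate is its access count (the temporal dimension), and two pages are linked whenever the straight line between their points is not blocked by any page lying between them. The paper's precise series definition and batching policy are given in the full text.

```c
/* Minimal natural visibility-graph sketch over cached pages.
 * Assumptions (illustrative only, not taken from the paper):
 *   x-axis = logical page number (spatial locality)
 *   y-axis = access count of the cached page (temporal locality)
 * Pages whose points "see" each other get an edge; pages with many
 * edges would be the hot candidates handled in a batch. */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 6

/* Natural visibility criterion: every intermediate point k must lie
 * strictly below the line segment joining points i and j. */
static bool visible(const double x[], const double y[], int i, int j) {
    for (int k = i + 1; k < j; k++) {
        double y_line = y[j] + (y[i] - y[j]) * (x[j] - x[k]) / (x[j] - x[i]);
        if (y[k] >= y_line)
            return false;
    }
    return true;
}

int main(void) {
    /* Hypothetical cached pages, sorted by logical page number. */
    double lpn[NPAGES]  = { 10, 11, 12, 15, 16, 20 };  /* spatial axis  */
    double hits[NPAGES] = {  3,  1,  4,  2,  5,  1 };  /* temporal axis */

    /* Emit one edge per mutually visible pair of pages. */
    for (int i = 0; i < NPAGES; i++)
        for (int j = i + 1; j < NPAGES; j++)
            if (visible(lpn, hits, i, j))
                printf("page %.0f <-> page %.0f\n", lpn[i], lpn[j]);
    return 0;
}
```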

Keywords: Solid-state Drives, Cache Management, Locality of Reference, Visibility Graph, Batch Adjustment.
