A Write-Efficient Cache Algorithm based on Macroscopic Trend for NVM-based Read Cache

Ning Bao1,a, Yunpeng Chai1,b and Xiao Qin2
1Key Laboratory of Data Engineering and Knowledge Engineering (MOE), School of Computing, Renmin University of China, Beijing, China
a baoning@ruc.edu.cn
b ypchai@ruc.edu.cn
2Samuel Ginn College of Engineering, Auburn University, Auburn, USA
xqin@auburn.edu

ABSTRACT


Compared with traditional storage technologies, non-volatile memory (NVM) techniques offer excellent I/O performance, but they suffer from high costs and limited write endurance (e.g., NAND and PCM) or high write energy consumption (e.g., STT-MRAM). As a result, storage systems prefer to utilize NVM devices as read caches to boost performance. Unlike write caches, read caches have greater potential for write reduction because their writes are triggered only by cache updates. However, traditional cache algorithms such as LRU and LFU must update cached blocks frequently because they have difficulty predicting data popularity over the long term. Although newer algorithms like SieveStore reduce cache write pressure, they still rely on these traditional cache schemes for data popularity prediction. Because of this poor long-term popularity prediction, these newer cache algorithms cause a significant and unnecessary decrease in cache hit ratios. In this paper, we propose a new Macroscopic Trend (MT) cache replacement algorithm that reduces cache updates effectively while maintaining high cache hit ratios. The algorithm discovers long-term hot data by observing the macroscopic access trend of data blocks. We have conducted extensive experiments driven by a series of real-world traces, and the results indicate that, compared with LRU, the MT cache algorithm achieves a 15.28 times longer lifetime (or correspondingly lower write energy consumption) for NVM caches with a similar hit ratio.
