An Efficient Resource-Optimized Learning Prefetcher for Solid State Drives

Rui Xu, Xi Jin, Linfeng Tao, Shuaizhi Guo, Zikun Xiang and Teng Tian
Key Laboratory of Strongly-Coupled Quantum Matter Physics, Chinese Academy of Sciences, School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
xray@mail.ustc.edu.cn
jinxi@mail.ustc.edu.cn
tlf@mail.ustc.edu.cn
dybjxmg@mail.ustc.edu.cn
xzk372z@mail.ustc.edu.cn
tianteng@mail.ustc.edu.cn

ABSTRACT


In recent years, solid-state drives (SSDs) have been widely deployed in modern storage systems. To improve SSD performance, prefetchers have been designed both at the operating system (OS) layer and in the flash translation layer (FTL). Prefetchers in the FTL offer advantages such as OS independence, ease of use, and compatibility. However, due to limited computing capability and memory resources, existing FTL prefetchers employ only simple sequential prefetching, which can incur a high penalty cost on I/O access streams with complex patterns. In this paper, an efficient learning prefetcher implemented in the FTL is proposed. Considering the resource limitations of SSDs, a learning algorithm based on Markov chains is employed and optimized so that a high hit ratio and a low penalty cost can be achieved even for complex access patterns. To validate the design, a simulator with the prefetcher is implemented based on FlashSim. The TPC-H benchmark and an application-launch trace are run on the simulator. According to the experimental results on the TPC-H benchmark, more than 90% of the memory cost can be saved compared with a previous design at the OS layer. The hit ratio is increased by 24.1% and the number of mis-prefetches is reduced by 95.8% compared with the simple sequential prefetching strategy.
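To illustrate the general idea of a Markov-chain-based prefetcher, the sketch below shows a first-order transition table keyed by the last accessed logical block address (LBA): successor counts are accumulated on every access, and a block is prefetched only when its estimated transition probability exceeds a threshold. This is a minimal sketch of the technique in general; the table sizes, threshold, and all identifiers are illustrative assumptions, not the FTL implementation described in the paper.

```c
/*
 * Illustrative sketch only: a first-order Markov-chain next-block predictor.
 * Table dimensions and the probability threshold are assumed values.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 1024   /* tracked source blocks (assumed)            */
#define SUCCESSORS 4      /* successor slots per source block (assumed) */
#define THRESHOLD  0.5    /* minimum estimated probability to prefetch  */

struct entry {
    uint32_t lba;                /* source logical block address */
    uint32_t next[SUCCESSORS];   /* observed successor LBAs      */
    uint32_t count[SUCCESSORS];  /* transition counts per slot   */
    uint32_t total;              /* total transitions from lba   */
};

static struct entry table[TABLE_SIZE];

static struct entry *lookup(uint32_t lba)
{
    return &table[lba % TABLE_SIZE];  /* direct-mapped for simplicity */
}

/* Record the transition prev -> cur in the Markov table. */
static void update(uint32_t prev, uint32_t cur)
{
    struct entry *e = lookup(prev);
    if (e->total == 0 || e->lba != prev) {  /* (re)claim the slot */
        memset(e, 0, sizeof(*e));
        e->lba = prev;
    }
    for (int i = 0; i < SUCCESSORS; i++) {
        if (e->count[i] == 0 || e->next[i] == cur) {
            e->next[i] = cur;
            e->count[i]++;
            e->total++;
            return;
        }
    }
    e->total++;  /* successor set full; only the total is updated */
}

/* Return the most likely next LBA, or -1 if no prediction is confident. */
static int64_t predict(uint32_t cur)
{
    struct entry *e = lookup(cur);
    if (e->lba != cur || e->total == 0)
        return -1;
    uint32_t best = 0;
    int best_i = -1;
    for (int i = 0; i < SUCCESSORS; i++) {
        if (e->count[i] > best) {
            best = e->count[i];
            best_i = i;
        }
    }
    if (best_i < 0 || (double)best / e->total < THRESHOLD)
        return -1;
    return e->next[best_i];
}

int main(void)
{
    /* Toy access stream: the transition 7 -> 9 repeats and becomes predictable. */
    uint32_t stream[] = {7, 9, 3, 7, 9, 5, 7};
    uint32_t prev = stream[0];
    for (size_t i = 1; i < sizeof(stream) / sizeof(stream[0]); i++) {
        int64_t p = predict(prev);
        if (p >= 0)
            printf("after LBA %u: prefetch %lld (%s)\n", prev, (long long)p,
                   p == (int64_t)stream[i] ? "hit" : "mis-prefetch");
        update(prev, stream[i]);
        prev = stream[i];
    }
    return 0;
}
```

Bounding both the number of tracked source blocks and the successor slots per block keeps the table at a fixed size, which is what makes a Markov-style predictor plausible within the limited DRAM of an SSD controller; the probability threshold is the knob that trades hit ratio against mis-prefetch penalty.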


