T-SKID: Predicting When to Prefetch Separately from Address Prediction

Toru Koizumi, Tomoki Nakamura, Yuya Degawa, Hidetsugu Irie, Shuichi Sakai and Ryota Shioya
Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
koizumi@mtl.t.u-tokyo.ac.jp
tomokin@mtl.t.u-tokyo.ac.jp
degawa@mtl.t.u-tokyo.ac.jp
irie@mtl.t.u-tokyo.ac.jp
sakai@mtl.t.u-tokyo.ac.jp
shioya@ci.i.u-tokyo.ac.jp

ABSTRACT


Prefetching is an important technique for reducing the number of cache misses and improving processor performance, and various prefetchers have thus been proposed. Many prefetchers focus on issuing prefetches sufficiently earlier than the corresponding demand accesses to hide miss latency. In contrast, we propose T-SKID, a prefetcher that focuses on delaying prefetches. If a prefetcher issues a prefetch too early, the prefetched line may be evicted before it is referenced. We found that existing prefetchers often issue such too-early prefetches, and this observation offers new opportunities to improve performance. To tackle this issue, T-SKID performs timing prediction independently of address prediction. In addition to issuing prefetches sufficiently early as existing prefetchers do, T-SKID can delay the issue of a prefetch until an appropriate time when necessary. We evaluated T-SKID through simulations using SPEC CPU 2017. The results show that T-SKID achieves a 5.6% performance improvement in a multi-core environment over Instruction Pointer Classifier based Prefetching, a state-of-the-art prefetcher.
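The idea of decoupling timing prediction from address prediction can be illustrated with a toy model. This is only a hedged sketch, not the actual T-SKID design: the class name, the fixed per-prefetcher `skid` delay, and the simple per-PC stride table are all hypothetical simplifications introduced here for illustration.

```python
# Toy sketch: address prediction (per-PC stride table) is kept separate from
# timing prediction (a "skid" delay, measured in subsequent memory accesses,
# before a queued prefetch is actually issued). Hypothetical, not T-SKID itself.

class ToyTimingAwarePrefetcher:
    def __init__(self, skid=4):
        self.table = {}     # pc -> (last_addr, last_stride): address prediction state
        self.pending = []   # [remaining_delay, prefetch_addr]: timing prediction state
        self.skid = skid    # issue delay in accesses (fixed here; learned in reality)
        self.issued = []    # prefetch addresses actually sent to the cache

    def access(self, pc, addr):
        # Timing side: age every pending prefetch; issue those whose delay expired.
        for entry in self.pending:
            entry[0] -= 1
        self.issued += [a for d, a in self.pending if d <= 0]
        self.pending = [[d, a] for d, a in self.pending if d > 0]

        # Address side: classic stride prediction per load PC.
        last, stride = self.table.get(pc, (None, None))
        if last is not None:
            new_stride = addr - last
            if new_stride == stride and stride != 0:
                # Confident stride: queue the prefetch, but delay its issue
                # instead of sending it to the cache immediately.
                self.pending.append([self.skid, addr + stride])
            stride = new_stride
        self.table[pc] = (addr, stride)
```

In this sketch, a too-early prefetch is avoided simply by holding the predicted address in a queue for `skid` further accesses; the address predictor never needs to know when the line will be used, which is the separation of concerns the abstract describes.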


