Processing in 3D memories to speed up operations on complex data structures

Paulo C. Santos1,a, Geraldo F. Oliveira1,b, João P. Lima1,c, Marco A. Z. Alves2, Luigi Carro1,d and Antonio C. S. Beck1,e
1Informatics Institute ‐ Federal University of Rio Grande do Sul ‐ Porto Alegre, Brazil
a pcssjunior@inf.ufpr.br
b gfojunior@inf.ufpr.br
c jplima@inf.ufpr.br
d carro@inf.ufpr.br
e caco@inf.ufpr.br
2Department of Informatics ‐ Federal University of Paraná ‐ Curitiba, Brazil
mazalves@inf.ufpr.br

ABSTRACT


Pointer chasing has been, for years, a kernel operation in diverse data structures, from graphs to hash tables and dictionaries. However, given the bewildering growth in the volume of data that current applications must handle, pointer chasing has become a major performance and energy bottleneck due to its sparse memory access behavior. In this work, we tackle this problem by taking advantage of the parallelism already available in today's 3D-stacked memories. We present a simple mechanism that accelerates pointer-chasing operations by building on a state-of-the-art PIM design that executes vector operations in memory. The key idea behind our design is to issue speculative loads in parallel over a reconfigurable window of addresses around a given memory address. Our design performs pointer-chasing operations on a B+tree 4.9× faster than a modern baseline system. Moreover, since our design avoids data movement, it also reduces energy consumption by 85% compared to the baseline.
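To make the targeted access pattern concrete, the minimal C sketch below (not taken from the paper; btree_node, ORDER and pim_prefetch_window are illustrative names introduced here) shows a B+tree lookup as a chain of dependent loads, with a hint marking the contiguous window of child pointers that an in-memory mechanism of this kind could load speculatively, in parallel, before software decides which child to follow. On the proposed hardware that window load would happen inside the 3D-stacked memory's logic layer; here it appears only as a placeholder.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified B+tree node; ORDER and the field layout are assumptions
 * made for illustration, not the structure used in the paper. */
#define ORDER 8

typedef struct btree_node {
    int nkeys;
    int is_leaf;
    int64_t keys[ORDER - 1];
    struct btree_node *child[ORDER]; /* unused (NULL) in leaves */
} btree_node;

/* Placeholder for the in-memory mechanism: conceptually, the 3D-stacked
 * memory speculatively loads, in parallel, a reconfigurable window of
 * addresses starting at 'base'. From software it is shown as a no-op hint. */
static inline void pim_prefetch_window(const void *base, size_t bytes)
{
    (void)base;
    (void)bytes;
}

/* Classic pointer chasing: every iteration's load depends on the
 * previous one, which is what makes the traversal latency-bound. */
int64_t *btree_search(btree_node *node, int64_t key)
{
    while (node != NULL) {
        /* The candidate children form a contiguous window of pointers;
         * the mechanism can fetch all of them before the comparisons
         * below decide which single child is actually needed. */
        pim_prefetch_window(node->child, sizeof(node->child));

        int i = 0;
        while (i < node->nkeys && key >= node->keys[i])
            i++;

        if (node->is_leaf)
            return (i > 0 && node->keys[i - 1] == key) ? &node->keys[i - 1]
                                                       : NULL;

        node = node->child[i]; /* dependent load: the bottleneck */
    }
    return NULL;
}

int main(void)
{
    btree_node leaf = { .nkeys = 2, .is_leaf = 1, .keys = { 10, 20 } };
    printf("key 20 %s\n", btree_search(&leaf, 20) ? "found" : "not found");
    return 0;
}

The point of the sketch is the dependent load at the end of the loop: a conventional system must wait for it to complete before the next node can be fetched, whereas resolving a whole window of candidate children inside the memory hides that latency and keeps the unused candidates from being moved to the processor.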

Keywords: In‐Memory Processing, Pointer‐Chasing, Big Data, Reconfigurable, Vector Instructions, Hybrid Memory Cube


