Indirection Stream Semantic Register Architecture for Efficient Sparse-Dense Linear Algebra

Paul Scheffler1,a, Florian Zaruba1,b, Fabian Schuiki1,c, Torsten Hoefler2 and Luca Benini1,d
1Integrated Systems Laboratory, ETH Zurich, Switzerland
apaulsc@iis.ee.ethz.ch
bzarubaf@iis.ee.ethz.ch
cfschuiki@iis.ee.ethz.ch
dlbenini@iis.ee.ethz.ch
2Scalable Parallel Computing Laboratory, ETH Zurich, Switzerland
htor@inf.ethz.ch

ABSTRACT


Sparse-dense linear algebra is crucial in many domains, but challenging to handle efficiently on CPUs, GPUs, and accelerators alike, since multiplications with sparse formats like CSR and CSF require indirect memory lookups. In this work, we enhance a memory-streaming RISC-V ISA extension to accelerate sparse-dense products through streaming indirection. We present efficient dot, matrix-vector, and matrix-matrix product kernels using our hardware, enabling single-core FPU utilization of up to 80% and speedups of up to 7.2x over an optimized baseline without extensions. A matrix-vector implementation on a multicore cluster is up to 5.8x faster and 2.7x more energy-efficient with our kernels than an optimized baseline. We propose further uses for our indirection hardware, such as scatter-gather operations and codebook decoding, and compare our work to state-of-the-art CPU, GPU, and accelerator approaches, measuring a 2.8x higher peak FP64 utilization in CSR matrix-vector multiplication than a GTX 1080 Ti GPU running a cuSPARSE kernel.
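To make the indirection pattern concrete, the following is a minimal, generic CSR sparse-matrix times dense-vector (SpMV) sketch in C, not the paper's kernel; the function name, signature, and example matrix are illustrative. The access x[col_idx[j]] is the indirect memory lookup the abstract refers to: the column index must be loaded before the dense operand can be fetched.

```c
#include <stdio.h>

/* Minimal CSR SpMV sketch (illustrative only, not the paper's kernel).
 * The x[col_idx[j]] access is the indirect memory lookup that sparse
 * formats like CSR impose on the dense operand. */
static void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
                     const double *vals, const double *x, double *y) {
    for (int i = 0; i < n_rows; ++i) {
        double acc = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; ++j) {
            acc += vals[j] * x[col_idx[j]];  /* indirect lookup into x */
        }
        y[i] = acc;
    }
}

int main(void) {
    /* 3x3 example matrix:
     * [ 1 0 2 ]
     * [ 0 3 0 ]
     * [ 4 0 5 ] */
    const int row_ptr[] = {0, 2, 3, 5};
    const int col_idx[] = {0, 2, 1, 0, 2};
    const double vals[] = {1.0, 2.0, 3.0, 4.0, 5.0};
    const double x[]    = {1.0, 1.0, 1.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    for (int i = 0; i < 3; ++i)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}
```

In hardware terms, each inner-loop iteration issues a dependent load chain (index, then value), which is what the streaming-indirection extension described in the paper is designed to off-load from the core.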

Keywords: Computer Architecture, Hardware Acceleration, Linear Algebra, Sparse Computation, Sparse Tensors.
