doi: 10.7873/DATE.2015.0071


Accelerating Complex Brain-Model Simulations on GPU Platforms


H.A. Du Nguyen1,a, Zaid Al-Ars1,b, Georgios Smaragdos2,c and Christos Strydis2,d

1Laboratory of Computer Engineering, Faculty of EE, Mathematics and CS, Delft University of Technology, Delft, The Netherlands.

aH.A.DuNguyen@tudelft.nl
bZ.Al-Ars@tudelft.nl

2Neuroscience Department, Erasmus Medical Center, Rotterdam, The Netherlands.

cg.smaragdos@erasmusmc.nl
dc.strydis@erasmusmc.nl

ABSTRACT

The Inferior Olive (IO) in the brain, in conjunction with the cerebellum, is responsible for crucial sensorimotor-integration functions in humans. In this paper, we simulate a computationally challenging IO neuron model, consisting of three compartments per neuron in a network arrangement, on GPU platforms. Several GPU platforms of the two latest NVIDIA GPU architectures (Fermi, Kepler) have been used to simulate large-scale IO-neuron networks. These networks have been ported to 4 diverse GPU platforms, and the implementation has been optimized, achieving a 3x speedup compared to the unoptimized version. The effects of GPU L1-cache configuration and thread-block size, as well as the impact of the application's numerical precision, on performance have been evaluated and the best configurations have been chosen. In effect, a maximum speedup of 160x has been achieved with respect to a reference CPU platform.
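The abstract mentions tuning the GPU L1-cache configuration and the thread-block size. Below is a minimal, hypothetical CUDA sketch of how such knobs are typically exposed; the kernel name (ioNeuronStep), the placeholder update rule, and the chosen sizes are illustrative assumptions, not the paper's actual implementation.

    // Hypothetical sketch: one thread per neuron, with a tunable block size
    // and an L1-cache preference set per kernel (a Fermi/Kepler-era knob).
    #include <cuda_runtime.h>

    __global__ void ioNeuronStep(float *state, int numNeurons) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per neuron
        if (i < numNeurons) {
            state[i] += 0.1f * (1.0f - state[i]);        // placeholder update rule
        }
    }

    int main() {
        const int numNeurons = 96 * 96;                  // illustrative network size
        const int blockSize  = 128;                      // tunable thread-block size

        float *state;
        cudaMalloc(&state, numNeurons * sizeof(float));
        cudaMemset(state, 0, numNeurons * sizeof(float));

        // Prefer a larger L1 cache over shared memory for this kernel.
        cudaFuncSetCacheConfig(ioNeuronStep, cudaFuncCachePreferL1);

        int numBlocks = (numNeurons + blockSize - 1) / blockSize;
        ioNeuronStep<<<numBlocks, blockSize>>>(state, numNeurons);
        cudaDeviceSynchronize();

        cudaFree(state);
        return 0;
    }

In practice, the best block size and cache split depend on the kernel's register and memory footprint, which is why the paper evaluates several configurations per platform.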
