Exploiting Parallelism with Vertex-Clustering in Processing-In-Memory-based GCN Accelerators

Yu Zhu, Zhenhua Zhu, Guohao Dai, Kai Zhong, Huazhong Yang and Yu Wang
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
yu-wang@tsinghua.edu.cn

ABSTRACT


Recently, Graph Convolutional Networks (GCNs) have shown powerful learning capabilities in graph processing tasks. Computing GCNs on conventional von Neumann architectures usually suffers from limited memory bandwidth due to irregular memory accesses. Recent work has proposed Processing-In-Memory (PIM) architectures to overcome the bandwidth bottleneck in Convolutional Neural Networks (CNNs) by performing in-situ matrix-vector multiplication. However, the performance improvement and computation parallelism of existing CNN-oriented PIM architectures are hindered when performing GCNs because of the large scale and sparsity of graphs.

To tackle these problems, this paper presents a parallelism enhancement framework for PIM-based GCN architectures. At the software level, we propose a fixed-point quantization method for GCNs, which reduces the PIM computation overhead with little accuracy loss. We also apply a vertex-clustering algorithm to the graph, minimizing inter-cluster links and enabling cluster-level parallel computing on multi-core systems. At the hardware level, we design a Resistive Random Access Memory (RRAM) based multi-core PIM architecture for GCNs, which supports cluster-level parallelism. Besides, we propose a coarse-grained pipeline dataflow to hide the RRAM write costs and improve GCN computation throughput. At the software/hardware interface level, we propose a PIM-aware GCN mapping strategy to achieve the optimal tradeoff between resource utilization and computation performance. We also propose edge-dropping methods to reduce inter-core communications with little accuracy loss. We evaluate our framework on typical datasets with multiple widely-used GCN models. Experimental results show that the proposed framework achieves 698×, 89×, and 41× speedup with 7108×, 255×, and 31× energy efficiency enhancement compared with CPUs, GPUs, and ASICs, respectively.
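Below is a minimal, illustrative Python sketch of the software-level ideas summarized above: uniform fixed-point quantization of GCN tensors, vertex clustering that prefers intra-cluster edges, and dropping of inter-cluster edges to reduce core-to-core traffic. It is not the paper's implementation; all function names, the greedy clustering heuristic, and the keep_ratio parameter are hypothetical stand-ins (a real system would likely use a balanced graph partitioner such as a METIS-style tool).

```python
# Illustrative sketch only; the paper's actual quantization, clustering,
# and edge-dropping methods may differ from these hypothetical stand-ins.
import numpy as np


def quantize_fixed_point(x: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Uniform symmetric fixed-point quantization of a tensor."""
    scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1) + 1e-12
    q = np.clip(np.round(x / scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q * scale  # dequantized values, useful for accuracy checks


def cluster_vertices(adj: np.ndarray, n_clusters: int) -> np.ndarray:
    """Toy balanced vertex clustering that tries to keep edges intra-cluster."""
    n = adj.shape[0]
    order = np.argsort(-adj.sum(axis=1))      # visit high-degree vertices first
    labels = np.full(n, -1, dtype=int)
    cap = int(np.ceil(n / n_clusters))        # balanced cluster capacity
    sizes = np.zeros(n_clusters, dtype=int)
    for v in order:
        # prefer the cluster holding most of v's already-labelled neighbours
        votes = np.zeros(n_clusters)
        for u in np.where(adj[v] > 0)[0]:
            if labels[u] >= 0:
                votes[labels[u]] += 1
        votes[sizes >= cap] = -np.inf         # respect the balance constraint
        if votes.max() > 0:
            labels[v] = int(np.argmax(votes))
        else:
            # no labelled neighbours (or their clusters are full): least-loaded cluster
            labels[v] = int(np.argmin(np.where(sizes < cap, sizes, n + 1)))
        sizes[labels[v]] += 1
    return labels


def drop_inter_cluster_edges(adj: np.ndarray, labels: np.ndarray,
                             keep_ratio: float = 0.5) -> np.ndarray:
    """Randomly drop a fraction of inter-cluster edges to cut core-to-core traffic."""
    rng = np.random.default_rng(0)
    out = adj.copy()
    rows, cols = np.nonzero(np.triu(adj))
    for r, c in zip(rows, cols):
        if labels[r] != labels[c] and rng.random() > keep_ratio:
            out[r, c] = out[c, r] = 0
    return out


if __name__ == "__main__":
    # Toy usage on a random sparse graph with 4 hypothetical PIM cores.
    rng = np.random.default_rng(42)
    A = (rng.random((64, 64)) < 0.05).astype(float)
    A = np.maximum(A, A.T)                    # symmetrize adjacency
    parts = cluster_vertices(A, n_clusters=4)
    A_pruned = drop_inter_cluster_edges(A, parts, keep_ratio=0.5)
```

In this sketch, each vertex cluster would map to one PIM core, so every dropped inter-cluster edge directly removes an inter-core transfer; the paper's actual mapping strategy and edge-dropping criteria may be more sophisticated.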


