Future of HPC: Diversifying Heterogeneity

Dejan Milojicic1, Paolo Faraboschi2, Nicolas Dube3 and Duncan Roweth4
1System Architecture Lab, Hewlett Packard Labs, Palo Alto, CA
dejan.milojicic@hpe.com
2AI Research Lab, Hewlett Packard Labs, Palo Alto, CA
paolo.faraboschi@hpe.com
3HPC CT Office, Hewlett Packard Enterprise, Quebec, Canada
nicolas.dube@hpe.com
4HPC CT Office, Hewlett Packard Enterprise, Bristol, UK
duncan.roweth@hpe.com

ABSTRACT


After the end of Dennard scaling and with the imminent end of Moore’s Law, it has become challenging to continue scaling HPC systems within a given power envelope. The problem is most acute in large systems, such as high-end supercomputers. To alleviate it, general-purpose computing is no longer sufficient, and HPC systems and components are being augmented with special-purpose hardware. Because specialization is, by definition, narrowly applicable, broad supercomputing adoption requires a mix of heterogeneous components, each optimized for a specific application domain. In this paper, we discuss the impact of the heterogeneity introduced by specialization across the HPC stack: interconnects, including memory models; accelerators, including power and cooling; use cases and applications, including AI; and delivery models, such as traditional, as-a-Service, and federated. We believe that a stack that supports diversification across hardware and software is required to continue scaling performance while maintaining energy efficiency.

Keywords: High Performance Computing (HPC), Artificial Intelligence (AI), Interconnects, Accelerators, Delivery models, as-a-Service (aaS), Heterogeneity, Diversification.


