Big vs Little Core for Energy-Efficient Hadoop Computing
Maria Malik¹, Katayoun Neshatpour¹, Tinoosh Mohsenin², Avesta Sasan¹ and Houman Homayoun¹
¹Department of Electrical and Computer Engineering, George Mason University
{mmalik9, kneshatp, asasan, hhomayou}@gmu.edu
²Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County
tinoosh@umbc.edu
ABSTRACT
The rapid growth of data poses challenges to processing it efficiently on current high-performance server architectures built around big Xeon cores. Furthermore, physical design constraints, such as power and density, have become the dominant limiting factors for scaling out servers. Heterogeneous architectures that combine big Xeon cores with little Atom cores have emerged as a promising solution for enhancing energy efficiency, since each application can run on an architecture that matches its resource needs more closely than a one-size-fits-all design. The question of whether to map an application to the big Xeon or the little Atom cores in a heterogeneous server architecture therefore becomes important. In this paper, we characterize Hadoop-based applications and their corresponding MapReduce tasks on big Xeon- and little Atom-based server architectures to understand how the choice of big vs. little cores is affected by various parameters at the application, system, and architecture levels, and by the interplay among these parameters. Furthermore, we evaluate the operational and capital costs to understand how performance, power, and area constraints for big data analytics affect the choice of big- vs. little-core servers as the more cost- and energy-efficient architecture.
Keywords: Heterogeneous architectures, Hadoop, Big data, Energy and cost efficiency, Big and little cores, Scheduling.