3.2 Approximate and Near-Threshold Computing


Date: Tuesday 20 March 2018
Time: 14:30 - 16:00
Location / Room: Konf. 6

Chair:
Semeen Rehman, Vienna University of Technology (TU Wien), AT

Co-Chair:
Saibal Mukhopadhyay, Georgia Tech., US

This session focuses on approximate and near-threshold computing. The first paper proposes a novel dynamic virtual machine (VM) allocation method that guarantees quality-of-service (QoS) requirements. The second paper presents a lookup table allocation technique for approximate computing with memory, which avoids redundant computation when similar input patterns recur. Finally, the third paper presents an adaptive simulation methodology in which neurons in the region of interest (ROI) follow highly accurate biological models while the remaining neurons follow computation-friendly models. The session also includes one IP paper on approximate big data computing.

Time | Label | Presentation Title / Authors
14:30 | 3.2.1 | ENERGY PROPORTIONALITY IN NEAR-THRESHOLD COMPUTING SERVERS AND CLOUD DATA CENTERS: CONSOLIDATING OR NOT?
Speaker:
Ali Pahlevan, Embedded Systems Lab (ESL), EPFL, CH
Authors:
Ali Pahlevan1, Yasir Mahmood Qureshi1, Marina Zapater1, Andrea Bartolini2, Davide Rossi3, Luca Benini2 and David Atienza1
1Embedded Systems Lab (ESL), EPFL, CH; 2Integrated Systems Laboratory, ETH Zurich, CH; 3Energy Efficient Embedded Systems (EEES) Lab – DEI, University of Bologna, IT
Abstract
Cloud computing aims to efficiently tackle the increasing demand for computing resources, and its popularity has led to a dramatic increase in the number of computing servers and data centers worldwide. However, as an effect of post-Dennard scaling, computing servers have become power-limited, and new system-level approaches must be used to improve their energy efficiency. This paper first presents an accurate power modelling characterization for a new server architecture based on the FD-SOI process technology for near-threshold computing (NTC). Then, we explore the existing energy vs. performance trade-offs when virtualized applications with different CPU utilization and memory footprint characteristics are executed. Finally, based on this analysis, we propose a novel dynamic virtual machine (VM) allocation method that exploits the knowledge of VM characteristics together with our accurate server power model for next-generation NTC-based data centers, while guaranteeing quality-of-service (QoS) requirements. Our results demonstrate the inefficiency of current workload consolidation techniques for new NTC-based data center designs, and show how our proposed method provides up to 45% energy savings when compared to state-of-the-art consolidation-based approaches.
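The "consolidating or not" question can be illustrated with a toy energy model. All constants below are invented for illustration; the paper derives its power model from measurements of an FD-SOI NTC server architecture. The sketch packs the same set of VMs either onto one server or across several and compares total power.

```python
def server_power(util, p_idle=30.0, p_peak=80.0):
    # Toy linear power model (watts) for one server as a function of
    # CPU utilization in [0, 1]. The idle/peak values are invented.
    return p_idle + (p_peak - p_idle) * util

def total_power(vm_utils, servers):
    # Pack VM utilizations onto `servers` machines round-robin and sum power.
    loads = [0.0] * servers
    for i, u in enumerate(vm_utils):
        loads[i % servers] += u
    if any(load > 1.0 for load in loads):
        return float("inf")  # infeasible packing, server overcommitted
    return sum(server_power(load) for load in loads)

vms = [0.2, 0.3, 0.25, 0.15]
consolidated = total_power(vms, servers=1)  # one busy server: 75.0 W
spread = total_power(vms, servers=4)        # four lightly loaded servers: 165.0 W
```

With these numbers consolidation wins because idle power dominates; NTC changes the idle-to-peak power ratio, which is exactly why the paper revisits whether consolidation still pays off.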

15:00 | 3.2.2 | LOOKUP TABLE ALLOCATION FOR APPROXIMATE COMPUTING WITH MEMORY UNDER QUALITY CONSTRAINTS
Speaker:
Ye Tian, The Chinese University of Hong Kong, HK
Authors:
Ye Tian, Qian Zhang, Ting Wang and Qiang Xu, The Chinese University of Hong Kong, HK
Abstract
Computation kernels in emerging recognition, mining, and synthesis (RMS) applications are inherently error-resilient, so approximate computing can be applied to improve their energy efficiency by trading off computational effort against output quality. One promising technique is approximate computing with memory, which stores a subset of function responses in a lookup table (LUT) and avoids redundant computation when encountering similar input patterns. Limited by the memory space, most existing solutions simply store values for frequently appearing input patterns, without considering output quality and/or the intrinsic characteristics of the target kernel. In this paper, we propose a novel LUT allocation technique for approximate computing with memory that dramatically improves the LUT hit rate and hence achieves significant energy savings under given quality constraints. We also show how to apply the proposed LUT allocation solution to multiple computation kernels. Experimental results show the efficacy of our proposed methodology.
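A minimal sketch of the memoization idea behind approximate computing with memory. The quantization step, the capacity, and the fill-on-miss policy below are illustrative assumptions, not the paper's allocation algorithm, which selects LUT entries under an explicit quality constraint.

```python
def quantize(x, step=0.25):
    # Map similar inputs to the same LUT key. A coarser step raises the
    # hit rate but also the output error; 0.25 is an arbitrary choice.
    return round(x / step) * step

def approx_with_memory(kernel, inputs, lut_capacity=4):
    # Reuse a stored response when a similar input was seen before;
    # fall back to exact computation on a miss.
    lut, hits, results = {}, 0, []
    for x in inputs:
        key = quantize(x)
        if key in lut:
            hits += 1
            results.append(lut[key])      # approximate reuse
        else:
            y = kernel(x)                 # exact computation
            if len(lut) < lut_capacity:   # bounded memory budget
                lut[key] = y
            results.append(y)
    return results, hits

results, hits = approx_with_memory(lambda x: x * x, [0.1, 0.12, 0.9, 0.11, 0.88])
```

Here 0.12, 0.11, and 0.88 hit entries stored for 0.1 and 0.9, so three of five evaluations are skipped at the cost of a small output error.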

15:30 | 3.2.3 | ACCELERATING BIOPHYSICAL NEURAL NETWORK SIMULATION WITH REGION OF INTEREST BASED APPROXIMATION
Speaker:
Yun Long, Georgia Institute of Technology, US
Authors:
Yun Long, Xueyuan She and Saibal Mukhopadhyay, Georgia Institute of Technology, US
Abstract
Modeling the dynamics of biophysical neural networks (BNNs) is essential to understanding brain operation and designing cognitive systems. Large-scale, biophysically plausible BNN modeling requires solving multi-term, coupled, non-linear differential equations, making simulation computationally complex and memory-intensive. This paper presents an adaptive simulation methodology in which neurons in the region of interest (ROI) follow highly accurate biological models while the other neurons follow computation-friendly models. To enable ROI-based approximation, we propose a generic template-based computing algorithm that unifies the data structure and computing flow across neuron models. We implement the algorithms on CPU, GPU, and embedded platforms, showing an 11x speedup with insignificant loss of biological detail in the region of interest.
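The ROI dispatch can be sketched as follows, assuming (purely for illustration) an Izhikevich-style two-variable update as the "accurate" model and a leaky integrate-and-fire update as the computation-friendly one; the paper's biophysical models and template-based data layout are considerably richer.

```python
def step_accurate(state, i_ext, dt=0.1):
    # Izhikevich-style update standing in for a detailed biophysical model;
    # constants are the standard regular-spiking parameters, not the paper's.
    v, u = state
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
    u += dt * 0.02 * (0.2 * v - u)
    if v >= 30.0:
        v, u = -65.0, u + 8.0            # spike: reset membrane, bump recovery
    return (v, u)

def step_cheap(state, i_ext, dt=0.1):
    # Leaky integrate-and-fire: one state variable, far less arithmetic.
    v, _ = state
    v += dt * (-(v + 65.0) / 10.0 + i_ext)
    if v >= -50.0:
        v = -65.0                        # spike: reset to rest
    return (v, 0.0)

def simulate(n_neurons, roi, steps, i_ext=5.0):
    # Per-neuron dispatch: accurate model inside the region of interest,
    # computation-friendly model everywhere else.
    states = [(-65.0, -13.0)] * n_neurons
    for _ in range(steps):
        states = [step_accurate(s, i_ext) if i in roi else step_cheap(s, i_ext)
                  for i, s in enumerate(states)]
    return states
```

Shrinking the ROI trades biological detail outside it for fewer floating-point operations per step, which is the source of the reported speedup.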

16:00 | IP1-5, 491 | QOR-AWARE POWER CAPPING FOR APPROXIMATE BIG DATA PROCESSING
Speaker:
Sherief Reda, Brown University, US
Authors:
Seyed Morteza Nabavinejad1, Xin Zhan2, Reza Azimi2, Maziar Goudarzi1 and Sherief Reda2
1Sharif University of Technology, IR; 2Brown University, US
Abstract
To limit the peak power consumption of a cluster, a centralized power-capping system typically assigns power caps to the individual servers, which are then enforced by local capping controllers. The performance and throughput of the servers suffer as a result, and job runtimes are extended. We observe that big data processing clusters often execute applications that differ in their tolerance for approximate results. To mitigate the impact of power capping, we propose CAB, a power-Capping-aware resource manager for Approximate Big data processing that takes into consideration the minimum Quality-of-Result (QoR) of the jobs. We use industry-standard feedback power-capping controllers to enforce a power cap quickly while simultaneously adjusting resource allocations. Based on the applied cap, the jobs' progress rates, and their target minimum QoR, CAB dynamically allocates computing resources (i.e., number of cores and memory) to the jobs so that the impact of capping on finish time is minimized. We implement CAB in Hadoop 2.7.3 and evaluate its improvement over other methods on a state-of-the-art 28-core Xeon server. We demonstrate that CAB reduces the impact of power capping on runtime by up to 39.4% while meeting the minimum QoR constraints.
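A toy version of the progress-driven reallocation step might look like the sketch below. The job fields and the proportional rule are invented for illustration; CAB couples an allocation like this with a feedback power-capping controller and the jobs' minimum-QoR targets.

```python
def allocate_cores(jobs, total_cores):
    # jobs: name -> (current_progress, target_progress), both in [0, 1].
    # Jobs farther behind their target get proportionally more cores.
    deficits = {name: max(target - progress, 0.0)
                for name, (progress, target) in jobs.items()}
    total = sum(deficits.values()) or 1.0   # avoid division by zero
    shares = {name: int(total_cores * d / total)
              for name, d in deficits.items()}
    # Hand cores lost to integer rounding to the most-behind jobs first.
    leftover = total_cores - sum(shares.values())
    for name in sorted(deficits, key=deficits.get, reverse=True):
        if leftover == 0:
            break
        shares[name] += 1
        leftover -= 1
    return shares

# jobA is far behind its target, jobB is nearly done: on a 28-core server,
# jobA receives the bulk of the cores.
shares = allocate_cores({"jobA": (0.4, 0.9), "jobB": (0.7, 0.8)}, total_cores=28)
```

The design intuition is that under a fixed cap the cluster cannot speed everyone up, so cores flow to the jobs whose QoR targets are most at risk.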

16:00 | End of session
Coffee Break in Exhibition Area



Coffee Breaks in the Exhibition Area

On all conference days (Tuesday to Thursday), coffee and tea will be served in the exhibition area (Terrace Level of the ICCD) at the times listed below.

Lunch Breaks (Großer Saal + Saal 1)

On all conference days (Tuesday to Thursday), a seated lunch (lunch buffet) will be offered in the rooms "Großer Saal" and "Saal 1" (Saal Level of the ICCD) to fully registered conference delegates only. There will be badge control at the entrance to the lunch break area.

Tuesday, March 20, 2018

  • Coffee Break 10:30 - 11:30
  • Lunch Break 13:00 - 14:30
  • Awards Presentation and Keynote Lecture in "Saal 2" 13:50 - 14:20
  • Coffee Break 16:00 - 17:00

Wednesday, March 21, 2018

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:30
  • Awards Presentation and Keynote Lecture in "Saal 2" 13:30 - 14:20
  • Coffee Break 16:00 - 17:00

Thursday, March 22, 2018

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:00
  • Keynote Lecture in "Saal 2" 13:20 - 13:50
  • Coffee Break 15:30 - 16:00