HELCFL: High-Efficiency and Low-Cost Federated Learning in Heterogeneous Mobile-Edge Computing

Yangguang Cui1, Kun Cao3, Junlong Zhou4 and Tongquan Wei2
1Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, China
2State Key Laboratory of Mathematical Engineering and Advanced Computing, Wuxi 214125, China
3College of Information Science and Technology, Jinan University, Guangzhou 510632, China
4School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China

ABSTRACT


Federated Learning (FL), an emerging distributed machine learning (ML) paradigm, empowers a large number of embedded devices (e.g., phones and cameras) and a server to jointly train a global ML model without centralizing users' private data on a server. However, when FL is deployed in a mobile-edge computing (MEC) system, the restricted communication resources of the MEC system and the heterogeneity and constrained energy of user devices severely degrade FL training efficiency. To address these issues, in this article, we design a distinctive FL framework, called HELCFL, to achieve high-efficiency and low-cost FL training. Specifically, by analyzing the theoretical foundation of FL, our HELCFL first develops a utility-driven and greedy-decay user selection strategy to enhance FL performance and reduce training delay. Subsequently, by analyzing and utilizing the slack time in FL training, our HELCFL introduces a device operating frequency determination approach to reduce training energy costs. Experiments verify that, compared to state-of-the-art baselines, our HELCFL improves the highest achievable accuracy by up to 43.45%, achieves a training speedup of up to 275.03%, and reduces training energy costs by up to 58.25%.
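
To make the two mechanisms named above concrete, the following is a minimal sketch, not the authors' algorithm. It assumes a hypothetical per-client utility score, a multiplicative decay factor for frequently selected clients, and a standard CMOS-style dynamic energy model (energy proportional to cycles times frequency squared); none of these details are specified in this abstract.

```python
# Illustrative sketch of (1) utility-driven, greedy user selection with decay
# and (2) scaling each selected device's CPU frequency to fill the round slack.
# All utility values, decay factors, and energy constants are assumptions.

def select_users(utilities, recent_counts, k, decay=0.9):
    """Greedily pick k clients by utility, decaying clients selected often."""
    scored = {c: u * (decay ** recent_counts.get(c, 0))
              for c, u in utilities.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

def pick_frequency(cycles, round_deadline, comm_time, f_max, kappa=1e-27):
    """Lower the CPU frequency so local training just fits the slack left
    after communication; dynamic energy is modeled as kappa * cycles * f^2."""
    slack = round_deadline - comm_time
    f = min(f_max, cycles / slack) if slack > 0 else f_max
    energy = kappa * cycles * f ** 2
    return f, energy

# Toy usage with made-up numbers.
utilities = {"dev0": 0.8, "dev1": 0.6, "dev2": 0.9}
recent = {"dev2": 3}  # dev2 was selected often, so its score is decayed
for dev in select_users(utilities, recent, k=2):
    f, e = pick_frequency(cycles=2e9, round_deadline=5.0,
                          comm_time=1.0, f_max=2e9)
    print(dev, f, e)
```

In this toy setting, a device with four seconds of slack runs at 0.5 GHz instead of its 2 GHz maximum, which reduces dynamic energy quadratically while still meeting the round deadline; the paper's actual selection utility and energy model should be consulted for the real formulation.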

Keywords: Federated learning, mobile-edge computing, user selection, frequency determination, high efficiency, low cost.
