CLC number: TP393

On-line Access: 2024-06-04

Received: 2023-02-28

Revision Accepted: 2024-06-04

Crosschecked: 2023-10-17


 ORCID:

Yizhuo CAI

https://orcid.org/0000-0002-6662-3767


Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.5 P.713-727

http://doi.org/10.1631/FITEE.2300122


Communication efficiency optimization of federated learning for computing and network convergence of 6G networks


Author(s):  Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG

Affiliation(s):  Wireless Signal Processing and Network Laboratory, Beijing University of Posts and Telecommunications, Beijing 100876, China; Research Institute of China Telecom Corporation Limited, Beijing 102209, China; China Telecom Beijing Branch, Beijing 100011, China

Corresponding email(s):   leibo@chinatelecom.cn, zhangx@ieee.org

Key Words:  Computing and network convergence, Communication efficiency, Federated learning, Two architectures


Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG. Communication efficiency optimization of federated learning for computing and network convergence of 6G networks[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 713-727.

@article{title="Communication efficiency optimization of federated learning for computing and network convergence of 6G networks",
author="Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="5",
pages="713-727",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300122"
}



Abstract: 
Federated learning effectively addresses issues such as data privacy by training a global model collaboratively across participating devices. However, in complex network environments, factors such as network topology and device computing power can affect its training or communication process. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm in which computing is measurable, perceptible, distributable, dispatchable, and manageable, can effectively support federated learning training and improve its communication efficiency. CNC reaches this goal by guiding the training of participating devices according to business requirements, resource load, network conditions, and device computing power. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods of federated learning for the CNC of 6G networks, which make decisions on the training process for different network conditions and different computing power of the participating devices. The simulations address the two architectures that exist for devices in federated learning, schedule devices to participate in training based on their computing power, and optimize communication efficiency in the process of transferring model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the distribution of local-training delays across participating devices, improve communication efficiency during the transfer of model parameters, and improve resource utilization in the network.
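The abstract describes scheduling devices into a training round by computing power and then aggregating their model updates. A minimal sketch of one such round is shown below; it is illustrative only, not the paper's actual algorithm, and the device fields (`compute`, `samples`), the top-k selection rule, and the `workload` constant are all assumptions.

```python
import random

def select_devices(devices, k):
    """Pick the k devices reporting the highest compute power,
    so that per-round local-training delays stay small and balanced."""
    return sorted(devices, key=lambda d: d["compute"], reverse=True)[:k]

def local_delay(device, workload=100.0):
    """Assumed delay model: a fixed workload divided by compute power."""
    return workload / device["compute"]

def fedavg(updates, weights):
    """FedAvg-style weighted average of model parameter vectors,
    weighted here by each device's local sample count."""
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(dim)]

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical device pool: each reports compute power and data size.
    devices = [{"id": i,
                "compute": random.uniform(1.0, 10.0),
                "samples": random.randint(50, 200)} for i in range(10)]
    chosen = select_devices(devices, k=4)
    delays = [local_delay(d) for d in chosen]
    # Toy 2-dimensional "model updates" stand in for real parameters.
    updates = [[d["compute"], d["samples"] / 100.0] for d in chosen]
    new_model = fedavg(updates, [d["samples"] for d in chosen])
    print(chosen, delays, new_model)
```

In a centralized (server–client) architecture the `fedavg` step runs on the parameter server; in a decentralized architecture each device would run it over updates received from its neighbors.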

Communication efficiency optimization of federated learning for computing and network convergence of 6G networks

Yizhuo CAI1, Bo LEI2, Qianying ZHAO2, Jing PENG3, Min WEI2, Yushun ZHANG1, Xing ZHANG1
1Wireless Signal Processing and Network Laboratory, Beijing University of Posts and Telecommunications, Beijing 100876, China
2Research Institute of China Telecom Corporation Limited, Beijing 102209, China
3China Telecom Beijing Branch, Beijing 100011, China
Abstract: Federated learning effectively solves problems such as data privacy through collaborative training of a global model among participating devices. However, in complex network environments, factors such as network topology and device computing power strongly affect the training and communication processes of federated learning. As a new network architecture and paradigm in which computing power is measurable, perceptible, allocatable, schedulable, and manageable, the computing and network convergence of 6G networks can effectively support federated learning training and improve its communication efficiency. Based on information such as business requirements, resource load, network conditions, and device computing power, the computing and network convergence can make decisions on federated learning training and thereby improve communication efficiency. To improve the communication efficiency of federated learning in complex network environments, this paper studies its communication efficiency optimization methods in the computing and network convergence of 6G networks, making decisions on the training process for different network conditions and different computing power of participating devices. The simulation experiments are based on the two architectures that exist in federated learning, schedule devices to participate in training according to computing-power information, and optimize communication efficiency in the process of transferring model parameters. Simulation results show that the proposed methods can cope well with complex network situations, effectively balance differences in local-training delays among participating devices, improve communication efficiency when transferring model parameters, and improve resource utilization in the network.

Key words: Computing and network convergence; Communication efficiency; Federated learning; Two architectures


