On-line Access: 2023-11-01
Received: 2023-02-28
Revision Accepted: 2023-10-17
Yi-zhuo Cai, Bo Lei, Qian-ying Zhao, Jing Peng, Min Wei, Yu-shun Zhang, Xing Zhang. Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks[J]. Frontiers of Information Technology & Electronic Engineering, 1998, -1(-1): .
@article{title="Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks",
author="Yi-zhuo Cai, Bo Lei, Qian-ying Zhao, Jing Peng, Min Wei, Yu-shun Zhang, Xing Zhang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="-1",
number="-1",
pages="",
year="1998",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300122"
}
%0 Journal Article
%T Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks
%A Yi-zhuo Cai
%A Bo Lei
%A Qian-ying Zhao
%A Jing Peng
%A Min Wei
%A Yu-shun Zhang
%A Xing Zhang
%J Frontiers of Information Technology & Electronic Engineering
%V -1
%N -1
%P
%@ 2095-9184
%D 1998
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300122
TY - JOUR
T1 - Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks
A1 - Yi-zhuo Cai
A1 - Bo Lei
A1 - Qian-ying Zhao
A1 - Jing Peng
A1 - Min Wei
A1 - Yu-shun Zhang
A1 - Xing Zhang
JO - Frontiers of Information Technology & Electronic Engineering
VL - -1
IS - -1
SP -
EP -
SN - 2095-9184
Y1 - 1998
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300122
ER -
Abstract: Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, factors such as network topology and device computing power can affect its training or communication process in complex network environments. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC can achieve this goal by guiding the participating devices' training based on business requirements, resource load, network conditions, and the devices' computing power. In this article, to improve the communication efficiency of federated learning in complex networks, we study the communication efficiency optimization of federated learning for CNC of 6G networks, a method that makes decisions on the training process according to the network conditions and the computing power of participating devices. The experiments address two architectures that exist for devices in federated learning and arrange devices to participate in training based on computing power while optimizing communication efficiency during the transfer of model parameters. The results show that the proposed method can (1) cope well with complex network situations, (2) effectively balance the delay distribution of participating devices for local training, (3) improve the communication efficiency during the transfer of model parameters, and (4) improve the resource utilization in the network.
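The device arrangement the abstract describes can be sketched minimally: a server ranks devices by a reported computing-power score, selects the strongest ones for a round (so local-training delays stay balanced), and averages their model parameters. This is an illustrative sketch only, not the paper's exact algorithm; the device names, scores, and scoring scheme below are hypothetical.

```python
# Illustrative sketch (assumed scheme, not the paper's method): select
# participating devices by computing power, then federated-average their
# local model updates into a global model.

def select_devices(compute_power, k):
    """Pick the k devices with the highest computing-power score."""
    ranked = sorted(compute_power, key=compute_power.get, reverse=True)
    return ranked[:k]

def federated_average(updates):
    """Element-wise average of the selected devices' model parameters."""
    n = len(updates)
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

if __name__ == "__main__":
    # Hypothetical device pool: id -> normalized computing-power score.
    compute_power = {"dev0": 0.9, "dev1": 0.3, "dev2": 0.7, "dev3": 0.5}
    chosen = select_devices(compute_power, k=2)
    print(chosen)  # -> ['dev0', 'dev2']

    # Toy 3-parameter local updates returned by the chosen devices.
    local_updates = {"dev0": [1.0, 2.0, 3.0], "dev2": [3.0, 4.0, 5.0]}
    global_model = federated_average([local_updates[d] for d in chosen])
    print(global_model)  # -> [2.0, 3.0, 4.0]
```

In a CNC setting, the `compute_power` scores would come from the network's computing-measurement capability rather than being static values as here.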