
CLC number: TP39

On-line Access: 2025-06-04

Received: 2024-06-20

Revision Accepted: 2024-12-15

Crosschecked: 2025-09-04


 ORCID:

Fei WU

https://orcid.org/0000-0003-2139-8807

Chao WU

https://orcid.org/0000-0003-0885-6869

Tao SHEN

https://orcid.org/0000-0003-0819-9782

Shengyu ZHANG

https://orcid.org/0000-0002-0030-8289

Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.8 P.1378-1393

http://doi.org/10.1631/FITEE.2400530


FedMcon: an adaptive aggregation method for federated learning via meta controller


Author(s):  Tao SHEN, Zexi LI, Ziyu ZHAO, Didi ZHU, Zheqi LV, Kun KUANG, Shengyu ZHANG, Chao WU, Fei WU

Affiliation(s):  College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China; School of Software Technology, Zhejiang University, Hangzhou 310027, China; School of Public Affairs, Zhejiang University, Hangzhou 310027, China; Institute of Social Governance, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   tao.shen@zju.edu.cn, zexi.li@zju.edu.cn, ziyuzhao.cs@zju.edu.cn, didi_zhu@zju.edu.cn, zheqilv@zju.edu.cn, kunkuang@zju.edu.cn, sy_zhang@zju.edu.cn, chao.wu@zju.edu.cn, wufei@zju.edu.cn

Key Words:  Federated learning, Meta-learning, Adaptive aggregation


Tao SHEN, Zexi LI, Ziyu ZHAO, Didi ZHU, Zheqi LV, Kun KUANG, Shengyu ZHANG, Chao WU, Fei WU. FedMcon: an adaptive aggregation method for federated learning via meta controller[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(8): 1378-1393.

@article{title="FedMcon: an adaptive aggregation method for federated learning via meta controller",
author="Tao SHEN, Zexi LI, Ziyu ZHAO, Didi ZHU, Zheqi LV, Kun KUANG, Shengyu ZHANG, Chao WU, Fei WU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="8",
pages="1378-1393",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400530"
}

%0 Journal Article
%T FedMcon: an adaptive aggregation method for federated learning via meta controller
%A Tao SHEN
%A Zexi LI
%A Ziyu ZHAO
%A Didi ZHU
%A Zheqi LV
%A Kun KUANG
%A Shengyu ZHANG
%A Chao WU
%A Fei WU
%J Frontiers of Information Technology & Electronic Engineering
%V 26
%N 8
%P 1378-1393
%@ 2095-9184
%D 2025
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2400530

TY - JOUR
T1 - FedMcon: an adaptive aggregation method for federated learning via meta controller
A1 - Tao SHEN
A1 - Zexi LI
A1 - Ziyu ZHAO
A1 - Didi ZHU
A1 - Zheqi LV
A1 - Kun KUANG
A1 - Shengyu ZHANG
A1 - Chao WU
A1 - Fei WU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 26
IS - 8
SP - 1378
EP - 1393
SN - 2095-9184
Y1 - 2025
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2400530
ER -


Abstract: 
Federated learning (FL) has emerged as a novel machine learning setting that enables collaborative training of deep models on decentralized clients under privacy constraints. In the vanilla federated averaging algorithm (FedAvg), the global model is generated as a weighted linear combination of local models, with weights proportional to the local data sizes. This methodology, however, encounters challenges when facing heterogeneous and unknown client data distributions, often leading to discrepancies from the intended global objective. Linear combination-based aggregation often fails to capture the varied dynamics of the diverse scenarios, settings, and data distributions inherent in FL, resulting in hindered convergence and compromised generalization. In this paper, we present a new aggregation method, FedMcon, within a meta-learning framework for FL. We introduce a learnable controller, trained on a small proxy dataset, that serves as an aggregator and learns how to adaptively combine heterogeneous local models into a better global model toward the desired objective. The experimental results indicate that the proposed method is effective on extremely non-independent and identically distributed data and can achieve a 19-fold communication speedup in a single FL setting.
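
The contrast drawn in the abstract can be sketched in code: vanilla FedAvg fixes the aggregation weights at the clients' data-size ratios, whereas a learned controller maps client statistics to adaptive weights. The controller interface below is a hypothetical stand-in (the paper's actual FedMcon architecture is not described on this page), and model parameters are flattened to plain lists to keep the sketch self-contained.

```python
import math

def fedavg_aggregate(local_models, data_sizes):
    """Vanilla FedAvg: weighted linear combination of client parameter
    vectors, with weights proportional to local data sizes."""
    total = float(sum(data_sizes))
    weights = [n / total for n in data_sizes]
    dim = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models))
            for i in range(dim)]

def softmax(xs):
    """Numerically stable softmax, used to normalize controller scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def controller_aggregate(local_models, scores):
    """Adaptive aggregation sketch: a learned controller (trained on a
    small proxy dataset in the paper) would produce `scores` from client
    statistics; the scores themselves are a hypothetical input here."""
    weights = softmax(scores)
    dim = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models))
            for i in range(dim)]
```

With equal scores the controller reduces to uniform averaging; training the controller lets the weights depart from the fixed data-size ratios when client distributions are heterogeneous.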

FedMcon: an adaptive aggregation method for federated learning via meta controller

Tao SHEN1, Zexi LI1, Ziyu ZHAO1, Didi ZHU1, Zheqi LV1, Kun KUANG1, Shengyu ZHANG2, Chao WU3,4, Fei WU1
1College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
2School of Software Technology, Zhejiang University, Hangzhou 310027, China
3School of Public Affairs, Zhejiang University, Hangzhou 310027, China
4Institute of Social Governance, Zhejiang University, Hangzhou 310027, China
Abstract: Federated learning is a novel machine learning framework that enables collaborative training of deep models across decentralized clients under privacy constraints. In the classic federated learning algorithm (FedAvg), the global model is generated as a weighted linear combination of local models, with weights proportional to each client's local data size. However, this approach runs into difficulties when client data distributions are heterogeneous and unknown, often causing deviation from the intended global optimization objective. Linear combination-based aggregation struggles to cope with the diverse settings, data distributions, and dynamics inherent in federated learning scenarios, leading to hindered convergence and degraded generalization. This paper proposes FedMcon, a new aggregation method built on a meta-learning framework. A learnable aggregator, trained on a small proxy dataset, is introduced to adaptively aggregate heterogeneous local models into a global model better aligned with the objective. Experimental results show that the proposed method handles extremely non-independent and identically distributed data and achieves a 19-fold communication speedup in a single federated learning setting.

Key words: Federated learning; Meta-learning; Adaptive aggregation


Reference

[1]Andrychowicz M, Denil M, Gomez S, et al., 2016. Learning to learn by gradient descent by gradient descent.

[2]Bertinetto L, Henriques JF, Valmadre J, et al., 2016. Learning feed-forward one-shot learners.

[3]Caldas S, Duddu SMK, Wu P, et al., 2019. LEAF: a benchmark for federated settings.

[4]Chen DS, Hu J, Tan VJ, et al., 2023. Elastic aggregation for federated optimization. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.12187-12197.

[5]Chen F, Dong ZH, Li ZG, et al., 2019. Federated meta-learning for recommendation.

[6]Fallah A, Mokhtari A, Ozdaglar A, 2020. Personalized federated learning with theoretical guarantees: a model-agnostic meta-learning approach. Proc 34th Int Conf on Neural Information Processing Systems, p.3557-3568.

[7]Finn C, Abbeel P, Levine S, 2017. Model-agnostic meta-learning for fast adaptation of deep networks.

[8]Haddad WM, Chellaboina V, 2008. Nonlinear Dynamical Systems and Control: a Lyapunov-Based Approach. Princeton University Press, Princeton, USA.

[9]Harper FM, Konstan JA, 2015. The MovieLens datasets: history and context. ACM Trans Interact Intell Syst, 5(4):19.

[10]Hsu TMH, Qi H, Brown M, 2019. Measuring the effects of non-identical data distribution for federated visual classification.

[11]Huang YT, Chu LY, Zhou ZR, et al., 2021. Personalized cross-silo federated learning on non-IID data. Proc 35th AAAI Conf on Artificial Intelligence, p.7865-7873.

[12]Jiang WW, Han HY, Zhang Y, et al., 2024. Federated split learning for sequential data in satellite–terrestrial integrated networks. Inform Fusion, 103:102141.

[13]Jiang YH, Konečný J, Rush K, et al., 2023. Improving federated learning personalization via model agnostic meta-learning.

[14]Karimireddy SP, Jaggi M, Kale S, et al., 2021a. Breaking the centralized barrier for cross-device federated learning. Proc 35th Int Conf on Neural Information Processing Systems, p.28663-28676.

[15]Karimireddy SP, Kale S, Mohri M, et al., 2021b. SCAFFOLD: stochastic controlled averaging for federated learning.

[16]Krizhevsky A, Hinton G, 2009. Learning Multiple Layers of Features from Tiny Images. Toronto, ON, Canada.

[17]LeCun Y, Bottou L, Bengio Y, et al., 1998. Gradient-based learning applied to document recognition. Proc IEEE, 86(11):2278-2324.

[18]Li CL, Niu D, Jiang B, et al., 2021. Meta-HAR: federated representation learning for human activity recognition. Proc Web Conf, p.912-922.

[19]Li QB, Diao YQ, Chen Q, et al., 2021a. Federated learning on non-IID data silos: an experimental study.

[20]Li QB, He BS, Song D, 2021b. Model-contrastive federated learning. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.10708-10717.

[21]Li T, Sahu AK, Zaheer M, et al., 2020. Federated optimization in heterogeneous networks.

[22]Li XX, Jiang MR, Zhang XF, et al., 2021. FedBN: federated learning on non-IID features via local batch normalization.

[23]Lin T, Kong LJ, Stich SU, et al., 2020. Ensemble distillation for robust model fusion in federated learning. Proc 34th Int Conf on Neural Information Processing Systems, p.2351-2363.

[24]Lin YJ, Ren PJ, Chen ZM, et al., 2020. Meta matrix factorization for federated rating predictions. Proc 43rd Int ACM SIGIR Conf on Research and Development in Information Retrieval, p.981-990.

[25]McMahan HB, Moore E, Ramage D, et al., 2023. Communication-efficient learning of deep networks from decentralized data.

[26]Muhammad K, Wang QQ, O’Reilly-Morgan D, et al., 2020. FedFast: going beyond average for faster training of federated recommender systems. Proc 26th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining, p.1234-1242.

[27]Nichol A, Achiam J, Schulman J, 2018. On first-order meta-learning algorithms.

[28]Parra-Ullauri JM, Madhukumar H, Nicolaescu AC, et al., 2024. kubeFlower: a privacy-preserving framework for Kubernetes-based federated learning in cloud-edge environments. Future Gener Comput Syst, 157:558-572.

[29]Pramling I, 1990. Learning to Learn: a Study of Swedish Preschool Children. Springer, New York, USA.

[30]Ravi S, Larochelle H, 2016. Optimization as a model for few-shot learning. 5th Int Conf on Learning Representations.

[31]Reddi S, Charles Z, Zaheer M, et al., 2021. Adaptive federated optimization.

[32]Shamsian A, Navon A, Fetaya E, et al., 2021. Personalized federated learning using hypernetworks. Proc 38th Int Conf on Machine Learning, p.9489-9502.

[33]Shen T, Li ZX, Zhao ZY, et al., 2024. An adaptive aggregation method for federated learning via meta controller. Proc 6th ACM Int Conf on Multimedia in Asia Workshops, Article 20.

[34]Wang JY, Liu QH, Liang H, et al., 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization.

[35]Xu ZW, van Hasselt H, Silver D, 2018. Meta-gradient reinforcement learning.

[36]Yan YL, Feng CM, Ye M, et al., 2023. Rethinking client drift in federated learning: a logit perspective.

[37]Yao X, Huang TC, Zhang RX, et al., 2020. Federated learning with unbiased gradient aggregation and controllable meta updating.

[38]Zhao Y, Li M, Lai LZ, et al., 2022. Federated learning with non-IID data.

[39]Zhou GR, Zhu XQ, Song CR, et al., 2018. Deep interest network for click-through rate prediction. Proc 24th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining, p.1059-1068.


Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2025 Journal of Zhejiang University-SCIENCE