
CLC number: TP391.4

On-line Access: 2025-02-10

Received: 2024-04-12

Revision Accepted: 2024-05-14

Crosschecked: 2025-02-18


 ORCID:

Ruipeng ZHANG

https://orcid.org/0000-0002-4372-4987

Ziqing FAN

https://orcid.org/0009-0009-1459-3250

Jiangchao YAO

https://orcid.org/0000-0001-6115-5194

Ya ZHANG

https://orcid.org/0000-0002-5390-9053

Yanfeng WANG

https://orcid.org/0000-0002-3196-2347


Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.1 P.42-61

http://doi.org/10.1631/FITEE.2400279


Fairness-guided federated training for generalization and personalization in cross-silo federated learning


Author(s):  Ruipeng ZHANG, Ziqing FAN, Jiangchao YAO, Ya ZHANG, Yanfeng WANG

Affiliation(s):  School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai 200240, China; Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China

Corresponding email(s):   ya_zhang@sjtu.edu.cn, wangyanfeng622@sjtu.edu.cn

Key Words:  Generalized and personalized federated learning, Performance distribution fairness, Domain shift


Ruipeng ZHANG, Ziqing FAN, Jiangchao YAO, Ya ZHANG, Yanfeng WANG. Fairness-guided federated training for generalization and personalization in cross-silo federated learning[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(1): 42-61.

@article{zhang2025fairness,
title="Fairness-guided federated training for generalization and personalization in cross-silo federated learning",
author="Ruipeng ZHANG, Ziqing FAN, Jiangchao YAO, Ya ZHANG, Yanfeng WANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="1",
pages="42-61",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400279"
}

%0 Journal Article
%T Fairness-guided federated training for generalization and personalization in cross-silo federated learning
%A Ruipeng ZHANG
%A Ziqing FAN
%A Jiangchao YAO
%A Ya ZHANG
%A Yanfeng WANG
%J Frontiers of Information Technology & Electronic Engineering
%V 26
%N 1
%P 42-61
%@ 2095-9184
%D 2025
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2400279

TY - JOUR
T1 - Fairness-guided federated training for generalization and personalization in cross-silo federated learning
A1 - Ruipeng ZHANG
A1 - Ziqing FAN
A1 - Jiangchao YAO
A1 - Ya ZHANG
A1 - Yanfeng WANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 26
IS - 1
SP - 42
EP - 61
SN - 2095-9184
Y1 - 2025
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2400279
ER -


Abstract: 
Cross-silo federated learning (FL), which benefits from relatively abundant data and rich computing power, is drawing increasing attention amid the significant transformations that foundation models (FMs) are driving in artificial intelligence. Unlike in cross-device FL, the intensified data heterogeneity in this setting stems mainly from substantial data volumes and distribution shifts across clients, which requires algorithms to comprehensively balance personalization and generalization. In this paper, we address the objective of generalized and personalized federated learning (GPFL) by enhancing the global model’s cross-domain generalization capability while simultaneously improving the personalization performance of local training clients. By investigating the fairness of the performance distribution within the federated system, we uncover a new connection between the generalization gap and the aggregation weights established in previous studies, culminating in the fairness-guided federated training for generalization and personalization (FFT-GP) approach. FFT-GP integrates a fairness-aware aggregation (FAA) strategy that minimizes the variance of generalization gaps among training clients, and a meta-learning strategy that aligns local training with the global model’s feature distribution, thereby balancing generalization and personalization. Extensive experimental results demonstrate FFT-GP’s superior efficacy compared with existing methods, showcasing its potential to enhance FL systems across a variety of practical scenarios.
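The abstract describes fairness-aware aggregation as minimizing the variance of generalization gaps across training clients. A minimal sketch of that idea is shown below; the function names, the `step` parameter, and the gap-based update rule are illustrative assumptions for exposition, not the authors' published FAA algorithm.

```python
import numpy as np

def fairness_aware_weights(gen_gaps, base_weights, step=0.1):
    """Illustrative re-weighting: clients whose generalization gap
    (e.g., validation loss minus training loss) exceeds the weighted
    federation average get more aggregation weight, nudging the system
    toward equal gaps (lower gap variance) across clients."""
    gaps = np.asarray(gen_gaps, dtype=float)
    w = np.asarray(base_weights, dtype=float)
    mean_gap = float(np.dot(w, gaps))       # weighted average gap
    w = w + step * (gaps - mean_gap)        # up-weight lagging clients
    w = np.clip(w, 0.0, None)               # keep weights non-negative
    return w / w.sum()                      # renormalize onto the simplex

def aggregate(client_params, weights):
    """FedAvg-style weighted average over per-client parameter dicts."""
    return {k: sum(wi * p[k] for wi, p in zip(weights, client_params))
            for k in client_params[0]}
```

Here a larger `step` trades stability for faster equalization; the paper derives its actual aggregation rule from the fairness analysis of the performance distribution.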

Ruipeng ZHANG(2,3), Ziqing FAN(2,3), Jiangchao YAO(2,3), Ya ZHANG(1,3), Yanfeng WANG(1,3)
1 School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai 200240, China
2 Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China
3 Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China


References

[1]Achiam J, Adler S, Agarwal S, et al., 2023. GPT-4 technical report. https://arxiv.org/abs/2303.08774

[2]Arivazhagan MG, Aggarwal V, Singh AK, et al., 2019. Federated learning with personalization layers. https://arxiv.org/abs/1912.00818

[3]Beery S, Van Horn G, Perona P, 2018. Recognition in terra incognita. Proc 15th European Conf on Computer Vision, p.456-473.

[4]Bloch N, Madabhushi A, Huisman H, et al., 2015. NCI-ISBI 2013 challenge: automated segmentation of prostate structures.

[5]Caldas S, Duddu SMK, Wu P, et al., 2018. LEAF: a benchmark for federated settings. https://arxiv.org/abs/1812.01097

[6]Chu LY, Wang LJ, Dong YJ, et al., 2021. FedFair: training fair models in cross-silo federated learning. https://arxiv.org/abs/2109.05662

[7]Cohen JP, Hashir M, Brooks R, et al., 2020. On the limits of cross-domain generalization in automated X-ray prediction. Proc Int Conf on Medical Imaging with Deep Learning, p.136-155.

[8]Collins L, Hassani H, Mokhtari A, et al., 2021. Exploiting shared representations for personalized federated learning. Proc 38th Int Conf on Machine Learning, p.2089-2099.

[9]Cong Y, Qiu J, Zhang K, et al., 2023. Ada-FFL: adaptive computing fairness federated learning. CAAI Trans Intell Technol, 9(3):541-584.

[10]Cui S, Pan WS, Liang J, et al., 2021. Addressing algorithmic disparity and performance inconsistency in federated learning. Proc 35th Int Conf on Neural Information Processing Systems, p.26091-26102.

[11]Ding N, Qin YJ, Yang G, et al., 2023. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nat Mach Intell, 5(3):220-235.

[12]Dosovitskiy A, Beyer L, Kolesnikov A, et al., 2021. An image is worth 16×16 words: Transformers for image recognition at scale. Proc 9th Int Conf on Learning Representations, p.1-22.

[13]du Terrail JO, Ayed SS, Cyffers E, et al., 2022. FLamby: datasets and benchmarks for cross-silo federated learning in realistic healthcare settings. Proc 36th Int Conf on Neural Information Processing Systems, p.1-20.

[14]Fallah A, Mokhtari A, Ozdaglar AE, 2020. Personalized federated learning with theoretical guarantees: a model-agnostic meta-learning approach. Proc 34th Int Conf on Neural Information Processing Systems.

[15]Fan ZQ, Wang YF, Yao JC, et al., 2022. FedSkip: combatting statistical heterogeneity with federated skip aggregation. Proc IEEE Int Conf on Data Mining, p.131-140.

[16]Fan ZQ, Zhang RP, Yao JC, et al., 2023a. Federated learning with bilateral curation for partially class-disjoint data. Proc 37th Int Conf on Neural Information Processing Systems.

[17]Fan ZQ, Yao JC, Zhang RP, et al., 2023b. Federated learning under partially class-disjoint data via manifold reshaping. Trans Mach Learn Res.

[18]Fang C, Xu Y, Rockmore DN, 2013. Unbiased metric learning: on the utilization of multiple datasets and web images for softening bias. Proc IEEE Int Conf on Computer Vision, p.1657-1664.

[19]Gulrajani I, Lopez-Paz D, 2021. In search of lost domain generalization. Proc 9th Int Conf on Learning Representations, p.1-27.

[20]Guo T, Guo S, Wang JX, et al., 2024. PromptFL: let federated participants cooperatively learn prompts instead of models—federated learning in age of foundation model. IEEE Trans Mob Comput, 23(5):5179-5194.

[21]Haque A, Milstein A, Fei-Fei L, 2020. Illuminating the dark spaces of healthcare with ambient intelligence. Nature, 585(7824):193-202.

[22]He KM, Zhang XY, Ren SQ, et al., 2016. Deep residual learning for image recognition. Proc IEEE Conf on Computer Vision and Pattern Recognition, p.770-778.

[23]Huang C, Huang JW, Liu X, 2022. Cross-silo federated learning: challenges and opportunities. https://arxiv.org/abs/2206.12949

[24]Huang YT, Chu LY, Zhou ZR, et al., 2021. Personalized cross-silo federated learning on non-IID data. Proc 35th AAAI Conf on Artificial Intelligence, p.7865-7873.

[25]Jiang MR, Yang HZ, Cheng C, et al., 2023. IOP-FL: inside-outside personalization for federated medical image segmentation. IEEE Trans Med Imag, 42(7):2106-2117.

[26]Kairouz P, McMahan HB, Avent B, et al., 2021. Advances and open problems in federated learning. Found Trends Mach Learn, 14(1-2):1-210.

[27]Karimireddy SP, Kale S, Mohri M, et al., 2020. SCAFFOLD: stochastic controlled averaging for federated learning. Proc 37th Int Conf on Machine Learning, p.5132-5143.

[28]Khosla A, Zhou TH, Malisiewicz T, et al., 2012. Undoing the damage of dataset bias. Proc 12th European Conf on Computer Vision, p.158-171.

[29]Kirillov A, Mintun E, Ravi N, et al., 2023. Segment anything. https://arxiv.org/abs/2304.02643

[30]Lemaître G, Martí R, Freixenet J, et al., 2015. Computer-aided detection and diagnosis for prostate cancer based on mono and multi-parametric MRI: a review. Comput Biol Med, 60:8-31.

[31]Li D, Yang YX, Song YZ, et al., 2017. Deeper, broader and artier domain generalization. Proc IEEE Int Conf on Computer Vision, p.5542-5550.

[32]Li T, Sanjabi M, Beirami A, et al., 2020a. Fair resource allocation in federated learning. Proc 8th Int Conf on Learning Representations, p.1-27.

[33]Li T, Sahu AK, Talwalkar A, et al., 2020b. Federated learning: challenges, methods, and future directions. IEEE Signal Process Mag, 37(3):50-60.

[34]Li T, Sahu AK, Zaheer M, et al., 2020c. Federated optimization in heterogeneous networks. https://arxiv.org/abs/1812.06127v5

[35]Li T, Hu SY, Beirami A, et al., 2021. Ditto: fair and robust federated learning through personalization. Proc 38th Int Conf on Machine Learning, p.6357-6368.

[36]Li X, Huang KX, Yang WH, et al., 2020. On the convergence of FedAvg on non-IID data. Proc 8th Int Conf on Learning Representations, p.1-26.

[37]Li XX, Jiang MR, Zhang XF, et al., 2021. FedBN: federated learning on non-IID features via local batch normalization. Proc 9th Int Conf on Learning Representations, p.1-27.

[38]Litjens G, Toth R, van de Ven W, et al., 2014. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge. Med Image Anal, 18(2):359-373.

[39]Liu KZ, Hu SY, Wu S, et al., 2022. On privacy and personalization in cross-silo federated learning. Proc 36th Int Conf on Neural Information Processing Systems, p.5925-5940.

[40]Liu QD, Dou Q, Yu LQ, et al., 2020. MS-Net: multi-site network for improving prostate segmentation with heterogeneous MRI data. IEEE Trans Med Imag, 39(9):2713-2724.

[41]Liu QD, Chen C, Qin J, et al., 2021. FedDG: federated domain generalization on medical image segmentation via episodic learning in continuous frequency space. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.1013-1023.

[42]Lu W, Hu XX, Wang JD, et al., 2023. FedCLIP: fast generalization and personalization for CLIP in federated learning. Proc Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models, p.1-14.

[43]Lyu HQ, Zhang YX, Wang C, et al., 2023. Federated learning privacy incentives: reverse auctions and negotiations. CAAI Trans Intell Technol, 8(4):1538-1557.

[44]Ma ZZ, Zhao MY, Cai XJ, et al., 2021. Fast-convergent federated learning with class-weighted aggregation. J Syst Archit, 117:102125.

[45]McMahan B, Moore E, Ramage D, et al., 2017. Communication-efficient learning of deep networks from decentralized data. Proc 20th Int Conf on Artificial Intelligence and Statistics, p.1273-1282.

[46]Mohri M, Sivek G, Suresh AT, 2019. Agnostic federated learning. Proc 36th Int Conf on Machine Learning, p.4615-4625.

[47]Nguyen AT, Torr PHS, Lim SN, 2022. FedSR: a simple and effective domain generalization method for federated learning. Proc 36th Int Conf on Neural Information Processing Systems, p.38831-38843.

[48]Oh J, Kim S, Yun SY, 2022. FedBABU: toward enhanced representation for federated image classification. Proc 10th Int Conf on Learning Representations, p.1-29.

[49]Peng XC, Bai QX, Xia XD, et al., 2019. Moment matching for multi-source domain adaptation. Proc IEEE/CVF Int Conf on Computer Vision, p.1406-1415.

[50]Radford A, Kim JW, Hallacy C, et al., 2021. Learning transferable visual models from natural language supervision. Proc 38th Int Conf on Machine Learning, p.8748-8763.

[51]Rieke N, Hancox J, Li WQ, et al., 2020. The future of digital health with federated learning. npj Digit Med, 3(1):119.

[52]Ronneberger O, Fischer P, Brox T, 2015. U-Net: convolutional networks for biomedical image segmentation. Proc 18th Medical Image Computing and Computer-Assisted Intervention, p.234-241.

[53]Schuhmann C, Beaumont R, Vencu R, et al., 2022. LAION-5B: an open large-scale dataset for training next generation image-text models. Proc 36th Int Conf on Neural Information Processing Systems, p.25278-25294.

[54]Shi YX, Yu H, Leung C, 2024. Towards fairness-aware federated learning. IEEE Trans Neur Netw Learn Syst, 35(9):11922-11938.

[55]Smith V, Chiang CK, Sanjabi M, et al., 2017. Federated multi-task learning. Proc 30th Int Conf on Neural Information Processing Systems.

[56]Su SC, Yang MZ, Li B, et al., 2024. Federated adaptive prompt tuning for multi-domain collaborative learning. Proc 38th AAAI Conf on Artificial Intelligence, p.15117-15125.

[57]Sun BC, Saenko K, 2016. Deep CORAL: correlation alignment for deep domain adaptation. European Conf on Computer Vision, p.443-450.

[58]van der Maaten L, Hinton G, 2008. Visualizing data using t-SNE. J Mach Learn Res, 9(86):2579-2605.

[59]Venkateswara H, Eusebio J, Chakraborty S, et al., 2017. Deep hashing network for unsupervised domain adaptation. Proc IEEE Conf on Computer Vision and Pattern Recognition, p.5018-5027.

[60]Wang JY, Liu QH, Liang H, et al., 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. Proc 34th Int Conf on Neural Information Processing Systems, p.7611-7623.

[61]Wei GYZ, Wang F, Shah A, et al., 2023. Dual prompt tuning for domain-aware federated learning. https://arxiv.org/abs/2310.03103

[62]Xu A, Li WQ, Guo PF, et al., 2022. Closing the generalization gap of cross-silo federated medical image segmentation. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.20866-20875.

[63]Xu QW, Zhang RP, Zhang Y, et al., 2021. A Fourier-based framework for domain generalization. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.14383-14392.

[64]Xu QW, Zhang RP, Zhang Y, et al., 2024. Federated adversarial domain hallucination for privacy-preserving domain generalization. IEEE Trans Multim, 26:1-14.

[65]Yuan HL, Morningstar WR, Ning L, et al., 2022. What do we mean by generalization in federated learning? Proc 10th Int Conf on Learning Representations, p.1-26.

[66]Yuan JK, Ma X, Chen DF, et al., 2023. Collaborative semantic aggregation and calibration for federated domain generalization. IEEE Trans Knowl Data Eng, 35(12):12528-12541.

[67]Zeng YC, Chen HX, Lee K, 2021. Improving fairness via federated learning. https://arxiv.org/abs/2110.15545v2

[68]Zhang FD, Kuang K, Chen L, et al., 2023. Federated unsupervised representation learning. Front Inform Technol Electron Eng, 24(8):1181-1193.

[69]Zhang FD, Shuai ZT, Kuang K, et al., 2024. Unified fair federated learning for digital healthcare. Patterns, 5(1):100907.

[70]Zhang HR, Dullerud N, Seyyed-Kalantari L, et al., 2021. An empirical framework for domain generalization in clinical settings. Proc Conf on Health, Inference, and Learning, p.279-290.

[71]Zhang RP, Xu QW, Yao JC, et al., 2023a. Federated domain generalization with generalization adjustment. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.3954-3963.

[72]Zhang RP, Fan ZQ, Xu QW, et al., 2023b. GRACE: a generalized and personalized federated learning method for medical imaging. Proc 26th Int Conf on Medical Image Computing and Computer-Assisted Intervention, p.14-24.

[73]Zhao Y, Li M, Lai LZ, et al., 2018. Federated learning with non-IID data. https://arxiv.org/abs/1806.00582

[74]Zhou KY, Yang JK, Loy CC, et al., 2022. Learning to prompt for vision-language models. Int J Comput Vis, 130(9):2337-2348.

[75]Zhu HY, Xu JJ, Liu SQ, et al., 2021. Federated learning on non-IID data: a survey. Neurocomputing, 465:371-390.

[76]Zhuang WM, Chen C, Lyu LJ, 2023. When foundation model meets federated learning: motivations, challenges, and future directions. https://arxiv.org/abs/2306.15546


Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2025 Journal of Zhejiang University-SCIENCE