Frontiers of Information Technology & Electronic Engineering
ISSN 2095-9184 (print), ISSN 2095-9230 (online)
2025, Vol. 26, No. 1, pp. 42-61
Fairness-guided federated training for generalization and personalization in cross-silo federated learning
Abstract: Cross-silo federated learning (FL), which benefits from relatively abundant data and rich computing power, is attracting increasing attention owing to the significant transformations that foundation models (FMs) are bringing to the artificial intelligence field. Unlike in cross-device FL, the intensified data heterogeneity in this setting stems mainly from the substantial data volumes and distribution shifts across clients, which requires algorithms to balance personalization and generalization comprehensively. In this paper, we address the objective of generalized and personalized federated learning (GPFL) by enhancing the global model's cross-domain generalization capability while simultaneously improving the personalization performance of local training clients. By investigating the fairness of the performance distribution within the federated system, we extend the connection between the generalization gap and aggregation weights established in previous studies, culminating in the fairness-guided federated training for generalization and personalization (FFT-GP) approach. FFT-GP integrates a fairness-aware aggregation (FAA) strategy that minimizes the variance of generalization gaps among training clients with a meta-learning strategy that aligns local training with the global model's feature distribution, thereby balancing generalization and personalization. Extensive experimental results demonstrate the superior efficacy of FFT-GP compared with existing models, showcasing its potential to enhance FL systems in a variety of practical scenarios.
Key words: Generalized and personalized federated learning; Performance distribution fairness; Domain shift
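The abstract describes FAA only at a high level. As a minimal illustrative sketch (not the paper's algorithm), the Python snippet below shows one way aggregation weights could be derived from per-client generalization-gap estimates so that clients with larger gaps receive more weight, nudging the federation toward lower gap variance. The function names (faa_weights, aggregate) and the softmax weighting rule are assumptions introduced here for illustration, not taken from the paper.

```python
import numpy as np

def faa_weights(gaps, temperature=1.0):
    """Map per-client generalization-gap estimates to aggregation weights.

    Clients whose local models generalize worse (larger gap, e.g.,
    validation loss minus training loss) receive larger weight, so the
    next global model is pulled toward reducing the gap variance.
    """
    gaps = np.asarray(gaps, dtype=np.float64)
    scores = np.exp((gaps - gaps.max()) / temperature)  # numerically stable softmax
    return scores / scores.sum()

def aggregate(client_params, weights):
    """Weighted average of flattened client parameter vectors (FedAvg-style)."""
    stacked = np.stack(client_params)              # shape: (num_clients, dim)
    return np.tensordot(weights, stacked, axes=1)  # shape: (dim,)

# Toy usage: three clients with flattened parameter vectors.
params = [np.random.randn(4) for _ in range(3)]
gaps = [0.12, 0.35, 0.20]
w = faa_weights(gaps)
global_params = aggregate(params, w)
print(w, global_params)
```

In this sketch, the temperature parameter controls how strongly the weights deviate from a uniform FedAvg average: a large temperature recovers near-uniform weighting, while a small one concentrates weight on the worst-generalizing clients.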
1School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai 200240, China
2Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China
3Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
DOI: 10.1631/FITEE.2400279
CLC number: TP391.4
On-line Access: 2025-02-10
Received: 2024-04-12
Revision Accepted: 2024-05-14
Crosschecked: 2025-02-18