Ruipeng ZHANG, Ziqing FAN, Jiangchao YAO, Ya ZHANG, Yanfeng WANG. Fairness-guided federated training for generalization and personalization in cross-silo federated learning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2400279
@article{FITEE.2400279,
title="Fairness-guided federated training for generalization and personalization in cross-silo federated learning",
author="Ruipeng ZHANG and Ziqing FAN and Jiangchao YAO and Ya ZHANG and Yanfeng WANG",
journal="Frontiers of Information Technology & Electronic Engineering",
year="in press",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2400279"
}
%0 Journal Article
%T Fairness-guided federated training for generalization and personalization in cross-silo federated learning
%A Ruipeng ZHANG
%A Ziqing FAN
%A Jiangchao YAO
%A Ya ZHANG
%A Yanfeng WANG
%J Frontiers of Information Technology & Electronic Engineering
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
%R https://doi.org/10.1631/FITEE.2400279
TY  - JOUR
T1  - Fairness-guided federated training for generalization and personalization in cross-silo federated learning
A1  - Ruipeng ZHANG
A1  - Ziqing FAN
A1  - Jiangchao YAO
A1  - Ya ZHANG
A1  - Yanfeng WANG
JO  - Frontiers of Information Technology & Electronic Engineering
SN  - 2095-9184
Y1  - in press
PB  - Zhejiang University Press & Springer
DO  - https://doi.org/10.1631/FITEE.2400279
ER  -
Abstract: Cross-silo federated learning (FL), which benefits from relatively abundant data and rich computing power, is attracting increasing attention due to the significant transformations that foundation models (FMs) are instigating in the AI field. Unlike in cross-device FL, the intensified data heterogeneity in this setting stems mainly from substantial data volumes and distribution shifts across clients, requiring algorithms to balance personalization and generalization comprehensively. In this paper, we aim to address the objective of generalized and personalized federated learning (GPFL) by enhancing the global model's cross-domain generalization capabilities while simultaneously improving the personalization performance of local training clients. By investigating the fairness of performance distribution within the federation system, we explore a new connection between the generalization gap and the aggregation weights established in previous studies, culminating in the fairness-guided federated training for generalization and personalization (FFT-GP) approach. FFT-GP integrates a fairness-aware aggregation (FAA) scheme that minimizes the variance of generalization gaps among training clients with a meta-learning strategy that aligns local training with the global model's feature distribution, thereby balancing generalization and personalization. Our extensive experimental results demonstrate FFT-GP's superior efficacy compared to existing models, showcasing its potential to enhance FL systems across a variety of practical scenarios.
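The abstract does not give the paper's actual FAA weighting rule, so the following is a purely illustrative sketch of the general idea of fairness-aware aggregation: clients with larger generalization gaps receive larger aggregation weights, nudging the global model toward a more uniform gap distribution. The function names and the softmax-over-gaps rule here are assumptions for illustration, not the authors' method.

```python
import numpy as np

def fairness_aware_weights(gen_gaps, temperature=1.0):
    """Map per-client generalization gaps to aggregation weights.

    Illustrative rule (not the paper's): softmax over gaps, so clients
    with larger gaps are upweighted, shrinking gap variance over rounds.
    """
    gaps = np.asarray(gen_gaps, dtype=float)
    logits = gaps / temperature
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()              # weights sum to 1

def aggregate(client_params, weights):
    """Weighted average of per-client parameter vectors (FedAvg-style)."""
    stacked = np.stack(client_params)           # (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)
```

For example, with gaps [0.1, 0.4, 0.2], `fairness_aware_weights` assigns the largest weight to the second client, which then contributes most to the aggregated model.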