CLC number:
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 0000-00-00
Ruipeng ZHANG, Ziqing FAN, Jiangchao YAO, Ya ZHANG, Yanfeng WANG. Fairness-guided federated training for generalization and personalization in cross-silo federated learning[J]. Frontiers of Information Technology & Electronic Engineering, 2024. https://doi.org/10.1631/FITEE.2400279
@article{Zhang2024FFTGP,
title="Fairness-guided federated training for generalization and personalization in cross-silo federated learning",
author="Ruipeng ZHANG and Ziqing FAN and Jiangchao YAO and Ya ZHANG and Yanfeng WANG",
journal="Frontiers of Information Technology \& Electronic Engineering",
year="2024",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2400279"
}
%0 Journal Article
%T Fairness-guided federated training for generalization and personalization in cross-silo federated learning
%A Ruipeng ZHANG
%A Ziqing FAN
%A Jiangchao YAO
%A Ya ZHANG
%A Yanfeng WANG
%J Frontiers of Information Technology & Electronic Engineering
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2400279
TY - JOUR
T1 - Fairness-guided federated training for generalization and personalization in cross-silo federated learning
A1 - Ruipeng ZHANG
A1 - Ziqing FAN
A1 - Jiangchao YAO
A1 - Ya ZHANG
A1 - Yanfeng WANG
JO - Frontiers of Information Technology & Electronic Engineering
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2400279
ER -
Abstract: Cross-silo federated learning (FL), which benefits from relatively abundant data and rich computing power, is drawing increasing attention due to the significant transformations that foundation models (FMs) are bringing to the AI field. Unlike in cross-device FL, the intensified data heterogeneity in this setting stems mainly from substantial data volumes and distribution shifts across clients, which requires algorithms to balance personalization and generalization comprehensively. In this paper, we aim to address the objective of generalized and personalized federated learning (GPFL) by enhancing the global model’s cross-domain generalization capability while simultaneously improving the personalization performance of local training clients. By investigating the fairness of the performance distribution within the federation system, we uncover a new connection between the generalization gap and the aggregation weights established in previous studies, culminating in the fairness-guided federated training for generalization and personalization (FFT-GP) approach. FFT-GP integrates a fairness-aware aggregation (FAA) strategy that minimizes the variance of generalization gaps among training clients with a meta-learning strategy that aligns local training with the global model’s feature distribution, thereby balancing generalization and personalization. Our extensive experimental results demonstrate that FFT-GP outperforms existing methods, showcasing its potential to enhance FL systems across a variety of practical scenarios.
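The abstract describes fairness-aware aggregation as reweighting clients so that the variance of their generalization gaps shrinks. A minimal, illustrative sketch of that idea (not the exact FFT-GP formula, which is defined in the paper) is to give clients with larger gaps larger aggregation weights via a softmax, then average client parameters with those weights; the function names and the `temperature` parameter are hypothetical:

```python
import numpy as np

def fairness_aware_weights(gen_gaps, temperature=1.0):
    """Toy fairness-aware aggregation: clients with larger generalization
    gaps receive larger weights, pulling the per-client gap distribution
    toward lower variance. Illustrative only; not the paper's FAA rule."""
    gaps = np.asarray(gen_gaps, dtype=float)
    scores = np.exp(gaps / temperature)   # softmax over gaps
    return scores / scores.sum()

def aggregate(client_params, weights):
    """FedAvg-style weighted average of client parameter vectors."""
    stacked = np.stack(client_params)     # shape: (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)
```

Under this sketch, a client whose local model generalizes poorly contributes more to the next global model, which tends to equalize gaps across the federation over successive rounds.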