
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

Federated mutual learning: a collaborative machine learning method for heterogeneous data, models, and objectives

Abstract: Federated learning (FL) is a deep learning technique that enables clients to collaboratively train a shared model while keeping their data decentralized. However, FL faces unique challenges arising from heterogeneity: clients differ in data distribution, computational capability, and application scenario, and therefore need customized models and objectives. Existing methods such as FedAvg cannot effectively accommodate these client-specific needs. To address the challenges arising from heterogeneity in FL, we first give an overview of the heterogeneities in data, model, and objective (DMO). We then propose a novel framework, federated mutual learning (FML), which enables each client to train a personalized model that accounts for data heterogeneity (DH). To handle model heterogeneity (MH), a "meme model" serves as an intermediary between the personalized and global models, and a knowledge distillation technique called deep mutual learning (DML) transfers knowledge between these two models on local data. To overcome objective heterogeneity (OH), only part of the model parameters is shared globally, while the personalized model remains task-specific and is improved through mutual learning with the meme model. We evaluate FML on the DMO heterogeneities through experiments and compare it with other commonly used FL methods in similar scenarios. The results demonstrate that FML outperforms these methods and effectively addresses the DMO challenges encountered in the FL setting.

Key words: Federated learning; Knowledge distillation; Privacy preserving; Heterogeneous environment
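To make the mutual-learning step concrete, below is a minimal sketch of one client's local update in PyTorch, under a standard classification setting. The function name, the loss weights alpha and beta, and the data loader are illustrative assumptions, not the authors' released implementation: each model fits the labels with cross-entropy while distilling from the other's softened predictions via a KL-divergence term, which is the DML idea named in the abstract.

import torch
import torch.nn.functional as F

def local_mutual_update(personal, meme, loader, alpha=0.5, beta=0.5, lr=0.01):
    # One local pass of deep mutual learning (DML): the personalized model
    # and the meme model are trained jointly on the client's private data.
    opt_p = torch.optim.SGD(personal.parameters(), lr=lr)
    opt_m = torch.optim.SGD(meme.parameters(), lr=lr)
    for x, y in loader:
        logits_p, logits_m = personal(x), meme(x)
        # Each model mimics the other's (detached) prediction distribution.
        kl_p = F.kl_div(F.log_softmax(logits_p, dim=1),
                        F.softmax(logits_m.detach(), dim=1),
                        reduction="batchmean")
        kl_m = F.kl_div(F.log_softmax(logits_m, dim=1),
                        F.softmax(logits_p.detach(), dim=1),
                        reduction="batchmean")
        # Supervised loss plus mutual-distillation loss for each model.
        loss_p = alpha * F.cross_entropy(logits_p, y) + (1 - alpha) * kl_p
        loss_m = beta * F.cross_entropy(logits_m, y) + (1 - beta) * kl_m
        opt_p.zero_grad(); loss_p.backward(); opt_p.step()
        opt_m.zero_grad(); loss_m.backward(); opt_m.step()
    return meme  # only the meme model is uploaded for FedAvg-style averaging

Because only the meme models are averaged on the server, clients are free to keep personalized models with different architectures, which is how the framework accommodates MH while still sharing knowledge globally.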

Chinese Summary (translated): Federated mutual learning: a collaborative machine learning method for heterogeneous data, models, and objectives

Tao Shen1, Jie Zhang2, Xinkang Jia2, Fengda Zhang1, Zheqi Lv1, Kun Kuang1, Chao Wu3, Fei Wu1
1College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
2School of Software Technology, Zhejiang University, Hangzhou 310027, China
3School of Public Affairs, Zhejiang University, Hangzhou 310027, China
Abstract: Federated learning (FL) is a new technique in deep learning that allows clients to collaboratively train a model while keeping their private data local. However, because each client's data distribution, computing power, and scenario differ, FL faces the challenge of heterogeneous client environments. Existing methods such as FedAvg cannot effectively meet each client's customized needs. To address the heterogeneity challenges in FL, this paper first details the three main sources of heterogeneity, namely data, model, and objective (DMO), and then proposes a novel federated mutual learning (FML) framework. The framework enables each client to train a personalized model that accounts for data heterogeneity (DH). For model heterogeneity (MH), a "meme model" is introduced as an intermediary between the personalized and global models, and the knowledge distillation technique of deep mutual learning (DML) is adopted to transfer knowledge between the two heterogeneous models. For objective heterogeneity (OH), only part of the model parameters is shared, a task-specific personalized model is designed, and the meme model is used for mutual learning. Experiments evaluate FML's performance in addressing the DMO heterogeneities and compare it with other common FL methods in similar scenarios. The results show that FML handles the DMO problems in the FL environment well and outperforms the other methods.

Key words (translated): Federated learning; Knowledge distillation; Privacy preserving; Heterogeneous environment



DOI: 10.1631/FITEE.2300098
CLC number: TP39


On-line Access: 2023-10-27
Received: 2023-02-20
Revision Accepted: 2023-10-27
Crosschecked: 2023-04-07
