
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

Multi-agent differential game based cooperative synchronization control using a data-driven method

Abstract: This paper studies the multi-agent differential game problem and its application to cooperative synchronization control. A systematized formulation and analysis method for the multi-agent differential game is proposed, and a data-driven methodology based on the reinforcement learning (RL) technique is given. First, it is pointed out that, because of the coupling of networked interactions, typical distributed controllers do not in general lead to a global Nash equilibrium of the differential game. Second, to address this issue, an alternative local Nash solution is derived by defining the best response concept, and the problem is decomposed into local differential games. An off-policy RL algorithm using neighboring interactive data is constructed to update the controller without requiring a system model, and the stability and robustness properties are proved. Third, to further resolve the dilemma that distributed controllers cannot attain the global Nash equilibrium, another differential game configuration is investigated based on modified coupling index functions. In contrast to the previous case, the distributed solution achieves a global Nash equilibrium while guaranteeing stability. An equivalent parallel RL method is constructed corresponding to this Nash solution. Finally, simulation results illustrate the effectiveness of the learning process and the stability of the synchronization control.

Key words: Multi-agent system; Differential game; Synchronization control; Data-driven; Reinforcement learning
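The abstract describes an off-policy RL algorithm that updates a controller from interaction data without a system model. The paper develops this for networked agents; as a self-contained illustration of the underlying idea only, the sketch below applies model-free Q-learning-style policy iteration to a single hypothetical linear agent. The system matrices `A`, `B`, the costs `Qc`, `Rc`, the initial stabilizing gain `K`, and all sample sizes are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete-time linear agent (double-integrator-like); the
# learner never uses A and B directly, only the sampled transitions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)          # state cost weight (assumed)
Rc = np.array([[0.1]])  # input cost weight (assumed)

n, m = 2, 1
p = n + m               # dimension of z = [x; u]

def phi(z):
    # Quadratic basis: unique entries of z z^T, so phi(z) @ theta = z' H z.
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(p) for j in range(i, p)])

def unpack_H(theta):
    # Rebuild the symmetric Q-function matrix H from its unique entries.
    H = np.zeros((p, p))
    k = 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[k]
            k += 1
    return H

K = np.array([[1.0, 1.5]])  # initial stabilizing gain (assumed)

for _ in range(10):
    # Off-policy data: behavior input is the current policy plus noise,
    # while the Bellman equation is evaluated on the noise-free policy.
    Phi, c = [], []
    x = rng.standard_normal(n)
    for _ in range(200):
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploratory action
        xn = A @ x + B @ u                          # observed transition
        z = np.concatenate([x, u])
        zn = np.concatenate([xn, -K @ xn])          # target-policy action
        Phi.append(phi(z) - phi(zn))                # temporal-difference basis
        c.append(x @ Qc @ x + u @ Rc @ u)           # observed stage cost
        x = xn
    # Policy evaluation: least-squares fit of the Q-function parameters.
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = unpack_H(theta)
    # Policy improvement: greedy gain from the learned Q-function.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

# Model-based reference for comparison only: fixed-point iteration on the
# discrete-time Riccati equation gives the optimal gain K_star.
P = np.eye(n)
for _ in range(500):
    K_star = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
    P = Qc + A.T @ P @ A - A.T @ P @ B @ K_star
```

The learned gain `K` converges to the Riccati gain `K_star` even though only sampled transitions enter the regression, which is the sense in which the update is data-driven; the paper's contribution is extending such updates to coupled, networked cost functions where neighbors' data enter each agent's regression.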

Chinese Summary (translated): Multi-agent differential game based data-driven cooperative synchronization control

SHI Yu1, HUA Yongzhao2, YU Jianglong1, DONG Xiwang1,2, REN Zhang1
1School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
2Institute of Artificial Intelligence, Beihang University, Beijing 100191, China

Abstract: This paper studies the multi-agent differential game problem and its application to cooperative synchronization control. A systematized formulation and analysis method for the multi-agent differential game is proposed, together with a data-driven method based on the reinforcement learning technique. First, it is shown that, owing to the coupling of networked interactions, typical distributed controllers cannot fully guarantee the global Nash equilibrium of the differential game. Second, by defining the best response concept, the problem is decomposed into local differential games and a local Nash equilibrium solution is given. An off-policy reinforcement learning algorithm requiring no system model information is constructed, which uses online neighboring interaction data to optimize and update the controller, and the stability and robustness of the controller are proved. Furthermore, a differential game model based on modified coupling index functions and an equivalent reinforcement learning solution method are proposed. Compared with existing studies, this model resolves the coupling of the information required by the agents and achieves global Nash equilibrium and stable control under a distributed framework. An equivalent parallel reinforcement learning method corresponding to this Nash solution is constructed. Finally, simulation results verify the effectiveness of the learning process and the stability of synchronization control.

Key words: Multi-agent system; Differential game; Synchronization control; Data-driven; Reinforcement learning


DOI: 10.1631/FITEE.2200001
CLC number: TP273


On-line Access: 2022-07-21
Received: 2022-01-03
Revision Accepted: 2022-07-21
Crosschecked: 2022-04-21

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952276; Fax: +86-571-87952331; E-mail: jzus@zju.edu.cn
Copyright © 2000~ Journal of Zhejiang University-SCIENCE