On-line Access: 2024-11-05
Received: 2024-04-06
Revision Accepted: 2024-09-06
Tao YANG, Xinhao SHI, Qinghan ZENG, Yulin YANG, Cheng XU, Hongzhe LIU. Optimization methods in fully cooperative scenarios: a review of multiagent reinforcement learning[J]. Frontiers of Information Technology & Electronic Engineering, 2024. https://doi.org/10.1631/FITEE.2400259
@article{yang2024optimization,
  title="Optimization methods in fully cooperative scenarios: a review of multiagent reinforcement learning",
  author="Tao YANG and Xinhao SHI and Qinghan ZENG and Yulin YANG and Cheng XU and Hongzhe LIU",
  journal="Frontiers of Information Technology & Electronic Engineering",
  year="2024",
  publisher="Zhejiang University Press & Springer",
  doi="10.1631/FITEE.2400259"
}
Abstract: Multiagent reinforcement learning (MARL) has emerged as a prominent branch of reinforcement learning in recent years, demonstrating strong potential across a wide range of application scenarios. The reward function guides agents to explore their environments and make optimal decisions by establishing evaluation criteria and feedback mechanisms. At the same time, cooperative objectives at the macro level give direction to agents' learning, ensuring that individual behavioral strategies remain aligned with the overarching system goals. The interplay between reward structures and cooperative objectives not only improves the effectiveness of individual agents but also fosters interagent collaboration, providing both momentum and direction for the development of swarm intelligence and the coordinated operation of multiagent systems. This review examines methods for designing reward structures and optimizing cooperative objectives in MARL, together with the most recent advances in these areas. It also reviews the simulation environments used in cooperative scenarios and discusses future trends and potential research directions, offering a forward-looking perspective for subsequent research.
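As a minimal illustrative sketch of the shared-reward setting the abstract refers to (the notation below is assumed for illustration and is not taken from the paper): in a fully cooperative Markov game with N agents, a common reward r(s_t, a_t) ties every agent's policy to the same objective,

\[
J(\pi^1,\dots,\pi^N) \;=\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t)\right],
\qquad a_t = \bigl(a_t^1,\dots,a_t^N\bigr),\; a_t^i \sim \pi^i .
\]

Because all agents maximize the same discounted return J, individual policy updates are aligned with the system-level cooperative objective by construction; much of the reward-design work surveyed in such reviews concerns how to decompose or shape this shared signal into informative per-agent feedback.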