CLC number: TP13
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2021-01-10
Citations: Bibtex RefMan EndNote GB/T7714
ORCID: https://orcid.org/0000-0001-6264-2955
Xinxing LI, Lele XI, Wenzhong ZHA, Zhihong PENG. Minimax Q-learning design for H∞ control of linear discrete-time systems[J]. Frontiers of Information Technology & Electronic Engineering, 2022, 23(3): 438-451.
@article{Li2022minimax,
title="Minimax Q-learning design for H∞ control of linear discrete-time systems",
author="Xinxing LI and Lele XI and Wenzhong ZHA and Zhihong PENG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="23",
number="3",
pages="438-451",
year="2022",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2000446"
}
%0 Journal Article
%T Minimax Q-learning design for H∞ control of linear discrete-time systems
%A Xinxing LI
%A Lele XI
%A Wenzhong ZHA
%A Zhihong PENG
%J Frontiers of Information Technology & Electronic Engineering
%V 23
%N 3
%P 438-451
%@ 2095-9184
%D 2022
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2000446
TY - JOUR
T1 - Minimax Q-learning design for H∞ control of linear discrete-time systems
A1 - Xinxing LI
A1 - Lele XI
A1 - Wenzhong ZHA
A1 - Zhihong PENG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 23
IS - 3
SP - 438
EP - 451
SN - 2095-9184
Y1 - 2022
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2000446
ER -
Abstract: The H∞ control method is an effective approach for attenuating the effect of disturbances on practical systems, but it is difficult to obtain the H∞ controller due to the nonlinear Hamilton–Jacobi–Isaacs equation, even for linear systems. This study deals with the design of an H∞ controller for linear discrete-time systems. To solve the related game algebraic Riccati equation (GARE), a novel model-free minimax Q-learning method is developed on the basis of an offline policy iteration algorithm, which is shown to be equivalent to Newton's method for solving the GARE. The proposed minimax Q-learning method, which employs off-policy reinforcement learning, learns the optimal control policies for the controller and the disturbance online, using only the state samples generated by the implemented behavior policies. Unlike existing Q-learning methods, the proposed method adopts a novel gradient-based policy improvement scheme. We prove that the minimax Q-learning method converges to the saddle solution under initially admissible control policies and an appropriate positive learning rate, provided that certain persistence of excitation (PE) conditions are satisfied. Moreover, the PE conditions can be easily met by choosing appropriate behavior policies containing certain excitation noises, without introducing any excitation noise bias. In the simulation study, we apply the proposed minimax Q-learning method to design an H∞ load-frequency controller for an electrical power system generator subject to load disturbance, and the simulation results indicate that the obtained H∞ load-frequency controller has good disturbance rejection performance.
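To make the structure of the method concrete, the following Python sketch illustrates data-based Q-learning for a linear discrete-time zero-sum game of the kind described in the abstract: a quadratic game Q-function is fitted by least squares from state samples generated by noisy behavior policies, and the controller and disturbance gains are then updated from the learned Q-function. This is a minimal illustration, not the authors' algorithm; in particular, it uses a direct saddle-point policy update rather than the paper's gradient-based improvement scheme, and the system matrices A, B, E, the weighting matrices, the attenuation level gamma, the noise levels, and the iteration counts are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) system x_{k+1} = A x_k + B u_k + E w_k; not from the paper.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[1.0], [0.0]])
Qx, Ru, gamma2 = np.eye(2), np.eye(1), 4.0   # utility x'Qx + u'Ru - gamma^2 w'w

n, m, q = 2, 1, 1                            # state, control, disturbance dimensions
nz = n + m + q
IU = np.triu_indices(nz)
Wt = np.where(IU[0] == IU[1], 1.0, 2.0)      # weights so that z'Hz = phi(z) @ theta

def phi(z):
    # Quadratic basis: theta holds the upper-triangular entries of symmetric H.
    return Wt * np.outer(z, z)[IU]

def unvec(theta):
    # Rebuild symmetric H from its upper-triangular parameterization theta.
    Hs = np.zeros((nz, nz))
    Hs[IU] = theta
    return Hs + Hs.T - np.diag(np.diag(Hs))

K = np.zeros((m, n))                         # controller policy  u = -K x
L = np.zeros((q, n))                         # disturbance policy w =  L x

for it in range(20):                         # policy-iteration-style outer loop
    Phi, y = [], []
    x = rng.standard_normal(n)
    for k in range(200):                     # data from behavior policies with
        u = -K @ x + 0.1 * rng.standard_normal(m)   # excitation noise (for PE)
        w = L @ x + 0.1 * rng.standard_normal(q)
        r = x @ Qx @ x + u @ Ru @ u - gamma2 * (w @ w)
        x1 = A @ x + B @ u + E @ w
        z = np.concatenate([x, u, w])
        z1 = np.concatenate([x1, -K @ x1, L @ x1])  # target-policy actions at x_{k+1}
        # Bellman equation of the game Q-function: Q(z) = r + Q(z1)
        Phi.append(phi(z) - phi(z1))
        y.append(r)
        x = x1
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = unvec(theta)
    # Saddle-point policy update from the action blocks of H (the paper instead
    # performs a gradient-based policy improvement step).
    H_aa, H_ax = H[n:, n:], H[n:, :n]
    KL = -np.linalg.solve(H_aa, H_ax)        # [u; w] = KL @ x at the saddle point
    K, L = -KL[:m, :], KL[m:, :]

print("controller gain K:\n", K)
print("disturbance gain L:\n", L)

Because only measured states and the utility signal enter the least-squares fit, the loop above is model-free in the same sense as the abstract: A, B, and E appear only in the simulated data generation, never in the learning update.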