Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118214
Title: Homotopic reinforcement learning for distributed consensus control of stochastic Markov jump multi-agent systems
Authors: Yao, Z
Zhu, Q 
Qin, P
Luo, M
Issue Date: Oct-2025
Source: Journal of Supercomputing, Oct. 2025, v. 81, no. 15, 1446
Abstract: This paper investigates the optimized consensus problem for a class of stochastic Markov jump multi-agent systems, with particular attention to the high computational demands that arise in large-scale implementations. First, an error system is constructed from the consensus objective, and a min-max strategy is introduced to transform the consensus problem into an optimized control problem for the error system. Subsequently, a set of parallel coupled game Lyapunov equations is developed to design the consensus controller, whose solution naturally requires significant parallel computing resources. Furthermore, to address the challenges posed by unknown system dynamics and the difficulty of obtaining an initial stabilizing controller, a novel model-free consensus control approach based on homotopic reinforcement learning is proposed. By collecting state and input data, the method computes closed-loop stabilizing controllers and optimized consensus controllers online, in a scalable and distributed manner. Finally, a numerical example demonstrates the effectiveness of the proposed approach and highlights its suitability for real-time implementation on high-performance computing platforms.
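The homotopy idea summarized in the abstract — start from a trivially stable system, deform it gradually toward the true dynamics, and warm-start each controller from the previous one — can be sketched for a single discrete-time linear-quadratic agent. This is a minimal illustration, not the paper's algorithm: all matrices (A, B, Q, R), the homotopy step count, and the iteration counts below are hypothetical placeholders, and this model-based, single-agent loop stands in for the paper's model-free, distributed scheme.

```python
import numpy as np

# Hedged sketch: homotopy-guided policy iteration for one discrete-time
# LQR agent. A, B, Q, R are hypothetical; the paper's method is model-free
# and distributed, whereas this toy loop is model-based and single-agent.
A  = np.array([[1.2, 0.5],
               [0.0, 1.1]])    # unstable target dynamics (assumed)
B  = np.array([[0.0],
               [1.0]])
A0 = 0.5 * np.eye(2)           # trivially stable start of the homotopy path
Q, R = np.eye(2), np.eye(1)

def dlyap(a, q):
    """Solve p = a @ p @ a.T + q by Kronecker vectorization (small n only)."""
    n = a.shape[0]
    m = np.eye(n * n) - np.kron(a, a)
    return np.linalg.solve(m, q.flatten()).reshape(n, n)

def policy_iteration(A_s, K, iters=30):
    """Hewer-style policy iteration, assuming K already stabilizes A_s."""
    for _ in range(iters):
        Acl = A_s - B @ K
        P = dlyap(Acl.T, Q + K.T @ R @ K)                     # evaluation
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A_s)   # improvement
    return K

K = np.zeros((1, 2))            # K = 0 stabilizes A0
for s in np.linspace(0.0, 1.0, 11):
    A_s = (1 - s) * A0 + s * A  # deform the dynamics along the homotopy
    K = policy_iteration(A_s, K)

rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(f"closed-loop spectral radius: {rho:.3f}")  # < 1 means stable
```

The key property exploited here is that the gain solved at homotopy step s remains stabilizing at step s + Δs for a small enough Δs, which is what removes the need for an initial stabilizing controller for the true (possibly unstable) system.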
Keywords: Distributed consensus
Multi-agent systems
Reinforcement learning
Stochastic Markov jump systems
Publisher: Springer
Journal: Journal of Supercomputing
ISSN: 0920-8542
DOI: 10.1007/s11227-025-07885-5
Appears in Collections:Journal/Magazine Article

Open Access Information
Status: Embargoed access
Embargo End Date: 2026-10-30

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.