Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118121
Title: Learning safe, optimal, and real-time flight interaction with deep confidence-enhanced reachability guarantee
Authors: Zhang, Y 
Wang, Y 
Yan, P 
Wen, W 
Issue Date: Dec-2025
Source: IEEE transactions on intelligent transportation systems, Dec. 2025, v. 26, no. 12, p. 23100-23112
Abstract: In the low-altitude economy, ensuring the safe and agile flight of unmanned aerial vehicles (UAVs) in dynamic obstacle environments is essential for expanding interactive applications such as parcel delivery. While deep reinforcement learning (DRL) shows promise for UAV motion planning and control, its trial-and-error exploration often struggles to ensure both agility and safety, especially under uncertain observational noise. This paper therefore proposes a deep confidence-enhanced reachability policy optimization (DCRPO) framework. By integrating safe DRL with nonlinear model predictive control (NMPC), DCRPO achieves high-level safety decisions as well as complex real-time joint planning and control for UAVs. Furthermore, we develop a deep confidence-enhanced reachability guarantee that constructs a set of stochastically forward-reachable planned trajectories under uncertainty, enabling robust collision-probability safety certifications. This safe reachability mechanism adaptively selects belief-space actions from the planned actions to interact with the environment, further enhancing safety and reducing training time. In extensive experiments in which UAVs traverse a fast-moving rectangular gate, the proposed method outperforms state-of-the-art baselines in operational robustness across varying environments. It also significantly reduces overall collision violations and training time, greatly improving both training safety and efficiency. A demonstration video (https://youtu.be/7xkp9U7FSJg) and the source code (https://github.com/ZyyFLY/DCRPO) are also provided.
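To make the abstract's certification idea concrete, the following is a minimal hypothetical sketch (not the authors' DCRPO code): a collision probability is estimated over a set of forward-reachable trajectories generated by perturbing a planned trajectory with Gaussian observation noise, and a plan is deemed safe when that probability falls below a risk threshold. All names, obstacle geometry, and noise parameters here are illustrative assumptions.

```python
# Hypothetical illustration only: Monte Carlo estimate of the collision
# probability of a planned 2-D trajectory under Gaussian observation noise.
import random


def simulate_trajectory(plan, noise_std, rng):
    # Perturb each planned waypoint to model one stochastically
    # forward-reachable realization of the plan.
    return [(x + rng.gauss(0, noise_std), y + rng.gauss(0, noise_std))
            for x, y in plan]


def collision_probability(plan, obstacle_center, obstacle_radius,
                          noise_std=0.05, n_samples=1000, seed=0):
    # Fraction of sampled reachable trajectories that enter the obstacle.
    rng = random.Random(seed)
    ox, oy = obstacle_center
    hits = 0
    for _ in range(n_samples):
        traj = simulate_trajectory(plan, noise_std, rng)
        if any((x - ox) ** 2 + (y - oy) ** 2 <= obstacle_radius ** 2
               for x, y in traj):
            hits += 1
    return hits / n_samples


plan = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]  # straight-line plan
p = collision_probability(plan, obstacle_center=(0.5, 0.3),
                          obstacle_radius=0.1)
# Certify the plan as safe if p stays below a chosen risk threshold.
is_safe = p < 0.05
```

In the paper's framework, such a probabilistic certificate is what allows the safe reachability mechanism to decide whether a planned action may be executed or a more conservative belief-space action should be selected instead.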
Keywords: Deep confidence-enhanced reachability guarantees
Deep reinforcement learning
Joint planning and control
Unmanned aerial vehicles
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on intelligent transportation systems 
ISSN: 1524-9050
EISSN: 1558-0016
DOI: 10.1109/TITS.2025.3616580
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication Y. Zhang, Y. Wang, P. Yan and W. Wen, 'Learning Safe, Optimal, and Real-Time Flight Interaction With Deep Confidence-Enhanced Reachability Guarantee,' in IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 12, pp. 23100-23112, Dec. 2025 is available at https://doi.org/10.1109/TITS.2025.3616580.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: Zhang_Learning_Safe_Optimal.pdf
Description: Pre-Published version
Size: 20.18 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.