Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117727
| Title: | EIRM-RL: Epistemic Integrity Risk Monitoring Inspired Safe Reinforcement Learning for Trustworthy Autonomous Navigation |
| Authors: | Zhang, Y; Wang, Y; Wen, W |
| Issue Date: | 15-Jan-2026 |
| Source: | IEEE Internet of Things Journal, 15 Jan. 2026, v. 13, no. 2, p. 3500-3512 |
| Abstract: | Reinforcement learning (RL) has shown great potential for autonomous navigation in Internet of Things (IoT) environments, where diverse and changing uncertainties pose significant challenges to safe real-world deployment. Existing safe RL methods typically employ heuristic constraints and neglect the combined impact of multiple uncertainty sources, which reduces robustness and interpretability. Drawing on concepts from global navigation satellite system (GNSS) integrity monitoring, this paper proposes an epistemic integrity risk monitoring reinforcement learning (EIRM-RL) framework to enable trustworthy autonomous navigation under uncertainty. EIRM-RL extends the GNSS protection-level concept to RL through an assembled world model that quantifies and incorporates sensor noise, systematic bias, and epistemic uncertainty. Furthermore, the framework continuously monitors a dynamic epistemic risk probability, which is incorporated into policy optimization as an adaptive safety constraint via Lagrangian duality. This enables the agent to proactively avoid hazards and to balance safety and performance effectively, even in highly uncertain environments. Extensive experiments demonstrate that EIRM-RL achieves superior success rates, collision avoidance, and robustness compared with state-of-the-art safe RL methods, while maintaining high efficiency. |
| Keywords: | Epistemic uncertainty; Integrity risk monitoring; Reinforcement learning (RL); Trustworthy autonomous navigation; Unmanned ground vehicle (UGV) |
| Publisher: | Institute of Electrical and Electronics Engineers |
| Journal: | IEEE Internet of Things Journal |
| EISSN: | 2327-4662 |
| DOI: | 10.1109/JIOT.2025.3633765 |
| Rights: | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication Y. Zhang, Y. Wang and W. Wen, "EIRM-RL: Epistemic Integrity Risk Monitoring Inspired Safe Reinforcement Learning for Trustworthy Autonomous Navigation," in IEEE Internet of Things Journal, vol. 13, no. 2, pp. 3500-3512, 15 Jan. 2026 is available at https://doi.org/10.1109/JIOT.2025.3633765. |
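The abstract's core loop pairs an uncertainty-aware risk monitor with a Lagrangian-relaxed policy objective. The paper's actual formulation (its world model, protection-level derivation, and dual update) is not reproduced here; the sketch below only illustrates the general pattern, and every name in it (`risk_probability`, `dual_update`, `ALERT_LIMIT`, the Gaussian-tail risk model) is a hypothetical stand-in, not the authors' method.

```python
# Illustrative sketch, NOT the EIRM-RL implementation: an ensemble-disagreement
# risk estimate feeds a Lagrange multiplier that penalizes risky behavior.
import math
import numpy as np

# Hypothetical tolerated risk budget d for the constraint "avg risk <= d",
# loosely analogous to a GNSS integrity alert limit.
ALERT_LIMIT = 0.05

def risk_probability(ensemble_preds: np.ndarray, hazard_threshold: float) -> float:
    """Epistemic risk as the Gaussian tail probability beyond a hazard threshold.

    ensemble_preds holds one state's predictions from an ensemble world model,
    shape (n_models,). The ensemble mean is the nominal estimate; the ensemble
    spread stands in for epistemic uncertainty, so more disagreement means a
    fatter tail and a higher monitored risk.
    """
    mu = float(ensemble_preds.mean())
    sigma = float(ensemble_preds.std()) + 1e-8  # avoid division by zero
    z = (hazard_threshold - mu) / sigma
    # P(value > hazard_threshold) under the fitted Gaussian.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def dual_update(lmbda: float, avg_risk: float, lr: float = 0.1) -> float:
    """One gradient-ascent step on the multiplier for constraint avg_risk <= ALERT_LIMIT."""
    return max(0.0, lmbda + lr * (avg_risk - ALERT_LIMIT))

def lagrangian_objective(reward: float, risk: float, lmbda: float) -> float:
    """Penalized return the policy would maximize: reward - lambda * risk."""
    return reward - lmbda * risk

# Toy usage: while the monitored risk exceeds the budget, the multiplier grows,
# making the policy objective increasingly risk-averse.
preds = np.array([0.9, 1.1, 1.0, 0.95])
risk = risk_probability(preds, hazard_threshold=1.0)
lmbda = 0.0
for _ in range(5):
    lmbda = dual_update(lmbda, avg_risk=risk)
```

The dual-ascent step is the standard Lagrangian-relaxation pattern for constrained RL: the multiplier rises only while the constraint is violated and is clamped at zero otherwise, so the safety penalty adapts to the currently monitored risk rather than being a fixed heuristic weight.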
| Appears in Collections: | Journal/Magazine Article |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Zhang_EIRM-RL_Epistemic_Integrity.pdf | Pre-Published version | 23.03 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.