Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/114195
Title: A test oracle for reinforcement learning software based on Lyapunov stability control theory
Authors: Zhang, S; Song, H; Wang, Q; Shen, H; Pei, Y
Issue Date: 2025
Source: In Proceedings: 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE 2025), Ottawa, Ontario, Canada, 27 April – 3 May 2025, p. 502-513. Danvers, MA: The Institute of Electrical and Electronics Engineers, Inc., 2025
Abstract: Reinforcement Learning (RL) has gained significant attention in recent years. As RL software becomes more complex and infiltrates critical application domains, ensuring its quality and correctness becomes increasingly important. An indispensable aspect of software quality/correctness assurance is testing. However, testing RL software faces unique challenges compared to testing traditional software, owing to the difficulty of defining the correctness of its outputs. This is the RL test oracle problem. Current approaches to testing RL software often rely on human oracles, i.e., human experts convened to judge the correctness of RL software outputs. This approach depends heavily on the availability and quality (experience, subjective state, etc.) of the human experts, and it cannot be fully automated. In this paper, we propose a novel approach to designing test oracles for RL software that leverages Lyapunov stability control theory. By incorporating Lyapunov stability concepts to guide RL training, we hypothesize that correctly implemented RL software should output an agent that respects Lyapunov stability control theory. Based on this heuristic, we propose LPEA(ϑ, θ), an oracle grounded in Lyapunov stability control theory, for testing RL software. We conduct extensive experiments over representative RL algorithms and RL software bugs to evaluate the proposed oracle. The results show that it outperforms the human oracle on most metrics. In particular, LPEA(ϑ = 100%, θ = 75%) outperforms the human oracle by 53.6%, 50%, 18.4%, 34.8%, 18.4%, 127.8%, 60.5%, 38.9%, and 31.7% respectively on accuracy, precision, recall, F1 score, true positive rate, true negative rate, false positive rate, false negative rate, and ROC curve's AUC; and LPEA(ϑ = 100%, θ = 50%) outperforms the human oracle by 48.2%, 47.4%, 10.5%, 29.1%, 10.5%, 127.8%, 60.5%, 22.2%, and 26.0% respectively on the same metrics.
Keywords: Reinforcement learning; Test oracle
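The abstract gives only the high-level idea behind LPEA(ϑ, θ), not its precise verdict rule. The sketch below is a minimal, hypothetical Python reading of such an oracle, assuming a hand-supplied quadratic Lyapunov candidate V(s) = sᵀPs, that θ is the per-episode fraction of steps on which V must decrease, and that ϑ is the fraction of evaluation episodes that must satisfy that per-episode condition. All names, parameters, and the pass/fail rule here are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def quadratic_lyapunov(state, P):
    """Evaluate the quadratic Lyapunov candidate V(s) = s^T P s (assumed form)."""
    s = np.asarray(state, dtype=float)
    return float(s @ P @ s)

def decreasing_step_fraction(v_trace):
    """Fraction of consecutive steps in one episode where V strictly decreases."""
    diffs = np.diff(np.asarray(v_trace, dtype=float))
    return float(np.mean(diffs < 0.0)) if diffs.size else 0.0

def lpea_verdict(v_traces, vartheta=1.0, theta=0.75):
    """Hypothetical LPEA-style verdict: pass ("implementation looks correct")
    if at least a `vartheta` fraction of evaluation episodes are stable,
    where an episode counts as stable if at least a `theta` fraction of its
    steps show a decrease in the Lyapunov value."""
    stable = [decreasing_step_fraction(trace) >= theta for trace in v_traces]
    return sum(stable) / len(stable) >= vartheta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = np.eye(2)  # identity as a trivial positive-definite matrix
    # Synthetic "trained agent" rollouts: states contract toward the origin
    # with small noise, so V should decrease on most steps.
    episodes = []
    for _ in range(20):
        s = rng.normal(size=2)
        trace = []
        for _ in range(40):
            trace.append(quadratic_lyapunov(s, P))
            s = 0.9 * s + 0.001 * rng.normal(size=2)
        episodes.append(trace)
    print(lpea_verdict(episodes, vartheta=1.0, theta=0.75))  # True for these rollouts
```

On the synthetic contracting rollouts above the verdict is True; a buggy training loop that failed to drive states toward the equilibrium would lower the per-episode decrease fractions below θ and flip the verdict, which is the intuition the paper's hypothesis rests on.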
Publisher: The Institute of Electrical and Electronics Engineers, Inc.
ISBN: 979-8-3315-0569-1
DOI: 10.1109/ICSE55347.2025.00074
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication S. Zhang, H. Song, Q. Wang, H. Shen and Y. Pei, "A Test Oracle for Reinforcement Learning Software Based on Lyapunov Stability Control Theory," 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE), Ottawa, ON, Canada, 2025, pp. 502-513 is available at https://doi.org/10.1109/ICSE55347.2025.00074.
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Zhang_Test_Oracle_Reinforcement.pdf | Pre-Published version | 1.3 MB | Adobe PDF |