Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113865
Title: Toward energy-efficient spike-based deep reinforcement learning with temporal coding
Authors: Zhang, M
Wang, S
Wu, J 
Wei, W
Zhang, D
Zhou, Z
Wang, S
Zhang, F
Yang, Y
Issue Date: May-2025
Source: IEEE computational intelligence magazine, May 2025, v. 20, no. 2, p. 45-57
Abstract: Deep reinforcement learning (DRL) facilitates efficient interaction with complex environments by enabling continuous optimization strategies and providing agents with autonomous learning abilities. However, traditional DRL methods often require large-scale neural networks and extensive computational resources, which limits their applicability in power-sensitive and resource-constrained edge environments, such as mobile robots and drones. To overcome these limitations, we leverage the energy-efficient properties of brain-inspired spiking neural networks (SNNs) to develop a novel spike-based DRL framework, referred to as Spike-DRL. Unlike traditional SNN-based reinforcement learning methods, Spike-DRL incorporates the energy-efficient time-to-first-spike (TTFS) encoding scheme, where information is encoded through the precise timing of a single spike. This TTFS-based method allows Spike-DRL to work in a sparse, event-driven manner, significantly reducing energy consumption. In addition, to improve the deployment capability of Spike-DRL in resource-constrained environments, a lightweight strategy for quantizing synaptic weights into low-bit representations is introduced, significantly reducing memory usage and computational complexity. Extensive experiments have been conducted to evaluate the performance of the proposed Spike-DRL, and the results show that our method achieves competitive performance with higher energy efficiency and lower memory requirements. This work presents a biologically inspired model that is well suited for real-time decision-making and autonomous learning in power-sensitive and resource-limited edge environments.
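The two energy-saving ideas in the abstract, TTFS encoding (each value carried by the timing of a single spike) and low-bit weight quantization, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the linear time mapping and the uniform symmetric quantizer are illustrative stand-ins, and the function names are hypothetical.

```python
import numpy as np

def ttfs_encode(x, t_max=100):
    """Time-to-first-spike encoding: larger input values spike earlier.
    Assumes inputs normalized to [0, 1]; each value maps to one spike
    time in [0, t_max] via a simple linear rule (illustrative only)."""
    x = np.clip(x, 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)

def quantize_weights(w, bits=4):
    """Uniform symmetric quantization of synaptic weights to low-bit
    integers; returns the integer codes and the scale for dequantizing."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

# A strong input fires immediately; a zero input fires last.
times = ttfs_encode(np.array([1.0, 0.5, 0.0]))
q, scale = quantize_weights(np.array([0.5, -0.25, 0.125]), bits=4)
```

Because each input produces at most one spike, downstream computation is event-driven and sparse; the 4-bit codes shrink weight storage by roughly 8x relative to 32-bit floats, at the cost of a bounded rounding error of half a quantization step.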
Publisher: Institute of Electrical and Electronics Engineers Inc.
Journal: IEEE computational intelligence magazine 
ISSN: 1556-603X
EISSN: 1556-6048
DOI: 10.1109/MCI.2025.3541572
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication M. Zhang et al., "Toward Energy-Efficient Spike-Based Deep Reinforcement Learning With Temporal Coding," in IEEE Computational Intelligence Magazine, vol. 20, no. 2, pp. 45-57, May 2025 is available at https://doi.org/10.1109/MCI.2025.3541572.
Appears in Collections: Journal/Magazine Article

Files in This Item:
Zhang_Toward_Energy_Efficient.pdf — Pre-Published version, 6.57 MB, Adobe PDF
Open Access Information
Status open access
File Version Final Accepted Manuscript
Access

SCOPUS Citations: 1 (as of Dec 19, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.