Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113865
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Zhang, M | -
dc.creator | Wang, S | -
dc.creator | Wu, J | -
dc.creator | Wei, W | -
dc.creator | Zhang, D | -
dc.creator | Zhou, Z | -
dc.creator | Wang, S | -
dc.creator | Zhang, F | -
dc.creator | Yang, Y | -
dc.date.accessioned | 2025-06-26T07:11:13Z | -
dc.date.available | 2025-06-26T07:11:13Z | -
dc.identifier.issn | 1556-603X | -
dc.identifier.uri | http://hdl.handle.net/10397/113865 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication M. Zhang et al., "Toward Energy-Efficient Spike-Based Deep Reinforcement Learning With Temporal Coding," in IEEE Computational Intelligence Magazine, vol. 20, no. 2, pp. 45-57, May 2025 is available at https://doi.org/10.1109/MCI.2025.3541572. | en_US
dc.subject | | en_US
dc.title | Toward energy-efficient spike-based deep reinforcement learning with temporal coding | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 45 | -
dc.identifier.epage | 57 | -
dc.identifier.volume | 20 | -
dc.identifier.issue | 2 | -
dc.identifier.doi | 10.1109/MCI.2025.3541572 | -
dcterms.abstract | Deep reinforcement learning (DRL) facilitates efficient interaction with complex environments by enabling continuous optimization strategies and providing agents with autonomous learning abilities. However, traditional DRL methods often require large-scale neural networks and extensive computational resources, which limits their applicability in power-sensitive and resource-constrained edge environments, such as mobile robots and drones. To overcome these limitations, we leverage the energy-efficient properties of brain-inspired spiking neural networks (SNNs) to develop a novel spike-based DRL framework, referred to as Spike-DRL. Unlike traditional SNN-based reinforcement learning methods, Spike-DRL incorporates the energy-efficient time-to-first-spike (TTFS) encoding scheme, where information is encoded through the precise timing of a single spike. This TTFS-based method allows Spike-DRL to work in a sparse, event-driven manner, significantly reducing energy consumption. In addition, to improve the deployment capability of Spike-DRL in resource-constrained environments, a lightweight strategy for quantizing synaptic weights into low-bit representations is introduced, significantly reducing memory usage and computational complexity. Extensive experiments have been conducted to evaluate the performance of the proposed Spike-DRL, and the results show that our method achieves competitive performance with higher energy efficiency and lower memory requirements. This work presents a biologically inspired model that is well suited for real-time decision-making and autonomous learning in power-sensitive and resource-limited edge environments. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE computational intelligence magazine, May 2025, v. 20, no. 2, p. 45-57 | -
dcterms.isPartOf | IEEE computational intelligence magazine | -
dcterms.issued | 2025-05 | -
dc.identifier.scopus | 2-s2.0-105003671005 | -
dc.identifier.eissn | 1556-6048 | -
dc.description.validate | 202506 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a3776 | en_US
dc.identifier.SubFormID | 51026 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
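Note on the methods described in the abstract above: Spike-DRL's time-to-first-spike (TTFS) encoding represents a value by when a neuron fires its single spike, and its deployment strategy quantizes synaptic weights to low-bit representations. The snippet below is a minimal Python/NumPy sketch of those two ideas only; the linear latency mapping, the uniform quantizer, and the function names (ttfs_encode, quantize_weights) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ttfs_encode(x, t_max=100):
    """Time-to-first-spike encoding (illustrative linear latency map):
    each input emits exactly one spike, and larger values fire earlier."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)   # assume inputs normalized to [0, 1]
    return np.round((1.0 - x) * t_max).astype(int)      # x=1 -> t=0 (earliest), x=0 -> t=t_max

def quantize_weights(w, bits=4):
    """Uniform low-bit weight quantization (illustrative, not the paper's scheme):
    snaps float weights onto 2**bits signed integer levels, then rescales."""
    w = np.asarray(w, dtype=float)
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / (2 ** (bits - 1) - 1) if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale                                     # dequantized weights used at inference

if __name__ == "__main__":
    obs = np.array([0.9, 0.1, 0.5])              # hypothetical normalized observation
    print("spike times:", ttfs_encode(obs))      # -> spike times: [10 90 50]
    rng = np.random.default_rng(0)
    print("4-bit weights:\n", quantize_weights(rng.normal(scale=0.2, size=(3, 2))))
```

Earlier spikes correspond to larger input values, so downstream computation can proceed after a single spike per input, which is the source of the sparse, event-driven operation and energy savings the abstract describes.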
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Zhang_Toward_Energy_Efficient.pdf | Pre-Published version | 6.57 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

SCOPUS™ Citations: 1 (as of Dec 19, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.