Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/99550
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Yang, Q | en_US
dc.creator | Wu, J | en_US
dc.creator | Zhang, M | en_US
dc.creator | Chua, Y | en_US
dc.creator | Wang, X | en_US
dc.creator | Li, H | en_US
dc.date.accessioned | 2023-07-12T08:58:25Z | -
dc.date.available | 2023-07-12T08:58:25Z | -
dc.identifier.isbn | 978-17-1387-108-8 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/99550 | -
dc.description | 36th Conference on Neural Information Processing Systems, NeurIPS 2022, 28 November - 9 December 2022, New Orleans, Louisiana, USA. | en_US
dc.language.iso | en | en_US
dc.publisher | Neural Information Processing Systems Foundation, Inc. (NeurIPS) | en_US
dc.rights | © The Authors | en_US
dc.rights | Posted with permission of the author. | en_US
dc.rights | The following publication Yang, Q., Wu, J., Zhang, M., Chua, Y., Wang, X., & Li, H. (2022). Training spiking neural networks with local tandem learning. Advances in Neural Information Processing Systems, 35, 12662-12676 is available at https://proceedings.neurips.cc/paper_files/paper/2022/hash/523caec7832a47fb19b8471dbfeec471-Abstract-Conference.html. | en_US
dc.title | Training spiking neural networks with local tandem learning | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 12662 | en_US
dc.identifier.epage | 12676 | en_US
dcterms.abstract | Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors. However, there is a lack of an efficient and generalized training method for deep SNNs, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity. Our experimental results have also shown that the SNNs thus trained can achieve comparable accuracies to their teacher ANNs on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly. It can be easily implemented on-chip to perform fast parameter calibration and provide robustness against the notorious device non-ideality issues. It, therefore, opens up a myriad of opportunities for training and deployment of SNNs on ultra-low-power mixed-signal neuromorphic computing chips. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022, p. 12662-12676 | en_US
dcterms.issued | 2022 | -
dc.relation.conference | Conference on Neural Information Processing Systems [NeurIPS] | en_US
dc.description.validate | 202307 bcww | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a2246 | -
dc.identifier.SubFormID | 47210 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Toward Next-generation Neuromorphic Machine Intelligence; PolyU | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Copyright retained by author | en_US
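
Note: the abstract above describes the LTL rule only at a high level. The sketch below is one plausible, illustrative reading of its layer-wise teacher-student scheme, assuming a PyTorch-style leaky integrate-and-fire layer with a surrogate gradient and a per-layer MSE loss against frozen teacher-ANN activations; the names SurrogateSpike, LIFLayer, and local_tandem_step are hypothetical and are not taken from the authors' implementation.

```python
# Illustrative sketch only; names and details are assumptions, not the paper's code.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (a common SNN choice)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

class LIFLayer(nn.Module):
    """One spiking (leaky integrate-and-fire) layer driven by a linear projection."""
    def __init__(self, in_dim, out_dim, tau=2.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.tau = tau

    def forward(self, x, T=8):
        # x: (batch, in_dim); returns the time-averaged firing rate, shape (batch, out_dim)
        v = torch.zeros(x.size(0), self.fc.out_features, device=x.device)
        rate = 0.0
        for _ in range(T):
            v = v + (self.fc(x) - v) / self.tau     # leaky integration of input current
            s = SurrogateSpike.apply(v - 1.0)       # spike when membrane exceeds threshold 1
            v = v * (1.0 - s)                       # hard reset after a spike
            rate = rate + s / T
        return rate

def local_tandem_step(snn_layers, ann_layers, x, optimizers):
    """One training step: each SNN layer mimics the matching frozen ANN layer's activation.

    Inputs are detached between layers, so each layer's loss and gradient stay local,
    mirroring the decoupled, layer-wise learning the abstract describes.
    """
    ann_in, snn_in = x, x
    for snn_l, ann_l, opt in zip(snn_layers, ann_layers, optimizers):
        with torch.no_grad():                       # teacher ANN is pre-trained and frozen
            ann_out = torch.relu(ann_l(ann_in))
        snn_out = snn_l(snn_in.detach())            # detach: no gradient across layers
        loss = nn.functional.mse_loss(snn_out, ann_out)
        opt.zero_grad()
        loss.backward()
        opt.step()
        ann_in, snn_in = ann_out, snn_out.detach()
```

In this reading, detaching each layer's input confines the gradient to that layer, which is consistent with the abstract's claim that decoupled, layer-wise supervision is what makes the rule amenable to fast on-chip parameter calibration.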
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Yang_Training_Spiking_neural.pdf |  | 2.26 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.