Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114097
Title: Efficient deep spiking multilayer perceptrons with multiplication-free inference
Authors: Li, B
Leng, L
Shen, S
Zhang, K
Zhang, J
Liao, J
Cheng, R 
Issue Date: Apr-2025
Source: IEEE transactions on neural networks and learning systems, Apr. 2025, v. 36, no. 4, p. 7542-7554
Abstract: Advancements in adapting deep convolutional architectures for spiking neural networks (SNNs) have significantly enhanced image classification performance and reduced computational burdens. However, these gains are limited by the incompatibility of multiplication-free inference (MFI) with attention and transformer mechanisms, which are critical to superior performance on high-resolution vision tasks. To address this, our research explores a new pathway inspired by progress in multilayer perceptrons (MLPs). We propose a spiking MLP architecture that uses batch normalization (BN) to retain MFI compatibility and introduces a spiking patch encoding (SPE) layer to enhance local feature extraction. The result is an efficient multistage spiking MLP network that effectively blends global receptive fields with local feature extraction for comprehensive spike-based computation. Without relying on pretraining or sophisticated SNN training techniques, our network achieves a top-1 accuracy of 66.39% on the ImageNet-1K dataset, surpassing the directly trained spiking ResNet-34 by 2.67%, while also reducing computational cost, model parameters, and simulation steps. An expanded version of our network matches the performance of the spiking VGG-16 network with a 71.64% top-1 accuracy while operating with a model capacity 2.1 times smaller. Our findings highlight the potential of our deep SNN architecture to effectively integrate global and local learning abilities. Interestingly, the trained receptive field in our network mirrors the activity patterns of cortical cells.
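To illustrate the multiplication-free inference (MFI) property the abstract relies on, the sketch below shows one timestep of a spiking fully connected layer with integrate-and-fire dynamics. This is a hypothetical minimal example, not the authors' implementation: because inputs are binary spikes, the matrix-vector product reduces to summing the weight columns of the neurons that fired, so inference needs additions only. The function name and parameters are illustrative.

```python
import numpy as np

def spiking_linear_mfi(spikes, weights, bias, v_mem, threshold=1.0):
    """One timestep of a spiking fully connected layer.

    With binary spike inputs (0 or 1), the usual dot product
    collapses to adding the weight columns of active inputs,
    so no multiplications are required at inference time.
    """
    # Accumulate input current by adding weight columns of firing inputs.
    current = bias.copy()
    for i in np.flatnonzero(spikes):
        current += weights[:, i]  # addition only, no multiply
    # Integrate-and-fire: accumulate potential, emit spikes, soft reset.
    v_mem = v_mem + current
    out_spikes = (v_mem >= threshold).astype(np.float64)
    v_mem = np.where(out_spikes > 0, v_mem - threshold, v_mem)
    return out_spikes, v_mem
```

In the paper's setting, batch normalization can be folded into the weights and firing threshold offline, so this additive structure is preserved end to end.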
Keywords: Image classification
Multilayer perceptron (MLP)
Spiking neural networks (SNNs)
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on neural networks and learning systems 
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2024.3394837
Rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication B. Li et al., "Efficient Deep Spiking Multilayer Perceptrons With Multiplication-Free Inference," in IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 4, pp. 7542-7554, April 2025 is available at https://doi.org/10.1109/TNNLS.2024.3394837.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: Li_Efficient_Deep_Spiking.pdf
Description: Pre-Published version
Size: 5.44 MB
Format: Adobe PDF
Open Access Information
Status open access
File Version Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.