Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117186
Title: IML-Spikeformer: Input-Aware Multilevel Spiking Transformer for Speech Processing
Authors: Song, Z; Zhang, S; Chou, Y; Wu, J; Li, H
Issue Date: Mar-2026
Source: IEEE Transactions on Neural Networks and Learning Systems, Mar. 2026, v. 37, no. 3, p. 1377-1389
Abstract: Spiking neural networks (SNNs), inspired by biological neural mechanisms, represent a promising neuromorphic computing paradigm that offers energy-efficient alternatives to traditional artificial neural networks (ANNs). Despite proven effectiveness, SNN architectures have struggled to achieve competitive performance on large-scale speech processing tasks. Two key challenges hinder progress: 1) the high computational overhead during training caused by multitimestep spike firing and 2) the absence of large-scale SNN architectures tailored to speech processing tasks. To overcome these issues, we introduce the input-aware multilevel spikeformer (IML-Spikeformer), a spiking transformer architecture specifically designed for large-scale speech processing. Central to our design is the input-aware multilevel spike (IMLS) mechanism, which simulates multitimestep spike firing within a single timestep using an adaptive, input-aware thresholding scheme. IML-Spikeformer further integrates a reparameterized spiking self-attention (RepSSA) module with a hierarchical decay mask (HDM), forming the HD-RepSSA module. This module enhances the precision of attention maps and enables modeling of multiscale temporal dependencies in speech signals. Experiments demonstrate that IML-Spikeformer achieves word error rates (WERs) of 6.0% on AiShell-1 and 3.4% on Librispeech-960, comparable to conventional ANN transformers, while reducing theoretical inference energy consumption by 4.64× and 4.32×, respectively. IML-Spikeformer marks an advance in scalable SNN architectures for large-scale speech processing in both task performance and energy efficiency.
Our source code and model checkpoints are publicly available at github.com/Pooookeman/IML-Spikeformer
Keywords: Neuromorphic auditory processing; Speech recognition; Spiking neural networks (SNNs); Spiking transformer
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2025.3615971
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication Z. Song, S. Zhang, Y. Chou, J. Wu and H. Li, "IML-Spikeformer: Input-Aware Multilevel Spiking Transformer for Speech Processing," in IEEE Transactions on Neural Networks and Learning Systems, vol. 37, no. 3, pp. 1377-1389, March 2026 is available at https://doi.org/10.1109/TNNLS.2025.3615971.
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Song_IML_Spikeformer_Input.pdf | Pre-Published version | 882.91 kB | Adobe PDF |