Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113675
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Data Science and Artificial Intelligence | - |
| dc.contributor | Department of Computing | - |
| dc.creator | Sun, P | - |
| dc.creator | Wu, J | - |
| dc.creator | Devos, P | - |
| dc.creator | Botteldooren, D | - |
| dc.date.accessioned | 2025-06-17T07:40:49Z | - |
| dc.date.available | 2025-06-17T07:40:49Z | - |
| dc.identifier.issn | 0893-6080 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/113675 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.subject | Efficient neuromorphic inference | en_US |
| dc.subject | Neuromorphic computing | en_US |
| dc.subject | Parameter-free attention | en_US |
| dc.subject | Spiking neural network | en_US |
| dc.title | Towards parameter-free attentional spiking neural networks | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 185 | - |
| dc.identifier.doi | 10.1016/j.neunet.2025.107154 | - |
| dcterms.abstract | Brain-inspired spiking neural networks (SNNs) are increasingly explored for their potential in spatiotemporal information modeling and energy efficiency on emerging neuromorphic hardware. Recent works incorporate attentional modules into SNNs, greatly enhancing their capabilities in handling sequential data. However, these parameterized attentional modules impose a heavy memory burden, a resource that is constrained on neuromorphic chips. To address this issue, we propose a parameter-free attention (PfA) mechanism that establishes a parameter-free linear space to bolster feature representation. The proposed PfA approach can be seamlessly integrated into the spiking neuron, resulting in enhanced performance without any increase in parameters. Experimental results on the SHD, BAE-TIDIGITS, SSC, DVS-Gesture, DVS-Cifar10, Cifar10, and Cifar100 datasets demonstrate competitive or superior classification accuracy compared with other state-of-the-art models. Furthermore, our model exhibits stronger noise robustness than conventional SNNs and those with parameterized attentional mechanisms. Our code is available at https://github.com/sunpengfei1122/PfA-SNN. | - |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Neural networks, May 2025, v. 185, 107154 | - |
| dcterms.isPartOf | Neural networks | - |
| dcterms.issued | 2025-05 | - |
| dc.identifier.scopus | 2-s2.0-85215234516 | - |
| dc.identifier.eissn | 1879-2782 | - |
| dc.identifier.artn | 107154 | - |
| dc.description.validate | 202506 bcch | - |
| dc.identifier.FolderNumber | a3717b | en_US |
| dc.identifier.SubFormID | 50838 | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | Flemish Government; Research Foundation - Flanders | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2027-05-31 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Journal/Magazine Article
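
The abstract describes fusing a parameter-free attention (PfA) mechanism directly into the spiking neuron, with no added learnable parameters. This record contains no implementation details, so the sketch below is only illustrative: it assumes a SimAM-style energy-based attention (a well-known parameter-free attention) applied to the membrane potential of a leaky integrate-and-fire (LIF) neuron. All names, constants, and the fusion point are assumptions, not the authors' method; the actual implementation is in the linked GitHub repository.

```python
# Illustrative sketch only: NOT the authors' PfA implementation.
# Assumes a SimAM-style, parameter-free attention reweighting the
# membrane potential of a LIF neuron at each time step.
import torch
import torch.nn as nn


def parameter_free_attention(x, eps=1e-4):
    """Weight each unit by an energy term computed from its deviation
    from the per-sample mean. No learnable parameters are introduced."""
    # x: (batch, features); statistics over the feature dimension.
    mu = x.mean(dim=1, keepdim=True)
    var = x.var(dim=1, unbiased=False, keepdim=True)
    # Units far from the mean receive larger attention weights.
    energy = (x - mu).pow(2) / (4.0 * (var + eps)) + 0.5
    return x * torch.sigmoid(energy)


class LIFWithPfA(nn.Module):
    """LIF neuron with parameter-free attention applied to the membrane
    potential at every step (inference-only sketch; training an SNN
    would additionally require a surrogate gradient for the threshold)."""

    def __init__(self, tau=2.0, v_threshold=1.0):
        super().__init__()
        self.decay = 1.0 - 1.0 / tau  # leak factor per time step
        self.v_threshold = v_threshold

    def forward(self, inputs):  # inputs: (time, batch, features)
        v = torch.zeros_like(inputs[0])
        spikes = []
        for x_t in inputs:
            v = self.decay * v + x_t                 # leaky integration
            v = parameter_free_attention(v)          # parameter-free reweighting
            s = (v >= self.v_threshold).float()      # fire on threshold crossing
            v = v - s * self.v_threshold             # soft reset
            spikes.append(s)
        return torch.stack(spikes)                   # (time, batch, features)
```

As a usage check under these assumptions, `LIFWithPfA()(torch.rand(10, 4, 128))` returns a binary spike train of shape (10, 4, 128), and `sum(p.numel() for p in LIFWithPfA().parameters())` is 0, matching the abstract's claim of enhanced representation without any increase in parameters.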