Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/118137
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Industrial and Systems Engineering | en_US |
| dc.contributor | Research Institute for Advanced Manufacturing | en_US |
| dc.creator | Ren, J | en_US |
| dc.creator | Han, F | en_US |
| dc.creator | Xu, Y | en_US |
| dc.date.accessioned | 2026-03-18T08:31:39Z | - |
| dc.date.available | 2026-03-18T08:31:39Z | - |
| dc.identifier.issn | 0306-4573 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/118137 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Pergamon Press | en_US |
| dc.subject | Adaptive ensemble classifier (AEC) | en_US |
| dc.subject | Cross-domain feature fusion module (CFFM) | en_US |
| dc.subject | Emotion recognition | en_US |
| dc.subject | Fully-connected multi-scale graph attention network (FM-GAT) | en_US |
| dc.subject | Multi-scale modern temporal convolutional network (MS-MTCN) | en_US |
| dc.title | Dual-stream spatiotemporal graph convolutional networks for EEG-based human emotion recognition | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 63 | en_US |
| dc.identifier.issue | 4 | en_US |
| dc.identifier.doi | 10.1016/j.ipm.2025.104597 | en_US |
| dcterms.abstract | Deep learning has advanced EEG-based human emotion recognition, yet most existing approaches rely on either temporal or spectral features and insufficiently model the fine-grained spatiotemporal structure of neural activity. To address these challenges, this paper develops a dual-stream spatiotemporal graph convolutional network (DSSGCN) for human emotion recognition. In the time domain, a multi-scale modern temporal convolutional network (MS-MTCN) is designed to capture rich temporal information across diverse receptive fields and model long-range temporal dependencies. In the frequency domain, a fully-connected multi-scale graph attention network (FM-GAT) is introduced to learn complex inter-channel relationships and spatial dependencies from the spectral representation of EEG signals. Furthermore, a cross-domain feature fusion module (CFFM) is employed to integrate the complementary information from both temporal and spectral branches, followed by an adaptive ensemble classifier (AEC) for robust recognition. Finally, an improved online knowledge distillation (IOKD) algorithm is devised to enhance the model's robustness and generalization. Evaluated on two public datasets and a self-collected music-emotion dataset, DSSGCN achieves 93.98%, 85.00%, and 99.20% accuracy, consistently surpassing eleven state-of-the-art methods and validating its effectiveness for decoding affective states from EEG signals. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Information processing and management, June 2026, v. 63, no. 4, 104597 | en_US |
| dcterms.isPartOf | Information processing and management | en_US |
| dcterms.issued | 2026-06 | - |
| dc.identifier.scopus | 2-s2.0-105027223408 | - |
| dc.identifier.eissn | 1873-5371 | en_US |
| dc.identifier.artn | 104597 | en_US |
| dc.description.validate | 202603 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G001259/2026-02 | - |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2028-06-30 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
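The abstract describes a dual-stream pipeline: separate temporal and spectral branches, cross-domain feature fusion, and an adaptive ensemble classifier. The toy sketch below illustrates that data flow in plain Python. It is emphatically not the paper's DSSGCN implementation; every function here (the branch summaries, concatenation fusion, accuracy-weighted averaging) is a hypothetical stand-in chosen only to show how the named modules relate.

```python
# Toy sketch of a dual-stream pipeline: two feature branches, fusion by
# concatenation, and an ensemble that weights branch scores by assumed
# per-branch accuracy. Hypothetical stand-ins, not the paper's DSSGCN.

def temporal_branch(signal):
    # Stand-in for MS-MTCN: summarize the raw sequence at two crude "scales".
    coarse = sum(signal) / len(signal)       # global mean
    fine = max(signal) - min(signal)         # local range
    return [coarse, fine]

def spectral_branch(signal):
    # Stand-in for FM-GAT: a rough "spectral" cue via first-difference energy.
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    return [sum(d * d for d in diffs) / len(diffs)]

def fuse(temporal_feats, spectral_feats):
    # Stand-in for CFFM: concatenate features from both domains.
    return temporal_feats + spectral_feats

def adaptive_ensemble(branch_scores, branch_weights):
    # Stand-in for AEC: weighted average of per-branch classifier scores,
    # with weights playing the role of learned/validated branch reliability.
    return sum(s * w for s, w in zip(branch_scores, branch_weights)) / sum(branch_weights)

signal = [0.1, 0.4, 0.35, 0.8, 0.2]
fused = fuse(temporal_branch(signal), spectral_branch(signal))
decision = adaptive_ensemble([0.9, 0.6], [0.93, 0.85])
print(len(fused), round(decision, 3))  # → 3 0.757
```

The fused vector here simply concatenates both branches' features; the real CFFM and AEC are learned modules, so this only mirrors the shape of the pipeline, not its behavior.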