Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118137
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | en_US
dc.contributor | Research Institute for Advanced Manufacturing | en_US
dc.creator | Ren, J | en_US
dc.creator | Han, F | en_US
dc.creator | Xu, Y | en_US
dc.date.accessioned | 2026-03-18T08:31:39Z | -
dc.date.available | 2026-03-18T08:31:39Z | -
dc.identifier.issn | 0306-4573 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/118137 | -
dc.language.iso | en | en_US
dc.publisher | Pergamon Press | en_US
dc.subject | Adaptive ensemble classifier (AEC) | en_US
dc.subject | Cross-domain feature fusion module (CFFM) | en_US
dc.subject | Emotion recognition | en_US
dc.subject | Fully-connected multi-scale graph attention network (FM-GAT) | en_US
dc.subject | Multi-scale modern temporal convolutional network (MS-MTCN) | en_US
dc.title | Dual-stream spatiotemporal graph convolutional networks for EEG-based human emotion recognition | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 63 | en_US
dc.identifier.issue | 4 | en_US
dc.identifier.doi | 10.1016/j.ipm.2025.104597 | en_US
dcterms.abstract | Deep learning has advanced EEG-based human emotion recognition, yet most existing approaches rely on either temporal or spectral features and insufficiently model the fine-grained spatiotemporal structure of neural activity. To address these challenges, this paper develops a dual-stream spatiotemporal graph convolutional network (DSSGCN) for human emotion recognition. In the time domain, a multi-scale modern temporal convolutional network (MS-MTCN) is designed to capture rich temporal information across diverse receptive fields and model long-range temporal dependencies. In the frequency domain, a fully-connected multi-scale graph attention network (FM-GAT) is introduced to learn complex inter-channel relationships and spatial dependencies from the spectral representation of EEG signals. Furthermore, a cross-domain feature fusion module (CFFM) is employed to integrate the complementary information from both temporal and spectral branches, followed by an adaptive ensemble classifier (AEC) to enhance recognition robustness. Finally, an improved online knowledge distillation (IOKD) algorithm is devised to further improve the model's robustness and generalization. Evaluated on two public datasets and a self-collected music-emotion dataset, DSSGCN achieves 93.98%, 85.00%, and 99.20% accuracy, consistently surpassing eleven state-of-the-art methods and validating its effectiveness for decoding affective states from EEG signals. | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Information processing and management, June 2026, v. 63, no. 4, 104597 | en_US
dcterms.isPartOf | Information processing and management | en_US
dcterms.issued | 2026-06 | -
dc.identifier.scopus | 2-s2.0-105027223408 | -
dc.identifier.eissn | 1873-5371 | en_US
dc.identifier.artn | 104597 | en_US
dc.description.validate | 202603 bchy | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G001259/2026-02 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2028-06-30 | en_US
dc.description.oaCategory | Green (AAM) | en_US
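The abstract describes a dual-stream design: a temporal branch (MS-MTCN), a spectral branch (FM-GAT), and a cross-domain fusion step (CFFM) feeding a classifier. The full text is embargoed, so the following is only a minimal illustrative sketch of that dataflow in NumPy; all array sizes, pooling scales, and function names are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG batch: 4 trials, 32 channels, 128 time samples (hypothetical sizes).
x = rng.standard_normal((4, 32, 128))

def temporal_branch(x):
    # Stand-in for MS-MTCN: summarize activity at several temporal scales by
    # taking the within-window standard deviation over non-overlapping windows.
    scales = [8, 16, 32]
    feats = []
    for s in scales:
        windowed = x.reshape(x.shape[0], x.shape[1], -1, s).std(axis=-1)
        feats.append(windowed.mean(axis=-1))  # one value per channel per scale
    return np.concatenate(feats, axis=-1)     # (batch, channels * len(scales))

def spectral_branch(x):
    # Stand-in for FM-GAT: per-channel log spectral power from the real FFT,
    # reweighted by a softmax over channels as a crude channel attention.
    spec = np.abs(np.fft.rfft(x, axis=-1))     # (batch, channels, freq bins)
    band_power = np.log1p(spec).mean(axis=-1)  # (batch, channels)
    w = np.exp(band_power - band_power.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return band_power * w

def fuse(t_feat, s_feat):
    # Stand-in for CFFM: concatenate the complementary feature vectors.
    return np.concatenate([t_feat, s_feat], axis=-1)

feat = fuse(temporal_branch(x), spectral_branch(x))
print(feat.shape)  # (4, 128): 32 channels x 3 scales + 32 spectral features
```

In the paper the two branches are learned networks and the fusion/classification stages (CFFM, AEC, IOKD) are trained jointly; this sketch only mirrors the shape of the pipeline, producing one fused feature vector per trial.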
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 2028-06-30