Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/108075
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Biomedical Engineering | en_US
dc.contributor | School of Nursing | en_US
dc.creator | Yu, G | en_US
dc.creator | Zou, J | en_US
dc.creator | Hu, X | en_US
dc.creator | Aviles-Rivero, AI | en_US
dc.creator | Qin, J | en_US
dc.creator | Wang, S | en_US
dc.date.accessioned | 2024-07-23T04:08:19Z | -
dc.date.available | 2024-07-23T04:08:19Z | -
dc.identifier.uri | http://hdl.handle.net/10397/108075 | -
dc.description | Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, 21-27 Jul 2024 | en_US
dc.language.iso | en | en_US
dc.rights | Posted with permission of the author. | en_US
dc.title | Revitalizing multivariate time series forecasting: learnable decomposition with inter-series dependencies and intra-series variations modeling | en_US
dc.type | Conference Paper | en_US
dcterms.abstract | Predicting multivariate time series is crucial, demanding precise modeling of intricate patterns, including inter-series dependencies and intra-series variations. Distinctive trend characteristics in each time series pose challenges, and existing methods, which rely on basic moving-average kernels, may struggle with the non-linear structure and complex trends of real-world data. To address this, we introduce a learnable decomposition strategy that captures dynamic trend information more effectively. We further propose a dual attention module, implemented with channel-wise self-attention and autoregressive self-attention, to capture inter-series dependencies and intra-series variations simultaneously for better time series forecasting. To evaluate the effectiveness of our method, we conducted experiments on eight open-source datasets and compared it with state-of-the-art methods. Our Leddam (LEarnable Decomposition and Dual Attention Module) not only achieves significant improvements in predictive performance; the proposed decomposition strategy can also be plugged into other methods, reducing MSE error by 11.87% to 48.56%. Code is available at: https://github.com/LeviAckman/Leddam. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024 | en_US
dcterms.issued | 2024 | -
dc.relation.conference | International Conference on Machine Learning [ICML] | en_US
dc.description.validate | 202407 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a3073a | -
dc.identifier.SubFormID | 49381 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Start-up Fund of The Hong Kong Polytechnic University (No. P0045999) and the Seed Fund of the Research Institute for Smart Ageing (No. P0050946) | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Copyright retained by author | en_US
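The abstract above names two components: a learnable decomposition that replaces the fixed moving-average kernel, and a dual attention module built from channel-wise self-attention and autoregressive self-attention. The sketch below is a minimal, illustrative PyTorch rendering of those two ideas; the class names, tensor layouts, hyperparameters, and the use of a causal mask as a stand-in for the paper's autoregressive self-attention are assumptions made here, not the authors' implementation (their code is at https://github.com/LeviAckman/Leddam).

```python
# Illustrative sketch only; shapes and module choices are assumptions,
# not the Leddam reference implementation.
import torch
import torch.nn as nn


class LearnableDecomposition(nn.Module):
    """Extract the trend with a learnable 1D convolution (initialized as a
    moving average) instead of a fixed moving-average kernel."""

    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.trend_conv = nn.Conv1d(1, 1, kernel_size,
                                    padding=kernel_size // 2, bias=False)
        nn.init.constant_(self.trend_conv.weight, 1.0 / kernel_size)

    def forward(self, x: torch.Tensor):
        # x: [batch, time, channels] -> trend and residual of the same shape
        b, t, c = x.shape
        series = x.permute(0, 2, 1).reshape(b * c, 1, t)  # smooth each series independently
        trend = self.trend_conv(series).reshape(b, c, t).permute(0, 2, 1)
        return trend, x - trend


class DualAttention(nn.Module):
    """Channel-wise self-attention (tokens = series) for inter-series
    dependencies, plus causally masked temporal self-attention
    (tokens = time steps) as a stand-in for intra-series modeling."""

    def __init__(self, seq_len: int, n_channels: int,
                 d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed_series = nn.Linear(seq_len, d_model)    # one token per channel
        self.embed_step = nn.Linear(n_channels, d_model)   # one token per time step
        self.channel_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, residual: torch.Tensor):
        # residual: [batch, time, channels]
        series_tokens = self.embed_series(residual.transpose(1, 2))  # [b, c, d_model]
        step_tokens = self.embed_step(residual)                      # [b, t, d_model]
        inter, _ = self.channel_attn(series_tokens, series_tokens, series_tokens)
        t = step_tokens.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool,
                                       device=residual.device), diagonal=1)
        intra, _ = self.temporal_attn(step_tokens, step_tokens, step_tokens,
                                      attn_mask=causal)
        return inter, intra


# Toy usage: 8 samples, 96 time steps, 7 series (channels).
x = torch.randn(8, 96, 7)
trend, residual = LearnableDecomposition()(x)
inter, intra = DualAttention(seq_len=96, n_channels=7)(residual)
print(trend.shape, inter.shape, intra.shape)  # [8, 96, 7], [8, 7, 64], [8, 96, 64]
```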
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
1452_revitalizing_multivariate_time.pdf |  | 7.99 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.