DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Tan, Q | en_US
dc.creator | Zhang, J | en_US
dc.creator | Liu, N | en_US
dc.creator | Huang, X | en_US
dc.creator | Yang, H | en_US
dc.creator | Zhou, J | en_US
dc.creator | Hu, X | en_US
dc.rights | Posted with permission of the publisher. | en_US
dc.rights | Qiaoyu Tan, Jianwei Zhang, Ninghao Liu, Xiao Huang, Hongxia Yang, Jingren Zhou, Xia Hu, Dynamic Memory based Attention Network for Sequential Recommendation, AAAI, 2021 | en_US
dc.title | Dynamic memory based attention network for sequential recommendation | en_US
dc.type | Conference Paper | en_US
dcterms.abstract | Sequential recommendation has become increasingly essential in various online services. It aims to model the dynamic preferences of users from their historical interactions and predict their next items. The accumulated user behavior records on real systems could be very long. This rich data brings opportunities to track the actual interests of users. Prior efforts mainly focus on making recommendations based on relatively recent behaviors. However, the overall sequential data may not be effectively utilized, as early interactions might affect users' current choices. Also, it has become intolerable to scan the entire behavior sequence when performing inference for each user, since real-world systems require short response times. To bridge the gap, we propose a novel long sequential recommendation model, called Dynamic Memory-based Attention Network (DMAN). It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve users' long-term interests. To improve memory fidelity, DMAN dynamically abstracts each user's long-term interest into its own memory blocks by minimizing an auxiliary reconstruction loss. Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation. Empirical results over four benchmark datasets demonstrate the superiority of our model in capturing long-term dependency over various state-of-the-art sequential models. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 35th AAAI Conference on Artificial Intelligence (AAAI-21), A Virtual Conference, February 2-9, 2021, p. 1-9 | en_US
dc.relation.conference | Conference on Artificial Intelligence [AAAI] | en_US
dc.description.validate | 202105 bcrc | en_US
dc.description.oa | Accepted Manuscript | en_US
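The pipeline the abstract describes (segmenting a long behavior sequence into sub-sequences, compressing older segments into a fixed set of memory blocks, and fusing short- and long-term interests via attention) can be illustrated with a minimal NumPy sketch. All function names, dimensions, and the mean-pooled query below are illustrative assumptions, not the authors' implementation; in particular, the paper's learned memory update trained with an auxiliary reconstruction loss is replaced here by plain scaled dot-product attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention: one query vector over a set of key/value rows.
    scores = softmax(keys @ query / np.sqrt(len(query)))
    return scores @ values

def dman_sketch(behavior_seq, segment_len=4, n_memory=2, d=8, seed=0):
    """Toy sketch of the DMAN idea: segment a long item-embedding sequence,
    fold older segments into fixed-size memory blocks, then combine
    short-term and long-term interests for a joint representation."""
    rng = np.random.default_rng(seed)
    memory = rng.normal(size=(n_memory, d))           # dynamic memory blocks
    segments = [behavior_seq[i:i + segment_len]
                for i in range(0, len(behavior_seq), segment_len)]
    for seg in segments[:-1]:
        # Update each memory block by attending over the older segment
        # (a stand-in for the paper's learned, loss-driven abstraction).
        memory = np.stack([attend(m, seg, seg) for m in memory])
    recent = segments[-1]
    query = recent.mean(axis=0)                       # current-context query
    short_term = attend(query, recent, recent)        # recent sub-sequence
    long_term = attend(query, memory, memory)         # compressed history
    return short_term + long_term                     # joint user interest
```

This keeps inference cheap in the sense the abstract motivates: scoring a user touches only the most recent sub-sequence plus a constant number of memory blocks, never the full history.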
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Qiaoyu_AAAI21.pdf | Pre-Published version | 866.26 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.