Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/111924
dc.contributor: School of Fashion and Textiles
dc.creator: Xiao, Q
dc.creator: Wang, J
dc.date.accessioned: 2025-03-19T07:35:07Z
dc.date.available: 2025-03-19T07:35:07Z
dc.identifier.uri: http://hdl.handle.net/10397/111924
dc.language.iso: en
dc.publisher: MDPI AG
dc.rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.rights: The following publication Xiao, Q., & Wang, J. (2024). DRL-SRS: A Deep Reinforcement Learning Approach for Optimizing Spaced Repetition Scheduling. Applied Sciences, 14(13), 5591 is available at https://doi.org/10.3390/app14135591.
dc.subject: Deep reinforcement learning
dc.subject: Half-life regression
dc.subject: Memory models
dc.subject: Spaced repetition
dc.subject: Transformers
dc.title: DRL-SRS: a deep reinforcement learning approach for optimizing spaced repetition scheduling
dc.type: Journal/Magazine Article
dc.identifier.volume: 14
dc.identifier.issue: 13
dc.identifier.doi: 10.3390/app14135591
dcterms.abstract: Optimizing spaced repetition schedules is of great importance for enhancing long-term memory retention in both real-world applications, e.g., online learning platforms, and academic applications, e.g., cognitive science. Traditional methods tackle this problem with handcrafted rules, while modern methods try to optimize scheduling using deep reinforcement learning (DRL). Existing DRL-based approaches model the problem as selecting the optimal next item to show, which implies the learner can learn only one item per day. However, the key to enhancing long-term memory is selecting the optimal review interval. To this end, we present a novel DRL-based approach to optimizing spaced repetition scheduling. The contribution of our framework is threefold. First, we introduce a Transformer-based model that accurately estimates the recall probability of a learning item by encoding the temporal dynamics of a learner's learning trajectory. Second, we build a simulation environment based on our recall probability estimation model. Third, we use a Deep Q-Network (DQN) as the agent to learn the optimal review intervals for learning items, training the policy in a recurrent manner. Experimental results demonstrate that our framework achieves state-of-the-art performance compared with competing methods. Our method achieves an MAE (mean absolute error) of 0.0274 on a memory prediction task, 11% lower than the second-best method. For spaced repetition scheduling, our method achieves mean recall probabilities of 0.92, 0.942, and 0.372 in three different environments, the best performance in all scenarios.
dcterms.accessRights: open access
dcterms.bibliographicCitation: Applied sciences, July 2024, v. 14, no. 13, 5591
dcterms.isPartOf: Applied sciences
dcterms.issued: 2024-07
dc.identifier.scopus: 2-s2.0-85198419953
dc.identifier.eissn: 2076-3417
dc.identifier.artn: 5591
dc.description.validate: 202503 bcch
dc.description.oa: Version of Record
dc.identifier.FolderNumber: OA_Scopus/WOS
dc.description.fundingSource: Others
dc.description.fundingText: Engineering Research Center of the Ministry of Education for the Integration and Application of Digital Learning Technology
dc.description.pubStatus: Published
dc.description.oaCategory: CC
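The abstract above describes a recall-probability model that drives a DQN agent choosing review intervals in a simulated environment. The sketch below is a minimal illustration of that setup under stated assumptions, not the paper's implementation: it substitutes the standard half-life regression recall curve p = 2^(-Δ/h) for the paper's Transformer-based estimator, omits the DQN agent, and uses illustrative half-life update constants.

import random

# Minimal sketch of a simulated spaced-repetition environment in the spirit
# of the abstract. Assumptions: recall follows the half-life regression curve
# p = 2^(-delta / h) rather than the paper's Transformer estimator, and the
# half-life update factors (2.0 on recall, 0.5 on lapse) are illustrative.

def recall_probability(delta_days: float, half_life_days: float) -> float:
    """Probability of recalling an item delta_days after its last review."""
    return 2.0 ** (-delta_days / half_life_days)

class SpacedRepetitionEnv:
    """Toy environment whose action is the next review interval, in days."""

    def __init__(self, initial_half_life: float = 1.0):
        self.half_life = initial_half_life

    def step(self, interval_days: float) -> float:
        """Review the item after interval_days; return the predicted recall
        probability, which also serves as the reward for the scheduler."""
        p = recall_probability(interval_days, self.half_life)
        recalled = random.random() < p
        # Successful recall strengthens memory; a lapse weakens it (assumed update).
        self.half_life = max(self.half_life * (2.0 if recalled else 0.5), 0.1)
        return p

if __name__ == "__main__":
    env = SpacedRepetitionEnv()
    for interval in (1, 2, 4, 8):  # a fixed doubling schedule, for illustration
        print(f"interval={interval}d  predicted recall={env.step(interval):.3f}")

In the paper's full framework, a learned DQN policy would replace the fixed doubling schedule above, selecting the review interval that maximizes long-run recall as estimated by the Transformer-based memory model.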
Appears in Collections: Journal/Magazine Article
Files in This Item:
File: applsci-14-05591.pdf
Size: 468.29 kB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.