Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118564
DC Field | Value | Language
dc.contributor | Department of Aeronautical and Aviation Engineering | -
dc.contributor | Mainland Development Office | -
dc.creator | Qi, X | -
dc.creator | Xu, B | -
dc.date.accessioned | 2026-04-24T01:55:34Z | -
dc.date.available | 2026-04-24T01:55:34Z | -
dc.identifier.issn | 0018-9545 | -
dc.identifier.uri | http://hdl.handle.net/10397/118564 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.subject | Global Navigation Satellite System (GNSS) | en_US
dc.subject | Maximum likelihood estimation (MLE) | en_US
dc.subject | Multipath mitigation | en_US
dc.subject | Online reinforcement learning (RL) | en_US
dc.subject | Q-learning | en_US
dc.title | Modelling and assessing reinforcement learning-assisted GNSS multipath estimation and mitigation for urban navigation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1109/TVT.2025.3625730 | -
dcterms.abstract | Multipath has long been considered one of the major error sources for Global Navigation Satellite Systems (GNSS) in urban areas. In recent years, many multipath mitigation techniques based on offline machine learning have been proposed. However, collecting and labeling training data for offline learning-based techniques is challenging for the multipath problem. Moreover, because multipath depends strongly on the environment, training data can rarely capture the multipath characteristics of all locations, which inevitably degrades the performance of pre-trained models on real data. This paper proposes a multipath parameter estimation algorithm based on maximum likelihood estimation (MLE) using online reinforcement learning (RL). The proposed algorithm searches for optimal parameter estimates by interacting with the environment online, thus avoiding the problems associated with offline learning-based techniques. Specifically, the MLE of multipath parameters is formulated as an optimization problem, and a modified Q-learning-based algorithm is employed to solve it iteratively. Comprehensive performance evaluations of the proposed reinforcement learning-based multipath parameter estimation (RL-MPE) show that: 1) the proposed algorithm outperforms random search and the Bayesian optimization method known as the Tree-structured Parzen Estimator (TPE) in multipath parameter estimation; 2) RL-MPE performs better for short-delay multipath signals than the Multipath Estimating Delay Lock Loop (MEDLL) when using a comparable number of correlators; and 3) RL-MPE achieves significantly better multipath mitigation performance on real urban data than a pre-trained random forest (RF)-based multipath parameter estimator. | -
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | IEEE transactions on vehicular technology, date of publication: 27 October 2025, Early Access, https://doi.org/10.1109/TVT.2025.3625730 | -
dcterms.isPartOf | IEEE transactions on vehicular technology | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105020473369 | -
dc.identifier.eissn | 1939-9359 | -
dc.description.validate | 202604 bcjz | -
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G001524/2026-01 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported by the Open Fund from the State Key Laboratory of Satellite Navigation System and Equipment Technology (Grant no.: CEPNT2022A05), the National Natural Science Foundation of China (NSFC) under Grant 62103346, the PolyU Start-up Fund for RAPs under the Strategic Hiring Scheme (Project no.: P0036073), and the PolyU Start-up Fund for New Recruits (Project no.: P0049535). | en_US
dc.description.pubStatus | Early release | en_US
dc.date.embargo | 0000-00-00 (to be updated) | en_US
dc.description.oaCategory | Green (AAM) | en_US
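The abstract describes formulating the MLE of multipath parameters as an optimization problem and solving it with a modified Q-learning algorithm. The sketch below is only a rough illustration of that general idea, not the paper's method: the triangular code-correlation model, the single-replica signal, the delay/amplitude grid, the exponential reward shaping, and all hyperparameters are assumptions chosen for a minimal runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlator model (an assumption, not the paper's signal model):
# ideal triangular code autocorrelation sampled at fixed offsets.
def corr(tau, offsets):
    return np.maximum(0.0, 1.0 - np.abs(offsets - tau))

offsets = np.linspace(-1.0, 1.5, 11)         # correlator offsets (chips)
true_delay, true_amp = 0.30, 0.50            # one multipath replica
measured = corr(0.0, offsets) + true_amp * corr(true_delay, offsets)
measured += rng.normal(0.0, 0.005, offsets.size)   # measurement noise

# Discretized search grid over the two multipath parameters.
delays = np.linspace(0.0, 0.6, 13)
amps = np.linspace(0.0, 1.0, 11)

def reward(i, j):
    # Monotone transform of the Gaussian log-likelihood: smaller
    # residual energy -> larger reward, so the reward peak is the MLE.
    model = corr(0.0, offsets) + amps[j] * corr(delays[i], offsets)
    return float(np.exp(-10.0 * np.sum((measured - model) ** 2)))

# Tabular Q-learning: state = grid cell, actions = one-step moves or stay.
actions = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
Q = np.zeros((delays.size, amps.size, len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(500):                         # episodes
    i, j = rng.integers(delays.size), rng.integers(amps.size)
    for _ in range(40):                      # steps per episode
        a = (rng.integers(len(actions)) if rng.random() < eps
             else int(np.argmax(Q[i, j])))   # epsilon-greedy action
        di, dj = actions[a]
        ni = int(np.clip(i + di, 0, delays.size - 1))
        nj = int(np.clip(j + dj, 0, amps.size - 1))
        r = reward(ni, nj)                   # reward observed on arrival
        Q[i, j, a] += alpha * (r + gamma * np.max(Q[ni, nj]) - Q[i, j, a])
        i, j = ni, nj

# Read out the estimate as the highest-value grid cell.
i_best, j_best = np.unravel_index(int(np.argmax(Q.max(axis=2))),
                                  (delays.size, amps.size))
est_delay, est_amp = float(delays[i_best]), float(amps[j_best])
print(f"estimated delay = {est_delay:.2f} chips, amplitude = {est_amp:.2f}")
```

The point of the sketch is only the interaction loop itself: the agent queries the likelihood online instead of training offline on labeled multipath data, which is the motivation the abstract gives for an RL-based estimator.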
Appears in Collections:Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 0000-00-00 (to be updated)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.