Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116967
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Land Surveying and Geo-Informatics | - |
| dc.creator | Ye, J | - |
| dc.creator | Mansour, A | - |
| dc.creator | Huang, F | - |
| dc.date.accessioned | 2026-01-21T03:54:24Z | - |
| dc.date.available | 2026-01-21T03:54:24Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/116967 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Nature Publishing Group | en_US |
| dc.rights | Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US |
| dc.rights | © The Author(s) 2025 | en_US |
| dc.rights | The following publication Ye, J., Mansour, A. & Huang, F. Enhancing real-time heading estimation for pedestrian navigation via deep learning and smartphone embedded sensors. Sci Rep 15, 31672 (2025) is available at https://doi.org/10.1038/s41598-025-13390-9. | en_US |
| dc.title | Enhancing real-time heading estimation for pedestrian navigation via deep learning and smartphone embedded sensors | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 15 | - |
| dc.identifier.doi | 10.1038/s41598-025-13390-9 | - |
| dcterms.abstract | Accurate smartphone-based pedestrian navigation depends heavily on precise heading estimation. However, heading estimation remains challenging in most pedestrian navigation applications because of the biases of low-cost smartphone sensors, thermal drift during long-term operation, and unexpected changes in the carrying mode of handheld devices. Many existing methods that address these challenges with pervasive resources still suffer severe errors, while approaches based on auxiliary resources may hinder ubiquitous and seamless indoor-outdoor navigation. This research aims to enhance heading estimation by leveraging pervasive measurements such as LVGOs and straight-line features recognized automatically from camera images. The proposed method mitigates accumulated gyroscope drift using the absolute heading angle estimated from LVGOs. However, these absolute angles are highly prone to erroneous estimation when navigating near areas with strong electric and magnetic activity, owing to geomagnetic anomalies. Motivated by the pervasiveness of straight-line features in indoor and outdoor environments, we developed deep learning-based visual tracking of these features to enhance heading estimation from gyroscope and magnetic field fusion. A convolutional neural network based on the U-Net architecture was developed to recognize these features accurately and quickly; they are then leveraged as a heading constraint to overcome long-term gyroscope drift and short-term compass heading bias. The proposed method balances recognition latency and precision, enabling smooth real-time performance. The results improve heading estimation and could be of significant help, especially for visually impaired people, who commonly follow tactile paving. This encourages future tests and assessments with visually impaired users so that the proposed method can be reliably included in their applications. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Scientific reports, 2025, v. 15, 31672 | - |
| dcterms.isPartOf | Scientific reports | - |
| dcterms.issued | 2025 | - |
| dc.identifier.scopus | 2-s2.0-105014740634 | - |
| dc.identifier.pmid | 40866377 | - |
| dc.identifier.eissn | 2045-2322 | - |
| dc.identifier.artn | 31672 | - |
| dc.description.validate | 202601 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was supported by Zhejiang Provincial Natural Science Foundation Project (LTGG23D040001) and Open Project of Fujian Key Laboratory of Spatial Information Perception and Intelligent Processing(Yango University, Grant No. FKLSIPIP1023). | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
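The abstract above outlines a fusion pipeline: gyroscope integration for smooth short-term heading, an absolute heading (from magnetometer/LVGO measurements) to bound long-term drift, and straight-line features recognized by a U-Net (e.g. tactile paving) as an additional heading constraint. The minimal Python sketch below illustrates one way such a complementary fusion could be wired together; it is not the authors' implementation, and all names, gains, and the simple weighted-correction scheme are assumptions for illustration only.

```python
# Minimal sketch (not the authors' implementation) of the heading-fusion idea
# described in the abstract: integrate the gyroscope yaw rate for short-term
# heading, correct long-term drift with an absolute compass heading, and pull
# the heading toward the direction of a detected straight-line feature.
import math


def wrap_angle(a):
    """Wrap an angle in radians to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))


class HeadingFuser:
    def __init__(self, compass_gain=0.02, line_gain=0.2):
        self.heading = None                # current heading estimate (rad)
        self.compass_gain = compass_gain   # weight of the magnetometer correction
        self.line_gain = line_gain         # weight of the line-feature constraint

    def update(self, gyro_yaw_rate, dt, compass_heading=None, line_heading=None):
        """Advance the heading estimate by one sensor epoch.

        gyro_yaw_rate   -- yaw rate about gravity (rad/s), from the gyroscope
        dt              -- time step (s)
        compass_heading -- absolute heading from the magnetometer (rad), if trusted
        line_heading    -- heading implied by a detected straight-line feature (rad),
                           e.g. derived from a U-Net segmentation mask, if any
        """
        if self.heading is None:
            # Initialise from the first absolute observation available.
            self.heading = compass_heading if compass_heading is not None else 0.0

        # 1. Short-term propagation: integrate the gyroscope (drifts over time).
        self.heading = wrap_angle(self.heading + gyro_yaw_rate * dt)

        # 2. Long-term correction: small pull toward the compass heading
        #    (complementary-filter style), limiting accumulated gyro drift.
        if compass_heading is not None:
            err = wrap_angle(compass_heading - self.heading)
            self.heading = wrap_angle(self.heading + self.compass_gain * err)

        # 3. Visual constraint: when a straight-line feature is detected, apply a
        #    stronger correction toward its direction, which also suppresses
        #    short-term compass bias near magnetic disturbances.
        if line_heading is not None:
            err = wrap_angle(line_heading - self.heading)
            self.heading = wrap_angle(self.heading + self.line_gain * err)

        return self.heading
```

As a usage sketch, one update per sensor epoch might look like `fuser.update(gyro_z, 0.02, compass_heading=mag_yaw, line_heading=line_yaw)`, with `line_heading` left as `None` whenever no straight-line feature is visible.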
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| s41598-025-13390-9.pdf | | 5.99 MB | Adobe PDF | View/Open |
SCOPUS™ Citations: 2 (as of May 8, 2026)
Web of Science™ Citations: 2 (as of Apr 23, 2026)