Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115489
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Industrial and Systems Engineering | - |
| dc.creator | Guo, J | en_US |
| dc.creator | Zhou, Y | en_US |
| dc.creator | Burlion, L | en_US |
| dc.creator | Savkin, AV | en_US |
| dc.creator | Huang, C | en_US |
| dc.date.accessioned | 2025-10-02T02:11:47Z | - |
| dc.date.available | 2025-10-02T02:11:47Z | - |
| dc.identifier.issn | 0967-0661 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/115489 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.subject | Autonomous navigation | en_US |
| dc.subject | Deep learning | en_US |
| dc.subject | Hybrid learning | en_US |
| dc.subject | Last-mile delivery | en_US |
| dc.subject | Multi-agent systems | en_US |
| dc.subject | Object detection | en_US |
| dc.subject | Path planning | en_US |
| dc.subject | Real-world adaptability | en_US |
| dc.subject | Reinforcement learning | en_US |
| dc.subject | Unmanned aerial vehicle | en_US |
| dc.subject | Urban logistics | en_US |
| dc.title | Autonomous UAV last-mile delivery in urban environments : a survey on deep learning and reinforcement learning solutions | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 165 | en_US |
| dc.identifier.doi | 10.1016/j.conengprac.2025.106491 | en_US |
| dcterms.abstract | This survey investigates the convergence of deep learning (DL) and reinforcement learning (RL) for unmanned aerial vehicle (UAV) applications, particularly in autonomous last-mile delivery. The ongoing growth of e-commerce heightens the need for advanced UAV technologies to overcome urban logistics challenges, including navigation and package delivery. DL and RL offer promising methods for object detection, path planning, and decision-making, enhancing delivery efficiency. However, significant challenges persist, particularly regarding scalability, computational constraints, and adaptation to volatile urban settings. Large UAV fleets and intricate city environments exacerbate scalability issues, while limited onboard processing capacity hinders the use of computationally intensive DL and RL models. Moreover, adapting to unpredictable conditions demands robust navigation strategies. This survey emphasizes hybrid approaches that integrate supervised and reinforcement learning or employ transfer learning to adapt pre-trained models. Techniques such as model-based RL and domain adaptation are highlighted as potential pathways to improve generalizability and bridge the gap between simulation and real-world deployment. | - |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Control engineering practice, Dec. 2025, v. 165, 106491 | en_US |
| dcterms.isPartOf | Control engineering practice | en_US |
| dcterms.issued | 2025-12 | - |
| dc.identifier.scopus | 2-s2.0-105013186776 | - |
| dc.identifier.eissn | 1873-6939 | en_US |
| dc.identifier.artn | 106491 | en_US |
| dc.description.validate | 202510 bcch | - |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000153/2025-09 | - |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2027-12-31 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.