Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115489
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | -
dc.creator | Guo, J | en_US
dc.creator | Zhou, Y | en_US
dc.creator | Burlion, L | en_US
dc.creator | Savkin, AV | en_US
dc.creator | Huang, C | en_US
dc.date.accessioned | 2025-10-02T02:11:47Z | -
dc.date.available | 2025-10-02T02:11:47Z | -
dc.identifier.issn | 0967-0661 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/115489 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier Ltd | en_US
dc.subject | Autonomous navigation | en_US
dc.subject | Deep learning | en_US
dc.subject | Hybrid learning | en_US
dc.subject | Last-mile delivery | en_US
dc.subject | Multi-agent systems | en_US
dc.subject | Object detection | en_US
dc.subject | Path planning | en_US
dc.subject | Real-world adaptability | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Unmanned aerial vehicle | en_US
dc.subject | Urban logistics | en_US
dc.title | Autonomous UAV last-mile delivery in urban environments : a survey on deep learning and reinforcement learning solutions | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 165 | en_US
dc.identifier.doi | 10.1016/j.conengprac.2025.106491 | en_US
dcterms.abstract | This survey investigates the convergence of deep learning (DL) and reinforcement learning (RL) for unmanned aerial vehicle (UAV) applications, particularly in autonomous last-mile delivery. The ongoing growth of e-commerce heightens the need for advanced UAV technologies to overcome urban logistics challenges, including navigation and package delivery. DL and RL offer promising methods for object detection, path planning, and decision-making, enhancing delivery efficiency. However, significant challenges persist, particularly regarding scalability, computational constraints, and adaptation to volatile urban settings. Large UAV fleets and intricate city environments exacerbate scalability issues, while limited onboard processing capacity hinders the use of computationally intensive DL and RL models. Moreover, adapting to unpredictable conditions demands robust navigation strategies. This survey emphasizes hybrid approaches that integrate supervised and reinforcement learning or employ transfer learning to adapt pre-trained models. Techniques like model-based RL and domain adaptation are highlighted as potential pathways to improve generalizability and bridge the gap between simulation and real-world deployment. | -
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Control engineering practice, Dec. 2025, v. 165, 106491 | en_US
dcterms.isPartOf | Control engineering practice | en_US
dcterms.issued | 2025-12 | -
dc.identifier.scopus | 2-s2.0-105013186776 | -
dc.identifier.eissn | 1873-6939 | en_US
dc.identifier.artn | 106491 | en_US
dc.description.validate | 202510 bcch | -
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G000153/2025-09 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2027-12-31 | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 2027-12-31
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.