Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/116451
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Aeronautical and Aviation Engineering | en_US |
| dc.creator | Liu, Q | en_US |
| dc.creator | Li, F | en_US |
| dc.creator | Ng, KKH | en_US |
| dc.creator | Han, J | en_US |
| dc.creator | Feng, S | en_US |
| dc.date.accessioned | 2025-12-30T05:47:41Z | - |
| dc.date.available | 2025-12-30T05:47:41Z | - |
| dc.identifier.issn | 0957-4174 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/116451 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Pergamon Press | en_US |
| dc.subject | Accident investigation | en_US |
| dc.subject | Chain of thought | en_US |
| dc.subject | General aviation | en_US |
| dc.subject | HFACS | en_US |
| dc.subject | Large language models | en_US |
| dc.subject | Witness narratives | en_US |
| dc.title | Accident investigation via LLMs reasoning : HFACS-guided chain-of-thoughts enhance general aviation safety | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 269 | en_US |
| dc.identifier.doi | 10.1016/j.eswa.2025.126422 | en_US |
| dcterms.abstract | Aviation accident investigation is crucial for preventing future accidents. However, traditional investigations in general aviation (GA) are expert-dependent and time-consuming. This study explores the potential of large language models (LLMs) to expedite this process by inferring human errors from witness narratives. Despite their promise, LLMs still struggle with domain-specific reasoning. To address this, we propose a novel HFACS-CoT prompt that integrates the Human Factors Analysis and Classification System (HFACS) with Chain of Thought (CoT) reasoning, guiding LLMs to infer the pilot's unsafe acts and preconditions in a multi-step, two-stage process. HFACS-CoT+ further refines this prompt by sequentially guiding LLMs through each step and replacing textual instructions with programmatic logic statements. A new HFACS-labeled GA accident dataset was developed to support GA safety research and to validate the proposed prompts. Using GPT-4o with this dataset, we found that HFACS-CoT significantly enhances LLMs' ability to infer human errors, outperforming basic zero-shot, basic few-shot, auto-CoT, and plan-and-solve prompts. HFACS-CoT+ further improves inference of preconditions and addresses deficiencies in answering logic. Moreover, comparative evaluations indicate that LLMs surpass human experts in inferring certain human errors. This study highlights the benefits of integrating domain knowledge into prompt design and the potential of LLMs in GA accident investigations. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Expert systems with applications, 15 Apr. 2025, v. 269, 126422 | en_US |
| dcterms.isPartOf | Expert systems with applications | en_US |
| dcterms.issued | 2025-04-15 | - |
| dc.identifier.scopus | 2-s2.0-85214674440 | - |
| dc.identifier.eissn | 1873-6793 | en_US |
| dc.identifier.artn | 126422 | en_US |
| dc.description.validate | 202512 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000558/2025-12 | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | The study is partially supported by The Hong Kong Polytechnic University (P0038933 and P0038827) and by the Research Centre for Data Science and AI (P0042711). In addition, we especially thank the anonymous reviewers, whose constructive feedback and suggestions greatly contributed to the improvement of the manuscript. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2027-04-15 | en_US |
| Appears in Collections: | Journal/Magazine Article | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.