Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116185
DC Field: Value
dc.contributor: Department of Aeronautical and Aviation Engineering
dc.creator: Lyu, M
dc.creator: Li, F
dc.date.accessioned: 2025-11-26T01:47:36Z
dc.date.available: 2025-11-26T01:47:36Z
dc.identifier.issn: 1071-5819
dc.identifier.uri: http://hdl.handle.net/10397/116185
dc.language.iso: en
dc.publisher: Academic Press
dc.subject: Aviation safety
dc.subject: Eye-tracking
dc.subject: Generative AI
dc.subject: Human-centered artificial intelligence
dc.subject: Human–computer interaction
dc.title: Do you need help? Identifying and responding to pilots’ troubleshooting through eye-tracking and large language model
dc.type: Journal/Magazine Article
dc.identifier.volume: 205
dc.identifier.doi: 10.1016/j.ijhcs.2025.103617
dcterms.abstract: In-time automation support is crucial for enhancing pilots’ performance and flight safety. While extensive research has been conducted on providing automation support to mitigate risks associated with the Out-of-the-Loop (OOTL) phenomenon, limited attention has been given to supporting pilots who are actively engaged, known as In-the-Loop (ITL) status. Despite their active engagement, ITL pilots face challenges in managing multiple tasks simultaneously without additional support. For instance, providing critical information through in-time automation support can significantly improve efficiency and flight safety when pilots need to visually troubleshoot unexpected incidents while monitoring the aircraft's flying status. This study addresses the gap in ITL support by introducing a method that tokenizes eye-tracking data into Visual Attention Matrices (VAMs), integrated with a Large Language Model (LLM) to identify and respond to the troubleshooting activities of ITL pilots. We address two primary challenges: capturing the complex troubleshooting status of pilots, which blends with normal monitoring behaviors, and effectively processing non-semantic eye-tracking data with an LLM. The proposed VAM approach provides a structured representation of visual attention that supports LLM reasoning, while empirical VAMs enhance the model's ability to efficiently identify critical features. A case study involving 19 licensed pilots validates the efficacy of the proposed approach in identifying and responding to pilots’ troubleshooting activities. This research contributes significantly to adaptive Human–Computer Interaction (HCI) in aviation by improving support for ITL pilots, thereby laying a foundation for future advancements in human–AI collaboration within automated aviation systems.
dcterms.accessRights: embargoed access
dcterms.bibliographicCitation: International journal of human computer studies, Nov. 2025, v. 205, 103617
dcterms.isPartOf: International journal of human computer studies
dcterms.issued: 2025-11
dc.identifier.scopus: 2-s2.0-105016248975
dc.identifier.eissn: 1095-9300
dc.identifier.artn: 103617
dc.description.validate: 202511 bcjz
dc.description.oa: Not applicable
dc.identifier.SubFormID: G000386/2025-10
dc.description.fundingSource: Others
dc.description.fundingText: This work was supported by the Hong Kong Polytechnic University, Hong Kong, under Grant P0038827 and Grant P0038933. This study has been granted human ethics approval from the PolyU Institutional Review Board of The Hong Kong Polytechnic University (IRB Reference Number: HSEARS20211117002).
dc.description.pubStatus: Published
dc.date.embargo: 2027-11-30
dc.description.oaCategory: Green (AAM)
Appears in Collections: Journal/Magazine Article