Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118091
DC Field: Value (Language)
dc.contributor: Department of Electrical and Electronic Engineering (en_US)
dc.creator: Deng, K (en_US)
dc.creator: Wang, Y (en_US)
dc.creator: Chau, LP (en_US)
dc.date.accessioned: 2026-03-13T06:22:18Z
dc.date.available: 2026-03-13T06:22:18Z
dc.identifier.issn: 0957-4174 (en_US)
dc.identifier.uri: http://hdl.handle.net/10397/118091
dc.language.iso: en (en_US)
dc.publisher: Pergamon Press (en_US)
dc.subject: Egocentric vision (en_US)
dc.subject: HOI detection (en_US)
dc.subject: Human-object interaction (en_US)
dc.subject: Interaction recognition (en_US)
dc.title: Egocentric human-object interaction detection : a new benchmark and method (en_US)
dc.type: Journal/Magazine Article (en_US)
dc.identifier.volume: 300 (en_US)
dc.identifier.doi: 10.1016/j.eswa.2025.130216 (en_US)
dcterms.abstract: Egocentric human-object interaction (Ego-HOI) detection is crucial for intelligent agents to comprehend and assist human activities from a first-person perspective. However, progress has been hindered by the lack of dedicated benchmarks and methods robust to severe egocentric challenges like hand-object occlusion. This work bridges this gap through three key contributions. Firstly, we introduce Ego-HOIBench, a pioneering benchmark dataset derived from HOI4D for real-world Ego-HOI detection, comprising over 27K real images with explicit, fine-grained <hand, verb, object> triplet annotations. Secondly, we propose Hand Geometry and Interactivity Refinement (HGIR), a novel plug-and-play module that captures the structural geometry of hands to learn occlusion-robust, pose-aware interaction representations. Thirdly, comprehensive experiments demonstrate that HGIR significantly enhances Ego-HOI detection performance across various methods, achieving state-of-the-art results and laying a solid foundation for future research in egocentric vision. Project page: https://dengkunyuan.github.io/EgoHOIBench/ (en_US)
dcterms.accessRights: embargoed access (en_US)
dcterms.bibliographicCitation: Expert systems with applications, 5 Mar. 2026, v. 300, 130216 (en_US)
dcterms.isPartOf: Expert systems with applications (en_US)
dcterms.issued: 2026-03-05
dc.identifier.scopus: 2-s2.0-105024331082
dc.identifier.eissn: 1873-6793 (en_US)
dc.identifier.artn: 130216 (en_US)
dc.description.validate: 202603 bchy (en_US)
dc.description.oa: Not applicable (en_US)
dc.identifier.SubFormID: G001194/2026-01
dc.description.fundingSource: Others (en_US)
dc.description.fundingText: The research work was conducted in the JC STEM Lab of Machine Learning and Computer Vision funded by The Hong Kong Jockey Club Charities Trust. This research received partial support from the Global STEM Professorship Scheme of the Hong Kong Special Administrative Region. (en_US)
dc.description.pubStatus: Published (en_US)
dc.date.embargo: 2028-03-05 (en_US)
dc.description.oaCategory: Green (AAM) (en_US)
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 2028-03-05