Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/118091
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | en_US |
| dc.creator | Deng, K | en_US |
| dc.creator | Wang, Y | en_US |
| dc.creator | Chau, LP | en_US |
| dc.date.accessioned | 2026-03-13T06:22:18Z | - |
| dc.date.available | 2026-03-13T06:22:18Z | - |
| dc.identifier.issn | 0957-4174 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/118091 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Pergamon Press | en_US |
| dc.subject | Egocentric vision | en_US |
| dc.subject | HOI detection | en_US |
| dc.subject | Human-object interaction | en_US |
| dc.subject | Interaction recognition | en_US |
| dc.title | Egocentric human-object interaction detection: a new benchmark and method | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 300 | en_US |
| dc.identifier.doi | 10.1016/j.eswa.2025.130216 | en_US |
| dcterms.abstract | Egocentric human-object interaction (Ego-HOI) detection is crucial for intelligent agents to comprehend and assist human activities from a first-person perspective. However, progress has been hindered by the lack of dedicated benchmarks and methods robust to severe egocentric challenges like hand-object occlusion. This work bridges this gap through three key contributions. Firstly, we introduce Ego-HOIBench, a pioneering benchmark dataset derived from HOI4D for real-world Ego-HOI detection, comprising over 27K real images with explicit, fine-grained <hand, verb, object> triplet annotations. Secondly, we propose Hand Geometry and Interactivity Refinement (HGIR), a novel plug-and-play module that captures the structural geometry of hands to learn occlusion-robust, pose-aware interaction representations. Thirdly, comprehensive experiments demonstrate that HGIR significantly enhances Ego-HOI detection performance across various methods, achieving state-of-the-art results and laying a solid foundation for future research in egocentric vision. Project page: https://dengkunyuan.github.io/EgoHOIBench/ | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Expert systems with applications, 5 Mar. 2026, v. 300, 130216 | en_US |
| dcterms.isPartOf | Expert systems with applications | en_US |
| dcterms.issued | 2026-03-05 | - |
| dc.identifier.scopus | 2-s2.0-105024331082 | - |
| dc.identifier.eissn | 1873-6793 | en_US |
| dc.identifier.artn | 130216 | en_US |
| dc.description.validate | 202603 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G001194/2026-01 | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | The research work was conducted in the JC STEM Lab of Machine Learning and Computer Vision funded by The Hong Kong Jockey Club Charities Trust. This research received partial support from the Global STEM Professorship Scheme of the Hong Kong Special Administrative Region. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2028-03-05 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.



