Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114572
Title: HiBench: benchmarking LLMs capability on hierarchical structure reasoning
Authors: Jiang, Z 
Wu, P 
Liang, Z 
Chen, PQ 
Yuan, X 
Jia, Y 
Tu, J 
Li, C 
Ng, PHF 
Li, Q 
Issue Date: 2025
Source: KDD '25: Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, p. 5505-5515
Abstract: Structure reasoning is a fundamental capability of large language models (LLMs), enabling them to reason about structured commonsense and answer multi-hop questions. However, existing benchmarks for structure reasoning mainly focus on horizontal and coordinate structures (e.g., graphs), overlooking the hierarchical relationships within them. Hierarchical structure reasoning is crucial for human cognition, particularly in memory organization and problem-solving. It also plays a key role in various real-world tasks, such as information extraction and decision-making. To address this gap, we propose HiBench, the first framework designed to systematically benchmark the hierarchical reasoning capabilities of LLMs, from initial structure generation to final proficiency assessment. It encompasses six representative scenarios, covering both fundamental and practical aspects, and consists of 30 tasks with varying hierarchical complexity, totaling 39,519 queries. To evaluate LLMs comprehensively, we develop five capability dimensions that depict different facets of hierarchical structure understanding. Through extensive evaluation of 20 LLMs from 10 model families, we reveal key insights into their capabilities and limitations: 1) existing LLMs show proficiency in basic hierarchical reasoning tasks; 2) they still struggle with more complex structures and implicit hierarchical representations, especially in structural modification and textual reasoning. Based on these findings, we create a small yet well-designed instruction dataset, which enhances LLMs' performance on HiBench by an average of 88.84% (Llama-3.1-8B) and 31.38% (Qwen2.5-7B) across all tasks. The HiBench dataset and toolkit are available at https://github.com/jzzzzh/HiBench to encourage evaluation.
Keywords: Benchmark
Hierarchical reasoning
Large language models
Natural language processing
Publisher: Association for Computing Machinery
ISBN: 979-8-4007-1454-2
DOI: 10.1145/3711896.3737378
Rights: This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0).
© 2025 Copyright held by the owner/author(s).
The following publication Jiang, Z., Wu, P., Liang, Z., Chen, P. Q., Yuan, X., Jia, Y., Tu, J., Li, C., Ng, P. H. F., & Li, Q. (2025). HiBench: Benchmarking LLMs Capability on Hierarchical Structure Reasoning. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, Toronto, ON, Canada, 5505-5515 is available at https://doi.org/10.1145/3711896.3737378.
Appears in Collections: Conference Paper

Files in This Item:
File: 3711896.3737378.pdf  Size: 1.69 MB  Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record