Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116360
Title: QualBench: benchmarking Chinese LLMs with localized professional qualifications for vertical domain evaluation
Authors: Hong, M 
Ng, W 
Zhang, CJ 
Jiang, D 
Issue Date: 2025
Source: In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, p. 5949-5964. Kerrville, TX: Association for Computational Linguistics, 2025
Abstract: The rapid advancement of Chinese LLMs underscores the need for vertical-domain evaluations to ensure reliable applications. However, existing benchmarks often lack domain coverage and provide limited insights into the Chinese working context. Leveraging qualification exams as a unified framework for expertise evaluation, we introduce QualBench, the first multi-domain Chinese QA benchmark dedicated to localized assessment of Chinese LLMs. The dataset includes over 17,000 questions across six vertical domains, drawn from 24 Chinese qualifications to align with national policies and professional standards. Results reveal an interesting pattern of Chinese LLMs consistently surpassing non-Chinese models, with the Qwen2.5 model outperforming the more advanced GPT-4o, emphasizing the value of localized domain knowledge in meeting qualification requirements. The average accuracy of 53.98% reveals the current gaps in domain coverage within model capabilities. Furthermore, we identify performance degradation caused by LLM crowdsourcing, assess data contamination, and illustrate the effectiveness of prompt engineering and model fine-tuning, suggesting opportunities for future improvements through multi-domain RAG and Federated Learning. Data and code are publicly available at https://github.com/mengze-hong/QualBench.
Publisher: Association for Computational Linguistics (ACL)
ISBN: 979-8-89176-332-6
DOI: 10.18653/v1/2025.emnlp-main.303
Research Data: https://github.com/mengze-hong/QualBench
Description: 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), Suzhou, China, November 4th-9th, 2025
Rights: ©2025 Association for Computational Linguistics
Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License. (https://creativecommons.org/licenses/by/4.0/)
The following publication Hong, M., Ng, W., Zhang, C. J., & Jiang, D. (2025, November). QualBench: Benchmarking Chinese LLMs with Localized Professional Qualifications for Vertical Domain Evaluation. In C. Christodoulopoulos, T. Chakraborty, C. Rose, & V. Peng (Eds.), Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, Suzhou, China, is available at https://doi.org/10.18653/v1/2025.emnlp-main.303.
Appears in Collections: Conference Paper

Files in This Item:
File: 2025.emnlp-main.303.pdf
Size: 759.81 kB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.