Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116360
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Hong, M | en_US
dc.creator | Ng, W | en_US
dc.creator | Zhang, CJ | en_US
dc.creator | Jiang, D | en_US
dc.date.accessioned | 2025-12-19T02:41:27Z | -
dc.date.available | 2025-12-19T02:41:27Z | -
dc.identifier.isbn | 979-8-89176-332-6 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/116360 | -
dc.description | 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), Suzhou, China, November 4-9, 2025 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2025 Association for Computational Linguistics | en_US
dc.rights | Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Hong, M., Ng, W., Zhang, C. J., & Jiang, D. (2025, November). QualBench: Benchmarking Chinese LLMs with Localized Professional Qualifications for Vertical Domain Evaluation. In C. Christodoulopoulos, T. Chakraborty, C. Rose, & V. Peng (Eds.), Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, Suzhou, China, is available at https://doi.org/10.18653/v1/2025.emnlp-main.303. | en_US
dc.title | QualBench: benchmarking Chinese LLMs with localized professional qualifications for vertical domain evaluation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 5949 | en_US
dc.identifier.epage | 5964 | en_US
dc.identifier.doi | 10.18653/v1/2025.emnlp-main.303 | en_US
dcterms.abstract | The rapid advancement of Chinese LLMs underscores the need for vertical-domain evaluations to ensure reliable applications. However, existing benchmarks often lack domain coverage and provide limited insights into the Chinese working context. Leveraging qualification exams as a unified framework for expertise evaluation, we introduce QualBench, the first multi-domain Chinese QA benchmark dedicated to localized assessment of Chinese LLMs. The dataset includes over 17,000 questions across six vertical domains, drawn from 24 Chinese qualifications to align with national policies and professional standards. Results reveal an interesting pattern of Chinese LLMs consistently surpassing non-Chinese models, with the Qwen2.5 model outperforming the more advanced GPT-4o, emphasizing the value of localized domain knowledge in meeting qualification requirements. The average accuracy of 53.98% reveals the current gaps in domain coverage within model capabilities. Furthermore, we identify performance degradation caused by LLM crowdsourcing, assess data contamination, and illustrate the effectiveness of prompt engineering and model fine-tuning, suggesting opportunities for future improvements through multi-domain RAG and Federated Learning. Data and code are publicly available at https://github.com/mengze-hong/QualBench. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pp. 5949-5964. Kerrville, TX: Association for Computational Linguistics, 2025 | en_US
dcterms.issued | 2025 | -
dc.relation.ispartofbook | Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing | en_US
dc.relation.conference | Empirical Methods in Natural Language Processing [EMNLP] | en_US
dc.publisher.place | Kerrville, TX | en_US
dc.description.validate | 202512 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a4223 | -
dc.identifier.SubFormID | 52299 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
dc.relation.rdata | https://github.com/mengze-hong/QualBench | en_US
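
The abstract above describes QualBench as a multiple-choice QA benchmark scored by accuracy across six vertical domains. As a rough illustration only, the sketch below shows how such per-domain accuracy might be computed; the JSONL layout, the field names (question, options, answer, domain), the qualbench.jsonl filename, and the model_answer stub are all assumptions made for this sketch, not the actual schema of the linked repository.

```python
# Minimal sketch of per-domain accuracy scoring for a QualBench-style
# multiple-choice benchmark. All field names and the file layout are
# assumptions; see https://github.com/mengze-hong/QualBench for the
# real data format and evaluation code.
import json
from collections import defaultdict

def model_answer(question: str, options: dict[str, str]) -> str:
    """Placeholder for an LLM call; returns an option key such as 'A'."""
    return "A"  # stub: always picks the first option

def evaluate(path: str) -> dict[str, float]:
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)              # one question per line (assumed)
            domain = item.get("domain", "all")   # e.g. one of six vertical domains
            pred = model_answer(item["question"], item["options"])
            total[domain] += 1
            correct[domain] += int(pred == item["answer"])
    return {d: correct[d] / total[d] for d in total}

if __name__ == "__main__":
    print(evaluate("qualbench.jsonl"))  # hypothetical filename
```

Swapping the stub for a real model call would yield the kind of per-domain accuracy comparison the abstract reports, e.g. Qwen2.5 versus GPT-4o against the 53.98% average.
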
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2025.emnlp-main.303.pdf | | 759.81 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.