Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113067
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Li, S | en_US
dc.creator | Chen, X | en_US
dc.creator | Song, Y | en_US
dc.creator | Song, Y | en_US
dc.creator | Zhang, CJ | en_US
dc.creator | Hao, F | en_US
dc.creator | Chen, L | en_US
dc.date.accessioned | 2025-05-19T00:52:31Z | -
dc.date.available | 2025-05-19T00:52:31Z | -
dc.identifier.issn | 1066-8888 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/113067 | -
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.rights | © The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Li, S., Chen, X., Song, Y. et al. Prompt4Vis: prompting large language models with example mining for tabular data visualization. The VLDB Journal 34, 38 (2025) is available at https://doi.org/10.1007/s00778-025-00912-0. | en_US
dc.subject | In-context learning | en_US
dc.subject | Large language model | en_US
dc.subject | NLP for database | en_US
dc.subject | Prompt engineering | en_US
dc.subject | Text-to-vis | en_US
dc.title | Prompt4Vis: prompting large language models with example mining for tabular data visualization | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 34 | en_US
dc.identifier.issue | 4 | en_US
dc.identifier.doi | 10.1007/s00778-025-00912-0 | en_US
dcterms.abstract | We are currently in the epoch of Large Language Models (LLMs), which have transformed numerous technological domains within the database community. In this paper, we examine the application of LLMs in text-to-visualization (text-to-vis). The advancement of natural language processing technologies has made natural language interfaces more accessible and intuitive for visualizing tabular data. However, despite utilizing advanced neural network architectures, current methods such as Seq2Vis, ncNet, and RGVisNet for transforming natural language queries into DV commands still underperform, indicating significant room for improvement. In this paper, we introduce Prompt4Vis, a novel framework that leverages LLMs and In-context learning to enhance the generation of data visualizations from natural language. Given that In-context learning’s effectiveness is highly dependent on the selection of examples, it is critical to optimize this aspect. Additionally, encoding the full database schema of a query is not only costly but can also lead to inaccuracies. This framework includes two main components: (1) an example mining module that identifies highly effective examples to enhance In-context learning capabilities for text-to-vis applications, and (2) a schema filtering module designed to streamline database schemas. Comprehensive testing on the NVBench dataset has shown that Prompt4Vis significantly outperforms the current state-of-the-art model, RGVisNet, by approximately 35.9% on development sets and 71.3% on test sets. To the best of our knowledge, Prompt4Vis is the first framework to incorporate In-context learning for enhancing text-to-vis, marking a pioneering step in the domain. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | VLDB journal, July 2025, v. 34, no. 4, 38 | en_US
dcterms.isPartOf | VLDB journal | en_US
dcterms.issued | 2025-07 | -
dc.identifier.scopus | 2-s2.0-105004177830 | -
dc.identifier.eissn | 0949-877X | en_US
dc.identifier.artn | 38 | en_US
dc.description.validate | 202505 bcwc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA, a3706 | -
dc.identifier.SubFormID | 50794 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | P0036742, P0038989, P0046701, P0046703, P0040041, P0040568, P0043864, P0045948, P0046453, P0048183, P0048191, P0048566, P0048887, WEB24EG01-H | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Springer Nature (2025) | en_US
dc.description.oaCategory | TA | en_US
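The abstract describes two prompt-construction modules: mining effective in-context examples and filtering the database schema before querying the LLM. The sketch below is a minimal, hypothetical illustration of how such a prompt might be assembled; the Jaccard word-overlap similarity, the example pool, and the schema are illustrative stand-ins, not the paper's actual selection model or data.

```python
# Hypothetical sketch of the two modules the abstract describes:
# (1) example mining -- pick the demonstration examples most similar
#     to the input query; (2) schema filtering -- keep only the tables
#     whose name or columns overlap with the query. Word overlap is a
#     crude stand-in for the paper's learned example-selection method.

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercased word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def mine_examples(query: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Return the k pool examples whose NL query best matches `query`."""
    return sorted(pool, key=lambda ex: similarity(query, ex["nl"]), reverse=True)[:k]

def filter_schema(query: str, schema: dict[str, list[str]]) -> dict[str, list[str]]:
    """Keep only tables whose name or columns appear in the query's words."""
    words = set(query.lower().split())
    return {t: cols for t, cols in schema.items()
            if t.lower() in words or words & {c.lower() for c in cols}}

def build_prompt(query: str, pool: list[dict], schema: dict[str, list[str]]) -> str:
    """Assemble the in-context prompt: mined examples, filtered schema, query."""
    parts = [f"Q: {ex['nl']}\nVIS: {ex['vis']}" for ex in mine_examples(query, pool)]
    kept = filter_schema(query, schema)
    parts.append("Schema: " + "; ".join(f"{t}({', '.join(cols)})" for t, cols in kept.items()))
    parts.append(f"Q: {query}\nVIS:")
    return "\n\n".join(parts)

pool = [
    {"nl": "show a bar chart of total sales by region",
     "vis": "Visualize BAR SELECT region, SUM(sales) FROM orders GROUP BY region"},
    {"nl": "pie chart of user counts per country",
     "vis": "Visualize PIE SELECT country, COUNT(*) FROM users GROUP BY country"},
]
schema = {"employees": ["name", "salary", "department"],
          "projects": ["id", "budget", "deadline"]}
prompt = build_prompt("bar chart of average salary by department", pool, schema)
```

In this toy run, `projects` is dropped because none of its columns match the query, while `employees` survives the schema filter; the assembled prompt ends with the new query, leaving the trailing `VIS:` for the model to complete.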
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
s00778-025-00912-0.pdf | | 1.87 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.