Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114025
Title: WorldCuisines: a massive-scale benchmark for multilingual and multicultural visual question answering on global cuisines
Authors: Genta, I
Hudi, F
Irawan, PA
Anugraha, D
Putri, RA
Wang, Y
Nohejl, A
Prathama, UA
Ousidhoum, N
Amriani, A
Rzayev, A
Das, A
Pramodya, A
Adila, A
Wilie, B
Mawalim, CO
Lam, CL
Abolade, D
Chersoni, E 
Santus, E
Ikhwantri, F
Kuwanto, G
Zhao, H
Wibowo, HA
Lovenia, H
Cruz, JCB
Putra, JWG
Myung, J
Susanto, L
Machin, MAR
Zhukova, M
Anugraha, M
Adilazuarda, MF
Santosa, N
Limkonchotiwat, P
Dabre, R
Audino, RA
Cahyawijaya, S
Zhang, SX
Salim, SY
Zhou, Y
Gui, Y
Adelani, DI
Lee, EA
Okada, S
Purwarianti, A
Aji, A
Watanabe, T
Wijaya, DT
Oh, A
Ngo, CW
Issue Date: 2025
Source: In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (v. 1: Long Papers), p. 3242–3264. Albuquerque, New Mexico: Association for Computational Linguistics, 2025
Abstract: Vision Language Models (VLMs) often struggle with culture-specific knowledge, particularly in languages other than English and in underrepresented cultural contexts. To evaluate their understanding of such knowledge, we introduce WorldCuisines, a massive-scale benchmark for multilingual and multicultural, visually grounded language understanding. This benchmark includes a visual question answering (VQA) dataset with text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark to date. It includes tasks for identifying dish names and their origins. We provide evaluation datasets in two sizes (12k and 60k instances) alongside a training dataset (1 million instances). Our findings show that while VLMs perform better with correct location context, they struggle with adversarial contexts and predicting specific regional cuisines and languages. To support future research, we release a knowledge base with annotated food entries and images along with the VQA data.
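As a hedged illustration of how a benchmark of this shape might be consumed, the sketch below loads an evaluation split with the Hugging Face `datasets` library and iterates over a few text-image pairs. The dataset identifier `worldcuisines/vqa`, the split name, and the field names (`prompt`, `image`, `answer`) are assumptions for illustration only; consult the authors' released resources for the actual identifiers.

```python
# Minimal sketch: loading a multilingual VQA benchmark with the
# Hugging Face `datasets` library. The dataset ID, split name, and
# field names below are ASSUMPTIONS, not confirmed by this record.
from datasets import load_dataset

# Hypothetical hub ID and split (e.g., the 12k evaluation set).
ds = load_dataset("worldcuisines/vqa", split="test_small")

for example in ds.select(range(3)):
    question = example["prompt"]  # assumed question text field
    image = example["image"]      # assumed PIL image of the dish
    answer = example["answer"]    # assumed gold dish name / origin
    print(question, "->", answer)
```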
Publisher: Association for Computational Linguistics
ISBN: 979-8-89176-189-6
DOI: 10.18653/v1/2025.naacl-long.167
Description: 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, Albuquerque, New Mexico, April 29 – May 4, 2025
Rights: ©2025 Association for Computational Linguistics
ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.
The following publication, Winata, G. I., Hudi, F., Irawan, P. A., Anugraha, D., Putri, R. A., Wang, Y., ... & Ngo, C. W. 2025. WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3242–3264, Albuquerque, New Mexico. Association for Computational Linguistics, is available at https://doi.org/10.18653/v1/2025.naacl-long.167.
Appears in Collections: Conference Paper

Files in This Item:
File: 2025.naacl-long.167.pdf
Size: 5.79 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

