Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110676
Title: Log probabilities are a reliable estimate of semantic plausibility in base and instruction-tuned language models
Authors: Kauf, C
Chersoni, E 
Lenci, A
Fedorenko, E
Ivanova, AA
Issue Date: 2024
Source: In The 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP: Proceedings of the Workshop, November 15, 2024, pp. 263-277. Kerrville, TX: Association for Computational Linguistics (ACL), 2024
Abstract: Semantic plausibility (e.g. knowing that “the actor won the award” is more likely than “the actor won the battle”) serves as an effective proxy for general world knowledge. Language models (LMs) capture vast amounts of world knowledge by learning distributional patterns in text, accessible via log probabilities (LogProbs) they assign to plausible vs. implausible outputs. The new generation of instruction-tuned LMs can now also provide explicit estimates of plausibility via prompting. Here, we evaluate the effectiveness of LogProbs and basic prompting to measure semantic plausibility, both in single-sentence minimal pairs (Experiment 1) and short context-dependent scenarios (Experiment 2). We find that (i) in both base and instruction-tuned LMs, LogProbs offers a more reliable measure of semantic plausibility than direct zero-shot prompting, which yields inconsistent and often poor results; (ii) instruction-tuning generally does not alter the sensitivity of LogProbs to semantic plausibility (although it sometimes decreases it); (iii) across models, context mostly modulates LogProbs in expected ways, as measured by three novel metrics of context-sensitive plausibility and their match to explicit human plausibility judgments. We conclude that, even in the era of prompt-based evaluations, LogProbs constitute a useful metric of semantic plausibility, both in base and instruction-tuned LMs.
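The LogProbs measure the abstract describes scores a sentence by summing the log probabilities the LM assigns to each token given its preceding context, then compares the totals across a minimal pair. The sketch below illustrates that comparison with a toy, hand-specified conditional-probability table standing in for a real LM's next-token distribution; the probability values are invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical P(next_token | prefix) values standing in for a real LM's
# softmax outputs; chosen only to make the minimal-pair contrast visible.
toy_lm = {
    ("the", "actor"): 0.9,
    ("the actor", "won"): 0.5,
    ("the actor won", "the"): 0.8,
    ("the actor won the", "award"): 0.2,
    ("the actor won the", "battle"): 0.001,
}

def sentence_logprob(tokens):
    """Sum log P(token | preceding tokens) under the toy LM,
    conditioning on the first token as given."""
    total = 0.0
    for i in range(1, len(tokens)):
        prefix = " ".join(tokens[:i])
        total += math.log(toy_lm[(prefix, tokens[i])])
    return total

plausible = sentence_logprob(["the", "actor", "won", "the", "award"])
implausible = sentence_logprob(["the", "actor", "won", "the", "battle"])
assert plausible > implausible  # the plausible sentence scores higher
```

With a real model one would obtain the per-token log probabilities from the LM's output distribution instead of a lookup table, but the comparison step is the same: the member of the minimal pair with the higher summed LogProbs is taken as the more plausible one.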
Publisher: Association for Computational Linguistics (ACL)
ISBN: 979-8-89176-170-4
DOI: 10.18653/v1/2024.blackboxnlp-1.18
Description: BlackboxNLP 2024: Analyzing and interpreting neural networks for NLP -- EMNLP 2024, November 15, 2024, Miami, Florida
Rights: ©2024 Association for Computational Linguistics
ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License (https://creativecommons.org/licenses/by-nc-sa/3.0/). Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
The following publication Carina Kauf, Emmanuele Chersoni, Alessandro Lenci, Evelina Fedorenko, and Anna A Ivanova. 2024. Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 263–277, Miami, Florida, US. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2024.blackboxnlp-1.18.
Appears in Collections:Conference Paper

Files in This Item:
2024.blackboxnlp-1.18.pdf (5.81 MB, Adobe PDF)
Open Access Information
Status open access
File Version Version of Record
Access



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.