Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106695
Title: Investigating the effect of discourse connectives on transformer surprisal : language models understand connectives, even so they are surprised
Authors: Cong, Y
Chersoni, E 
Hsu, YY 
Blache, P
Issue Date: 2023
Source: In BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP : Proceedings of the Sixth Workshop, December 7, 2023, p. 222-232. Kerrville, TX : Association for Computational Linguistics, 2023
Abstract: As Transformer-based neural language models (NLMs) become increasingly dominant in natural language processing, several studies have proposed analyzing the semantic and pragmatic abilities of such models. In our study, we investigated the effect of discourse connectives on NLMs in terms of Transformer Surprisal scores, focusing on the English stimuli of an experimental dataset in which the expectations about an event in a discourse fragment could be reversed by a concessive or a contrastive connective. By comparing the Surprisal scores of several NLMs, we found that larger NLMs show patterns similar to human behavioral data when a concessive connective is used, while connective-related effects tend to disappear with a contrastive one. We additionally validated our findings with GPT-Neo on an extended dataset, and the results mostly show a consistent pattern.
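The Transformer Surprisal the abstract refers to is standardly defined as the negative log-probability of a token given its preceding context, surprisal(w_t) = -log2 P(w_t | w_1 ... w_t-1). The sketch below shows one way such scores can be obtained from a causal language model with the HuggingFace transformers library; GPT-2 is used as a lightweight stand-in, and the two stimulus sentences are invented for illustration rather than taken from the paper's dataset.

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 as a small stand-in; GPT-Neo checkpoints such as
    # "EleutherAI/gpt-neo-125M" load the same way.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def surprisal(text):
        """Per-token surprisal -log2 P(w_t | w_<t) for every token after the first."""
        ids = tokenizer(text, return_tensors="pt").input_ids   # (1, seq_len)
        with torch.no_grad():
            logits = model(ids).logits                         # (1, seq_len, vocab)
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # prediction at each prefix
        next_ids = ids[0, 1:]                                  # tokens actually observed
        nats = -log_probs[torch.arange(next_ids.size(0)), next_ids]
        bits = nats / math.log(2)                              # convert nats to bits
        return list(zip(tokenizer.convert_ids_to_tokens(next_ids.tolist()),
                        bits.tolist()))

    # Invented stimulus pair: the concessive connective licenses the
    # expectation-reversing continuation, so its tokens should be less surprising.
    for s in ["She was starving. Even so, she left her plate untouched.",
              "She was starving. Therefore, she left her plate untouched."]:
        total = sum(b for _, b in surprisal(s))
        print(f"{total:6.1f} bits  {s}")

Under this setup, a connective that makes the continuation pragmatically coherent should lower the surprisal of the tokens that follow it, which is the kind of contrast the study measures across models of different sizes.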
Publisher: Association for Computational Linguistics
ISBN: 979-8-89176-052-3
DOI: 10.18653/v1/2023.blackboxnlp-1.17
Description: BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, December 7, 2023, Singapore
Rights: ©2023 Association for Computational Linguistics
ACL materials are Copyright © 1963–2024 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License (https://creativecommons.org/licenses/by-nc-sa/3.0/). Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
The following publication Yan Cong, Emmanuele Chersoni, Yu-Yin Hsu, and Philippe Blache. 2023. Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, Even So They Are Surprised. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 222–232, Singapore. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2023.blackboxnlp-1.17.
Appears in Collections: Conference Paper

Files in This Item:
File: 48140_2023.blackboxnlp-1.17.pdf
Size: 1.05 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record