Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95678
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.creator | Cappellini, M | en_US
dc.creator | Holt, B | en_US
dc.creator | Hsu, YY | en_US
dc.date.accessioned | 2022-10-05T00:50:08Z | -
dc.date.available | 2022-10-05T00:50:08Z | -
dc.identifier.issn | 0346-251X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/95678 | -
dc.language.iso | en | en_US
dc.publisher | Pergamon Press | en_US
dc.rights | © 2022 Elsevier Ltd. All rights reserved. | en_US
dc.rights | © 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US
dc.rights | The following publication Cappellini, M., Holt, B., & Hsu, Y.-Y. (2022). Multimodal alignment in telecollaboration: A methodological exploration. System, 102931 is available at https://dx.doi.org/10.1016/j.system.2022.102931. | en_US
dc.subject | Telecollaboration | en_US
dc.subject | Virtual exchange | en_US
dc.subject | Interactive alignment | en_US
dc.subject | Lexical alignment | en_US
dc.subject | Structural alignment | en_US
dc.subject | Facial expression alignment | en_US
dc.title | Multimodal alignment in telecollaboration: A methodological exploration | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 110 | en_US
dc.identifier.doi | 10.1016/j.system.2022.102931 | en_US
dcterms.abstract | This paper presents an analysis of interactive alignment (Pickering & Garrod, 2004) from a multimodal perspective (Guichon & Tellier, 2017) in two telecollaborative settings. We propose a framework to analyze alignment during desktop videoconferencing in its multimodality, including lexical and structural alignment (Michel & Cappellini, 2019) as well as multimodal alignment involving facial expressions. We analyze two datasets coming from two different models of telecollaboration. The first one is based on the Français en (première) ligne model (Develotte et al., 2007), which puts future foreign language teachers in contact with learners of that language. The second one is based on the teletandem model (Telles, 2009), where students of two different mother tongues interact to help each other use and learn the other's language. The paper makes explicit a semi-automatic procedure to study alignment multimodally. We tested our method on a dataset composed of two one-hour sessions. Results show that in desktop videoconferencing-based telecollaboration, facial expression alignment is a pivotal component of multimodal alignment. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | System, Nov. 2022, v. 110, 102931 | en_US
dcterms.isPartOf | System | en_US
dcterms.issued | 2022-11 | -
dc.identifier.artn | 102931 | en_US
dc.description.validate | 202210 bckw | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a1784 | -
dc.identifier.SubFormID | 45945 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Agence Nationale de la Recherche; Department of Chinese and Bilingual Studies at the Hong Kong Polytechnic University | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File: Cappellini_Multimodal_Alignment_Telecollaboration.pdf
Description: Pre-Published version
Size: 1.8 MB
Format: Adobe PDF

Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Page views: 81 (as of Apr 14, 2025)
Downloads: 13 (as of Apr 14, 2025)
Scopus™ citations: 7 (as of Jun 21, 2024)
Web of Science™ citations: 7 (as of Dec 5, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.