Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/93050
DC field: value [language]
dc.contributor: Department of Chinese and Bilingual Studies [en_US]
dc.creator: Cappellini, M [en_US]
dc.creator: Hsu, YY [en_US]
dc.date.accessioned: 2022-06-06T02:44:34Z
dc.date.available: 2022-06-06T02:44:34Z
dc.identifier.issn: 0958-3440 [en_US]
dc.identifier.uri: http://hdl.handle.net/10397/93050
dc.language.iso: en [en_US]
dc.publisher: Cambridge University Press [en_US]
dc.rights: This article has been published in a revised form in ReCALL, https://dx.doi.org/10.1017/S0958344022000076. This version is free to view and download for private research and study only. Not for re-distribution or re-use. © The Author(s), 2022. Published by Cambridge University Press on behalf of the European Association for Computer Assisted Language Learning. [en_US]
dc.subject: Webconferencing [en_US]
dc.subject: Online tutoring [en_US]
dc.subject: Telecollaboration [en_US]
dc.subject: Eye tracking [en_US]
dc.subject: Affordances [en_US]
dc.subject: Multimodality [en_US]
dc.title: Multimodality in webconference-based language tutoring: an ecological approach integrating eye tracking [en_US]
dc.type: Journal/Magazine Article [en_US]
dc.identifier.spage: 255
dc.identifier.epage: 273
dc.identifier.volume: 34
dc.identifier.issue: 3
dc.identifier.doi: 10.1017/S0958344022000076 [en_US]
dcterms.abstract: Drawing on existing research with a holistic stance toward multimodal meaning-making, this paper takes an analytic approach to integrating eye-tracking data to study the perception and use of multimodality by teachers and learners. To illustrate this approach, we analyse two webconference tutoring sessions from a telecollaborative project involving pre-service teachers and learners of Mandarin Chinese. The tutoring sessions were recorded and transcribed multimodally, and our analysis of two types of conversational side sequences shows that integrating eye-tracking data into an ecological approach provides richer results. Specifically, the proposed approach provided a window on the participants’ cognitive management of graphic and visual affordances during interaction and uncovered episodes of joint attention. [en_US]
dcterms.accessRights: open access [en_US]
dcterms.bibliographicCitation: ReCALL, Sept. 2022, v. 34, no. 3, p. 255-273
dcterms.isPartOf: ReCALL [en_US]
dcterms.issued: 2022-09
dc.identifier.eissn: 1474-0109 [en_US]
dc.description.validate: 202206 bcrc [en_US]
dc.description.oa: Accepted Manuscript [en_US]
dc.identifier.FolderNumber: a1270, CBS-0009 [en_US]
dc.identifier.SubFormID: 44417
dc.description.fundingSource: Self-funded [en_US]
dc.description.pubStatus: Published [en_US]
dc.identifier.OPUS: 54799632 [en_US]
dc.description.oaCategory: Green (AAM) [en_US]
Appears in Collections: Journal/Magazine Article
Files in This Item:
File                      Description              Size      Format
REC-RA-2020-0107.pdf      Pre-Published version    4.78 MB   Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Access: View full-text via PolyU eLinks SFX Query

Page views: 101 (last week: 0), as of Apr 14, 2025
Downloads: 337, as of Apr 14, 2025
Scopus™ citations: 2, as of Jun 21, 2024
Web of Science™ citations: 3, as of Oct 10, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.