Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107027
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | -
dc.creator | Dai, Y | -
dc.creator | Wu, Z | -
dc.date.accessioned | 2024-06-07T08:55:27Z | -
dc.date.available | 2024-06-07T08:55:27Z | -
dc.identifier.issn | 0958-8221 | -
dc.identifier.uri | http://hdl.handle.net/10397/107027 | -
dc.language.iso | en | en_US
dc.publisher | Routledge | en_US
dc.rights | © 2021 Informa UK Limited, trading as Taylor & Francis Group | en_US
dc.rights | This is an Accepted Manuscript of an article published by Taylor & Francis in Computer Assisted Language Learning on 26 Jul 2021 (published online), available at: http://www.tandfonline.com/10.1080/09588221.2021.1952272. | en_US
dc.subject | Automatic speech recognition | en_US
dc.subject | Mixed methods | en_US
dc.subject | Mobile-assisted language learning | en_US
dc.subject | Peer feedback | en_US
dc.subject | Pronunciation | en_US
dc.title | Mobile-assisted pronunciation learning with feedback from peers and/or automatic speech recognition: a mixed-methods study | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 861 | -
dc.identifier.epage | 884 | -
dc.identifier.volume | 36 | -
dc.identifier.issue | 5-6 | -
dc.identifier.doi | 10.1080/09588221.2021.1952272 | -
dcterms.abstract | Although social networking apps and dictation-based automatic speech recognition (ASR) are now widely available on mobile phones, relatively little is known about whether and how these technological affordances can contribute to EFL pronunciation learning. This study investigates the effectiveness of feedback from peers and/or ASR in mobile-assisted pronunciation learning. Eighty-four Chinese EFL university students were assigned to three conditions, using WeChat (a multi-purpose mobile app) for autonomous ASR feedback (the Auto-ASR group), peer feedback (the Co-non-ASR group), or peer plus ASR feedback (the Co-ASR group). Quantitative data comprised a pronunciation pretest, posttest, and delayed posttest, together with students' perception questionnaires; qualitative data comprised student interviews. The main findings are: (a) all three groups improved their pronunciation, but the Co-non-ASR and Co-ASR groups outperformed the Auto-ASR group; (b) the three groups showed no significant differences on the perception questionnaires; and (c) the interviews revealed both common and unique technical, social/psychological, and educational affordances of, and concerns about, the three mobile-assisted learning conditions. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Computer assisted language learning, 2023, v. 36, no. 5-6, p. 861-884 | -
dcterms.isPartOf | Computer assisted language learning | -
dcterms.issued | 2023 | -
dc.identifier.scopus | 2-s2.0-85111707183 | -
dc.identifier.eissn | 1744-3210 | -
dc.description.validate | 202406 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a2785 | en_US
dc.identifier.SubFormID | 48327 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | China Postdoctoral Science Foundation (2020M683165) | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Dai_Mobile-assisted_Pronunciation_Learning.pdf | Pre-Published version | 1.27 MB | Adobe PDF

Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 96 (2 in the last week; as of Nov 9, 2025)
Downloads: 875 (as of Nov 9, 2025)
SCOPUS™ citations: 74 (as of Dec 19, 2025)
Web of Science™ citations: 47 (as of Dec 18, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.