Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112570
DC Field | Value | Language
dc.contributor | Department of English and Communication | en_US
dc.creator | Ebadi, S | en_US
dc.creator | Nejadghanbar, H | en_US
dc.creator | Salman, AR | en_US
dc.creator | Khosravi, H | en_US
dc.date.accessioned | 2025-04-17T06:34:35Z | -
dc.date.available | 2025-04-17T06:34:35Z | -
dc.identifier.issn | 1570-1727 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/112570 | -
dc.language.iso | en | en_US
dc.publisher | Springer Dordrecht | en_US
dc.rights | © The Author(s) 2025 | en_US
dc.rights | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Ebadi, S., Nejadghanbar, H., Salman, A.R. et al. Exploring the Impact of Generative AI on Peer Review: Insights from Journal Reviewers. J Acad Ethics 23, 1383–1397 (2025) is available at https://doi.org/10.1007/s10805-025-09604-4. | en_US
dc.subject | Academic publishing | en_US
dc.subject | Journal review | en_US
dc.subject | Large language models (LLMs) | en_US
dc.subject | Peer review | en_US
dc.subject | Reviewer perspectives | en_US
dc.title | Exploring the impact of generative AI on peer review : insights from journal reviewers | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1383 | en_US
dc.identifier.epage | 1397 | en_US
dc.identifier.volume | 23 | en_US
dc.identifier.issue | 3 | en_US
dc.identifier.doi | 10.1007/s10805-025-09604-4 | en_US
dcterms.abstract | This study investigates the perspectives of 12 journal reviewers from diverse academic disciplines on using large language models (LLMs) in the peer review process. We identified key themes regarding integrating LLMs through qualitative data analysis of verbatim responses to an open-ended questionnaire. Reviewers noted that LLMs can automate tasks such as preliminary screening, plagiarism detection, and language verification, thereby reducing workload and enhancing consistency in applying review standards. However, significant ethical concerns were raised, including potential biases, lack of transparency, and risks to privacy and confidentiality. Reviewers emphasized that LLMs should not replace human judgment but rather complement it with human oversight, which is essential to ensure the relevance and accuracy of AI-generated feedback. This study underscores the need for clear guidelines and policies, as well as their proper dissemination among researchers, to address the ethical and practical challenges of using LLMs in academic publishing. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Journal of academic ethics, Sept 2025, v. 23, no. 3, p. 1383–1397 | en_US
dcterms.isPartOf | Journal of academic ethics | en_US
dcterms.issued | 2025-09 | -
dc.identifier.scopus | 2-s2.0-85217763584 | -
dc.identifier.eissn | 1572-8544 | en_US
dc.description.validate | 202504 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Springer Nature (2025) | en_US
dc.description.oaCategory | TA | en_US
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
s10805-025-09604-4.pdf | | 814.89 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Scopus™ citations: 6 (as of Oct 24, 2025)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.