Large Language Models (LLMs): Smart Work or Academic Doping?

Abstract
LLMs are transforming academic research and publishing by significantly increasing scholars' productivity. A 2023 Nature survey revealed that nearly a third of scientists use generative AI for manuscript preparation, with LLMs aiding in tasks such as coding, brainstorming, and literature reviews. LLMs help overcome language barriers and allow researchers to create personalised models tailored to their fields, automate repetitive tasks, and boost productivity, leading to faster publication readiness and enhancing the research journey. However, LLMs raise significant issues, including biases and exploitation in their training processes and the generation of errors or inaccurate information. This outsourcing of thought (and, of course, the facilitation of outright cheating by students and scholars) raises concerns about overburdening journal editors, peer reviewers, and course administrators alike. The ease of generating papers with LLMs is increasing the volume of lower-value research, making it harder to identify impactful studies and threatening the integrity and sustainability of scientific publishing. This debate juxtaposes the optimistic view of LLMs as catalysts for scientific progress with critical perspectives on their potential to dilute research quality and integrity.
Description
MP4 video, Size: 2.15 GB; Duration: 2:05:53
Contributor ORCIDs
Soodyall, Himla; Majozi, Thokozani; Verhoef, Anne; Morris, Lynn; Walwyn, David; Tjano, Nicky
Peer review status
Non-Peer Reviewed