Integrity Concerns of AI in Medical Education Scholarship
Mildred López
The introduction of artificial intelligence (AI) into learning settings has recently become a subject of debate in educational institutions. The discussion used to be whether institutions should allow these technologies to be part of campus life, much as we once debated whether social media should be part of academic life. Social media platforms not only became part of that life but are now where many of the educational community's interactions take place. As institutions at different educational levels have banned ChatGPT in their learning settings over concerns about academic integrity (Johnson, 2023), the question we should be asking is not whether to allow it but how to lay the ground rules for its use.
Some of the arguments behind the controversy over AI use are ethical concerns, for example, how to prevent students from using AI to plagiarize (Cotton et al., 2023). Busch et al. (2023) describe the general concerns through an analysis grounded in the principles of biomedical ethics: autonomy requires that users have transparency into the algorithms behind the models; justice demands equitable access to applications for all users; non-maleficence calls for AI to be used with critical thinking; and beneficence is served by training users before they engage with AI. Masters (2023) delves into concerns arising from the development of AI models; for example, it may be impossible to assure anonymity and privacy in the data that users provide through interactions with chatbots. Another example the author describes is integrating these algorithms with learning management systems (LMS) to give learners feedback and personalized recommendations, which raises the question of how to uphold data protection standards.
But what if AI is used not by students but by academics? The concern then migrates from the ethical dimension to the integrity of scholarly engagement. Cotton et al. (2023) present the example of a paper in which ChatGPT was used to generate ideas and draft the article, but they pose the question of whether it can be considered an author if it cannot take responsibility or be held accountable. Hosseini et al. (2023) present a draft policy focused on disclosing the use of AI in tasks such as generating ideas or writing full text, with authors accepting full responsibility for the submitted work. The discussion is only starting, but a few developments are already setting precedents for what academia should embrace to address the integrity concerns of AI in medical education scholarship.
References
Busch, F., Adams, L., & Bressem, K. (2023). Biomedical ethical aspects towards the implementation of artificial intelligence in medical education. Medical Science Educator. https://doi.org/10.1007/s40670-023-01815-x
Cotton, D., Cotton, P., & Shipway, R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International. https://doi.org/10.1080/14703297.2023.2190148
Hosseini, M., Rasmussen, L., & Resnik, D. (2023). Using AI to write scholarly publications. Accountability in Research. https://doi.org/10.1080/08989621.2023.2168535
Johnson, A. (2023). ChatGPT in schools: Here's where it's banned—and how it could potentially help students. Forbes. https://www.forbes.com/sites/ariannajohnson/2023/01/18/chatgpt-in-schools-heres-where-its-banned-and-how-it-could-potentially-help-students/?sh=1aea888a6e2c
Masters, K. (2023). Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Medical Teacher, 45(6), 574–584. https://doi.org/10.1080/0142159X.2023.2186203