Daniel Wyzynski

Clinical Ethicist | Adjunct Research Professor | PhD Student

Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4


Journal article


M. Balas, Jordan Joseph Wadden, Philip C Hébert, Eric Mathison, Marika D Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A Crawford, Parnian Arjmand, Edsel B Ing
Journal of Medical Ethics, 2023


Cite

APA
Balas, M., Wadden, J. J., Hébert, P. C., Mathison, E., Warren, M. D., Seavilleklein, V., … Ing, E. B. (2023). Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4. Journal of Medical Ethics.


Chicago/Turabian
Balas, M., Jordan Joseph Wadden, Philip C Hébert, Eric Mathison, Marika D Warren, Victoria Seavilleklein, Daniel Wyzynski, et al. “Exploring the Potential Utility of AI Large Language Models for Medical Ethics: An Expert Panel Evaluation of GPT-4.” Journal of Medical Ethics (2023).


MLA
Balas, M., et al. “Exploring the Potential Utility of AI Large Language Models for Medical Ethics: An Expert Panel Evaluation of GPT-4.” Journal of Medical Ethics, 2023.


BibTeX

@article{balas2023exploring,
  title = {Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4},
  year = {2023},
  journal = {Journal of Medical Ethics},
  author = {Balas, M. and Wadden, Jordan Joseph and Hébert, Philip C and Mathison, Eric and Warren, Marika D and Seavilleklein, Victoria and Wyzynski, Daniel and Callahan, Alison and Crawford, Sean A and Arjmand, Parnian and Ing, Edsel B}
}

Abstract

Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. The objective of this study was therefore to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes. The main outcomes measured were relevance, reasoning, depth, technical and non-technical clarity, and acceptability of GPT-4’s responses; readability was also assessed. Across the six metrics evaluating the effectiveness of GPT-4’s responses, the overall mean score was 4.1/5. GPT-4 was rated highest on technical clarity (4.7/5) and non-technical clarity (4.4/5), whereas the lowest rated metrics were depth (3.8/5) and acceptability (3.8/5). Inter-rater reliability was poor to moderate, with an intraclass correlation coefficient of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles. This study reveals limitations in the ability of GPT-4 to appreciate the depth and nuanced acceptability of real-world ethical dilemmas, particularly those that require a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before such models can be used effectively in clinical settings.
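For readers curious how an inter-rater reliability figure like the one reported above might be reproduced, the minimal sketch below computes intraclass correlation coefficients for a hypothetical panel of six raters scoring eight vignettes, using the open-source pingouin library. The ratings and the choice of ICC model here are assumptions for illustration only, not data or methods from the study.

# Illustrative only: the ratings below are invented, and the exact ICC model
# reported in the paper (e.g., two-way random vs. mixed effects) is an assumption.
import pandas as pd
import pingouin as pg

# Hypothetical long-format panel data: each of 6 raters scores each of 8 vignettes on a 1-5 scale.
data = pd.DataFrame({
    "vignette": [v for v in range(1, 9) for _ in range(6)],
    "rater":    [r for _ in range(8) for r in range(1, 7)],
    "score":    [4, 5, 3, 4, 4, 5,   3, 4, 4, 3, 5, 4,   4, 4, 5, 3, 4, 4,
                 5, 4, 3, 4, 5, 4,   4, 3, 4, 5, 4, 3,   4, 4, 4, 5, 3, 4,
                 3, 5, 4, 4, 4, 5,   4, 4, 3, 4, 5, 4],
})

# Compute intraclass correlation coefficients; pingouin reports ICC1 through ICC3k
# along with 95% confidence intervals.
icc = pg.intraclass_corr(data=data, targets="vignette", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC", "CI95%"]])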

