
Integrating human expertise & automated methods for a dynamic and multi-parametric evaluation of large language models’ feasibility in clinical decision-making

Dentamaro V.; Cicolini G.
2024-01-01

Abstract

Background: Recent enhancements in Large Language Models (LLMs) such as ChatGPT have driven exponential growth in user adoption. These models are accessible on mobile devices and support multimodal interactions, including conversation, code generation, and patient image uploads, broadening their utility in providing healthcare professionals with real-time support for clinical decision-making. Nevertheless, many authors have highlighted serious risks that may arise from the adoption of LLMs, principally related to safety and alignment with ethical guidelines.

Objective: To address these challenges, we introduce a novel methodological approach designed to assess the feasibility of adopting LLMs within a specific healthcare area, with a focus on clinical nursing, evaluating their performance and thereby guiding their selection. Emphasizing LLMs' adherence to scientific advancements, this approach prioritizes safety and care personalization, in accordance with the Organisation for Economic Co-operation and Development (OECD) frameworks for responsible AI. Moreover, its dynamic nature is designed to adapt to future evolutions of LLMs.

Method: By integrating advanced multidisciplinary knowledge, including Nursing Informatics, and aided by a prospective literature review, seven key domains and specific evaluation items were identified:
1. State of the Art Alignment & Safety.
2. Focus, Accuracy & Management of Prompt Ambiguity.
3. Data Integrity, Data Security, Ethics & Sustainability, in accordance with the OECD Recommendations for Responsible AI.
4. Temporal Variability of Responses (Consistency).
5. Adaptation to Specific Standardized Terminology and Classifications for Healthcare Professionals.
6. General Capabilities: Post User Feedback Self-Evolution Capability and Organization in Chapters.
7. Ability to Drive Evolution in Healthcare.
A peer review by experts in Nursing and AI was performed, ensuring scientific rigor and breadth of insight for an essential, reproducible, and coherent methodological approach. Using a 7-point Likert scale, thresholds were defined to classify LLMs into "unusable", "usable with high caution", and "recommended" categories. Nine state-of-the-art LLMs were evaluated with this methodology in clinical oncology nursing decision-making, producing preliminary results. Gemini Advanced, Anthropic Claude 3, and ChatGPT 4 achieved the minimum score in the State of the Art Alignment & Safety domain required for classification as "recommended" and were also endorsed across all other domains. Llama 3 70B and ChatGPT 3.5 were classified as "usable with high caution", while the remaining models were classified as "unusable" in this domain.

Conclusion: The identification of a recommended LLM for a specific healthcare area, combined with its critical, prudent, and integrative use, can support healthcare professionals in decision-making processes.
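To make the classification scheme concrete, the Python sketch below illustrates how per-domain 7-point Likert scores could be mapped onto the three categories. This is a minimal sketch under stated assumptions: the cut-off values (3 and 5) and the per-domain scores are illustrative placeholders, since the paper's actual thresholds are not reported in this abstract.

# Minimal sketch of the threshold-based classification described in the
# abstract: each LLM receives a 7-point Likert score per evaluation domain,
# and per-domain thresholds map that score to one of three categories.
# NOTE: the cut-offs below (3 and 5) are illustrative assumptions; the
# paper's actual threshold values are not given in this abstract.

DOMAINS = [
    "State of the Art Alignment & Safety",
    "Focus, Accuracy & Management of Prompt Ambiguity",
    "Data Integrity, Data Security, Ethics & Sustainability",
    "Temporal Variability of Responses (Consistency)",
    "Adaptation to Standardized Terminology and Classifications",
    "General Capabilities",
    "Ability to Drive Evolution in Healthcare",
]

def classify(score: int, low: int = 3, high: int = 5) -> str:
    """Map a 1-7 Likert score to a usability category (illustrative cut-offs)."""
    if not 1 <= score <= 7:
        raise ValueError("Likert scores must lie in the 1-7 range")
    if score < low:
        return "unusable"
    if score < high:
        return "usable with high caution"
    return "recommended"

# Hypothetical per-domain scores for one model (not taken from the paper).
example_scores = dict(zip(DOMAINS, [6, 5, 6, 5, 4, 5, 6]))
for domain, score in example_scores.items():
    print(f"{domain}: {score}/7 -> {classify(score)}")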

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/487493

Citations:
  • PubMed Central: 0
  • Scopus: 3
  • Web of Science: 1