
From segmentation to explanation: Generating textual reports from MRI with LLMs

Valerio, Alberto G.; Trufanova, Katya; de Benedictis, Salvatore; Vessio, Gennaro; Castellano, Giovanna
2025-01-01

Abstract

Background and Objective: Artificial Intelligence (AI) has significantly advanced medical imaging, yet the opacity of deep learning models remains a challenge, often reducing medical professionals' trust in AI-driven diagnoses. As a result, there is a strong focus on making AI models more transparent and interpretable to boost healthcare providers' confidence in these technologies. Methods: This paper introduces a novel approach to enhancing AI explainability in critical medical tasks by integrating state-of-the-art semantic segmentation models with atlas-based mapping and Large Language Models (LLMs) to produce comprehensive, human-readable medical reports. The proposed framework ensures that the generated outputs are factual and contextually rich. Our anti-hallucination design, which combines structured JSON output with prompt constraints, is a key innovation over most naïve LLM report generation methods and enhances the transparency and interpretability of AI systems. Results: Experimental results show that the SegResNet model achieves high segmentation accuracy, while the LLMs (Gemma, Llama, and Mistral) demonstrate diverse strengths in generating explanatory reports. Several metrics, including lexical diversity, readability, coherence, and information coverage, are employed to assess the quality and effectiveness of the generated textual explanations. Conclusions: The method is tested on brain tumor detection in glioma, one of the most aggressive forms of cancer, and subsequently applied to multiple sclerosis lesion detection to validate its generalizability across different medical imaging scenarios, thereby contributing to the trustworthiness of healthcare AI applications. Reproducibility: The complete source code for implementing the framework and reproducing the results is publicly available, along with full pipeline examples demonstrating each step, from segmentation to report generation, at the following repository: https://github.com/albertovalerio/from-segmentation-to-explanation.
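
The abstract describes an anti-hallucination design that pairs structured JSON with prompt constraints. Below is a minimal Python sketch of that idea, not the authors' implementation: the findings dictionary, its field names, and the prompt wording are hypothetical placeholders; the actual schema, prompts, and model calls are defined in the linked repository.

import json

# Hypothetical findings produced by the segmentation and atlas-mapping stages.
# Field names and values are illustrative only.
findings = {
    "lesion_type": "glioma",
    "whole_tumor_volume_cm3": 42.7,
    "enhancing_tumor_volume_cm3": 11.3,
    "affected_regions": [
        {"atlas_label": "left temporal lobe", "overlap_percent": 38.5},
        {"atlas_label": "left insula", "overlap_percent": 12.1},
    ],
}

# Prompt constraints instruct the model to report only what the JSON states,
# which is the essence of the anti-hallucination design described above.
prompt = (
    "You are drafting a radiology-style report for a clinician.\n"
    "Use ONLY the facts in the JSON below. Do not infer diagnoses, "
    "measurements, or anatomical locations that are not listed. If a detail "
    "is missing from the JSON, state that it is not available.\n\n"
    f"Findings (JSON):\n{json.dumps(findings, indent=2)}\n\n"
    "Write a concise, human-readable report."
)

print(prompt)  # this string would then be passed to Gemma, Llama, or Mistral

Grounding the prompt in a fixed JSON payload narrows what the model is allowed to assert, which is how the framework keeps the generated reports factual rather than speculative.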

Use this identifier to cite or link to this item: https://hdl.handle.net/11586/543760