
A performance comparison between shallow and deeper neural networks supervised classification of tomosynthesis breast lesions images

Telegrafo, Michele; Moschetta, Marco
2019-01-01

Abstract

Computer Aided Decision (CAD) systems based on 3D tomosynthesis imaging could support radiologists in classifying different kinds of breast lesions, and thus improve the diagnosis of breast cancer (BC) at a lower X-ray dose than Computed Tomography (CT) systems. In previous work, several Convolutional Neural Network (CNN) architectures were evaluated to discriminate four classes of lesions in automatically segmented high-resolution images: (a) irregular opacity lesions, (b) regular opacity lesions, (c) stellar opacity lesions, and (d) no lesions. In this paper, we instead reuse the same previously extracted Regions of Interest (ROIs) containing the lesions, but propose and evaluate two different approaches to better discriminate among the four classes. We evaluate and compare the performance of two frameworks, both based on supervised classifier topologies. The first framework is feature-based: morphological and textural hand-crafted features, extracted from each ROI, are used as input to optimised Artificial Neural Network (ANN) classifiers. The second framework instead applies non-neural classifiers to automatically computed features, evaluating classification performance on several feature sets extracted with different CNN models. The final results show that the second framework, based on features computed automatically by CNN architectures, outperforms the first in terms of accuracy, specificity, and sensitivity.
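The second framework described above can be sketched as a two-stage pipeline: CNN-extracted feature vectors for each ROI are fed to a non-neural supervised classifier over the four classes. The sketch below is illustrative only: the abstract does not name the classifier, the feature dimensionality, or the dataset, so the SVM choice, the 128-dimensional vectors, and the synthetic per-class clusters (standing in for real CNN embeddings of lesion ROIs) are all assumptions.

```python
# Illustrative sketch of the second framework: features computed by a CNN
# backbone are classified by a non-neural supervised model. Synthetic
# Gaussian clusters stand in for CNN embeddings of the four ROI classes;
# in practice these vectors would come from a pretrained CNN's feature layer.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 128  # assumed sizes, not from the paper
classes = ["irregular opacity", "regular opacity", "stellar opacity", "no lesion"]

# One well-separated cluster per class (stand-in for real CNN features).
X = np.vstack([
    rng.normal(loc=3.0 * i, scale=1.0, size=(n_per_class, n_features))
    for i in range(len(classes))
])
y = np.repeat(np.arange(len(classes)), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = SVC(kernel="rbf", C=1.0)  # non-neural classifier on CNN-style features
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"4-class accuracy on simulated features: {acc:.2f}")
```

The first framework would differ only in the two stages: hand-crafted morphological and textural descriptors replace the CNN features, and an ANN replaces the non-neural classifier.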
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11586/230485
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 40
  • Web of Science: 33