
FOX: a neuro-Fuzzy model for process Outcome prediction and eXplanation

Pasquadibisceglie V.; Castellano G.; Appice A.; Malerba D.
2021-01-01

Abstract

Predictive process monitoring (PPM) techniques have become a key element in both public and private organizations by enabling crucial operational support for their business processes. Thanks to the availability of large amounts of data, different solutions based on machine and deep learning have been proposed in the literature for monitoring process instances. These state-of-the-art approaches treat accuracy as the main objective of predictive modeling, while they often neglect the interpretability of the model. Recent studies have addressed the problem of the interpretability of predictive models, leading to the emerging area of Explainable AI (XAI). In an attempt to bring XAI into PPM, in this paper we propose a fully interpretable model for outcome prediction. The proposed method is based on a set of fuzzy rules acquired from event data via the training of a neuro-fuzzy network. This solution provides a good trade-off between the accuracy and the interpretability of the predictive model. Experimental results on different benchmark event logs are encouraging and motivate the development of explainable models for predictive process analytics.
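The abstract does not detail how the learned fuzzy rules produce an outcome prediction, so the following is only a minimal illustrative sketch of generic fuzzy-rule classification, not the actual FOX model. The Gaussian membership functions, the product t-norm, the rule parameters, and the function names are all assumptions chosen for illustration; in the paper's approach such parameters would be learned by the neuro-fuzzy network from event data.

```python
import numpy as np

# Hypothetical sketch: each fuzzy rule holds one Gaussian fuzzy set per
# input feature and votes for an outcome class. Rule parameters here are
# hand-picked for illustration, not learned.

def gaussian_membership(x, center, sigma):
    """Degree to which feature value(s) x belong to a Gaussian fuzzy set."""
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def firing_strength(features, centers, sigmas):
    """Strength of one rule: product t-norm over per-feature memberships."""
    return np.prod(gaussian_membership(features, centers, sigmas))

def predict_outcome(features, rules):
    """Return the outcome class of the rule that fires most strongly."""
    strengths = [(firing_strength(features, c, s), label)
                 for c, s, label in rules]
    return max(strengths)[1]

# Two illustrative rules over two normalized trace features:
# (centers, widths, outcome class)
rules = [
    (np.array([0.2, 0.8]), np.array([0.3, 0.3]), "positive"),
    (np.array([0.9, 0.1]), np.array([0.3, 0.3]), "negative"),
]

print(predict_outcome(np.array([0.25, 0.7]), rules))  # near the first rule
```

Because each prediction is traced back to the single rule with the highest firing strength, the rule's fuzzy sets can be read off as a human-interpretable explanation, which is the kind of accuracy/interpretability trade-off the abstract refers to.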
2021
978-1-6654-3514-7
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/384455
Warning

Warning! The displayed data have not been validated by the university.

Citations
  • PMC: N/A
  • Scopus: 19
  • Web of Science (ISI): N/A