Artificial Intelligence (AI) systems are increasingly embedded in our everyday lives, ranging from consumer-facing applications such as chatbots and recommendation engines to high-risk domains like healthcare and autonomous driving. While these systems promise improved efficiency, creativity, and decision-making for their human users, they also introduce significant challenges related to usability, transparency, trust, and alignment with human values. This dissertation addresses these challenges by advancing the field of Human-Centred Artificial Intelligence (HCAI) and its specialization, Symbiotic Artificial Intelligence (SAI), which envisions AI systems that augment rather than replace human capabilities. The main contribution of this thesis is a negotiation-based model of Human–AI Interaction, grounded in Human-Computer Interaction (HCI), Artificial Intelligence, Software Engineering, and Ethics. This model reconceptualizes interaction as a dynamic and adaptive process, enabling users to retain control, share decision-making authority, and iteratively influence AI behavior. The model is validated through empirical studies in multiple domains, spanning different levels of risk and technological paradigms, including both traditional and Generative Artificial Intelligence (GenAI). These studies show that the effectiveness of AI systems depends not only on algorithmic performance but also on their ability to accommodate user goals, workflows, and mental models.
Additional key contributions include: (i) a precise definition of HCAI and SAI, clarifying their theoretical foundations and resolving terminological inconsistencies; (ii) an empirical demonstration of the necessity of Human-Centred Design (HCD) when developing AI systems that support their users; (iii) a structured set of case studies validating the Human–AI Interaction Model, spanning low-, medium-, and high-risk applications; and (iv) the derivation of actionable guidelines for the design, evaluation, and development of trustworthy, explainable, and user-aligned AI systems. Examples of such guidelines include designing explanation-driven interventions, strategies for eXplanation User Interfaces (XUIs), and techniques for aligning AI behavior with users' mental models. The findings reported in this thesis suggest that neither automation nor augmentation should be preferred by default; instead, design decisions must consider task characteristics, user goals, and contextual risk. Overall, the thesis provides a comprehensive and validated framework for designing AI systems that are not only technically powerful but also trustworthy, transparent, and genuinely supportive of human autonomy and decision-making.
A Human–AI Interaction Model: Leveraging Human-Centred Design for Symbiotic AI / Esposito, Andrea. - (2025 Feb 25).


