Assessing Usability and Cybersecurity of AI Systems through the Human-Centered Design
Vita Santa Barletta;Miriana Calvano;Antonio Curci;Chaudhry Muhammad Nadeem Faisal;Antonio Piccinno
2025-01-01
Abstract
As Artificial Intelligence (AI) spreads in modern society, academia, companies, and governments are working towards the common goal of creating systems that are not just accurate, but that can be used by humans with efficiency, effectiveness, and satisfaction while respecting their rights and well-being. While Human-Centered Design (HCD) establishes the processes that can be followed to reach this objective, related challenges concern the privacy and security of those systems. This study investigates the impact that the HCD approach can have on the cybersecurity of AI systems and proposes an approach that accounts for the different factors influencing their evaluation and assessment, organized around three main components: Human-Computer Interaction, Cybersecurity, and Ethics and Law.


