The EU regulatory approach(es) to AI liability, and its application to the financial services market
Davola Antonio
2024-01-01
Abstract
The continued progress of Artificial Intelligence (AI) can benefit different aspects of society and various fields of the economy, yet it poses serious risks both to those who offer such technologies and to those who use them. These risks are exacerbated by the unpredictability of developments in AI technology (such as the increasing autonomy of self-learning systems), which makes it even more difficult to build a comprehensive legal framework accounting for all potential legal and ethical issues arising from the use of AI. Enforcement authorities therefore face growing difficulties in checking compliance with applicable legislation and in assessing liability, owing to the specific features of AI: complexity, opacity, autonomy, unpredictability, openness, data-drivenness, and vulnerability. These problems are particularly significant in areas such as financial markets, in which the consequences of malfunctioning AI systems are likely to have a major impact both on the protection of individuals and on overall market stability. This scenario challenges policymaking in an increasingly digital and global context, where it becomes difficult for regulators to predict and address the impact of AI systems on the economy and society, and to ensure that such systems are human-centric, ethical, explainable, sustainable, and respectful of fundamental rights and values. The European Union has been devoting increasing attention to filling the gap between the existing legal framework and AI. Some of the legislative proposals under consideration call for preventive legislation and introduce obligations on different actors, such as the AI Act, while others have a compensatory scope and seek to build a liability framework, such as the proposed AI Liability Directive and the revised Product Liability Directive. At the same time, cross-sectoral regulations must coexist with sector-specific initiatives and the rules they establish. The present paper starts by assessing the fit of the existing European liability regime(s) with the constantly evolving AI landscape, identifying the normative foundations on which a liability regime for such technology should be built. It then addresses the proposed additions and revisions to the legislation, examining how they seek to govern AI systems, with particular attention to their implications for highly regulated complex systems such as financial markets. Finally, it considers potential additional measures that could help strike a balance between the interests of all parties, namely by seeking to reduce the inherent risks that accompany the use of AI and to leverage its major benefits for our society and economy.