Hybrid Ethics for Generative AI: Some Philosophical Inquiries on GANs

Carnevale, Antonio
2023-01-01

Abstract

Until now, the mass spread of fake news and its negative consequences have mainly involved textual content, contributing to a loss of citizens' trust in institutions. Recently, a new type of machine learning framework has emerged: Generative Adversarial Networks (GANs), a class of deep neural network models capable of creating multimedia content (photos, videos, audio) that mimics authentic material with extreme precision. While GANs have several worthwhile applications, e.g., in audio-visual production, human-computer interaction, satire, and artistic creativity, their deceptive uses, at least as currently foreseeable, are just as numerous and worrying. The main concern is linked to so-called “deepfakes”: fake images or videos that simulate real events with extreme precision. When trained on a human face, GANs can make the face assume hyper-realistic movements, expressions, and (verbal and non-verbal) communication abilities. This technology poses an urgent threat to the governance of democratic processes, in particular to the formation of public opinion and political discourse, with significant potential for altering reality and spreading disinformation. After a short introduction to their current technical state of the art, in this paper we inquire into the socio-technical system of GANs through different and intertwined philosophical accounts. First, we will examine the conditions under which GAN-generated content is perceived as trustworthy, as well as the general effects GANs might have on the perceived trustworthiness of individuals. Thereafter, we will discuss the inadequacy of approaching GANs only as a perception-altering technology. Against this backdrop, we will propose a theoretical turn that considers human-machine relationships of trustworthiness as elements of a broader hybrid socio-technical system. This turn has political repercussions that we will discuss in the last part of the paper.
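The adversarial training the abstract alludes to can be made concrete with a minimal sketch. The example below, assuming PyTorch and a toy one-dimensional data distribution (neither appears in the paper itself), pits a generator, which maps noise to samples, against a discriminator, which tries to tell generated samples from real ones; the same scheme, scaled up, underlies deepfake image and video synthesis.

```python
# Minimal sketch of the GAN adversarial setup; illustrative only.
# Assumptions: PyTorch is available; "real" data is drawn from N(2, 0.5).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 1, 64

# Generator: noise -> sample; Discriminator: sample -> probability of being real.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # toy "authentic" samples
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

At convergence the generator's outputs become statistically indistinguishable from the real distribution, which is precisely what makes the trustworthiness of GAN-generated content philosophically problematic.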


Use this identifier to cite or link to this document: https://hdl.handle.net/11586/469400