Fostering Human Rights in Responsible AI: A Systematic Review for Best Practices in Industry

Baldassarre, Maria Teresa; Caivano, Danilo; Gigante, Domenico; Ragone, Azzurra
2024-01-01

Abstract

The recent rapid development of Generative AI, and the resulting market growth, has introduced new challenges for social responsibility, an area where companies may need more guidance. The literature covers a broad spectrum in this regard, from the impact of bias to the potential use of this technology to implement undemocratic surveillance. Another line of work examines the AI industry's commitment to human rights and social responsibility, the diverse actors involved in this commitment, and the context-dependent nature of their impact on human rights. This work performs a systematic review and a comparative analysis of the strategies and actions taken by four leading companies (OpenAI, Meta AI Research, Google AI, and Microsoft AI) with respect to five critical dimensions: bias, privacy, cybersecurity, hate speech, and misinformation. Our study analyzes 192 publicly available documents and reveals that, depending on the nature and diversity of their products, some companies excel in researching and developing technologies and methodologies for privacy preservation and bias reduction, in offering user-friendly tools for managing personal data, in establishing expert groups to study the social impact of their technologies, and in tackling hate speech and misinformation. Nonetheless, there is an urgent need for greater linguistic, cultural, and geographic diversity in research lines, tools, and collaborative efforts. From this analysis, we derive a set of actionable best practices to support the responsible development of AI models, and Foundation Models in particular, aligned with human rights principles.
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/479162
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science (ISI): n/a