Lexicon Enriched Hybrid Hate Speech Detection with Human-Centered Explanations

MARCO POLIGNANO (Methodology); GIUSEPPE COLAVITO (Software); CATALDO MUSTO (Validation); MARCO DE GEMMIS (Investigation); GIOVANNI SEMERARO (Supervision)

2022-01-01

Abstract

The phenomenon of hate messages on the web is unfortunately in continuous expansion and evolution. Even though the big companies that offer social network services have expressly included rules against hate messages in their terms of service, such messages are still produced at a huge rate. Moderators are therefore often employed to monitor these platforms and use their critical skills to decide whether content is offensive. Unfortunately, this moderation process is complex and costly in terms of human resources. The system proposed in this work supports moderators by providing them with a set of candidate elements to censor, together with explanations in natural language. It is then the task of the human operator to decide whether to proceed with the censorship and, optionally, to supply feedback on the output of the classification algorithm, extending its data set of examples and improving its future performance. The proposed system has been designed to merge information coming from data, syntactic tags, and a manually annotated lexicon. Messages are processed through deep learning approaches based on both transformer and deep neural network architectures, and the output is supported by an explanation in a human-like form. The model has been evaluated on three state-of-the-art datasets, showing excellent effectiveness and producing clear and understandable explanations.
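The abstract describes a hybrid design in which transformer-based text representations are merged with features derived from a manually annotated lexicon before classification. The sketch below illustrates one common way such a fusion can be wired up; it is a minimal illustration under stated assumptions, not the authors' implementation. The encoder name, the toy two-word lexicon, the two lexicon features, and the lexicon-based explanation string are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a lexicon-enriched hybrid classifier:
# a transformer sentence embedding is concatenated with hand-crafted lexicon
# features, then passed through a small feed-forward head.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

HATE_LEXICON = {"idiot", "scum"}  # placeholder for a manually annotated lexicon


class HybridHateClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", lexicon_dim=2, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden + lexicon_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_labels),
        )

    def forward(self, input_ids, attention_mask, lexicon_feats):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                 # [CLS] sentence embedding
        fused = torch.cat([cls, lexicon_feats], dim=-1)   # merge text + lexicon signals
        return self.head(fused)


def lexicon_features(text):
    # Two toy features: count and ratio of lexicon hits in the message.
    tokens = text.lower().split()
    hits = [t for t in tokens if t in HATE_LEXICON]
    return torch.tensor([[float(len(hits)), len(hits) / max(len(tokens), 1)]]), hits


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = HybridHateClassifier()

text = "you are an idiot"
enc = tokenizer(text, return_tensors="pt")
feats, hits = lexicon_features(text)
probs = model(enc["input_ids"], enc["attention_mask"], feats).softmax(-1)
print(probs)

# A crude, lexicon-grounded explanation for the moderator, in the spirit of the
# natural-language explanations the abstract mentions.
if hits:
    print(f"Candidate for review: the message contains the lexicon terms {hits}.")
```

In this kind of setup the lexicon features act as an explicit, human-auditable signal alongside the learned representation, which is what makes it possible to hand the moderator an explanation tied to concrete lexicon entries rather than only a model score.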

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/406243

Citations
  • PubMed Central: not available
  • Scopus: 3
  • Web of Science: not available