
Socio-affective technologies

Berardina De Carolis;Francesca D’Errico;Veronica Rossano
2020-01-01

Abstract

Affective computing is the science of creating emotionally aware systems, including the automatic analysis of affect and expressive behaviors [19]. Social computing is an interdisciplinary area concerned with the intersection of social behavior and computational systems [26]. Both fields draw on the human and computer sciences and are therefore inherently interdisciplinary. From this perspective, a growing need in research on affective and social technologies is to combine psychological and computer sciences so that technologies become aware of human emotions and behaviors and can interact with humans accordingly. The main purpose of this special issue is to advance emotion and social behavior recognition, both to understand their psychological features and to implement multimodal interaction in applications. The idea is to survey the state of scientific research on socio-affective technologies by integrating computational and psychological approaches to understanding, recognizing, and shaping affective processes in real and new social environments (social media, virtual reality), including educational contexts. In the psychological sciences, affective processes can be studied along individual, interactional, and social dimensions [6, 21, 22]. All three dimensions can be explored through the analysis of verbal, physiological, and expressive aspects, which generally involve one or more communicative modalities [20]: linguistic or rhetorical [3, 5, 16, 17], vocal [25], facial [13], head [23], gestural or touch [20], and postural [1]. In this context, from a computational point of view, it is necessary to cope with real-time recognition of the meaning of human behaviors in contextual situations, from both cognitive and affective points of view.
Especially in domains such as education [2, 4, 7, 9, 12, 18, 29], social robotics [10, 15], personal assistants [14, 28], and recommender systems [11, 12, 27], it is possible to design and develop systems that understand people’s needs, recognize problems, and intelligently compute behaviors in order to improve people’s quality of life. The starting point of this special issue was the workshop Socio-Affective Technologies: An Interdisciplinary Approach, co-located with IEEE SMC 2019 (Systems, Man and Cybernetics) [7, 8], whose aim was to bring together psychological and technological research to design and build effective socio-affective applications. The paper by Rahman et al., “Robust modeling of epistemic mental states”, pursues the individual dimension by considering several communicative modalities, with a strong emphasis on the face [24]. It analyzes facial features and their temporal dynamics in relation to epistemic mental states such as agreement, concentration, thoughtfulness, certainty, and interest in dyadic conversations. The paper by Rossi et al., “Personalized models for facial emotion recognition through transfer learning”, likewise analyzes facial features, producing subject-specific models that extract the emotional content of facial images. Their findings suggest that the system achieves high recognition performance with only a small amount of personal data. Furthermore, the contribution by Palestra et al., “Detecting emotions during a memory training assisted by a Social Robot for individuals with Mild Cognitive Impairment (MCI)”, applies affective computing to older people. The paper evaluates a system able to decode facial expressions from video-recorded sessions of robot-assisted memory training, also testing the robot’s potential to engage participants in the intervention and its effect on their emotional state.
Their analysis revealed that the system can recognize facial expressions in robot-assisted group therapy sessions even when faces are partially occluded. Other papers address the individual dimension of emotions through physiological measures, as in the paper by Gasparini et al., “Discriminating affective state intensity using physiological responses”, which explores whether physiological signals acquired through wearable sensors are useful for interpreting human emotional states. Through an ad hoc laboratory protocol, the study shows how galvanic skin response (GSR) and photoplethysmography (PPG) signals can successfully distinguish states such as relaxation and stress. In a similar vein, Van Beurden et al., in “Towards user-adapted training paradigms: Physiological responses to physical threat during cognitive task performance”, focus on measures of stress by examining the sensitivity of a range of physiological measures derived from electrodermal activity (EDA) and blood pressure (BP) to stress induced by the threat of an electric shock. The results show that a classifier based on EDA and BP features can distinguish stress levels in threat and non-threat conditions with good accuracy. The authors note that these parameters can be used to evaluate training paradigms for stress management or to adapt VR training environments to the individual user. The paper by Francese et al., “A user-centered approach for detecting emotions with low-cost sensors”, describes an emotion detection system that infers the user’s emotions using low-cost biometric sensors and artificial intelligence. The authors trained a neural network by asking participants to classify their emotions when subjected to visual stimulation; the results suggest that reducing device invasiveness may improve both user perceptions and classification performance.
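As an illustrative sketch of the kind of physiological-feature classification these papers describe, the pipeline can be reduced to two steps: summarize a signal window with a few statistics, then assign the label of the nearest class prototype. The code below is not any author's implementation; the signal values, feature set, and nearest-centroid rule are simplified assumptions.

```python
# Hedged sketch (not the authors' code): distinguishing stress from rest using
# simple statistics of a physiological signal window (e.g. EDA/GSR level).
# All signal values below are synthetic, invented for illustration.
from statistics import mean, stdev

def extract_features(window):
    """Summarize a signal window as (mean level, variability, peak value)."""
    return (mean(window), stdev(window), max(window))

def nearest_centroid(features, centroids):
    """Assign the label whose centroid is closest in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Hypothetical EDA windows: stress raises both tonic level and variability.
rest_window = [0.9, 1.0, 1.1, 1.0, 0.9, 1.0]
stress_window = [2.0, 2.6, 3.1, 2.8, 3.4, 2.9]

centroids = {
    "rest": extract_features(rest_window),
    "stress": extract_features(stress_window),
}

new_window = [2.1, 2.7, 3.0, 2.9, 3.2, 2.8]  # unseen, stress-like window
print(nearest_centroid(extract_features(new_window), centroids))  # → stress
```

Real systems replace the centroid rule with trained classifiers and use richer features (tonic/phasic EDA decomposition, heart-rate variability from PPG), but the window-to-features-to-label shape of the pipeline is the same.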
Finally, the individual affective dimension is also pursued by Impedovo et al. in “Affective states recognition through touch dynamics”, which exploits touch dynamics on a mobile device. The contribution’s novelty lies in a machine-learning analysis of data collected with a purpose-built mobile app that records common Android unlock touch patterns; the analysis reaches good accuracy, validating the proposed approach. In addition to individual processes, this special issue also investigates interactional processes, mainly through the linguistic/textual communicative modality, considering the case of a novel product recommender system. In their paper “CapsMF: a novel product recommender system using deep learning based text analysis model”, Katarya and Arora propose a deep neural text analysis model in which a recently introduced architecture, capsule networks, is stacked on a bi-directional recurrent neural network (Bi-RNN) to build a robust representation of the textual descriptions of items and users. The model is called “CapsMF” because it combines capsule networks (Caps) for document representation with an enhanced matrix factorization (MF) to improve recommendations. In “Dialogue management in conversational agents through psychology of persuasion and machine learning”, Catellani et al. present an exploratory study of a method for integrating well-established methods from social psychology into the design of task-oriented conversational agents whose dialogue management module is developed through machine learning.
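To make the MF half of a model like CapsMF concrete, the sketch below factorizes a tiny rating matrix into low-rank user and item factors by gradient descent. This is a generic toy, not the CapsMF algorithm: the capsule-network document representation is omitted entirely, and the ratings, rank, and hyperparameters are invented for illustration.

```python
# Toy matrix factorization (the "MF" in CapsMF, without the text model):
# learn low-rank user/item factors so that P @ Q^T approximates the observed
# entries of a rating matrix. Ratings below are made up.
import random

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    rng = random.Random(seed)
    users, items = len(R), len(R[0])
    P = [[rng.uniform(0, 1) for _ in range(k)] for _ in range(users)]
    Q = [[rng.uniform(0, 1) for _ in range(k)] for _ in range(items)]
    for _ in range(steps):
        for u in range(users):
            for i in range(items):
                if R[u][i] is None:  # unobserved rating: nothing to fit
                    continue
                pred = sum(P[u][f] * Q[i][f] for f in range(k))
                err = R[u][i] - pred
                for f in range(k):  # regularized gradient step
                    P[u][f] += lr * (err * Q[i][f] - reg * P[u][f])
                    Q[i][f] += lr * (err * P[u][f] - reg * Q[i][f])
    return P, Q

R = [[5, 3, None], [4, None, 1], [1, 1, 5]]  # users x items, None = missing
P, Q = factorize(R)
# The learned factors now predict the missing entries, e.g. user 0 / item 2:
score = sum(P[0][f] * Q[2][f] for f in range(2))
```

CapsMF's contribution is precisely to replace the cold-start weakness of plain MF with text-derived representations; the factorization loop itself stays recognizably of this form.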
By applying well-known social psychology theories of persuasion, such as the Theory of Planned Behavior, to the reduction of red meat consumption, the authors develop new multidisciplinary, integrated techniques for building automated dialogue management systems. The special issue also shows how social dynamics can be explored mainly through two modalities: textual and vocal. Starting from the textual modality, the contribution “Impact memes: PhDs HuMor(e)” by Papapicco and Mininni aims to understand the functions and emotions emerging from the humor of PhD memes through a fine-grained qualitative analysis of rhetorical aspects based on the linguistic and visual elements of online memes. Their psycho-rhetorical approach can help deepen critical aspects of automatic humor detection, also considering different social media platforms (Facebook and Instagram). The paper “Synthetic minority oversampling in addressing imbalanced sarcasm detection in social media” by Sankhadeep et al. proposes a synthetic minority oversampling method to mitigate class imbalance, which can severely degrade classifier performance in social media sarcasm detection. To this end, the authors train and test six well-known classifiers, all achieving good performance. Turning to sentiment analysis more generally, Bordoloi and Biswas, in “Graph based sentiment analysis using keyword rank based polarity assignment”, propose an effective model using graph-based keyword ranking and a domain-specific rank-based polarity assignment technique. The model is compared against and validated using three existing models on four different datasets, establishing its significance.
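The synthetic minority oversampling idea used in the sarcasm-detection paper can be sketched in a few lines: new minority samples are created by interpolating each sampled minority point toward one of its nearest minority neighbours. The code below is a generic SMOTE-style illustration on invented 2-D points, not the paper's pipeline, which operates on text-derived features.

```python
# SMOTE-style oversampling sketch: balance a minority class (e.g. sarcastic
# posts) by synthesizing points between minority neighbours. Data is synthetic.
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Create n_new synthetic points, each interpolated between a sampled
    minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbours = sorted(
            (q for q in minority if q is not p),
            key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)),
        )[:k]
        q = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]  # hypothetical feature vectors
augmented = minority + smote_like(minority, n_new=5)
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside the region the minority actually occupies, rather than duplicating samples outright as naive oversampling does.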
Finally, the contribution by Franzoni et al., “Emotional sounds of the crowd, spectrogram analysis using deep learning”, explores the social dimension of the crowd through emotional sounds such as collective booing, laughing, or cheering. Crowd sounds are analyzed with techniques similar to those applied to individual voices, by applying deep learning classification to spectrogram images derived from sound transformations. The paper by Panico et al., “Ethical issues in assistive ambient living technologies for ageing well”, presents an ethical reflection on assistive ambient living (AAL) technologies. The authors examine the ethical dimensions that should be considered during the design and implementation of AAL technologies, especially when they are addressed to the aging population. This special issue will help researchers and practitioners interested in exploiting socio-affective factors when designing and developing technologies able to recognize, interpret, process, and simulate human emotions. It has combined research from both the psychological and technological fields since, in our opinion, socio-affective computing is most successful when research is interdisciplinary.
Use this identifier to cite or link to this item: https://hdl.handle.net/11586/314630