Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/111327
Title: “I feel you” : emotional alignment during conversations. A computational linguistic and neurocognitive study
Authors: Salicchi, Lavinia
Degree: Ph.D.
Issue Date: 2024
Abstract: In this thesis, I focused on emotional alignment during conversations. Among the different definitions of emotional alignment, two aspects emerged as crucial: the mirroring of emotions, and appropriate reactions to others’ emotions and storytelling. Starting from Andrea Scarantino’s Theory of Affective Pragmatics, I approached the study of emotional reaction and mirroring by taking into account two modalities for expressing emotions: verbal (shared emotional expressions) and visual (displayed emotional expressions). In the context of online video chats, I investigated how people react to both shared and displayed emotional expressions, focusing on facial-expression mirroring, measured through video analysis and action units, and on pupil dilation. The results suggest that video-mediated conversations do not differ significantly from face-to-face conversations, and that both pupil dilation and facial expressions are influenced mainly by displayed emotional expressions rather than by shared ones. To account for the "appropriate reactions" aspect, I created a computational model based on the socio-psychological theories of Reinhard Fiehler. The model stores in a graph the "generalized emotional knowledge" retrieved from conversational datasets, representing emotional states, their causes and effects, emotional reactions, and the dialog acts through which people express emotions. The model was compared with a deep-learning architecture on the task of emotion prediction. The models’ performances make clear how beneficial the representation of the conversational parties’ internal emotional states and control over the dialog acts of the exchanged utterances are for a cognitively grounded emotion prediction model. Finally, I created a multimodal model combining the previously employed deep-learning model with the action units found to be significant in distinguishing reactions to positive and negative stimuli in the neurocognitive experiments. Although this model performed below expectations, it outperformed the baseline, demonstrating the potential of multimodal approaches to the emotion prediction task.
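Note: as a rough illustration of the graph-based model described in the abstract, the sketch below shows one way "generalized emotional knowledge" could be stored as a directed graph with typed relations among emotional states, causes, effects, reactions, and dialog acts. The thesis's actual implementation is not reproduced here; every class name, relation label, and example entry is a hypothetical stand-in.

    from collections import defaultdict

    class EmotionalKnowledgeGraph:
        """Hypothetical store for generalized emotional knowledge:
        nodes are states, causes, effects, reactions, or dialog acts;
        each edge carries a relation label."""

        def __init__(self):
            # adjacency: node -> list of (relation, neighbor) pairs
            self.edges = defaultdict(list)

        def add_relation(self, source, relation, target):
            self.edges[source].append((relation, target))

        def neighbors(self, node, relation=None):
            # Targets reachable from `node`, optionally filtered by relation.
            return [t for r, t in self.edges[node] if relation is None or r == relation]

    # Hypothetical entries such a model might mine from a conversational dataset.
    g = EmotionalKnowledgeGraph()
    g.add_relation("losing a job", "causes", "sadness")
    g.add_relation("sadness", "expressed_via", "dialog_act:complaint")
    g.add_relation("sadness", "elicits_reaction", "comforting")

    print(g.neighbors("sadness", relation="elicits_reaction"))  # ['comforting']

At prediction time, a structure like this would let a model look up plausible reactions and dialog acts given an inferred emotional state, which is the kind of control over internal states and dialog acts the abstract identifies as beneficial.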
Subjects: Emotions; Conversation; Online chat groups; Hong Kong Polytechnic University -- Dissertations
Pages: iii, 177 pages : color illustrations
Appears in Collections: Thesis
Access: View full-text via https://theses.lib.polyu.edu.hk/handle/200/13387