Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115349
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | School of Design | en_US |
| dc.creator | Wang, SJ | en_US |
| dc.date.accessioned | 2025-09-22T06:14:46Z | - |
| dc.date.available | 2025-09-22T06:14:46Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115349 | - |
| dc.language.iso | en | en_US |
| dc.rights | All rights reserved. | en_US |
| dc.rights | Posted with permission of the author. | en_US |
| dc.title | Integrating emotional intelligence in design | en_US |
| dc.type | Design Research Portfolio | en_US |
| dcterms.abstract | Professor Stephen Jia Wang’s research explores the potential of developing multimodal emotion recognition technologies and integrating them into people’s daily lives to enhance their emotional intelligence and well-being. The research leverages advanced design and AI techniques to address growing global concerns about distress, sadness and anxiety, as documented by Daly & Macchia (2010). It examines how interactive and intelligent systems design might effectively reduce stress by recognising and responding to subtle emotional expressions, such as micro-gestures (MGs). It seeks to understand users’ emotional support needs and thereby establish the foundational parameters for a framework that guides the design of emotionally intelligent systems, with a specific focus on prioritising and stimulating emotional interactions. | en_US |
| dcterms.abstract | Covering work published between 2014 and 2023, this comprehensive literature review identifies a significant shift from traditional subjective emotion assessment methods to multimodal emotion recognition technologies. It highlights that reliance on a single modality limits robustness, especially in complex, real-life scenarios, and emphasises three key advantages of multimodal methods: | en_US |
| dcterms.abstract | 1) Enhanced Accuracy: Simultaneous access to multiple emotional modalities (e.g. smiling and applauding when happy) enhances contextual understanding and accuracy (Baltrušaitis et al., 2018). | en_US |
| dcterms.abstract | 2) Complementary Strengths: Inputs from different modalities complement one another, offsetting each other’s limitations during testing (Xue et al., 2024). | en_US |
| dcterms.abstract | 3) Missing Data Compensation: Missing input from one modality can be compensated for by data from another (D’Mello & Kory, 2015), such as recognising emotion from visual cues when audio cues are absent; an illustrative sketch follows the metadata table below. | en_US |
| dcterms.abstract | The Major Collaborative Output (MCO) encompasses five interrelated research projects funded by the Transport Department of Hong Kong’s Smart Traffic Fund and industry partners Huawei, GAC and the Research Centre for Future (Caring) Mobility (RcFCM), with a total funding of HK$5.7M. These projects collectively address critical research questions (RQs) and provide foundational knowledge for a family of patents. Notable outcomes include the award-winning ‘EmoSense’ technology and the ‘EmoFriends’ toolkit, which transform common plush toys into intelligent companion robots that provide real-time emotional support and healthier emotional environments. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.issued | 2025-09 | - |
| dc.relation.publication | unpublished | en_US |
| dc.description.validate | 202509 bcjz | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.FolderNumber | a4065-n01 | - |
| dc.description.oaCategory | Copyright retained by author | en_US |
| Appears in Collections: | Creative Work | |
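To make the third advantage above concrete, here is a minimal, hypothetical late-fusion sketch in Python. Every name in it (the `fuse` helper, the fixed weights, the four-label emotion set) is an illustrative assumption, not the portfolio’s implementation: per-modality emotion probabilities are combined with fixed weights, and when one modality is absent its weight is redistributed so a prediction can still be made from the remaining cues.

```python
from typing import Optional
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

def fuse(visual: Optional[np.ndarray], audio: Optional[np.ndarray],
         w_visual: float = 0.6, w_audio: float = 0.4) -> np.ndarray:
    """Weighted late fusion of per-modality emotion probability vectors.

    A missing modality (None) simply drops out and its weight is
    redistributed to the modalities that remain -- the missing-data
    compensation named in the abstract.
    """
    scores, weights = [], []
    if visual is not None:
        scores.append(visual)
        weights.append(w_visual)
    if audio is not None:
        scores.append(audio)
        weights.append(w_audio)
    if not scores:
        raise ValueError("at least one modality is required")
    norm = [w / sum(weights) for w in weights]  # renormalise weights
    fused = sum(w * s for w, s in zip(norm, scores))
    return fused / fused.sum()  # keep a valid probability vector

# Audio stream absent: classification still proceeds from visual cues.
visual_probs = np.array([0.7, 0.1, 0.1, 0.1])  # e.g. a smile detected
print(EMOTIONS[int(np.argmax(fuse(visual_probs, None)))])  # -> happy
```

In a deployed system such weights would typically be learned, and further modalities (gesture, physiology, text) included; the sketch only illustrates the compensation principle.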
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Wang_Integrating_Emotional_Intelligence.pdf | | 5.96 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.