Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115349
DC Field | Value | Language
dc.contributor | School of Design | en_US
dc.creator | Wang, SJ | en_US
dc.date.accessioned | 2025-09-22T06:14:46Z | -
dc.date.available | 2025-09-22T06:14:46Z | -
dc.identifier.uri | http://hdl.handle.net/10397/115349 | -
dc.language.iso | en | en_US
dc.rights | All rights reserved. | en_US
dc.rights | Posted with permission of the author. | en_US
dc.title | Integrating emotional intelligence in design | en_US
dc.type | Design Research Portfolio | en_US
dcterms.abstract | Professor Stephen Jia Wang’s research explores the potential of developing multimodal emotion recognition technologies and integrating them into people’s daily lives to enhance their emotional intelligence and well-being. The research leverages advanced design and AI techniques to address growing global concerns about distress, sadness and anxiety, as documented by Daly & Macchia (2010). It examines how interactive and intelligent systems design might effectively reduce stress by recognising and responding to subtle emotional expressions, such as micro-gestures (MGs). It also seeks to understand users’ emotional support needs in order to establish the foundational parameters for a framework that guides the design of emotionally intelligent systems, with a specific focus on prioritising and stimulating emotional interactions. | en_US
dcterms.abstract | A comprehensive literature review covering 2014 to 2023 identifies a significant shift from traditional, subjective emotion assessment methods to multimodal emotion recognition technologies. It highlights that reliance on a single modality limits robustness, especially in complex, real-life scenarios, and emphasises three key advantages of multimodal methods: | en_US
dcterms.abstract | 1) Enhanced Accuracy: Simultaneous access to multiple emotional modalities (e.g. smiling and applauding when happy) enhances contextual understanding and accuracy (Baltrušaitis et al., 2018). | en_US
dcterms.abstract | 2) Complementary Strengths: Different modalities complement one another, each compensating for the others’ limitations during testing (Xue et al., 2024). | en_US
dcterms.abstract | 3) Missing Data Compensation: Missing input from one modality can be compensated for by data from another (D'mello & Kory, 2015), such as recognising emotion from visual cues when audio cues are absent. | en_US
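To illustrate how these three advantages might play out in practice, a minimal decision-level (late) fusion sketch in Python follows; the modality names, emotion labels, weights and scores are hypothetical assumptions for illustration only and are not drawn from the portfolio itself.

from typing import Dict, List, Optional

EMOTIONS: List[str] = ["happy", "neutral", "stressed"]  # hypothetical label set

def fuse_modalities(
    modality_scores: Dict[str, Optional[List[float]]],
    modality_weights: Dict[str, float],
) -> Dict[str, float]:
    """Weighted decision-level fusion of per-modality probability vectors.

    A modality whose score vector is None (missing input) is skipped, so the
    remaining modalities compensate for the gap (advantage 3); weighting lets
    stronger modalities offset weaker ones (advantages 1 and 2).
    """
    fused = [0.0] * len(EMOTIONS)
    total_weight = 0.0
    for name, scores in modality_scores.items():
        if scores is None:  # e.g. no audio captured in this frame
            continue
        weight = modality_weights.get(name, 1.0)
        total_weight += weight
        for i, score in enumerate(scores):
            fused[i] += weight * score
    if total_weight == 0.0:
        raise ValueError("no modality provided any input")
    return {label: value / total_weight for label, value in zip(EMOTIONS, fused)}

if __name__ == "__main__":
    # Visual and micro-gesture cues available; the missing audio modality is
    # simply skipped and compensated for by the others.
    scores = {
        "visual": [0.7, 0.2, 0.1],
        "audio": None,
        "micro_gesture": [0.6, 0.3, 0.1],
    }
    weights = {"visual": 1.0, "audio": 1.0, "micro_gesture": 0.8}
    print(fuse_modalities(scores, weights))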
dcterms.abstract | The Major Collaborative Output (MCO) encompasses five interrelated research projects funded by the Transport Department of Hong Kong’s Smart Traffic Fund and by industry partners Huawei, GAC and the Research Centre for Future (Caring) Mobility (RcFCM), with total funding of HK$5.7M. These projects collectively address critical research questions (RQs) and provide foundational knowledge for a family of patents. Notable outcomes include the award-winning ‘EmoSense’ technology and the ‘EmoFriends’ toolkit, which transforms common plush toys into intelligent companion robots that provide real-time emotional support and healthier emotional environments. | en_US
dcterms.accessRights | open access | en_US
dcterms.issued | 2025-09 | -
dc.relation.publication | unpublished | en_US
dc.description.validate | 202509 bcjz | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.FolderNumber | a4065-n01 | -
dc.description.oaCategory | Copyright retained by author | en_US
Appears in Collections: Creative Work
Files in This Item:
File | Description | Size | Format
Wang_Integrating_Emotional_Intelligence.pdf | | 5.96 MB | Adobe PDF
Open Access Information
Status: open access