DC Field: Value
dc.contributor: School of Design
dc.creator: Song, Yao
dc.title: Initial trust in AI agent : communicating facial anthropomorphic trustworthiness for social robot design
dcterms.abstract: As a technical application of artificial intelligence (AI), the social robot is a branch of robotics that emphasizes social communication and interaction with human beings. Distinct from other humanoid and industrial robots, which usually have limited human-like features, state-of-the-art (SOTA) social robots are typically equipped with a screen-based head displaying an animated or human-like face to respond to and meet the needs of human users. As in interpersonal interaction, trustworthiness toward a social robot is crucial in human-robot interaction, since in daily life the social robot usually acts as a "listener and responder", providing not only practical assistance but also emotional support; it is therefore important that it be considered a trustworthy "partner or friend". However, communicating facial trustworthiness for a social robot remains a challenging problem, and few studies have addressed it comprehensively. This thesis aims to fill this research gap by deliberately exploring the concept, structurally identifying significant features, examining and modeling the effects of static and dynamic features, and providing practical guidelines. Unlike the constructs of human trustworthiness, facial anthropomorphic trustworthiness comprises four distinct dimensions: capability, anthropomorphism, positive affect, and ethical concern. Accordingly, facial anthropomorphic trustworthiness can be initially summarized as impression-based trustworthiness, in which people rely on the facial cues of social robots to evaluate their capability, ethical concern, anthropomorphism, and positive affect. As for the essential features communicating facial anthropomorphic trustworthiness, four facial categories emerged: internal features, external features, combinations of features, and emotional expressions.
For static features, internal features (eye shape and mouth shape), external features (facial width-to-height ratio, fWHR, and face shape), and feature combinations (feature sizes and positions) were analyzed. In general, eye shape and mouth shape have a significant impact on facial anthropomorphic trustworthiness: a robotic face with round eyes (vs. narrow eyes) and an upturned or neutral mouth (vs. a downturned mouth) enjoyed a higher level of facial anthropomorphic trustworthiness. fWHR also has a significant impact: a robotic face with a high fWHR (vs. a low fWHR) enjoyed a higher level of facial anthropomorphic trustworthiness. Eye size and the positions of the eyes and mouth likewise have a significant impact: a robotic face with large eyes (vs. small eyes) and medium horizontal and vertical positions of the eyes and mouth (vs. high or low positions) enjoyed a higher level of facial anthropomorphic trustworthiness. Moreover, a model of facial anthropomorphic trustworthiness in static features was developed, and a validation study using planned and random variations in stimuli was conducted to confirm the reliability of the model. For dynamic features, facial valence and arousal, and their interaction with regulatory-focus contexts, were analyzed. In general, facial valence and its interaction with regulatory context have a significant impact on facial anthropomorphic trustworthiness: a robotic face with positive (vs. negative) expressions enjoyed a higher level of facial anthropomorphic trustworthiness, while positive expressions were compatible with a promotion-focused context and negative expressions with a prevention-focused context in signaling facial anthropomorphic trustworthiness. Similarly, a model of facial anthropomorphic trustworthiness in dynamic features was developed. This research advances the theory of human-robot interaction.
This study, for the first time, set out to explore and develop the meaning of facial anthropomorphic trustworthiness, providing a relatively holistic picture of how the trustworthiness of a social robot is evaluated at first sight. In addition, through a series of experiments, this research adopted both subjective and physiological measures to examine the differences and similarities between human facial features and robot facial features in signaling perceived trustworthiness, and further modeled and discussed the mechanisms underlying these phenomena. As for practical implications, although the current market offers various social robots, some of which have even won awards, detailed and specific guidelines are still largely missing, so companies design social robots primarily relying on their own understanding and intuition. Accordingly, this research advances design guidelines for social robots, which could serve as preliminary instructions helping robot designers create trustworthy robots and increase consumers' purchase intentions.
dcterms.accessRights: open access
dcterms.extent: [22], 230, [26] pages : color illustrations
dcterms.LCSH: Robotics -- Philosophy
dcterms.LCSH: Human-robot interaction -- Philosophy
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis