Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/91730
DC Field: Value
dc.contributor: Department of Electronic and Information Engineering
dc.creator: Zheng, Huadi
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/11406
dc.language.iso: English
dc.title: Towards privacy protection in the era of adversarial machine learning : attack and defense
dc.type: Thesis
dcterms.abstract: In recent years, the pervasive application of machine learning has encouraged a growing number of artificial intelligence services, such as voice assistants, facial recognition, autonomous driving, word suggestion, and security diagnostics. Essentially, machine learning enables a computer system to learn the underlying patterns in data and represent them in a model, which is then integrated into designated software for assessing new input. While machine learning has significantly reshaped the modern paradigm of system development, it has also stirred up extensive social debate over privacy and confidentiality. In particular, adversarial machine learning has grown in importance as researchers discover that a trained model can be deceived, extracted, inverted, or exploited for malicious inference. Nevertheless, up to now, the understanding of these privacy risks and the countermeasures against them remains limited. To unveil privacy challenges and tackle potential vulnerabilities, I focus my PhD study on the emerging attacks and defenses in the context of adversarial machine learning. Adversarial machine learning originally referred to the manipulation of model behavior by supplying deceptive samples. With the rapid development of alternative attacks such as model extraction and membership inference, the term now covers a broader domain: the corruption of functionality and confidentiality wherever machine learning is adopted, with new threats arising not only at the decision stage but across the entire machine learning pipeline. The work described in this thesis is divided into three parts, ordered top-down by attack surface, from model prediction to data collection. In the first part, I present a novel mechanism for preventing the extraction of private decision boundaries from machine learning services. The proposal obfuscates the output of a classifier under a guarantee of boundary differential privacy, so that fine-grained queries designed to infer the boundary have their accuracy sufficiently diminished in the critical zone, thus hampering the goal of delineating a clear inter-class border. In the second part, I present MISSILE, a side-channel attack system that uses machine learning inference to infer sensitive indoor locations within a given premises; spyware can stealthily collect the required sensory data from typical inertial sensors, such as the accelerometer, gyroscope, and magnetometer. In the third part, I turn to the very source of data collection and study the future of privacy-preserving data collection through an empirical evaluation of local differential privacy and federated learning under a designated task. Finally, I summarize the insights revealed in this study and discuss possible directions for privacy protection with adversarial machine learning in mind.
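To illustrate the local differential privacy setting evaluated in the third part, the classic randomized-response protocol is a minimal sketch of how a collector can estimate an aggregate statistic without seeing any individual's true value. This is a standard LDP primitive, not necessarily the exact mechanism studied in the thesis; the function names and parameters below are illustrative.

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    # Report the true bit with probability e^eps / (e^eps + 1),
    # otherwise report its flip; this satisfies epsilon-local DP.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p else 1 - true_bit

def estimate_frequency(reports: list, epsilon: float) -> float:
    # Debias the aggregate: E[report] = t*(2p - 1) + (1 - p),
    # so invert to recover an unbiased estimate t of the true
    # fraction of 1-bits in the population.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

# Usage: 10,000 simulated users, 30% of whom hold the sensitive bit.
random.seed(42)
bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
reports = [randomized_response(b, epsilon=1.0) for b in bits]
est = estimate_frequency(reports, epsilon=1.0)
```

Each user perturbs locally before reporting, so the collector's estimate is accurate only in aggregate, which is the trade-off such an empirical evaluation would quantify.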
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xix, 108 pages : color illustrations
dcterms.issued: 2021
dcterms.LCSH: Machine learning
dcterms.LCSH: Computer security
dcterms.LCSH: Privacy, Right of
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis