Title: Multi-view learning for classification
Authors: Li, Jinxing
Degree: Ph.D.
Issue Date: 2018
Abstract: Multi-view learning methods often achieve performance improvements over single-view methods in many applications. In this thesis, we focus on multi-view learning and its application to classification. Four novel multi-view methods are proposed. Considering the similarity and diversity existing across views, Joint Similar and Specific Learning (JSSL) is first proposed to jointly and sparsely represent different views. In this method, each view is sparsely represented as the product of a view-specific dictionary and a representation coefficient. In particular, the coefficient is divided into two parts: a similar one and a specific one. Thanks to this structure, we can not only exploit the correlation among views but also extract individual components, representing multiple views in a reasonable way. An efficient algorithm is proposed to alternately update the similar and specific parts in JSSL. However, in many real-world datasets the distributions are complex, and linearly representing the data, as JSSL does, cannot meet the requirements of many applications. To address this problem, we propose another multi-view learning method based on the Gaussian Process Latent Variable Model (GPLVM) to learn a set of non-linear and non-parametric mapping functions. GPLVM is a generative model which assumes there is a mapping function that projects a latent variable from a low-dimensional subspace to the observed high-dimensional space. To apply GPLVM to multi-view data, we assume that a latent variable is shared among the views, and that a conditionally independent mapping function with a Gaussian Process (GP) prior can be learned for each view to project the shared variable to that view's observed space. We denote this as the decoding part. Furthermore, we also introduce an additional mapping from the observed data to the shared space, which we denote as the encoding part.
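The similar/specific decomposition in JSSL can be illustrated with a minimal numerical sketch. All names and dimensions below are hypothetical, the dictionaries are fixed rather than learned, and the sparsity penalty is replaced by plain least squares, so this only demonstrates the shared-plus-specific coefficient structure and its alternating updates, not the actual JSSL optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, k, n = 3, 8, 5, 20          # views, feature dim, dictionary atoms, samples

# Hypothetical data: each view is generated from one shared code plus noise.
D = [rng.standard_normal((d, k)) for _ in range(V)]
A_true = rng.standard_normal((k, n))
X = [D[v] @ A_true + 0.05 * rng.standard_normal((d, n)) for v in range(V)]

A_sim = np.zeros((k, n))                       # coefficients shared by all views
A_spec = [np.zeros((k, n)) for _ in range(V)]  # view-specific coefficients

for _ in range(30):  # alternate between the similar and the specific parts
    # Update the shared part: least-squares fit to the residuals of all views.
    lhs = sum(D[v].T @ D[v] for v in range(V))
    rhs = sum(D[v].T @ (X[v] - D[v] @ A_spec[v]) for v in range(V))
    A_sim = np.linalg.solve(lhs, rhs)
    # Update each specific part against its own view's residual.
    for v in range(V):
        resid = X[v] - D[v] @ A_sim
        A_spec[v] = np.linalg.lstsq(D[v], resid, rcond=None)[0]

# Total reconstruction error over all views (should be near the noise level).
err = sum(np.linalg.norm(X[v] - D[v] @ (A_sim + A_spec[v])) for v in range(V))
print(round(err, 3))
```

In the full method, an l1 penalty on both coefficient parts would replace the plain least-squares updates, so that each view is represented by only a few dictionary atoms.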
Due to this auto-encoder framework, the non-linear projections both from and to the observations are considered simultaneously. Furthermore, different from conventional auto-encoder methods, we enforce a GP prior on the encoder, which remarkably reduces the number of estimated parameters and avoids over-fitting. In this thesis, we call this method the Shared Auto-encoder Gaussian Process latent variable model (SAGP). To apply SAGP to classification, a discriminative regularization is also embedded to encourage the latent variables belonging to the same class to be close and those belonging to different classes to be far apart. In the optimization process, the alternating direction method (ADM) and the gradient descent technique are combined to update the encoder and decoder effectively.
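The decoder/encoder structure described above can be written symbolically as follows; the notation is ours, chosen for illustration, not taken from the thesis:

```latex
% Decoder: each view v is generated from a shared latent x through its own
% GP-distributed mapping, plus view-specific noise.
\mathbf{y}^{(v)} = f^{(v)}(\mathbf{x}) + \boldsymbol{\epsilon}^{(v)},
\qquad f^{(v)} \sim \mathcal{GP}\bigl(0,\, k^{(v)}(\mathbf{x}, \mathbf{x}')\bigr),
\quad v = 1, \dots, V

% Encoder: a back-projection from the observed views to the shared latent
% space, which (unlike a standard auto-encoder) is also given a GP prior.
\mathbf{x} = g\bigl(\mathbf{y}^{(1)}, \dots, \mathbf{y}^{(V)}\bigr) + \boldsymbol{\eta},
\qquad g \sim \mathcal{GP}\bigl(0,\, k_{\mathrm{enc}}\bigr)
```

Placing a GP prior on $g$ is what replaces the free weight matrices of a conventional encoder, which is why the parameter count drops and over-fitting is mitigated.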
Although SAGP achieves an improvement compared with JSSL, it makes a strong assumption on the kernel function, such as the Radial Basis Function (RBF), which limits its capacity for modeling real data. To address this issue, we further propose another GPLVM-based method, named the Multi-Kernel Shared Gaussian Process latent variable model (MKSGP). Instead of fixing a deterministic kernel function, as in SAGP, multiple kernels are adaptively selected to construct the encoder and decoder. In MKSGP, the weights corresponding to the different kernel functions are automatically learned for each input dataset, which represents the data in a more reasonable way. Furthermore, different from SAGP, which uses the classifier offline, a hinge loss is embedded into the model to jointly learn the classification hyperplane, encouraging latent variables belonging to different classes to be separated. In this way, the learned classifier is more suitable for the input data. Similar to SAGP, we optimize the encoder and decoder alternately using ADM and the gradient descent method. Experimental results demonstrate that MKSGP is more powerful than SAGP. JSSL, SAGP, and MKSGP can well exploit the correlation among different views. However, in many real-world applications each view often contains multiple features, which means the data have a hierarchical structure; JSSL, SAGP, and MKSGP do not take such multi-layer features into account. In this thesis, we propose a probabilistic hierarchical model to tackle this issue and apply it to classification. This approach is named Hierarchical Multi-view Multi-feature Fusion (HMMF). In this model, the multiple features extracted from a common view are first fused into a latent variable. Concretely, we assume that the extracted features are generated from this latent variable.
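The multi-kernel idea in MKSGP can be sketched numerically: the combined GP covariance is a weighted sum of base kernel matrices, and a non-negative combination of valid kernels remains a valid kernel. The base kernels and the (fixed) weights below are illustrative assumptions; in the full model the weights would be learned jointly with the latent variables.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 4))  # 10 latent points in a 4-dim space

def rbf(X, gamma):
    """RBF kernel matrix with bandwidth parameter gamma."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def linear(X):
    """Linear kernel matrix."""
    return X @ X.T

# Base kernels: two RBF bandwidths plus a linear kernel.
kernels = [rbf(X, 0.5), rbf(X, 2.0), linear(X)]

# In MKSGP these mixing weights are learned per dataset; fixed here.
w = np.array([0.5, 0.3, 0.2])
K = sum(wi * Ki for wi, Ki in zip(w, kernels))

# Sanity check: the combination is still symmetric positive semi-definite,
# so it can serve as a GP covariance for the encoder or decoder.
eigvals = np.linalg.eigvalsh(K)
print(K.shape, eigvals.min() >= -1e-9)
```

Because each base kernel emphasizes a different notion of similarity, learning the weights lets the model adapt its covariance to the dataset instead of committing to a single RBF bandwidth, which is the limitation of SAGP noted above.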
To achieve this goal, projection matrices corresponding to a given view are learned to map the latent variable from a shared space to the multiple observations. Since we prefer to apply this method to classification, the estimated latent variables corresponding to different views are influenced by their ground-truth labels. In this way, this kind of multi-view, multi-feature data is fused hierarchically. To solve the proposed model effectively, the Expectation-Maximization (EM) algorithm is applied to estimate the parameters and latent variables. To quantitatively evaluate the effectiveness of the four proposed multi-view methods, experiments are conducted on several datasets, including a synthetic dataset and real-world datasets. Experimental results on these datasets substantiate the effectiveness and superiority of our approaches compared with several state-of-the-art methods.
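The EM-based fusion can be illustrated with a deliberately simplified sketch: a single view with two feature types, modeled in probabilistic-PCA style, where the rows of one projection matrix split into per-feature blocks mapping a shared latent variable to each observation. All dimensions and names are assumptions, and the label supervision of the full HMMF model is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, k, n = 6, 4, 2, 200   # two feature dims, latent dim, samples

# Hypothetical data: both feature sets generated from one shared latent H.
H = rng.standard_normal((k, n))
W1_true = rng.standard_normal((d1, k))
W2_true = rng.standard_normal((d2, k))
X = np.vstack([W1_true @ H, W2_true @ H]) + 0.1 * rng.standard_normal((d1 + d2, n))
X = X - X.mean(axis=1, keepdims=True)
d = d1 + d2

# EM for x = W h + eps with isotropic noise; rows 0..d1-1 of W are the
# projection block of feature type 1, the rest belong to feature type 2.
W = rng.standard_normal((d, k))
sigma2 = 1.0
for _ in range(50):
    # E-step: posterior moments of the fused latent variables.
    M = W.T @ W + sigma2 * np.eye(k)
    Minv = np.linalg.inv(M)
    Eh = Minv @ W.T @ X                     # posterior means, k x n
    Ehh = n * sigma2 * Minv + Eh @ Eh.T     # summed second moments
    # M-step: update the projection matrix and the noise variance.
    W = (X @ Eh.T) @ np.linalg.inv(Ehh)
    sigma2 = (np.sum(X * X) - np.sum((W @ Eh) * X)) / (n * d)

print(round(sigma2, 3))
```

The posterior mean `Eh` is the fused representation: both feature sets contribute to it through their projection blocks, which is the hierarchical fusion idea in miniature. HMMF additionally conditions these latent variables on class labels and repeats the construction across views.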
Subjects: Hong Kong Polytechnic University -- Dissertations
Machine learning
Pages: xvi, 102 pages : color illustrations
Appears in Collections: Thesis
