Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110303
Title: Supervised maximum variance unfolding
Authors: Yang, D
Qi, HD 
Issue Date: Sep-2024
Source: Machine learning, Sept 2024, v. 113, no. 9, p. 6197-6226
Abstract: Maximum Variance Unfolding (MVU) is among the first methods in nonlinear dimensionality reduction for data visualization and classification. It aims to preserve local data structure and, at the same time, to make the variance among the data as large as possible. However, MVU in general remains a computationally challenging problem, and this may explain why it is less popular than other leading methods such as Isomap and t-SNE. In this paper, based on the key observation that the structure-preserving term in MVU is actually the squared stress in Multi-Dimensional Scaling (MDS), we replace that term with the stress function from MDS, resulting in a usable model. This usability guarantees that the “crowding phenomenon” will not occur in the dimension-reduced results. The new model also allows us to incorporate label information, and hence we call it the supervised MVU (SMVU). We then develop a fast algorithm based on Euclidean distance matrix optimization. By making use of the majorization-minimization technique, the algorithm at each iteration solves a number of one-dimensional optimization problems, each having a closed-form solution. This strategy significantly speeds up the computation. We demonstrate the advantage of SMVU on some standard data sets against a few leading algorithms, including Isomap and t-SNE.
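The abstract contrasts MVU's structure-preserving term (the squared stress over neighbour pairs) with the MDS stress that SMVU uses instead, and adds a variance-maximization component. The snippet below is only a minimal illustrative sketch of that distinction, not the authors' algorithm: the function names, the toy neighbour graph, and the weighting parameter lam are assumptions introduced here for illustration.

```python
import numpy as np

def pairwise_dists(Y):
    # Euclidean distances between rows of the embedding Y (n points, k dims)
    diff = Y[:, None, :] - Y[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def mvu_squared_stress(Y, pairs, d):
    # MVU-style structure-preserving term over neighbour pairs:
    # sum_{(i,j)} (||y_i - y_j||^2 - d_ij^2)^2  (the "squared stress")
    D = pairwise_dists(Y)
    return sum((D[i, j] ** 2 - d[(i, j)] ** 2) ** 2 for (i, j) in pairs)

def mds_stress(Y, pairs, d):
    # MDS stress, the term SMVU uses instead:
    # sum_{(i,j)} (||y_i - y_j|| - d_ij)^2
    D = pairwise_dists(Y)
    return sum((D[i, j] - d[(i, j)]) ** 2 for (i, j) in pairs)

def smvu_like_objective(Y, pairs, d, lam=1.0):
    # Illustrative objective only: keep local distances (stress) small while
    # encouraging a large total variance of the embedding.
    # The trade-off weight lam is a hypothetical parameter, not from the paper.
    variance = ((Y - Y.mean(axis=0)) ** 2).sum()
    return mds_stress(Y, pairs, d) - lam * variance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((5, 2))           # toy 2-D embedding of 5 points
    pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]  # toy neighbour graph
    d = {p: 1.0 for p in pairs}               # toy target local distances
    print(smvu_like_objective(Y, pairs, d))
```

The sketch only evaluates the two objectives for a given embedding; the paper's contribution is a fast solver for the SMVU model via Euclidean distance matrix optimization and majorization-minimization, which is not reproduced here.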
Keywords: Data visualization
Dimensionality reduction
Euclidean distance matrix
Maximum variance unfolding
Multi-dimensional scaling
Publisher: Springer New York LLC
Journal: Machine learning 
ISSN: 0885-6125
EISSN: 1573-0565
DOI: 10.1007/s10994-024-06553-8
Rights: © The Author(s) 2024
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The following publication Yang, D., Qi, HD. Supervised maximum variance unfolding. Mach Learn 113, 6197–6226 (2024) is available at https://doi.org/10.1007/s10994-024-06553-8.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: s10994-024-06553-8.pdf
Size: 4.52 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 14 as of Apr 14, 2025
Downloads: 6 as of Apr 14, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.