Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114801
DC Field: Value
dc.contributor: Department of Applied Mathematics
dc.creator: Qiu, Zicheng
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13780
dc.language.iso: English
dc.title: A quasi-Newton subspace trust region algorithm for nonmonotone variational inequalities and applications in adversarial learning
dc.type: Thesis
dcterms.abstract: Nonmonotone Variational Inequalities (VIs) are widely applied in data science and machine learning. However, designing algorithms for nonmonotone VIs remains a challenge: state-of-the-art algorithms can solve nonmonotone VIs only under strong assumptions, such as pseudomonotonicity or Minty's condition. The main purpose of this thesis is to study a class of nonmonotone VIs with box constraints, which is equivalent to a system of nonsmooth equations. The thesis is divided into two parts.
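As a brief illustration of the equivalence mentioned in the abstract (the notation below is a standard choice, not necessarily the thesis's own), a box-constrained VI can be recast as a nonsmooth equation through the natural (projection) residual:

```latex
% Box-constrained VI: find x^* \in X = [l, u] such that
%   (y - x^*)^\top F(x^*) \ge 0 \quad \forall\, y \in X.
% Equivalent system of nonsmooth equations (natural residual),
% where \Pi_X is the Euclidean projection onto the box:
\Phi(x) := x - \Pi_X\bigl(x - F(x)\bigr) = 0,
\qquad \bigl(\Pi_X(z)\bigr)_i = \operatorname{mid}(l_i,\, z_i,\, u_i).
```

The componentwise median `mid` makes \(\Phi\) piecewise smooth but nondifferentiable, which is what motivates the smoothing approximation treated in the first part of the thesis.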
dcterms.abstract: In the first part of the thesis, we propose a smoothing Quasi-Newton Subspace Trust Region (QNSTR) algorithm for the least squares problem defined by a smoothing approximation of the nonsmooth equations. Based on the structure of the nonmonotone VI, we use an adaptive quasi-Newton formula to approximate the Hessian matrix and, at each step of the QNSTR algorithm, efficiently solve a low-dimensional strongly convex quadratic program with an ellipse constraint in a subspace. Moreover, we study the relationship between solutions of the VI and first order stationary points of the least squares problem, and prove global convergence of the QNSTR algorithm to a solution of the VI under mild conditions. We also propose a strategy for updating the smoothing parameter and establish its complexity.
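A minimal numerical sketch of the smoothing idea (the CHKS-type smoothing of the plus function and the toy map below are illustrative assumptions, not the thesis's exact construction): the nonsmooth natural residual is approximated by a smooth residual that converges to it as the smoothing parameter shrinks.

```python
import numpy as np

def p_mu(t, mu):
    # CHKS-type smoothing of max(0, t); p_mu(t) -> max(0, t) as mu -> 0
    return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

def mid_smooth(l, z, u, mu):
    # Smooth approximation of the box projection, using the identity
    # mid(l, z, u) = z - max(0, z - u) + max(0, l - z)
    return z - p_mu(z - u, mu) + p_mu(l - z, mu)

def residual(F, x, l, u, mu=0.0):
    # (Smoothed) natural residual Phi_mu(x) = x - Pi_[l,u](x - F(x))
    z = x - F(x)
    proj = np.clip(z, l, u) if mu == 0.0 else mid_smooth(l, z, u, mu)
    return x - proj

# Toy (generally nonmonotone) map, for illustration only
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] * x[1]])
l, u = np.array([0.0, 0.0]), np.array([2.0, 2.0])
x = np.array([0.5, 1.0])

exact = residual(F, x, l, u)
for mu in (1e-1, 1e-3, 1e-6):
    gap = np.linalg.norm(residual(F, x, l, u, mu) - exact)
    print(mu, gap)  # gap shrinks as mu -> 0
```

Minimizing ½‖Φ_μ(x)‖² for a decreasing sequence of μ is the least squares problem the QNSTR algorithm targets; the subspace trust-region step itself is omitted here.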
dcterms.abstract: In the second part of the thesis, we apply the QNSTR algorithm to a box-constrained nonconvex-nonconcave minimax optimization problem arising in practical applications. Since the objective function is defined by an expectation, we use the Sample Average Approximation (SAA) method to solve the problem. We prove that, as the sample size N tends to infinity, any accumulation point of global minimax points, first order stationary points, or second order stationary points of the SAA problems is, respectively, a global minimax point, first order stationary point, or second order stationary point of the original problem. We formulate the first order optimality condition of the box-constrained SAA minimax problem as a nonmonotone VI, and apply the QNSTR algorithm to find a first order stationary point of the SAA problem via this VI. We also present numerical results for the QNSTR algorithm with different subspaces on generative adversarial networks for several practical adversarial learning problems using real data on eyes. The numerical results show that the QNSTR algorithm is efficient and effective for solving large-scale minimax optimization problems.
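A hedged sketch of the SAA step described above (the toy saddle loss, its gradients, and all names here are illustrative assumptions, not the thesis's model): the expectation is replaced by a sample average, and the minimax first-order condition becomes a VI with map F_N = (∇_x f_N, −∇_y f_N), which converges to the population map as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy saddle loss l(x, y, xi) = xi*x*y + x^2/2 - y^2/2, illustration only
def grad_x(x, y, xi):
    return xi * y + x

def grad_y(x, y, xi):
    return xi * x - y

def saa_vi_map(x, y, samples):
    # First-order map of the SAA minimax problem
    # min_x max_y f_N(x, y) = (1/N) sum_i l(x, y, xi_i):
    # F_N(x, y) = ( d f_N / dx, -d f_N / dy )
    gx = np.mean([grad_x(x, y, s) for s in samples])
    gy = np.mean([grad_y(x, y, s) for s in samples])
    return np.array([gx, -gy])

samples = rng.normal(loc=1.0, scale=1.0, size=20000)  # xi ~ N(1, 1)
x, y = 0.3, -0.7
F_N = saa_vi_map(x, y, samples)
F_true = np.array([y + x, -(x - y)])  # population map, since E[xi] = 1
print(F_N, F_true)  # close for large N
```

Solving F_N = 0 over the box (here via the QNSTR algorithm applied to the resulting nonmonotone VI) yields a first order stationary point of the SAA minimax problem.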
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xvii, 98 pages : color illustrations
dcterms.issued: 2025
dcterms.LCSH: Nonsmooth optimization
dcterms.LCSH: Variational inequalities (Mathematics)
dcterms.LCSH: Mathematical optimization
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.