Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/92512
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Gratton, S | en_US
dc.creator | Vicente, LN | en_US
dc.creator | Zhang, Z | en_US
dc.date.accessioned | 2022-04-12T08:43:29Z | -
dc.date.available | 2022-04-12T08:43:29Z | -
dc.identifier.uri | http://hdl.handle.net/10397/92512 | -
dc.language.iso | en | en_US
dc.rights | Posted with permission of the author. | en_US
dc.subject | Space transformation | en_US
dc.subject | Space decomposition | en_US
dc.subject | Coarse space | en_US
dc.subject | Parallel | en_US
dc.subject | Inaccurate gradient | en_US
dc.subject | Trust-region methods | en_US
dc.subject | Worst-case complexity | en_US
dc.title | Optimization by space transformation and decomposition | en_US
dc.type | Preprint | en_US
dcterms.abstract | We introduce a space transformation framework for unconstrained optimization of a function f that is smooth but not necessarily convex. At each iteration of the framework, the gradient of f is mapped by a forward transformation to an auxiliary (possibly lifted) space, in which a model is established based on the transformed gradient and a step is obtained by minimizing this model. A backward transformation then maps the step back to the original space, where the iterate is updated accordingly. Using a trust-region globalization scheme, and by inspection of the consistency between the forward and backward transformations, the framework is guaranteed to converge globally to first-order criticality with an O(ε^-2) worst-case iteration complexity to reach a gradient with norm smaller than ε > 0. The complexity is improved to O(ε^-1) and O(log(ε^-1)) when f is convex and strongly convex, respectively. | en_US
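The iteration described in the abstract — forward-transform the gradient, minimize a model in the auxiliary space, backward-transform the step, and accept or reject it with a trust-region test — can be sketched as follows. This is an illustrative toy (a first-order model and a Cauchy-type step), not the paper's algorithm; all function names and parameter values are made up:

```python
import numpy as np

def tr_space_transform(f, grad, x0, forward, backward,
                       delta0=1.0, eps=1e-6, max_iter=1000):
    """Toy trust-region loop with a forward/backward space transformation.

    `forward` maps a gradient into the auxiliary space; `backward` maps an
    auxiliary-space step back to the original space.  With both equal to
    the identity this reduces to a plain trust-region method with a
    first-order model and a Cauchy-type step.  All names and parameter
    values here are illustrative, not taken from the paper.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:        # first-order criticality reached
            return x
        gt = forward(g)                     # forward transformation
        # Minimize the linear model <gt, s> over the trust region in the
        # auxiliary space: steepest descent clipped to radius delta.
        s_aux = -delta * gt / max(np.linalg.norm(gt), 1e-16)
        s = backward(s_aux)                 # backward transformation
        pred = -float(g @ s)                # model-predicted decrease
        ared = f(x) - f(x + s)              # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho >= 0.1:                      # accept the step
            x = x + s
            if rho >= 0.75:                 # very successful: expand radius
                delta = min(2.0 * delta, 1e3)
        else:                               # reject and shrink the radius
            delta *= 0.5
    return x
```

With `forward = backward = identity` on a smooth test function, the loop drives the gradient norm below `eps`, matching the first-order criticality guarantee the abstract refers to.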
dcterms.abstract | The space transformation framework can be directly specialized to a parallel space decomposition framework for nonlinear optimization, which can be regarded as an extension of the parallel Schwarz domain decomposition method for PDEs and linear systems. Such a decomposition framework is motivated by problems that cannot be directly solved (or even stored) in the full space and hence divide-and-conquer is needed. As in domain decomposition, introducing overlaps between subspaces can improve the performance of space decomposition methods provided that overlaps are handled properly. Our space decomposition framework covers a wide range of overlap-handling strategies, including Restricted Additive Schwarz, Weighted Restricted Additive Schwarz, and Additive Schwarz with Harmonic Extension, all of which have achieved remarkable success in domain decomposition. Similar to the coarse grid techniques in domain decomposition, we incorporate a coarse space correction into our framework. An unsophisticated implementation of the framework works quite well in our numerical experiments. With Restricted Additive Schwarz and coarse space correction, the algorithm scales nicely when the number of subspaces increases. | en_US
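The overlap-handling idea behind Restricted Additive Schwarz — each subspace computes its step on an enlarged, overlapping block, but each variable takes its update from a single owning block — can be illustrated with a hypothetical one-dimensional index splitting. The `grad_step` callback and all names below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ras_update(grad_step, x, blocks, overlap):
    """One hypothetical Restricted Additive Schwarz (RAS) combination step.

    The variables are split into contiguous index blocks, each enlarged by
    `overlap` entries on both sides.  Every subspace proposes a step on its
    enlarged block via `grad_step(x, idx)`, but the steps are recombined in
    the restricted fashion: each variable takes its update only from the
    block that owns it, so overlapping contributions are never double
    counted.  All names here are illustrative assumptions.
    """
    n = x.size
    s = np.zeros(n)
    for lo, hi in blocks:                   # block (lo, hi) owns lo..hi-1
        lo_o, hi_o = max(0, lo - overlap), min(n, hi + overlap)
        idx = np.arange(lo_o, hi_o)         # enlarged (overlapping) block
        s_sub = grad_step(x, idx)           # subspace step on the block
        own = (idx >= lo) & (idx < hi)      # restricted prolongation mask
        s[idx[own]] = s_sub[own]            # only the owner's update is kept
    return x + s
```

A Weighted Restricted Additive Schwarz variant would instead average the overlapped entries with a partition-of-unity weighting rather than letting the owner win outright.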
dcterms.abstract | At the same time, the space transformation framework can be specialized to analyze trust-region methods when the gradient of f is evaluated inaccurately. Applying the theory of the space transformation framework, we provide sharp bounds on the gradient inaccuracy that guarantee global convergence of trust-region methods, and prove new global convergence results while recovering some existing ones. More importantly, we establish a complete theory of worst-case complexities for trust-region methods based on inaccurate gradients. If the gradient inaccuracy respects the aforementioned bounds, these complexities are of the same order as when the gradients are accurate, and the gradient inaccuracy only enlarges the multiplicative constants in them by a mild magnitude that is entirely determined by trust-region algorithmic parameters and independent of the optimization problem. On the other hand, once the gradient inaccuracy exceeds such bounds, the behavior of trust-region methods deteriorates dramatically, which is demonstrated by our numerical experiment. Our theory provides theoretical insights into the tight connection between algorithmic parameters and the amount of relative gradient error that trust-region methods can tolerate. We show that such a connection is clearly observable in computation. This enables us to propose parameters that are well adapted to a given accuracy level of the gradient. | en_US
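A much-simplified illustration of why bounded relative gradient error is tolerable: whenever the error e satisfies ||e|| <= r ||g|| with r < 1, the inexact negative gradient is still a descent direction, while beyond that threshold the direction can be reversed outright. The paper's sharp bounds additionally involve the trust-region algorithmic parameters; the sketch below shows only this qualitative threshold effect and is not taken from the paper:

```python
import numpy as np

def is_descent(g_true, g_used):
    """-g_used decreases f to first order iff its inner product with the
    true gradient is positive."""
    return float(g_true @ g_used) > 0.0

rng = np.random.default_rng(0)
g = rng.standard_normal(50)

# Relative error below one preserves descent: by Cauchy-Schwarz,
# g @ (g + e) >= ||g||^2 - ||g|| * ||e|| = ||g||^2 * (1 - r) > 0 for r < 1.
for r in (0.3, 0.9):
    e = rng.standard_normal(50)
    e *= r * np.linalg.norm(g) / np.linalg.norm(e)   # scale so ||e|| = r||g||
    assert is_descent(g, g + e)

# Beyond that threshold the guarantee is lost: a relative error of 2 can
# reverse the direction outright.
assert not is_descent(g, -g)
```

This is the mechanism behind the observation that trust-region behavior degrades sharply once the gradient inaccuracy exceeds the tolerated bound.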
dcterms.accessRights | open access | en_US
dcterms.issued | 2019 | -
dc.description.validate | 202204 bcwh | en_US
dc.description.oa | Author’s Original | en_US
dc.identifier.FolderNumber | RGC-B1-178 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Not yet published | en_US
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Gratton_Optimizatiby_Space_Transformatidecomposition.pdf | Preprint version | 904.96 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Author’s Original



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.