Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/93316
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Yu, CKW | en_US
dc.creator | Hu, Y | en_US
dc.creator | Yang, X | en_US
dc.creator | Choy, SK | en_US
dc.date.accessioned | 2022-06-15T03:42:43Z | -
dc.date.available | 2022-06-15T03:42:43Z | -
dc.identifier.issn | 0233-1934 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/93316 | -
dc.language.iso | en | en_US
dc.publisher | Taylor & Francis | en_US
dc.rights | © 2018 Informa UK Limited, trading as Taylor & Francis Group | en_US
dc.rights | This is an Accepted Manuscript of an article published by Taylor & Francis in Optimization on 26 Mar 2018 (published online), available at: http://www.tandfonline.com/10.1080/02331934.2018.1455831 | en_US
dc.subject | Abstract convergence theorem | en_US
dc.subject | Basic inequality | en_US
dc.subject | Cobb–Douglas production efficiency problem | en_US
dc.subject | Quasi-convex programming | en_US
dc.subject | Subgradient method | en_US
dc.title | Abstract convergence theorem for quasi-convex optimization problems with applications | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1289 | en_US
dc.identifier.epage | 1304 | en_US
dc.identifier.volume | 68 | en_US
dc.identifier.issue | 7 | en_US
dc.identifier.doi | 10.1080/02331934.2018.1455831 | en_US
dcterms.abstract | Quasi-convex optimization is fundamental to the modelling of many practical problems in various fields such as economics, finance and industrial organization. Subgradient methods are practical iterative algorithms for solving large-scale quasi-convex optimization problems. In the present paper, focusing on quasi-convex optimization, we develop an abstract convergence theorem for a class of sequences, which satisfy a general basic inequality, under some suitable assumptions on parameters. The convergence properties in both function values and distances of iterates from the optimal solution set are discussed. The abstract convergence theorem covers relevant results of many types of subgradient methods studied in the literature, for either convex or quasi-convex optimization. Furthermore, we propose a new subgradient method, in which a perturbation of the successive direction is employed at each iteration. As an application of the abstract convergence theorem, we obtain the convergence results of the proposed subgradient method under the assumption of the Hölder condition of order p and by using the constant, diminishing or dynamic stepsize rules, respectively. A preliminary numerical study shows that the proposed method outperforms the standard, stochastic and primal-dual subgradient methods in solving the Cobb–Douglas production efficiency problem. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Optimization, 2019, v. 68, no. 7, p. 1289-1304 | en_US
dcterms.isPartOf | Optimization | en_US
dcterms.issued | 2019 | -
dc.identifier.scopus | 2-s2.0-85044441978 | -
dc.identifier.eissn | 1029-4945 | en_US
dc.description.validate | 202206 bcfc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | AMA-0393 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | PolyU | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 6830456 | -
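The abstract above contrasts the paper's perturbed subgradient method with the standard one under constant, diminishing or dynamic stepsize rules. As background, here is a minimal sketch of a *standard* normalized subgradient method with the diminishing stepsize rule, applied to a toy quasi-convex objective; the function names, the toy problem and the stepsize choice alpha_k = 1/(k+1) are illustrative assumptions, not the paper's perturbed algorithm or its test problem.

```python
import math

def quasiconvex_subgradient(f, subgrad, x0, n_iter=500):
    """Standard subgradient method with diminishing stepsizes
    alpha_k = 1/(k+1) and normalized directions, as is customary
    for quasi-convex objectives. Returns the best iterate found.
    Illustrative sketch only -- not the paper's perturbed method."""
    x = list(x0)
    best_x, best_f = x[:], f(x)
    for k in range(n_iter):
        g = subgrad(x)
        norm = math.sqrt(sum(gi * gi for gi in g))
        if norm == 0:          # stationary point: stop
            break
        step = 1.0 / (k + 1)   # diminishing stepsize rule
        x = [xi - step * gi / norm for xi, gi in zip(x, g)]
        fx = f(x)
        if fx < best_f:        # track the best function value
            best_x, best_f = x[:], fx
    return best_x, best_f

# Toy quasi-convex objective: f(x) = sqrt(|x1| + |x2|) is a monotone
# transform of a convex function, hence quasi-convex, minimized at 0.
f = lambda x: math.sqrt(sum(abs(xi) for xi in x))
sign = lambda t: (t > 0) - (t < 0)
# sign(x) gives a valid quasi-subgradient direction here, since the
# sublevel sets of f coincide with those of the convex |x1| + |x2|.
subgrad = lambda x: [sign(xi) for xi in x]

x_star, f_star = quasiconvex_subgradient(f, subgrad, [3.0, -2.0])
```

Because the diminishing stepsizes are non-summable but tend to zero, the best iterate approaches the minimizer at the origin, which is the behaviour the abstract's convergence results formalize.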
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Yang_Abstract_Convergence_Theorem.pdf | Pre-Published version | 726.15 kB | Adobe PDF
Open Access Information
Status: open access
File version: Final Accepted Manuscript

Page views: 57 (as of May 19, 2024)
Downloads: 61 (as of May 19, 2024)

Scopus™ citations: 8 (as of May 16, 2024)
Web of Science™ citations: 6 (as of May 16, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.