Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/89888
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Logistics and Maritime Studies | en_US |
| dc.creator | Zhou, S | en_US |
| dc.creator | Zhang, H | en_US |
| dc.creator | Shi, N | en_US |
| dc.creator | Xu, Z | en_US |
| dc.creator | Wang, F | en_US |
| dc.date.accessioned | 2021-05-13T08:32:01Z | - |
| dc.date.available | 2021-05-13T08:32:01Z | - |
| dc.identifier.issn | 0377-2217 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/89888 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier | en_US |
| dc.rights | © 2019 Elsevier B.V. All rights reserved. | en_US |
| dc.rights | © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US |
| dc.rights | The following publication Zhou, S., Zhang, H., Shi, N., Xu, Z., & Wang, F. (2020). A new convergent hybrid learning algorithm for two-stage stochastic programs. European Journal of Operational Research, 283(1), 33-46 is available at https://dx.doi.org/10.1016/j.ejor.2019.11.001. | en_US |
| dc.subject | Network | en_US |
| dc.subject | Optimization | en_US |
| dc.subject | Piecewise linear approximation | en_US |
| dc.subject | Stochastic programming | en_US |
| dc.title | A new convergent hybrid learning algorithm for two-stage stochastic programs | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 33 | en_US |
| dc.identifier.epage | 46 | en_US |
| dc.identifier.volume | 283 | en_US |
| dc.identifier.issue | 1 | en_US |
| dc.identifier.doi | 10.1016/j.ejor.2019.11.001 | en_US |
| dcterms.abstract | This study proposes a new hybrid learning algorithm, called the projected stochastic hybrid learning algorithm, to approximate the expected recourse function of two-stage stochastic programs. The algorithm is a hybrid of piecewise linear approximation and stochastic subgradient methods: piecewise linear approximations are updated adaptively using stochastic subgradients and sample information on the objective function itself. To reach a global optimum, a projection step that applies the stochastic subgradient method is performed to escape local optima. We prove convergence of the algorithm for general two-stage stochastic programs. Furthermore, for two-stage stochastic programs with network recourse, the projection steps can be dropped; the pure piecewise linear approximation method is then convergent when the initial piecewise linear functions are properly constructed. Computational results indicate that the algorithm exhibits rapid convergence. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | European journal of operational research, 16 May 2020, v. 283, no. 1, p. 33-46 | en_US |
| dcterms.isPartOf | European journal of operational research | en_US |
| dcterms.issued | 2020-05-16 | - |
| dc.identifier.scopus | 2-s2.0-85076205473 | - |
| dc.identifier.eissn | 1872-6860 | en_US |
| dc.description.validate | 202105 bchy | en_US |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | a0831-n01 | - |
| dc.identifier.SubFormID | 1933 | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | NNSFC | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
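The abstract's core idea — a piecewise linear model of the expected recourse function, refined by stochastic subgradient cuts, with occasional projected subgradient steps — can be sketched on a toy newsvendor-style two-stage program. Everything below (cost values, demand distribution, batch size, step schedule, function names) is an illustrative assumption, not the paper's actual algorithm or parameters:

```python
import random

# Toy two-stage program: minimize c*x + E[ b*(D-x)^+ + h*(x-D)^+ ] over x in [lo, hi].
# Sketch only: batch-averaged cuts build a piecewise linear model of the expected
# recourse; a periodic clipped (projected) subgradient step perturbs the iterate.
c, b, h = 1.0, 4.0, 1.5      # first-stage, shortage, and holding costs (assumed)
lo, hi = 0.0, 100.0          # feasible interval for x
rng = random.Random(0)

def recourse(x, d):
    """Second-stage cost and a subgradient w.r.t. x for one demand sample d."""
    return (b * (d - x), -b) if d > x else (h * (x - d), h)

def batch_cut(x, n=30):
    """Average value/subgradient over a sample batch: a noisy supporting
    cut (intercept, slope) of the expected recourse function at x."""
    vals, grads = zip(*(recourse(x, rng.uniform(20.0, 80.0)) for _ in range(n)))
    v, g = sum(vals) / n, sum(grads) / n
    return (v - g * x, g)

cuts = [(0.0, 0.0)]          # piecewise linear model: Q_hat(x) = max_k a_k + g_k * x

def model(x):
    return max(a + g * x for a, g in cuts)

def argmin_model():
    """Minimize c*x + Q_hat(x) by grid search (crude, but fine in one dimension)."""
    xs = [lo + i * 0.25 for i in range(int((hi - lo) / 0.25) + 1)]
    return min(xs, key=lambda x: c * x + model(x))

x = 50.0
for k in range(1, 201):
    a, g = batch_cut(x)
    cuts.append((a, g))      # refine the piecewise linear approximation at x
    if k % 10 == 0:
        # projection step: a stochastic subgradient move clipped to [lo, hi]
        x = min(hi, max(lo, x - (5.0 / k) * (c + g)))
    else:
        x = argmin_model()   # most iterations minimize the cut model
```

Each batch-averaged cut is an unbiased but noisy supporting hyperplane of the expected recourse function, so minimizing the cut model drives most iterations; the periodic clipped subgradient step stands in for the paper's projection step that lets the iterate escape a poor local model.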
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Zhou_New_Convergent_Hybrid.pdf | Pre-Published version | 2.31 MB | Adobe PDF | View/Open |
Page views: 98 (as of Apr 14, 2025)
Downloads: 94 (as of Apr 14, 2025)
SCOPUS™ citations: 13 (as of Dec 19, 2025)
Web of Science™ citations: 9 (as of Oct 10, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.