Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/114431
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Applied Mathematics | - |
| dc.creator | Bo, L | - |
| dc.creator | Huang, Y | - |
| dc.creator | Yu, X | - |
| dc.date.accessioned | 2025-08-06T09:12:13Z | - |
| dc.date.available | 2025-08-06T09:12:13Z | - |
| dc.identifier.issn | 0363-0129 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/114431 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Society for Industrial and Applied Mathematics | en_US |
| dc.rights | © 2025 Society for Industrial and Applied Mathematics | en_US |
| dc.rights | Copyright © by SIAM. Unauthorized reproduction of this article is prohibited. | en_US |
| dc.rights | The following publication Bo, L., Huang, Y., & Yu, X. (2025). On Optimal Tracking Portfolio in Incomplete Markets: The Reinforcement Learning Approach. SIAM Journal on Control and Optimization, 63(1), 321-348 is available at https://doi.org/10.1137/23m1620892. | en_US |
| dc.subject | Capital injection | en_US |
| dc.subject | Continuous-time q-learning | en_US |
| dc.subject | Incomplete market | en_US |
| dc.subject | Optimal tracking portfolio | en_US |
| dc.subject | Reflected diffusion process | en_US |
| dc.title | On optimal tracking portfolio in incomplete markets : the reinforcement learning approach | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 321 | - |
| dc.identifier.epage | 348 | - |
| dc.identifier.volume | 63 | - |
| dc.identifier.issue | 1 | - |
| dc.identifier.doi | 10.1137/23M1620892 | - |
| dcterms.abstract | This paper studies an infinite-horizon optimal tracking portfolio problem using capital injection in incomplete market models. The benchmark process is modeled by a geometric Brownian motion with zero drift driven by some unhedgeable risk. The relaxed tracking formulation is adopted, in which the fund account, compensated by the injected capital, needs to outperform the benchmark process at all times, and the goal is to minimize the cost of the discounted total capital injection. When the model parameters are known, we formulate an equivalent auxiliary control problem with reflected state dynamics, for which the classical solution of the HJB equation with a Neumann boundary condition is obtained explicitly. When the model parameters are unknown, we introduce an exploratory formulation of the auxiliary control problem with entropy regularization and develop a continuous-time q-learning algorithm for models of reflected diffusion processes. Illustrative numerical examples show the satisfactory performance of the q-learning algorithm. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | SIAM journal on control and optimization, 2025, v. 63, no. 1, p. 321-348 | - |
| dcterms.isPartOf | SIAM journal on control and optimization | - |
| dcterms.issued | 2025 | - |
| dc.identifier.scopus | 2-s2.0-85218636131 | - |
| dc.identifier.eissn | 1095-7138 | - |
| dc.description.validate | 202508 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | a3961 | en_US |
| dc.identifier.SubFormID | 51835 | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | VoR allowed | en_US |
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 23m1620892.pdf | - | 784 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.