Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/100919
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Logistics and Maritime Studies | en_US
dc.creator | Feng, Z | en_US
dc.creator | Dawande, M | en_US
dc.creator | Janakiraman, G | en_US
dc.creator | Qi, A | en_US
dc.date.accessioned | 2023-08-14T07:57:55Z | -
dc.date.available | 2023-08-14T07:57:55Z | -
dc.identifier.issn | 0030-364X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/100919 | -
dc.language.iso | en | en_US
dc.publisher | Institute for Operations Research and the Management Sciences | en_US
dc.rights | Copyright © 2023 INFORMS | en_US
dc.rights | This is the accepted manuscript of the following article: Feng, Z., Dawande, M., Janakiraman, G., & Qi, A. (2023). Technical Note—Dynamic Pricing and Learning with Discounting. Operations Research, 72(2), 481-492, which is available at https://doi.org/10.1287/opre.2023.2477. | en_US
dc.subject | Dynamic pricing | en_US
dc.subject | Learning | en_US
dc.subject | Discounting | en_US
dc.subject | Regret minimization | en_US
dc.title | Dynamic pricing and learning with discounting | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 481 | en_US
dc.identifier.epage | 492 | en_US
dc.identifier.volume | 72 | en_US
dc.identifier.issue | 2 | en_US
dc.identifier.doi | 10.1287/opre.2023.2477 | en_US
dcterms.abstract | In many practical settings, learning algorithms can take a substantial amount of time to converge, thereby raising the need to understand the role of discounting in learning. We illustrate the impact of discounting on the performance of learning algorithms by examining two classic and representative dynamic-pricing and learning problems studied in Broder and Rusmevichientong (2012) [BR] and Keskin and Zeevi (2014) [KZ]. In both settings, a seller sells a product with unlimited inventory over T periods. The seller initially does not know the parameters of the general choice model in BR (resp., the linear demand curve in KZ). Given a discount factor ρ, the retailer's objective is to determine a pricing policy to maximize the expected discounted revenue over T periods. In both settings, we establish lower bounds on the regret under any policy and show limiting bounds of Ω(√(1/(1-ρ))) and Ω(√T) when T → ∞ and ρ → 1, respectively. In the model of BR with discounting, we propose an asymptotically tight learning policy and show that the regret under our policy, as well as that under the MLE-CYCLE policy in BR, is O(√(1/(1-ρ))) (resp., O(√T)) when T → ∞ (resp., ρ → 1). In the model of KZ with discounting, we present sufficient conditions for a learning policy to guarantee asymptotic optimality, and show that the regret under any policy satisfying these conditions is O(log(1/(1-ρ)) √(1/(1-ρ))) (resp., O(log T √T)) when T → ∞ (resp., ρ → 1). We show that three different policies, namely, the two variants of the greedy Iterated-Least-Squares policy in KZ and a different policy that we propose, achieve this upper bound on the regret. We numerically examine the behavior of the regret under our policies, as well as those in BR and KZ, in the presence of discounting. We also analyze a setting in which the discount factor per period is a function of the number of decision periods in the planning horizon. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Operations research, Mar.-Apr. 2024, v. 72, no. 2, p. 481-492 | en_US
dcterms.isPartOf | Operations research | en_US
dcterms.issued | 2024-03 | -
dc.identifier.eissn | 1526-5463 | en_US
dc.description.validate | 202308 bcc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a2354 | -
dc.identifier.SubFormID | 47565 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
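To give a concrete sense of the setting described in the abstract above, the following is a minimal, illustrative Python simulation of dynamic pricing with learning under discounting in a linear-demand (KZ-style) environment. Discounted regret here means the expected discounted revenue loss relative to a clairvoyant seller who knows the demand parameters, i.e., Σ_t ρ^(t-1) [r(p*) − r(p_t)]. This is a sketch only: the demand parameters, price bounds, noise level, and the simple "least squares plus decaying price perturbation" rule below are assumptions made for illustration, not the exact policies analysed in the paper or in BR/KZ.

```python
# Illustrative sketch of dynamic pricing with learning under discounting.
# NOT the policies of Feng, Dawande, Janakiraman & Qi (2023) or Keskin & Zeevi
# (2014); all numeric parameters and the pricing rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Linear demand curve, unknown to the seller: D_t = a - b * p_t + eps_t
a_true, b_true = 10.0, 1.5          # assumed demand parameters
sigma = 1.0                         # demand-noise standard deviation
p_min, p_max = 1.0, 6.0             # assumed feasible price interval

def expected_revenue(p, a=a_true, b=b_true):
    return p * (a - b * p)

p_star = a_true / (2.0 * b_true)    # clairvoyant optimal price
r_star = expected_revenue(p_star)

def simulate(T=2000, rho=0.999):
    """One sample path; returns the discounted regret over T periods.

    Policy sketch: fit (a, b) by ordinary least squares on observed
    (price, demand) pairs, price greedily at the estimated optimum, and add a
    small decaying perturbation so prices stay dispersed enough to learn.
    """
    prices = [p_min, p_max]         # two initial exploratory prices
    demands = [a_true - b_true * p + sigma * rng.standard_normal() for p in prices]
    discounted_regret = 0.0
    for t in range(1, T + 1):
        if t <= 2:
            p_t = prices[t - 1]
        else:
            # OLS fit of demand = a - b * price
            X = np.column_stack([np.ones(len(prices)), -np.array(prices)])
            a_hat, b_hat = np.linalg.lstsq(X, np.array(demands), rcond=None)[0]
            b_hat = max(b_hat, 1e-3)                 # keep the estimate well-posed
            perturb = 0.5 * t ** (-0.25) * rng.choice([-1.0, 1.0])
            p_t = float(np.clip(a_hat / (2.0 * b_hat) + perturb, p_min, p_max))
            prices.append(p_t)
            demands.append(a_true - b_true * p_t + sigma * rng.standard_normal())
        # Accumulate the discounted revenue loss versus the clairvoyant price.
        discounted_regret += rho ** (t - 1) * (r_star - expected_revenue(p_t))
    return discounted_regret

# For fixed T, pushing rho toward 1 inflates the discounted regret; the bounds
# quoted in the abstract say the growth is of order sqrt(1/(1-rho)) up to
# logarithmic factors when T is large.
for rho in (0.99, 0.999, 0.9999):
    reg = np.mean([simulate(T=2000, rho=rho) for _ in range(3)])
    print(f"rho = {rho}: average discounted regret ~ {reg:.1f}")
```

The decaying perturbation keeps charged prices dispersed so the least-squares estimates keep improving; a purely greedy rule can suffer from incomplete learning, which is what the constrained/dispersed variants studied in KZ are designed to prevent. The exact dispersion schedule used here is an assumption for illustration.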
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Feng_Dynamic_Pricing_Learning.pdf | Pre-Published version | 1.55 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Access: View full-text via PolyU eLinks SFX Query

Page views: 190 (last week: 8), as of Nov 30, 2025
Downloads: 174, as of Nov 30, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.