Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/99780
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Logistics and Maritime Studies | en_US |
| dc.creator | Feng, Z | en_US |
| dc.creator | Dawande, M | en_US |
| dc.creator | Janakiraman, G | en_US |
| dc.creator | Qi, A | en_US |
| dc.date.accessioned | 2023-07-20T02:59:12Z | - |
| dc.date.available | 2023-07-20T02:59:12Z | - |
| dc.identifier.issn | 0025-1909 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/99780 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute for Operations Research and the Management Sciences | en_US |
| dc.rights | © 2022 INFORMS | en_US |
| dc.rights | This is the accepted manuscript of the following article: Zhichao Feng, Milind Dawande, Ganesh Janakiraman, Anyan Qi (2022) An Asymptotically Tight Learning Algorithm for Mobile-Promotion Platforms. Management Science 69(3):1536-1554, which has been published in final form at https://doi.org/10.1287/mnsc.2022.4441. | en_US |
| dc.subject | Online advertising | en_US |
| dc.subject | Learning | en_US |
| dc.subject | Regret minimization | en_US |
| dc.subject | Stochastic dynamic programming | en_US |
| dc.title | An asymptotically tight learning algorithm for mobile-promotion platforms | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 1536 | en_US |
| dc.identifier.epage | 1554 | en_US |
| dc.identifier.volume | 69 | en_US |
| dc.identifier.issue | 3 | en_US |
| dc.identifier.doi | 10.1287/mnsc.2022.4441 | en_US |
| dcterms.abstract | Operating under both supply-side and demand-side uncertainties, a mobile-promotion platform conducts advertising campaigns for individual advertisers. Campaigns arrive dynamically over time, which is divided into seasons; each campaign requires the platform to deliver a target number of mobile impressions from a desired set of locations over a desired time interval. The platform fulfills these campaigns by procuring impressions from publishers, who supply advertising space on apps via real-time bidding on ad exchanges. Each location is characterized by its win curve, that is, the relationship between the bid price and the probability of winning an impression at that bid. The win curves at the various locations of interest are initially unknown to the platform, and it learns them on the fly based on the bids it places to win impressions and the realized outcomes. Each acquired impression is allocated to one of the ongoing campaigns. The platform’s objective is to minimize its total cost (the amount spent in procuring impressions and the penalty incurred due to unmet targets of the campaigns) over the time horizon of interest. Our main result is a bidding and allocation policy for this problem. We show that our policy is the best possible (asymptotically tight) for the problem using the notion of regret under a policy, namely the difference between the expected total cost under that policy and the optimal cost for the clairvoyant problem (i.e., one in which the platform has full information about the win curves at all the locations in advance): The lower bound on the regret under any policy is of the order of the square root of the number of seasons, and the regret under our policy matches this lower bound. We demonstrate the performance of our policy through numerical experiments on a test bed of instances whose input parameters are based on our observations at a real-world mobile-promotion platform. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Management science, Mar. 2023, v. 69, no. 3, p. 1536-1554 | en_US |
| dcterms.isPartOf | Management science | en_US |
| dcterms.issued | 2023-03 | - |
| dc.identifier.eissn | 1526-5501 | en_US |
| dc.description.validate | 202307 bcch | en_US |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | a2186 | - |
| dc.identifier.SubFormID | 46926 | - |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Journal/Magazine Article
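The abstract above describes a platform that learns each location's win curve (the probability of winning an impression as a function of the bid) by bidding and observing outcomes, and measures performance by regret against a clairvoyant benchmark that knows all win curves in advance. The sketch below is a minimal illustration of that learn-by-bidding loop in a deliberately simplified single-location, one-impression-per-season setting; the logistic win curve, the bid grid, the pay-your-bid payment rule, and the epsilon-greedy exploration are all illustrative assumptions and do not reproduce the paper's asymptotically tight policy.

```python
# Illustrative sketch only: a generic "learn a win curve by bidding" loop.
# This is NOT the policy from Feng, Dawande, Janakiraman, and Qi (2022);
# the logistic win curve, the candidate-bid grid, and the epsilon-greedy
# exploration rule are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth win curve for one location: P(win | bid).
def true_win_prob(bid, a=3.0, b=1.5):
    return 1.0 / (1.0 + np.exp(-(a * bid - b)))

BIDS = np.linspace(0.1, 1.0, 10)   # candidate bid levels (assumed grid)
N_SEASONS = 2000                   # "seasons" in the abstract's sense
PENALTY = 2.0                      # cost of each unmet impression

wins = np.zeros_like(BIDS)         # observed wins per bid level
trials = np.ones_like(BIDS)        # bids placed per level (1 avoids /0)
total_cost = 0.0

for season in range(N_SEASONS):
    # Epsilon-greedy exploration over bid levels (an assumed heuristic).
    if rng.random() < 0.1:
        i = rng.integers(len(BIDS))
    else:
        est_win = wins / trials
        # Expected cost of bid b: pay b if we win, pay PENALTY if we lose.
        exp_cost = est_win * BIDS + (1.0 - est_win) * PENALTY
        i = int(np.argmin(exp_cost))

    bid = BIDS[i]
    won = rng.random() < true_win_prob(bid)

    trials[i] += 1
    wins[i] += won
    # Spend the bid if the impression is won; otherwise pay the penalty.
    total_cost += bid if won else PENALTY

# Clairvoyant benchmark: the bid level with the lowest true expected cost.
clairvoyant = np.min(true_win_prob(BIDS) * BIDS
                     + (1 - true_win_prob(BIDS)) * PENALTY)
print(f"average cost per season:     {total_cost / N_SEASONS:.3f}")
print(f"clairvoyant cost per season: {clairvoyant:.3f}")
```

In this toy setting, regret is the gap between the realized total cost and the clairvoyant benchmark accumulated over seasons; the paper shows that, in the full problem, no policy can make this gap grow more slowly than the square root of the number of seasons, and its bidding and allocation policy matches that rate.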
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Feng_Asymptotically_Tight_Learning.pdf | Pre-Published version | 1.89 MB | Adobe PDF |
Page views: 95 (as of Apr 14, 2025)
Downloads: 172 (as of Apr 14, 2025)
SCOPUS™ citations: 1 (as of Jun 21, 2024)
Web of Science™ citations: 4 (as of Oct 10, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.