Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/7945
Title: Dynamically optimal policies for stochastic scheduling subject to preemptive-repeat machine breakdowns
Authors: Cai, X
Wu, X
Zhou, X
Keywords: Discounted reward
Dynamic policy
Gittins index
Machine breakdowns
Min-max criterion
Number of tardy jobs
Preemptive-repeat
Semi-Markov decision process
Stochastic scheduling
Weighted flow time
Issue Date: 2005
Publisher: Institute of Electrical and Electronics Engineers
Source: IEEE Transactions on Automation Science and Engineering, 2005, v. 2, no. 2, p. 158-172
Journal: IEEE Transactions on Automation Science and Engineering
Abstract: We consider the problem of finding a dynamically optimal policy to process n jobs on a single machine subject to stochastic breakdowns. We study the preemptive-repeat breakdown model, i.e., if a machine breaks down during the processing of a job, the work done on the job prior to the breakdown is lost and the job has to be started over again. Our study is built on a general setting, which allows: 1) the uptimes and downtimes of the machine to follow general probability distributions, not necessarily independent of each other; 2) the breakdown process to depend upon the job being processed; and 3) the processing times of the jobs to be random variables following arbitrary distributions. We consider two possible cases for the processing time of a job interrupted by a breakdown: a) it is resampled according to its probability distribution, or b) it is the same random variable as before the breakdown. We introduce the concept of occupying time and derive its Laplace and integral transforms. For the problem with resampled processing times, we establish a general optimality equation for the optimal dynamic policy under a unified objective measure. From this optimality equation, we deduce the optimal dynamic policies for several problems with well-known criteria, including weighted discounted reward, weighted flowtime, truncated cost, number of tardy jobs under stochastic order, and maximum holding cost. For the problem with the same random processing time, we develop the optimal dynamic policy via the theory of bandit processes. A set of Gittins indices is derived that gives the optimal dynamic policies under the criteria of weighted discounted reward and weighted flowtime.
URI: http://hdl.handle.net/10397/7945
ISSN: 1545-5955
EISSN: 1558-3783
DOI: 10.1109/TASE.2005.844119
Appears in Collections: Journal/Magazine Article
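
The following is a minimal simulation sketch, not taken from the paper, of the preemptive-repeat mechanism described in the abstract: when a breakdown occurs before a job finishes, all work done on it is lost and it restarts, with the interrupted job's processing time either resampled (case a) or held fixed at its original realization (case b). All function names and distributions below are illustrative assumptions; in particular, the paper allows uptimes and downtimes to be dependent and job-dependent, which this sketch simplifies to independent draws.

import random

def occupying_time(processing_time_sampler, uptime_sampler, downtime_sampler,
                   resample=True):
    """Estimate one job's occupying time under preemptive-repeat breakdowns.

    If a breakdown occurs before the job completes, the work done so far is
    lost, a downtime elapses, and the job starts over.

    resample=True  -> case a): a fresh processing time is drawn after each
                      breakdown (resampled processing times).
    resample=False -> case b): the same realized processing time is kept
                      across breakdowns (same random processing time).
    """
    total = 0.0
    p = processing_time_sampler()          # realized processing requirement
    while True:
        u = uptime_sampler()               # time until the next breakdown
        if p <= u:
            return total + p               # job completes before the breakdown
        # Breakdown: the u units of work are lost and a downtime follows.
        total += u + downtime_sampler()
        if resample:
            p = processing_time_sampler()  # case a): redraw the processing time
        # case b): keep the same realized p and try again

if __name__ == "__main__":
    # Illustrative distributional assumptions only; the paper permits
    # arbitrary (and mutually dependent) uptime/downtime distributions.
    rng = random.Random(42)
    proc = lambda: rng.lognormvariate(0.0, 0.5)  # processing-time law
    up = lambda: rng.expovariate(0.5)            # uptime law (mean 2)
    down = lambda: rng.uniform(0.1, 0.5)         # downtime law

    n = 10_000
    for resample in (True, False):
        mean = sum(occupying_time(proc, up, down, resample) for _ in range(n)) / n
        print(f"resample={resample}: mean occupying time ~ {mean:.3f}")

Comparing the two estimated means hints at why the two cases call for different analyses: under resampling, each restart faces a fresh processing time, whereas in case b) an unluckily long realization must be repeated until an uptime exceeds it, which is the setting the paper treats via bandit processes and Gittins indices.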
