465256 Real-Time Proactive-Reactive Scheduling Under Uncertainty Using Approximate Dynamic Programming

Monday, November 14, 2016: 3:25 PM
Carmel II (Hotel Nikko San Francisco)
Go Bong Choi, School of Chemical and Biological Engineering, Seoul National University, Seoul, 151744, South Korea and Jong Min Lee, School of Chemical and Biological Engineering, Seoul National University, Seoul, South Korea

Scheduling is a key decision-making step in the operation of chemical processes: it sequences jobs or operations over time on resources or process equipment to meet a given objective. Many types of uncertainty affect scheduling, among them rush orders and machine breakdowns. These uncertainties not only degrade the overall scheduling decision but can also render decisions infeasible at multiple levels, including scheduling, strategic planning, and even process control.

To deal with the uncertainty, two methods are often employed: proactive scheduling and reactive scheduling. Proactive scheduling makes decisions before the activities are realized, using whatever knowledge of the uncertainty can be generated. In contrast, reactive scheduling modifies the production schedule when an unexpected event occurs, with part of the schedule already realized.

Existing reactive scheduling makes decisions considering only the current situation at each decision epoch when an unpredicted event occurs. Such a decision, however, is optimal only until the next unpredicted event, and can be suboptimal or even infeasible for long-term operation under uncertainty. Proactive scheduling that accounts for the uncertainty in the form of a probability distribution at each epoch could address this shortcoming, but obtaining such solutions in real time is nearly impossible with existing stochastic formulations.

This work proposes a combined proactive-reactive scheduling strategy in which the proactive part is formulated as a stochastic dynamic program. The inherent computational complexity of stochastic dynamic optimization is addressed with an approximate dynamic programming (ADP) approach, in which the optimal value function is approximated recursively using Monte Carlo simulation and newly observed data. We demonstrate the effectiveness of the proposed approach on the example of Kondili et al.[i] First, a Markov decision process is constructed from the mixed-integer linear programming formulation of the state-task network. Instead of the original profit objective, the problem is reformulated to minimize makespan under uncertainty in machine breakdowns and demand. Decision epochs are triggered by the operational availability of each piece of equipment. The nearly optimal value function constructed by ADP induces a proactive optimal policy for the stochastic schedule, and it remains applicable when an unpredicted event occurs, since such an event can be regarded as an outcome of a state transition. Moreover, the policy can be evaluated in real time, and the value function also helps assess the feasibility of the current situation when unpredicted events occur.
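The recursive Monte Carlo approximation of the value function can be sketched on a deliberately small instance. The sketch below is illustrative only: the three-job single-machine data, breakdown probability, and repair time are hypothetical stand-ins, not the state-task-network model of the abstract. It shows the core ADP mechanics the abstract describes: simulate trajectories, observe sampled outcomes (here, machine breakdowns), and smooth the cost-to-go estimate with each new observation; the resulting value function then induces both the proactive policy and the reaction to a realized breakdown, which is just a state transition.

```python
import random
from collections import defaultdict

# Hypothetical toy instance (not from the abstract): three jobs with fixed
# durations on one machine that may break down before each job starts.
DURATIONS = {"A": 2, "B": 3, "C": 1}
P_BREAK = 0.2        # breakdown probability before starting a job
REPAIR_TIME = 2      # periods lost to a repair

def step(remaining, job, broke):
    """Run `job`; a breakdown (a sampled state-transition outcome) adds repair time."""
    cost = DURATIONS[job] + (REPAIR_TIME if broke else 0)
    return remaining - {job}, cost

def adp_train(n_iters=2000, alpha=0.1, eps=0.2, seed=0):
    """Approximate the cost-to-go V(remaining jobs) by recursive Monte Carlo
    updates along simulated trajectories (approximate dynamic programming)."""
    rng = random.Random(seed)
    V = defaultdict(float)  # value estimate for each set of remaining jobs
    for _ in range(n_iters):
        remaining = frozenset(DURATIONS)
        while remaining:
            # epsilon-greedy action selection against the current estimate
            if rng.random() < eps:
                job = rng.choice(sorted(remaining))
            else:
                job = min(remaining,
                          key=lambda j: DURATIONS[j] + V[remaining - {j}])
            broke = rng.random() < P_BREAK   # Monte Carlo sample of uncertainty
            nxt, cost = step(remaining, job, broke)
            # recursive smoothing update with the newly observed data
            V[remaining] += alpha * (cost + V[nxt] - V[remaining])
            remaining = nxt
    return V

def greedy_policy(V, remaining):
    """Proactive policy induced by the approximate value function."""
    return min(remaining,
               key=lambda j: DURATIONS[j] + V[frozenset(remaining) - {j}])

V = adp_train()
# For this toy instance the expected makespan is total work plus expected
# repair delays: 6 + 3 * 0.2 * 2 = 7.2, so V at the initial state should
# converge near that value.
print(round(V[frozenset(DURATIONS)], 1))
```

In a full state-task-network model the state would also carry equipment status and inventories, making the exact value function intractable; the same smoothing update then operates on a value-function approximation rather than a table, which is where the ADP machinery of the abstract comes in.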

[i] Kondili, E., C. C. Pantelides, and R. W. H. Sargent. "A general algorithm for short-term scheduling of batch operations—I. MILP formulation." Computers & Chemical Engineering 17.2 (1993): 211-227.
