317717 Network Optimization Under Uncertainty: A Case Study

Wednesday, November 6, 2013: 5:35 PM
Continental 8 (Hilton)
Amy David, Department of Mechanical and Industrial Engineering (MC 251), University of Illinois at Chicago, Chicago, IL and Urmila Diwekar, Vishwamitra Research Institute, Center for Uncertain Systems: Tools for Optimization and Management, Clarendon Hills, IL

Network Planning Under Uncertainty

We present a case study of a large-scale stochastic optimization problem for a manufacturer with plants and customers throughout North America seeking to minimize total delivered cost (including production, handling, and freight), subject to capacity constraints and uncertainties in both demand and costs. We use novel methods both for sampling uncertainties and for determining an optimal solution under uncertainty, and show the value gained from using these methods over the deterministic linear programming (LP) methods previously in place.


The sponsoring company is a leading building supplies manufacturer in North America. For the product line that is the focus of this study, several dozen items are produced at three locations in the United States and may pass through any one of a network of warehouses before being delivered to customers throughout the United States and Canada.

Currently, production and distribution decisions are made using a large-scale linear program (LP). The goal of the LP is to minimize the total delivered cost (production, freight, and handling) for all items in the planning network, while meeting the managerial constraints of capacity at each plant and demand at each customer location.
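The structure of such an LP can be illustrated with a toy transportation instance. The sketch below is not the company's model: the plants, costs, capacities, and demands are invented, and scipy's `linprog` stands in for CPLEX.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (all numbers invented): 2 plants shipping one item
# to 3 customer locations, minimizing total delivered cost.
cost = np.array([[4.0, 6.0, 9.0],   # delivered cost per unit from plant 0
                 [5.0, 4.0, 7.0]])  # delivered cost per unit from plant 1
cap = np.array([120.0, 100.0])      # plant capacities
dem = np.array([60.0, 70.0, 50.0])  # customer demands

n_p, n_c = cost.shape
c = cost.ravel()                    # decision variables x[i, j], flattened row-major

# Capacity: sum_j x[i, j] <= cap[i]
A_cap = np.kron(np.eye(n_p), np.ones(n_c))
# Demand: sum_i x[i, j] >= dem[j], written as -sum_i x[i, j] <= -dem[j]
A_dem = -np.kron(np.ones(n_p), np.eye(n_c))

res = linprog(c,
              A_ub=np.vstack([A_cap, A_dem]),
              b_ub=np.concatenate([cap, -dem]),
              bounds=(0, None))
print(round(res.fun, 1))  # → 910.0
```

The real model differs only in scale: tens of items, warehouse echelons, and rail/truck mode choices multiply this structure into the 90,000-variable problem described below.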

The problem is extremely large, with upwards of 1500 customer locations in the current model. In addition, the number of items fluctuates between thirty and forty; combined with the large number of warehouse locations and the choice between rail and truck freight modes, this yields over 90,000 decision variables in the current state.

This problem is solved using existing commercial software to create an input file that is then optimized on a CPLEX server. The resulting output is interpreted by the same software and manually converted into “sourcing rules” that link the optimal plan to the order fulfillment system, ensuring that customer orders are placed at the correct plant or warehouse, and that warehouse replenishment orders are placed at the correct originating plant.

However, the current planning process does a poor job of accounting for variability in inputs. Most notably, the customer demand at each location changes from month to month; at an item/customer level, this alone results in over 50,000 uncertainties. Further, the cost of each item at each producing plant may vary, adding over 100 more uncertainties to the problem. In the current state, the averages of these inputs are entered into the plan as though they were deterministic parameters, a method that is unlikely to produce the optimal stochastic solution.
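Why plugging in averages falls short can be seen in a one-item toy model (all numbers invented, and not the authors' formulation): when unmet demand incurs a recourse penalty, the plan optimized for mean demand is worse in expectation than a quantile-based stochastic plan.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-item illustration: produce q units at unit cost 1.0;
# any unmet demand is covered at a penalty of 4.0 per unit.
produce, penalty = 1.0, 4.0
demand = rng.normal(100.0, 20.0, size=100_000)  # uncertain monthly demand

def expected_cost(q):
    # Simulated expected cost of committing to production level q.
    return produce * q + penalty * np.maximum(demand - q, 0.0).mean()

q_det = 100.0  # deterministic plan: plug in the mean demand
# Newsvendor-style stochastic plan: produce to the critical fractile.
q_stoch = np.quantile(demand, 1 - produce / penalty)

print(expected_cost(q_det) > expected_cost(q_stoch))  # → True
```

The gap between the two expected costs is one way to measure the value of the stochastic solution that the abstract reports for the full network problem.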


Because a stochastic problem is far more computationally complex than its deterministic counterpart, we first reduce the number of uncertainties and variables from their initial size. Customers are grouped into larger locations (in most cases, by state or province), which not only improves the tractability of the problem but also ensures that each combination of item and location has enough historical data to meaningfully infer a probability distribution for the demand parameter. This results in sixty-four locations, which, multiplied by thirty-seven items, yields a total of 2368 demands. We then heuristically eliminate any demand representing an item/location combination that has been unused in the past twelve months, reducing the demands of interest to 463. In addition, the problem includes three production plants, each of which produces all thirty-seven items, for an additional 111 uncertain parameters, or 574 overall.
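The aggregation-and-pruning step can be sketched with pandas on synthetic shipment history; the column names and data below are assumptions for illustration, not the company's actual schema.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic stand-in for twelve months of shipment history:
# one row per shipment, tagged with item, state/province, and units.
hist = pd.DataFrame({
    "item": rng.integers(0, 37, size=5000),
    "state": rng.integers(0, 64, size=5000),
    "units": rng.poisson(3.0, size=5000),
})

# Aggregate customers to the state/province level, then drop any
# item/location pair with no usage in the historical window.
agg = hist.groupby(["item", "state"])["units"].sum()
active = agg[agg > 0]

print(len(active) <= len(agg) <= 37 * 64)  # → True
```

Only the surviving item/location pairs are fitted with demand distributions, which is what shrinks the 2368 candidate demands to the 463 of interest in the actual study.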

This large number of uncertainties requires sampling methods designed to handle large-scale problems without introducing spurious correlations among parameters. We therefore investigate four sampling methods: Monte Carlo, Hammersley, Latin Hypercube, and Latin Hypercube-Hammersley (Diwekar and Ulas, 2007). We present the expected values and standard deviations of cost when sampled demands and costs are input to a cost function prior to reoptimization, and compare the methods to determine which produces the most precise output.
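As one illustration of this kind of comparison, the sketch below (not the authors' code) contrasts plain Monte Carlo with a minimal Latin Hypercube implementation on a stand-in cost function, measuring the spread of the estimated mean cost across replications; the Hammersley variants are omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, d, rng):
    """Stratified uniform(0,1) sample: one point per stratum in each dimension."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]  # decouple strata across dimensions
    return u

def cost(u):
    # Stand-in cost function of d uncertain inputs (purely illustrative).
    return np.sum(u ** 2, axis=1)

n, d, reps = 64, 5, 200
mc = [cost(rng.random((n, d))).mean() for _ in range(reps)]
lhs = [cost(latin_hypercube(n, d, rng)).mean() for _ in range(reps)]

# Stratification typically tightens the spread of the estimated mean cost.
print(np.std(lhs) < np.std(mc))  # → True
```

The standard deviation of the estimate across replications is the precision measure being compared; the study applies the same idea to the 574-dimensional uncertainty space before reoptimization.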

We then apply stochastic optimization methods to the network optimization problem itself. We use the BONUS method (Shastri and Diwekar, 2008) to account for uncertainties in the cost function (introduced here by the uncertain production cost of each item at each plant), and the chance-constraint method (Charnes and Cooper, 1959) to account for uncertainties in demand. We compute a solution using these stochastic methods and compare it to the deterministic solution previously in place, showing both the value of the stochastic solution and the value of perfect information.
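On the demand side, an individual chance constraint of the Charnes-Cooper form has a well-known deterministic equivalent when each demand is modeled as normal: shipments must cover the α-quantile of demand. The sketch below is a toy version of that reduction (all numbers invented, scipy in place of CPLEX), not the full method applied in the study.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

# Toy chance-constrained instance: demand at each of 3 locations is
# normal; require P(shipments cover demand) >= 0.95 per location,
# which reduces to a constraint at the 95th-percentile demand.
mu = np.array([60.0, 70.0, 50.0])
sigma = np.array([6.0, 7.0, 5.0])
alpha = 0.95
dem = mu + norm.ppf(alpha) * sigma   # deterministic-equivalent demand

cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 4.0, 7.0]])
cap = np.array([150.0, 120.0])
n_p, n_c = cost.shape

A_cap = np.kron(np.eye(n_p), np.ones(n_c))    # plant capacity rows
A_dem = -np.kron(np.ones(n_p), np.eye(n_c))   # quantile-demand rows
res = linprog(cost.ravel(),
              A_ub=np.vstack([A_cap, A_dem]),
              b_ub=np.concatenate([cap, -dem]),
              bounds=(0, None))

# Total shipments to each location meet the 95th-percentile demand.
ship = res.x.reshape(n_p, n_c).sum(axis=0)
print(np.all(ship >= dem - 1e-6))  # → True
```

The price of the reliability guarantee is visible directly: raising α inflates the effective demands and therefore the optimal cost, which is the trade-off the chance-constraint method exposes.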


Charnes, A., & Cooper, W. W. (1959). Chance-constrained programming. Management Science, 6(1), 73-79.

Diwekar, U., & Ulas, S. (2007). Sampling techniques. Kirk-Othmer Encyclopedia of Chemical Technology, Online Edition, 26, 998.

Shastri, Y., & Diwekar, U. (2008). L-shaped BONUS algorithm with application to water pollutant trading. Industrial and Engineering Chemistry Research, 47, 9417.
