3:15 PM
343a

Global Optimization of Algorithms

Alexander Mitsos, AICES, RWTH Aachen, Pauwelstrasse 12, Aachen, Germany; Benoît Chachuat, Automatic Control Laboratory, École Polytechnique Fédérale de Lausanne (EPFL), Station 9, Lausanne, CH-1015, Switzerland; and Paul I. Barton, Chemical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., Room 66-464, Cambridge, MA 02139.

Theory, examples, and an open-source implementation of subgradient propagation for McCormick relaxations [1] are presented. This development enables the convex relaxation and global optimization of algorithms. In particular, parameter estimation problems with embedded dynamic systems can be solved, provided that an algorithm with a fixed number of iterations exists for the corresponding simulation problem.

Deterministic algorithms based on continuous and/or discrete branch-and-bound have facilitated the global optimization of nonconvex programs. These algorithms rely on convex or affine relaxations of the functions participating in the optimization problem. Convex and concave envelopes, or at least tight relaxations, are known for a variety of simple nonlinear terms [2], which allows the construction of convex and concave relaxations for a quite general class of functions through several methods [1,3,4]. The majority of the global optimization literature considers nonconvex programs in which the objective and constraints are given as explicit functions. A more general case arises when the objective function and/or constraints are calculated implicitly by an algorithm, e.g., the solution of a boundary value problem via the finite element method. Analogous to automatic differentiation [5], relaxations based on McCormick's composition theorem [1,6] can be used to construct convex relaxations of algorithms automatically. However, these relaxations are nonsmooth. Here, the systematic propagation of subgradients is described without introducing auxiliary variables. The calculated subgradients allow a nonsmooth NLP solver to obtain the required lower bound; alternatively, linearization via the subgradients yields an affine relaxation. Like the convex relaxation itself, the subgradient propagation relies on the recursive application of a few rules, namely the calculation of subgradients for addition, multiplication, and composition operations. Subgradients at interior points can be calculated for any factorable function for which a McCormick relaxation exists, provided that subgradients are known for the relaxations of the univariate intrinsic functions. For boundary points, additional assumptions are necessary.
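The recursive rules above can be illustrated with a minimal sketch. The following is not the authors' open-source implementation; it is a hypothetical Python object (here named `MC`) that carries interval bounds, convex/concave relaxation values, and their subgradients, and propagates them through the addition, multiplication (McCormick product rule), and univariate composition (exp) operations. The secant construction assumes a nondegenerate interval (hi > lo).

```python
import math

class MC:
    """Interval bounds plus convex/concave relaxation values and subgradients."""
    def __init__(self, lo, hi, cv, cc, sv, sc):
        self.lo, self.hi = lo, hi      # interval bounds on the factor
        self.cv, self.cc = cv, cc      # convex / concave relaxation values
        self.sv, self.sc = sv, sc      # subgradients (lists) of cv / cc

    @staticmethod
    def var(x, lo, hi, i, n):
        # independent variable x_i of n: relaxations are the identity, subgradient e_i
        e = [1.0 if j == i else 0.0 for j in range(n)]
        return MC(lo, hi, x, x, e, e[:])

    def __add__(self, o):
        # sum rule: bounds, relaxation values, and subgradients add termwise
        return MC(self.lo + o.lo, self.hi + o.hi, self.cv + o.cv, self.cc + o.cc,
                  [a + b for a, b in zip(self.sv, o.sv)],
                  [a + b for a, b in zip(self.sc, o.sc)])

    def __mul__(self, o):
        # McCormick product rule; under/over relax c*m for a constant c
        def under(c, m):
            return (c * m.cv, [c * g for g in m.sv]) if c >= 0 else (c * m.cc, [c * g for g in m.sc])
        def over(c, m):
            return (c * m.cc, [c * g for g in m.sc]) if c >= 0 else (c * m.cv, [c * g for g in m.sv])
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        # convex relaxation: max of the two underestimating planes
        a, ga = under(o.lo, self); b, gb = under(self.lo, o)
        c1, g1 = a + b - self.lo*o.lo, [u + v for u, v in zip(ga, gb)]
        a, ga = under(o.hi, self); b, gb = under(self.hi, o)
        c2, g2 = a + b - self.hi*o.hi, [u + v for u, v in zip(ga, gb)]
        cv, sv = (c1, g1) if c1 >= c2 else (c2, g2)  # subgradient of the max
        # concave relaxation: min of the two overestimating planes
        a, ga = over(o.hi, self); b, gb = over(self.lo, o)
        d1, h1 = a + b - self.lo*o.hi, [u + v for u, v in zip(ga, gb)]
        a, ga = over(o.lo, self); b, gb = over(self.hi, o)
        d2, h2 = a + b - self.hi*o.lo, [u + v for u, v in zip(ga, gb)]
        cc, sc = (d1, h1) if d1 <= d2 else (d2, h2)
        return MC(min(p), max(p), cv, cc, sv, sc)

    def exp(self):
        # composition rule for exp: the convex envelope is exp itself
        # (increasing), the concave envelope is the secant over [lo, hi]
        s = (math.exp(self.hi) - math.exp(self.lo)) / (self.hi - self.lo)
        return MC(math.exp(self.lo), math.exp(self.hi),
                  math.exp(self.cv), math.exp(self.lo) + s * (self.cc - self.lo),
                  [math.exp(self.cv) * g for g in self.sv],
                  [s * g for g in self.sc])

# relax f(x, y) = x*y + exp(x) on [0,2] x [-1,1] at the point (1, 0.5)
x = MC.var(1.0, 0.0, 2.0, 0, 2)
y = MC.var(0.5, -1.0, 1.0, 1, 2)
m = x * y + x.exp()
```

Overloading the arithmetic operators means an algorithm written for floating-point numbers can be re-executed on `MC` objects, which is what makes the automatic relaxation of algorithms possible.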

While the proposed subgradient propagation and relaxation of algorithms apply to a variety of optimization problems, this talk focuses on dynamic optimization problems, and more specifically on parameter estimation problems. Sophisticated global optimization methods exist for initial value problems [7,8]. Like those methods, the proposal in this talk relies on discretization of the independent variable and on McCormick relaxations. The main difference is that the existing algorithms perform the relaxation before the discretization (carried out by the integrator), whereas here the relaxation is performed after the discretization. Another difference is that state bounds and relaxations are obtained simultaneously. In addition to being more flexible (e.g., applicable also to boundary value problems), the proposed relaxation of algorithms provides a strikingly simple alternative to the elaborate theory underlying the existing methods.

Case studies from both initial and boundary value problems are presented, including the solution of the heat equation via finite differences and the solution of a nonlinear ordinary differential equation (ODE) system via the explicit-Euler method. The commercial code BARON [9] is used as a benchmark for the performance of the proposed algorithmic relaxations. BARON requires explicit functions, so the discretized state variables are encoded as optimization variables along with the equality constraints, arising from the discretization of the ODEs, that relate them. The case studies feature an imperfect match between model and measurements, and the challenge for global optimization methods is converging the lower bound. Establishing that the lower bound exceeds the statistical threshold for an acceptable fit proves that the mismatch is due to the model and not to a suboptimal parameter fit [10]. An obvious advantage of the lower bounding scheme proposed here is the significantly smaller number of optimization variables. The computational results demonstrate that this advantage can yield drastically faster convergence of the lower bound. BARON is a state-of-the-art deterministic global NLP solver, so the comparison shows the promise of the proposed method.
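The relax-after-discretization idea can be sketched in its simplest form: evaluate the explicit-Euler recursion itself in interval arithmetic to obtain state bounds directly from the algorithm. The ODE, parameter range, and step size below are hypothetical illustrations, not taken from the case studies; in the actual approach, McCormick relaxations and subgradients would be propagated alongside the bounds.

```python
# Illustrative sketch (not the authors' implementation): bounds on the states
# of dx/dt = -p*x with uncertain parameter p in [0.5, 1.0], obtained by
# running the explicit-Euler algorithm on intervals instead of real numbers.

def imul(a, b):
    # interval product: extremes of the four endpoint products
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def euler_state_bounds(x0, p, h, n):
    # x_{k+1} = x_k + h * (-p * x_k), each step evaluated on intervals
    x = x0
    for _ in range(n):
        d = imul((-p[1], -p[0]), x)             # interval enclosure of -p * x_k
        x = (x[0] + h * d[0], x[1] + h * d[1])  # endpoints add monotonically
    return x

lo, hi = euler_state_bounds((1.0, 1.0), (0.5, 1.0), 0.1, 10)
```

The enclosure `[lo, hi]` contains every trajectory of the discretized system for any fixed p in [0.5, 1.0], e.g. the endpoint values (1 - 0.1p)^10 for p = 1.0 and p = 0.5.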

[1] G. P. McCormick. Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Mathematical Programming, 10(2):147--175, 1976.

[2] M. Tawarmalani and N. V. Sahinidis. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. Mathematical Programming, 99(3):563--591, 2004.

[3] C. S. Adjiman and C. A. Floudas. Rigorous convex underestimators for general twice-differentiable problems. Journal of Global Optimization, 9(1):23--40, 1996.

[4] E. M. B. Smith and C. C. Pantelides. A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex MINLPs. Computers & Chemical Engineering, 23(4-5):457--478, 1999.

[5] A. Griewank. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Frontiers in Applied Mathematics. SIAM, Philadelphia (PA), 2000.

[6] G. P. McCormick. Nonlinear Programming: Theory, Algorithms and Applications. John Wiley and Sons, New York, 1983.

[7] A. B. Singer. Global Dynamic Optimization, http://yoric.mit.edu/download/Reports/SingerThesis.pdf. PhD thesis, Massachusetts Institute of Technology, 2004.

[8] B. Chachuat, A. B. Singer, and P. I. Barton. Global methods for dynamic optimization and mixed-integer dynamic optimization. Industrial & Engineering Chemistry Research, 45(25):8373--8392, 2006.

[9] N. V. Sahinidis. BARON: A general purpose global optimization software package. Journal of Global Optimization, 8:201--205, 1996.

[10] A. B. Singer, J. W. Taylor, P. I. Barton, and W. H. Green. Global dynamic optimization for parameter estimation in chemical kinetics. Journal of Physical Chemistry A, 110(3):971--976, 2006.