Tuesday, November 10, 2015: 1:45 PM
Salon D (Salt Lake Marriott Downtown at City Creek)
Dynamic optimization problems are constrained by differential and algebraic equations and arise throughout science and engineering. A well-established solution approach is direct transcription, in which the differential equations are replaced with algebraic approximations using a numerical method such as a finite-difference or Runge-Kutta (collocation) scheme. However, for problems with thousands of state variables and discretization points, direct transcription can produce nonlinear optimization problems that are too large for general-purpose optimization solvers. When an interior-point solver is applied, the dominant computational cost is solving the linear systems arising from the Newton step. For large-scale nonlinear programming (NLP) problems, these linear systems become prohibitively expensive to solve, and they quickly grow too large to formulate and store in memory on a standard computer. Despite these challenges, direct transcription has the attractive property of imposing sparsity and structure on the linear systems. In this talk we discuss ways to exploit this structure. We compare several matrix decomposition strategies that can be parallelized in order to solve large-scale dynamic optimization problems. In addition, we describe our implementation framework, which can easily be applied to arbitrary dynamic optimization problems. Finally, we demonstrate the effectiveness of these techniques on classical chemical engineering applications.
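To make the direct-transcription idea concrete, the following is a minimal, illustrative sketch (not the authors' framework) of transcribing a toy optimal control problem: drive x(t) from 1 toward 0 under dx/dt = -x + u while minimizing the integral of u^2. An implicit Euler scheme replaces the ODE with algebraic equality constraints at N grid points, and the resulting NLP is handed to a general-purpose solver (here SciPy's SLSQP; the problem sizes, variable names, and solver choice are assumptions for illustration only):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize integral of u^2 subject to dx/dt = -x + u,
# x(0) = 1, x(T) = 0. Implicit Euler turns the ODE into N algebraic
# constraints linking consecutive grid points.
N, T = 20, 1.0
h = T / N  # uniform step size

def objective(z):
    # z packs N+1 states followed by N controls
    u = z[N + 1:]
    return h * np.sum(u ** 2)  # rectangle-rule approximation of the integral

def dynamics(z):
    x, u = z[:N + 1], z[N + 1:]
    # Implicit Euler residual: x[k+1] - x[k] - h*(-x[k+1] + u[k]) = 0
    # Each residual touches only 3 variables, so the constraint Jacobian
    # is banded -- the sparsity/structure the abstract refers to.
    return x[1:] - x[:-1] - h * (-x[1:] + u)

cons = [
    {"type": "eq", "fun": dynamics},
    {"type": "eq", "fun": lambda z: z[0] - 1.0},  # initial condition x(0) = 1
    {"type": "eq", "fun": lambda z: z[N]},        # terminal condition x(T) = 0
]
z0 = np.zeros(2 * N + 1)
res = minimize(objective, z0, constraints=cons, method="SLSQP")
x_opt, u_opt = res.x[:N + 1], res.x[N + 1:]
```

A dense general-purpose solver ignores the banded Jacobian structure visible in `dynamics`; structure-exploiting interior-point approaches instead factor the Newton-step linear systems block by block, which is what makes parallel decomposition strategies attractive at large scale.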