261198 Advanced-Multi-Step Nonlinear Model Predictive Control

Thursday, November 1, 2012: 8:55 AM
323 (Convention Center)
Xue Yang1, Gilvan A. G. Fischer2 and Lorenz T. Biegler1, (1)Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA, (2)Henrique Lage Refinery, Petróleo Brasileiro S.A. – Petrobras, São José dos Campos, Brazil

Nonlinear Model Predictive Control (NMPC) has gained wide attention as an application of dynamic optimization, owing to its ability to handle variable bounds and multi-input, multi-output systems. However, NMPC requires that the optimization problem be solved within one sampling time, and the resulting computational delay can degrade controller performance and compromise system stability. Here we propose an advanced-multi-step NMPC (amsNMPC) method with negligible computational delay, based on nonlinear programming (NLP) and NLP sensitivity.

Two variants are developed: the serial approach and the parallel approach. The basic ideas of the two approaches are identical:

  • Background: the controller predicts the state multiple steps ahead, uses this prediction as the initial value, and solves an NLP problem to obtain predictions of the manipulated variables.

  • On-line: as state measurements become available, the controller uses NLP sensitivity to update the predicted manipulated variables.

The main difference between the two approaches lies in how often the NLP problems are solved and how the Karush-Kuhn-Tucker (KKT) matrices used for the updates are formed. The serial approach uses a single processor and solves the NLP problem over a number of sampling times; the KKT matrix from the most recent NLP solution is then updated at every sampling time to update the manipulated variables, until the next NLP solution becomes available. The parallel approach, on the other hand, solves an NLP problem at every sampling time. It uses multiple processors so that a free processor is always available for a new NLP problem, even if the solution of the previous one has not finished. Since an NLP problem is solved at every sampling time, the KKT matrix at each solution is used only once and no KKT matrix update is needed.

Both approaches can be applied to optimization problems that take longer than one sampling time to solve. Because both use predicted states as initial values for the NLP problem, the optimization problem can be solved in advance, before the actual states are known, and the computational delay is avoided. In this presentation we detail the application of Lyapunov stability theory to prove nominal and robust stability of the amsNMPC method. In addition, a large-scale first-principles distillation column problem is used as an example to demonstrate the performance of amsNMPC and to compare it with competing strategies.
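The background-solve-then-sensitivity-update pattern can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the NMPC subproblem reduces to an equality-constrained quadratic program, so the KKT matrix formed in the background step can simply be reused with a perturbed right-hand side once the measurement arrives. All matrices, values, and names below are hypothetical.

```python
import numpy as np

# Hypothetical NMPC subproblem (stand-in for the real NLP):
#   min  0.5 z'Hz + g'z   s.t.  A z = b(x0)
# with KKT system  [H A'; A 0] [z; lam] = [-g; b(x0)].
H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the Lagrangian
g = np.array([-2.0, -4.0])               # gradient of the objective
A = np.array([[1.0, 1.0]])               # constraint Jacobian

def b(x0):
    # Constraint right-hand side, parameterized by the initial state
    return np.array([x0])

n, m = H.shape[0], A.shape[0]
KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])

# Background step: solve the subproblem at the *predicted* state,
# before the measurement is available.
x_pred = 1.0
sol = np.linalg.solve(KKT, np.concatenate([-g, b(x_pred)]))
z_star = sol[:n]                         # predicted manipulated variables

# On-line step: when the measurement arrives, reuse the already-formed
# KKT matrix with a perturbed right-hand side (the NLP sensitivity
# step) instead of re-solving the whole problem.
x_meas = 1.2
rhs_pert = np.concatenate([np.zeros(n), b(x_meas) - b(x_pred)])
dz = np.linalg.solve(KKT, rhs_pert)[:n]
z_updated = z_star + dz                  # updated manipulated variables
```

For a quadratic subproblem the sensitivity step is exact; in the general NLP case it is a first-order correction around the background solution, which is what makes the on-line update cheap enough to avoid computational delay.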

Session: Optimization and Predictive Control I
Group/Topical: Computing and Systems Technology Division