## 389951 Optimal Control of Uncertain Systems Using Dual Model Predictive Control (DMPC)

We present an adaptive model predictive controller (AMPC) for plants with uncertain parameters. Inspection of the objective function shows that the AMPC must include caution and probing in order to generate optimal controls and to avoid the singularities that may result when the model has close pole-zero cancellations. We show that DMPC avoids these singularities and provides better performance than certainty-equivalence AMPC by combining caution and probing in an optimal manner.

We consider linear ARX models in discrete time with Gaussian disturbances and a quadratic performance cost. The least-squares estimate provides the optimal estimate of the model at the current time given past data. In certainty-equivalence adaptive control, this estimate is used directly to generate the controls, on the assumption that the optimal estimate also yields the best control. However, evaluation of the cost to go shows that the optimal controls are functions of the current and future parameter-estimate error covariances. The current covariance matrix provides a rationale for caution, whereas the future covariance matrices can be reduced by probing. The optimal trade-off among control, caution, and probing is provided by the objective function.
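The role of the covariance can be seen in a minimal recursive least-squares sketch (the variable names and the 2-tap FIR example are illustrative, not taken from the paper): the covariance `P` quantifies parameter uncertainty and shrinks only along directions the regressor excites, which is the rationale for probing.

```python
# Minimal sketch of recursive least-squares (RLS) estimation for a linear-in-
# parameters model y_k = phi_k^T theta + e_k with Gaussian noise e_k.
# The covariance P quantifies parameter uncertainty: a large P argues for
# cautious control, and P decreases only along directions excited by the
# regressor phi (the rationale for probing). Names here are illustrative.
import numpy as np

def rls_update(theta, P, phi, y):
    """One RLS step; returns the updated estimate and covariance."""
    Pphi = P @ phi
    K = Pphi / (1.0 + phi @ Pphi)      # Kalman-style gain
    theta_new = theta + K * (y - phi @ theta)
    P_new = P - np.outer(K, Pphi)      # covariance shrinks along phi
    return theta_new, P_new

# Usage: estimate a hypothetical two-parameter FIR model from noisy data.
rng = np.random.default_rng(0)
b_true = np.array([1.0, 0.5])
theta, P = np.zeros(2), 100.0 * np.eye(2)
u = rng.standard_normal(50)            # persistently exciting input
for k in range(2, 50):
    phi = np.array([u[k - 1], u[k - 2]])
    y = phi @ b_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
```

A certainty-equivalence controller would use `theta` alone; the dual controller also uses `P` (caution) and the effect of future inputs on `P` (probing).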

In order to facilitate online solution of the DMPC problem, we replace the recursive least-squares (Kalman) covariance update with an information-matrix update, which carries the same information but propagates the inverse of the covariance. The nonlinear programming formulation remains nonlinear, but all nonlinear terms are bilinear. Although optimization problems constrained by bilinear equality constraints are not trivial to solve, there exist methods for solving this class of problem to global optimality that apply to the DMPC problem. We further simplify and speed up the solution by exploiting the symmetry of the information matrix, reducing the number of variables through decomposition, and using constraints on controls and outputs.
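The bilinearity can be made concrete with a sketch of the information-matrix recursion, assuming the standard rank-one form Λ_{k+1} = Λ_k + φ_k φ_kᵀ (the inverse of the RLS covariance recursion); the specific dimensions and prior are illustrative:

```python
# Sketch of the information-matrix form of the covariance update.
# Lambda = P^{-1}, and the recursion Lambda_{k+1} = Lambda_k + phi_k phi_k^T
# is linear in Lambda but quadratic/bilinear in the regressor entries:
# when phi_k contains future controls (decision variables in the DMPC
# program), every nonlinear term is a product of two decision variables.
import numpy as np

def information_update(Lambda, phi):
    """Rank-one information update; avoids inverting the covariance online."""
    return Lambda + np.outer(phi, phi)

# For a hypothetical 2-tap FIR regressor phi_k = [u_{k-1}, u_{k-2}], each
# entry of np.outer(phi, phi) is a product of two controls -- exactly the
# bilinear terms that enter the equality constraints.
Lambda = 0.01 * np.eye(2)              # weak prior information
for u1, u2 in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    Lambda = information_update(Lambda, np.array([u1, u2]))

P = np.linalg.inv(Lambda)              # predicted covariance, if needed
```

The symmetry of `Lambda` means only its upper triangle need appear as variables in the optimization, which is one of the simplifications mentioned above.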

We demonstrate the application of DMPC to a finite impulse response (FIR) system. The advantage of the FIR formulation is that the objective-function reformulation is exact and the covariance predictions are explicit functions of the decision variables. Generalizations to Laguerre and ARX formulations follow, but at higher computational expense. The performance of the controller is compared with AMPC and fixed-parameter MPC through Monte Carlo simulations. We discuss the online solution of the optimization problem and the use of local instead of global solvers. The simulation examples show that the optimal excitation strategy provides better mean-square performance than approaches that rely on persistent excitation. Our controller excites the system just enough to obtain good estimates of the unknown parameters while minimizing the resulting adverse effect on output regulation. Simulation shows that the parameter estimates converge rapidly and that exact parameter estimates are obtained in the limit even though the excitation vanishes quickly.
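The qualitative point — that a small, rapidly decaying excitation suffices for the estimates to converge — can be illustrated with a toy Monte Carlo simulation. This sketch uses a certainty-equivalence regulator with an added decaying probe on a hypothetical 2-tap FIR plant; it is not the DMPC optimization itself, and all plant values, gains, and decay rates are assumptions for illustration:

```python
# Toy Monte Carlo illustration: a certainty-equivalence regulator with a
# small, decaying probe still drives the RLS estimates of an FIR plant
# y_k = b1*u_{k-1} + b2*u_{k-2} + e_k toward the true parameters, even
# though the excitation vanishes. Illustrative only, not the DMPC solver.
import numpy as np

def run_once(rng, probe_amp=1.0, r=1.0):
    b_true = np.array([1.0, 0.5])                      # true FIR taps
    theta, P = np.array([0.5, 0.0]), 10.0 * np.eye(2)  # prior estimate
    u = np.zeros(100)
    for k in range(2, 100):
        # certainty-equivalence control toward setpoint r, guarded division
        u_ce = (r - theta[1] * u[k - 1]) / theta[0] if abs(theta[0]) > 0.1 else 0.0
        # decaying probe: excitation vanishes geometrically
        u[k] = np.clip(u_ce + probe_amp * 0.8 ** k * rng.standard_normal(), -5.0, 5.0)
        phi = np.array([u[k - 1], u[k - 2]])
        y = phi @ b_true + 0.01 * rng.standard_normal()
        Pphi = P @ phi                                 # RLS update
        K = Pphi / (1.0 + phi @ Pphi)
        theta = theta + K * (y - phi @ theta)
        P = P - np.outer(K, Pphi)
    return theta

rng = np.random.default_rng(1)
errs = [np.linalg.norm(run_once(rng) - np.array([1.0, 0.5])) for _ in range(20)]
mean_err = float(np.mean(errs))
```

In the paper's setting the probe is chosen optimally by the DMPC objective rather than fixed in advance as it is here; the sketch only shows why vanishing excitation need not prevent convergence.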


Group/Topical: Computing and Systems Technology Division