Model predictive control (MPC) is widely used for achieving high-performance operation of complex systems due to its ability to handle multivariable dynamics, system constraints, and competing objectives [1,2]. To date, the majority of work on MPC of uncertain systems relies on (bounded) deterministic uncertainty descriptions. In deterministic approaches to robust MPC, the control performance is defined in terms of worst-case uncertainties, and the system constraints are fulfilled for all uncertainty realizations. Hence, these approaches can lead to overly conservative closed-loop control performance. Alternatively, stochastic MPC (SMPC) has recently emerged to reduce the inherent conservatism of robust control approaches by using the probabilistic information of system uncertainties in control design. SMPC enables shaping the probability distribution of the system states/outputs (either the complete distribution or its statistical moments) and admits chance constraints, which allow the state constraints to be violated with an a priori specified probability.
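To make the chance-constraint idea concrete, the following minimal sketch (not from the talk; all numerical values are assumed for illustration) checks empirically that a joint state constraint Pr(x1 ≤ b1 and x2 ≤ b2) ≥ 1 − ε holds for sampled state predictions:

```python
import numpy as np

# Hypothetical illustration of a joint chance constraint:
# the state bounds may be violated, but only with probability <= eps.
rng = np.random.default_rng(0)
eps = 0.1                                    # allowed violation probability
b = np.array([2.0, 2.0])                     # state bounds (assumed values)
x = rng.normal(0.0, 1.0, size=(100_000, 2))  # sampled state predictions

# Empirical probability that BOTH constraints hold simultaneously
joint_ok = np.mean(np.all(x <= b, axis=1))
print(f"empirical Pr(joint satisfaction) = {joint_ok:.3f}")
assert joint_ok >= 1 - eps
```

Note that the joint satisfaction probability is lower than that of each individual constraint, which is what makes joint chance constraints harder to enforce than individual ones.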
In this talk, a full state-feedback SMPC approach is presented for linear discrete-time systems subject to independent, possibly unbounded stochastic disturbances (i.e., process noise). The stochastic optimal control formulation includes hard input bounds and joint state chance constraints. The key challenges in solving the SMPC problem are: (1) incorporating state feedback optimally; (2) guaranteeing hard input bounds in the presence of unbounded disturbances; (3) the nonconvexity of joint state chance constraints; and (4) establishing (recursive) feasibility and stability of the control approach. An SMPC algorithm is proposed to address these challenges for the stochastic linear setup described above. To obtain a tractable algorithm, the feedback control law is parameterized as an affine disturbance feedback policy, which is equivalent to affine state feedback. Hard input bounds are guaranteed for the affine disturbance feedback policy by directly saturating the disturbances in the control law [7,8]. The joint state chance constraints are replaced with a conservative deterministic surrogate in terms of the mean and variance of the predicted states using the Cantelli-Chebyshev inequality. It is demonstrated that this (and, in fact, any) SMPC problem cannot be feasible at all times due to the possibly unbounded stochastic disturbances; in addition, the inclusion of state chance constraints leads to a small region of attraction for the controller. To obviate the feasibility issue, the state chance constraints are softened using an exact penalty function method, which yields the same control law as the fully constrained SMPC problem when that problem is feasible and minimal constraint violation when it is infeasible. It is shown that the stochastic control algorithm results in a stochastically stable closed-loop system: when the system matrix is Schur stable, the closed-loop states are proven to satisfy a geometric drift condition outside of a compact set.
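The Cantelli-Chebyshev tightening mentioned above can be sketched as follows (all numerical values are assumed for illustration). For a scalar constraint a'x ≤ b, the chance constraint Pr(a'x > b) ≤ ε is implied, for any disturbance distribution with known mean and covariance, by the deterministic surrogate a'μ + sqrt((1−ε)/ε)·sqrt(a'Σa) ≤ b:

```python
import numpy as np

# Minimal sketch of the Cantelli-Chebyshev surrogate (assumed values).
eps = 0.05                                  # allowed violation probability
a = np.array([1.0, 0.5])                    # constraint direction
mu = np.array([0.2, -0.1])                  # predicted state mean
Sigma = np.array([[0.04, 0.01],             # predicted state covariance
                  [0.01, 0.09]])

# Distribution-free back-off factor from the Cantelli inequality
kappa = np.sqrt((1 - eps) / eps)
std = np.sqrt(a @ Sigma @ a)                # std. dev. of a'x
b_min = a @ mu + kappa * std                # tightest bound certified feasible

# Monte Carlo check (Gaussian here, but the bound needs no Gaussianity)
rng = np.random.default_rng(1)
x = rng.multivariate_normal(mu, Sigma, size=200_000)
violation = np.mean(x @ a > b_min)
print(f"empirical violation = {violation:.5f} <= eps = {eps}")
```

The surrogate is conservative precisely because it holds for every distribution with the given first two moments; the empirical violation rate is typically far below ε.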
The proposed SMPC approach is applied to a continuous acetone-butanol-ethanol (ABE) fermentation process. The validated fermentation model consists of 12 state variables corresponding to the concentrations of different species in the Clostridium acetobutylicum metabolic pathway. One hundred Monte Carlo simulations were performed to assess the closed-loop performance of the SMPC approach. The simulation results indicate satisfactory closed-loop control performance in terms of keeping the process at the desired setpoints, as well as effective constraint satisfaction in the presence of stochastic disturbances.
[1] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36 (2000): 789-814.
[2] M. Morari and J. H. Lee. Model predictive control: Past, present and future. Computers & Chemical Engineering, 23 (1999): 667-682.
[3] A. Bemporad and M. Morari. Robust model predictive control: A survey. In Robustness in Identification and Control, Springer, London, 1999: 207-226.
[4] E. A. Buehler, J. A. Paulson, A. Akhavan, and A. Mesbah. Lyapunov-based stochastic nonlinear model predictive control: Shaping the state probability density functions. In Proceedings of the IEEE Conference on Decision and Control, Osaka, 2015 (submitted).
[5] A. Mesbah, S. Streif, R. Findeisen, and R. D. Braatz. Stochastic nonlinear model predictive control with probabilistic constraints. In Proceedings of the American Control Conference, Portland, 2014: 2413-2419.
[6] P. J. Goulart, E. C. Kerrigan, and J. M. Maciejowski. Optimization over state feedback policies for robust control with constraints. Automatica, 42 (2006): 523-533.
[7] P. Hokayem, D. Chatterjee, and J. Lygeros. On stochastic receding horizon control with bounded control inputs. In Proceedings of the IEEE Conference on Decision and Control, Shanghai, 2009: 6359-6364.
[8] D. Chatterjee, P. Hokayem, and J. Lygeros. Stochastic receding horizon control with bounded control inputs: A vector space approach. IEEE Transactions on Automatic Control, 56 (2011): 2704-2710.
[9] A. W. Marshall, I. Olkin, and B. C. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Science & Business Media, New York, 2011.
[10] S. Haus, S. Jabbari, T. Millat, H. Janssen, R. J. Fischer, H. Bahl, J. R. King, and O. A. Wolkenhauer. A systems biology approach to investigate the effect of pH-induced gene regulation on solvent production by Clostridium acetobutylicum in continuous culture. BMC Systems Biology, 5 (2011): 1-13.
[11] T. Lütke-Eversloh and H. Bahl. Metabolic engineering of Clostridium acetobutylicum: Recent advances to improve butanol production. Current Opinion in Biotechnology, 22 (2011): 634-647.