The state estimation part of this problem is a challenging one. Traditional methods such as the extended Kalman filter (EKF) tend to diverge because they simplify the problem with Taylor series approximations and assumptions of Gaussian probability distributions. On the other hand, more recent and popular approaches such as moving horizon estimation (MHE) tend to be computationally expensive for online applications. In this paper we propose a fast and novel probability-density-based nonlinear filter called the cell filter [1]. The cell filter is shown to provide more accurate estimates at a fraction of the computational cost of MHE, which makes state estimation feasible for real-time feedback control.
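The idea of a density-based filter on a discretized state space can be sketched as a grid (cell) Bayes filter: probabilities over cells are propagated through a one-step transition matrix and conditioned on each measurement. The scalar dynamics, noise levels, and grid below are illustrative placeholders, not the paper's model.

```python
import numpy as np

# Hypothetical 1-D nonlinear system x_{k+1} = f(x_k) + w, y_k = x_k + v,
# used only to illustrate a grid-based (cell) Bayesian filter.
f = lambda x: 0.5 * x + 2.0 * np.sin(x)   # assumed dynamics (not the paper's)
Q, R = 0.1, 0.2                           # assumed process/measurement noise variances

cells = np.linspace(-5.0, 5.0, 201)       # cell centers discretizing the state space

# One-step transition probabilities between cells (an aggregate Markov chain).
P = np.exp(-(cells[None, :] - f(cells)[:, None])**2 / (2.0 * Q))
P /= P.sum(axis=1, keepdims=True)         # each row is a probability distribution

def cell_filter_step(p, y):
    """Propagate the cell probabilities one step and condition on measurement y."""
    p_pred = p @ P                                   # prediction (Chapman-Kolmogorov)
    lik = np.exp(-(y - cells)**2 / (2.0 * R))        # measurement likelihood per cell
    p_post = p_pred * lik
    p_post /= p_post.sum()                           # normalize posterior
    return p_post, cells @ p_post                    # posterior and its mean estimate

p = np.full(cells.size, 1.0 / cells.size)            # flat prior over the cells
p, x_hat = cell_filter_step(p, y=1.3)
```

No linearization is involved: arbitrary (e.g., multimodal) densities are represented directly by the cell probabilities.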
In the subsequent task of designing an optimal control policy, the traditional approach is to solve a nonlinear programming problem repeatedly at every time instant, with state estimates fed back as initial conditions. This approach places an enormous computational burden on online applications. Together, MHE and nonlinear programming based MPC are impractical to implement in real time even for low-dimensional problems. The alternative is to resort to approximations such as model order reduction and linearization. This bottleneck has been a limiting factor behind the dearth of successful industrial applications of nonlinear MPC.
In this paper we propose a novel and fast optimization strategy using what is known as simple cell mapping (SCM) in the global analysis of nonlinear systems [2]. The proposed approach is based on dynamic programming in a discretized state space, which provides a large number of suboptimal solutions computed offline. We show that SCM and the search for suboptimal solutions are extremely fast compared to nonlinear programming. At each time instant, based on the feedback state of the system, the closest of the suboptimal solutions is chosen and refined in real time using iterated dynamic programming (IDP). Because a suboptimal solution very close to the optimal one is fed to IDP as the initial guess, IDP converges rapidly. Thus, the combination of dynamic programming in cell space and iterated dynamic programming is an effective tool for real-time applications of nonlinear MPC. Together with the cell filter for state estimation, the proposed cell and iterated dynamic programming (CIDP) provides the control policy at a fraction of the cost of traditional nonlinear state estimators and model predictive control solved in moving windows.
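The offline stage described above can be sketched as backward dynamic programming over a cell-mapped system: every (cell, control) pair is mapped once to an image cell, and a table of suboptimal controls per cell is built by value iteration. The scalar map, cost, and grid sizes here are hypothetical stand-ins, not the paper's CSTR problem.

```python
import numpy as np

# Minimal simple-cell-mapping + dynamic programming sketch on a hypothetical
# scalar system x_{k+1} = f(x, u); all names and dynamics are illustrative.
f = lambda x, u: x + 0.1 * (-x**3 + u)   # assumed one-step map (discretized ODE)
cells = np.linspace(-2.0, 2.0, 81)       # cell centers
actions = np.linspace(-1.0, 1.0, 21)     # discretized control levels

def nearest_cell(x):
    return int(np.argmin(np.abs(cells - x)))

# Simple cell mapping: image cell and stage cost for every (cell, action) pair.
img = np.array([[nearest_cell(f(x, u)) for u in actions] for x in cells])
cost = np.array([[x**2 + 0.1 * u**2 for u in actions] for x in cells])

# Backward dynamic programming over a finite horizon; policy[k][i] holds the
# suboptimal control index for cell i at stage k, computed entirely offline.
N = 30
V = np.zeros(cells.size)                 # terminal cost-to-go
policy = []
for k in range(N):
    Qk = cost + V[img]                   # stage cost + cost-to-go of image cell
    policy.append(Qk.argmin(axis=1))
    V = Qk.min(axis=1)
policy.reverse()

# Online: feedback state -> nearest cell -> table lookup of a suboptimal
# control sequence (which IDP would then refine in real time).
u0 = actions[policy[0][nearest_cell(0.7)]]
```

The expensive sweep over all cells happens once offline; the online step reduces to a nearest-cell lookup plus local refinement.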
The original nonlinear ODEs are used for model predictions via accurate Runge-Kutta numerical integration; no linearization or Gaussian probability distributions are assumed, and global optimality guarantees are derived from the properties of the IDP algorithm. The algorithms for the cell filter and CIDP will be presented, along with simulation studies of a CSTR system. Detailed comparisons of accuracy and computational cost with MHE and nonlinear programming based MPC will be outlined.
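Model predictions with the original nonlinear ODEs can use the classical fourth-order Runge-Kutta scheme; a minimal sketch follows, with a generic linear right-hand side standing in for the CSTR equations.

```python
import numpy as np

# Classical fourth-order Runge-Kutta step for dx/dt = rhs(x, u), applied
# directly to the nonlinear model with no linearization.
def rk4_step(rhs, x, u, dt):
    k1 = rhs(x, u)
    k2 = rhs(x + 0.5 * dt * k1, u)
    k3 = rhs(x + 0.5 * dt * k2, u)
    k4 = rhs(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Placeholder right-hand side (dx/dt = -x + u), not the paper's CSTR model.
rhs = lambda x, u: -x + u
x = np.array([1.0])
for _ in range(100):                    # integrate to t = 1 with dt = 0.01
    x = rk4_step(rhs, x, 0.0, 0.01)
# x[0] is now very close to exp(-1), the exact solution at t = 1
```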
[1] Ungarala, S., Z. Chen and K. Li, Bayesian State Estimation of Nonlinear Systems Using Approximate Aggregate Markov Chains, Industrial and Engineering Chemistry Research, June 2006, in print.
[2] Li, K. and S. Ungarala, Optimal Control Using Cell-to-Cell Mapping and Dynamic Programming, AIChE Annual Meeting, Cincinnati, OH, Nov. 2005.