470643 Parsimonious Modeling Approaches for Batch Process Analysis

Wednesday, November 16, 2016: 8:49 AM
Monterey II (Hotel Nikko San Francisco)
Ricardo Rendall1, Bo Lu2, Ivan Castillo2, Swee-Teng Chin3, Leo H. Chiang2 and Marco Reis4, (1)Chemical Engineering, University of Coimbra, Coimbra, Portugal, (2)Analytical Technology Center, The Dow Chemical Company, Freeport, TX, (3)Analytical Tech Center, The Dow Chemical Company, Freeport, TX, (4)Department of Chemical Engineering, University of Coimbra, Coimbra, Portugal

Batch processes play a central role in semiconductors, chemical & petrochemical, food & beverage, and many other industries. These processes generate large amounts of data, which can be used to conduct process analysis & improvement, monitoring & abnormality management, and process control & optimization. However, the difficulties of dealing with data collected from batch processes are well-known. They arise as a consequence of their non-stationarity, multi-stage nature and heterogeneity of information sources (raw materials, process, quality labs, etc.). This complexity has prompted researchers to develop methodologies with enough flexibility to handle the challenges raised by batch data. Frequently, these tools evolved from existing methodologies by preprocessing and reshaping the raw data structure (unfolding, synchronization, scaling) to a convenient format, and then adapting the algorithms to the usual conditions of batch processes (e.g., imputation of future observations for on-line use). Examples include the well-known and successful 2-way methodologies such as multi-way PCA [1] and PLS [2], 3-way methods [3] such as Tucker 3 and PARAFAC, and adaptive methods such as hierarchical PCA [4]. More recently, particular aspects not covered by the previous approaches were addressed, such as non-Gaussian distributions [5] and nonlinearities [6]. In most of these approaches, the computational complexity is proportional to the average length of the batch samples, implying that they do not scale well in certain industrial batch processes, namely those with long runs (e.g., fermentations) or with high sampling rates (e.g., microelectronics). Furthermore, these methodologies critically depend on successful batch alignment (or synchronization), which is another specialized task that adds complexity to the solutions.
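To make the scaling issue concrete, the following is a minimal sketch of the batch-wise unfolding step behind multi-way PCA [1], assuming batches have already been synchronized to a common length (all array names and dimensions here are illustrative, not from the original work):

```python
import numpy as np

# Illustrative data: I batches x J variables x K time points,
# assumed already synchronized to a common length K.
rng = np.random.default_rng(0)
I, J, K = 20, 4, 50
X = rng.normal(size=(I, J, K))

# Batch-wise unfolding: each batch becomes one row of length J*K,
# so the number of model columns grows with the batch duration K.
X_unfolded = X.reshape(I, J * K)

# Auto-scale each (variable, time) column across batches.
mu = X_unfolded.mean(axis=0)
sd = X_unfolded.std(axis=0, ddof=1)
X_scaled = (X_unfolded - mu) / sd

# PCA via SVD on the scaled, unfolded matrix.
U, S, Vt = np.linalg.svd(X_scaled, full_matrices=False)
scores = U * S  # one row of latent scores per batch
```

Note how the unfolded matrix has J*K columns: a long run or a high sampling rate inflates K directly, which is the scaling limitation discussed above.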
A thorough analysis of the technical literature confirms an overrepresentation of contributions on the high computational/methodological complexity side, whereas approaches with lower computational complexity levels remain largely unexplored. Examples of possible lower complexity approaches include statistics pattern analysis [7] (SPA) and some versions of dynamic methods such as batch dynamic PCA [8] (BDPCA) and auto-regressive PCA [9] (ARPCA), depending on the preprocessing methodology employed.

In this article, we present a contribution with a lower characteristic complexity (i.e., with a lower number of parameters to be estimated from process data, simpler model structures and more straightforward procedures) and illustrate its application in several case studies, where it is compared with current methodologies. This approach describes the trajectory of process variables by a small number of features that capture the essence of their time evolution. These features are then used for process analysis, diagnosis or prediction, according to the application goal. Models built from profile features are more parsimonious and do not require preliminary synchronization and complex preprocessing tasks. On the other hand, some information is inevitably lost when using features instead of the more detailed information (the measurements).
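The feature-oriented idea can be sketched as follows. This is a hypothetical illustration only: the specific feature set (mean, standard deviation, extrema, overall slope) is an assumed choice, not necessarily the one used in the article, but it shows how trajectories of unequal length map to a fixed-size feature matrix without synchronization:

```python
import numpy as np

def trajectory_features(y):
    """Summarize one variable's trajectory with a few statistics
    (an assumed, illustrative feature set)."""
    t = np.arange(len(y))
    slope = np.polyfit(t, y, 1)[0]  # overall linear trend of the profile
    return np.array([y.mean(), y.std(ddof=1), y.min(), y.max(), slope])

# Batches of unequal length: a list of (time points x variables) arrays.
rng = np.random.default_rng(1)
batches = [rng.normal(size=(rng.integers(40, 80), 3)) for _ in range(10)]

# One row per batch, 5 features per variable, regardless of duration.
F = np.array([
    np.concatenate([trajectory_features(b[:, j]) for j in range(b.shape[1])])
    for b in batches
])
```

The resulting matrix F has a fixed number of columns (variables times features), so downstream analysis, diagnosis or prediction models stay small and independent of batch length.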

This framework was applied to three case studies covering two simulated scenarios and an industrial dataset. The results obtained indicate that implementing a lower complexity framework does not necessarily imply a reduction in performance. On the contrary, the feature-oriented models often outperform conventional methods due to their parsimonious and robust nature. Furthermore, in exploratory frameworks, features can be an effective alternative to uncover process disturbances since they are often related to general characteristics of the trajectory of the process variables, which can be more easily interpreted.

References

1. Nomikos, P. and J.F. MacGregor, Monitoring batch processes using multiway principal component analysis. AIChE Journal, 1994. 40(8): p. 1361-1375.

2. Nomikos, P. and J.F. MacGregor, Multi-way partial least squares in monitoring batch processes. Chemometrics and Intelligent Laboratory Systems, 1995. 30(1): p. 97-108.

3. Louwerse, D. and A. Smilde, Multivariate statistical process control of batch processes based on three-way models. Chemical Engineering Science, 2000. 55(7): p. 1225-1235.

4. Rännar, S., J.F. MacGregor, and S. Wold, Adaptive batch monitoring using hierarchical PCA. Chemometrics and Intelligent Laboratory Systems, 1998. 41(1): p. 73-81.

5. Yoo, C.K., et al., On-line monitoring of batch processes using multiway independent component analysis. Chemometrics and Intelligent Laboratory Systems, 2004. 71(2): p. 151-163.

6. Zhao, C.H., F.R. Gao, and F.L. Wang, Nonlinear batch process monitoring using phase-based kernel independent component analysis-principal component analysis. Industrial & Engineering Chemistry Research, 2009. 48: p. 9163-9174.

7. He, Q.P. and J. Wang, Statistics pattern analysis: A new process monitoring framework and its application to semiconductor batch processes. AIChE Journal, 2011. 57(1): p. 107-121.

8. Chen, J. and K.-C. Liu, On-line batch process monitoring using dynamic PCA and dynamic PLS models. Chemical Engineering Science, 2002. 57(1): p. 63-75.

9. Choi, S.W., J. Morris, and I.-B. Lee, Dynamic model-based batch process monitoring. Chemical Engineering Science, 2008. 63(3): p. 622-636.


Extended Abstract: File Not Uploaded