430666 Process Data, Is More Better? A Case Study to Improve a Chemical Manufacturing Operation

Tuesday, November 10, 2015: 2:45 PM
Salon F (Salt Lake Marriott Downtown at City Creek)
Swee-Teng Chin1, Ivan Castillo2, Anna McClung3, Matthew Mengel4, Erin Johnson4 and Leo H. Chiang1, (1)Analytical Technology Center, The Dow Chemical Company, Freeport, TX, (2)Analytical Technology Center, The Dow Chemical Company, Freeport, TX, (3)Global Ag Production Support, The Dow Chemical Company, Pittsburg, CA, (4)The Dow Chemical Company, Pittsburg, CA

At Dow Chemical, multivariate analysis is traditionally applied to process data for tasks such as process troubleshooting, quality improvement, and the design of inferential sensors. This case study involves multiple batch operation units, with one quality measurement for each batch; the goal is to find the key process variables (x variables) that are highly correlated with the quality measurement (y variable), with the ultimate purpose of reducing variability in the y variable. The historical data available for analysis were sampled every minute over the last two years (a data matrix of 4.7 million columns and 200 rows), yet no statistical conclusions could be drawn from them. A big dataset does not guarantee that useful information can be extracted: the data lacked structure because of sampling and process variability, a common problem with big datasets. To overcome this challenge, a statistical design of experiments was performed in the plant, with multiple measurements associated with each batch. The combination of a statistical linear mixed model and multivariate batch analysis reveals the major contributors to the variation in y, which ultimately helped the plant achieve its goal.
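The abstract does not specify the modeling details, but the linear-mixed-model step it describes can be sketched in Python with statsmodels: each batch contributes a random intercept that absorbs batch-to-batch variability, while the fixed-effect slope estimates how a candidate process variable (x) drives the quality measurement (y). The variable names and the synthetic data below are illustrative assumptions, not the plant's actual data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for DOE data: multiple measurements per batch,
# a true slope of 2.0 linking process variable x to quality y,
# plus a random per-batch offset (the "batch effect").
rng = np.random.default_rng(0)
n_batches, n_per_batch = 20, 10
batch = np.repeat(np.arange(n_batches), n_per_batch)
batch_effect = rng.normal(0.0, 0.5, n_batches)[batch]  # random intercepts
x = rng.normal(size=n_batches * n_per_batch)
y = 2.0 * x + batch_effect + rng.normal(0.0, 0.1, n_batches * n_per_batch)

df = pd.DataFrame({"y": y, "x": x, "batch": batch})

# Linear mixed model: fixed effect for x, random intercept per batch.
result = smf.mixedlm("y ~ x", df, groups=df["batch"]).fit()
print(result.summary())
slope = result.params["x"]  # estimated fixed-effect slope, near 2.0 here
```

A large estimated batch variance relative to the residual variance would signal that batch-to-batch variability, rather than the candidate x variable, dominates the variation in y, which is the kind of decomposition the authors use to find the major contributors.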

Extended Abstract: File Not Uploaded
Session: Data Analysis and Big Data in Chemical Engineering
Group/Topical: Computing and Systems Technology Division