Experiences in the Corporate-Wide Deployment of Advanced Modeling Technology: Making the Complicated Look Simple, But No Simpler Than That!

Monday, November 4, 2013: 4:15 PM
Plaza A (Hilton)
Salvador García-Muñoz, Process Modeling and Engineering Technology, Pfizer Worldwide R&D, Groton, CT and Paul Schmitz, Pfizer Worldwide R&D, Groton, CT

The development of modeling technology in an organization typically rests in the hands of a small sub-group of subject matter experts. Once these models are mature enough, a natural next step is to deploy them to a general audience, who will in turn use them as a tool in their job responsibilities.

In the worst case, the deployment process for a model involves training the end-user to manipulate the native tool in which the model was built (e.g. gPROMS, Aspen Plus, Dynochem, Fluent, MATLAB, etc.). This training process can be challenging given the diversity of end-users in a large corporation. It also implies the need to install the native tool on the end-user’s computer, and the potential consumption of an expensive license for the rather simple “use of a model” rather than the more challenging “development of a model”. In better cases, the deployment process involves building interfaces using event-driven languages such as Visual Basic.

Either of these scenarios involves many complexities: i) establishing a mechanism so that the end user always has access to the latest version of the model, ii) properly training the user in the native tool, iii) the sub-optimal use of expensive licenses, iv) the additional need for a framework to store/archive results from the use of the models, and v) the need to maintain ad-hoc interfaces, among others.

In this talk we share our experiences implementing a web-based solution to achieve the corporate-wide deployment of models, aimed at resolving some of the complexities mentioned above. We discuss the protocols and efforts involved in launching and maintaining such a framework, including the different levels of documentation, the efforts to gain end-user participation, and the proper management of a model's lifecycle.
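The abstract does not specify the framework's internals, but the core idea of a web-based deployment — a centrally hosted endpoint that wraps the model so end users need neither the native tool nor a license, and always hit the latest version — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the model function, parameter names, and version string are all hypothetical stand-ins; in practice the handler would dispatch to the native tool (gPROMS, MATLAB, etc.) on the server side.

```python
# Minimal sketch (hypothetical): a web endpoint wrapping a model so that
# end users interact over HTTP instead of the native modeling tool.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Single server-side copy means every user gets the latest model version.
MODEL_VERSION = "1.2.0"  # hypothetical version tag

def run_model(params):
    """Hypothetical placeholder model: in a real deployment this would
    invoke the native tool (gPROMS, MATLAB, ...) on the server."""
    feed = float(params["feed_rate"])        # assumed input name
    conversion = float(params["conversion"])  # assumed input name
    return {"model_version": MODEL_VERSION, "yield": feed * conversion}

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the model, return JSON results.
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length))
        body = json.dumps(run_model(params)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
        # A production framework would also archive params/results here,
        # addressing complexity (iv) above.

# To serve:
#   HTTPServer(("0.0.0.0", 8080), ModelHandler).serve_forever()
```

Because the model runs server-side, the expensive native-tool license is consumed only on the server, and results can be archived centrally at the point of execution.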

Such a framework has also served as a trigger for a broader discussion with the business lines as to where in the development cycle these tools can (and should) be applied, and to what extent decisions should be made based on the results of in-silico technology. Widening the audience that can now access complex computational solutions has brought new challenges regarding data transfer (especially at the interface between discovery and development). It has also sparked a dialog between different communities of “modelers” that have been doing very different types of modeling, and highlighted the need to establish a system that addresses the multiple needs of a very diverse community of subject matter experts. We will also touch upon the ways this framework has changed how subject matter experts and informatics personnel exchange information and respond to the needs of the end user. Our intention is to spark a debate about the different ways such a platform could change how we transfer or share knowledge company-to-company, company-to-vendor, company-to-academia, and company-to-agency.


Extended Abstract: File Not Uploaded