While PHAs (Process Hazard Analyses) have been conducted within the industry for many years, there are still distinct differences in the quality and quantity of data generated in each session. Using data analytics and visualizations, the industry can move from inconsistent data quality to a state of quality awareness. This knowledge can reduce risk exposure by helping remove inconsistencies between PHA data within the same operating company. It can also increase the percentage of critical scenarios captured during a session, reducing the chance of missing a high-risk scenario and its corresponding risk-reducing recommendation. The approach can likewise be applied to a newly purchased or newly built facility by providing a guideline from which to start the baseline PHA.
To accomplish this, the data undergoes a series of transformations during data mining to bring it into a comparable state. The data must be generalized so that facility-specific details, such as tag numbers, are removed and comparisons can be drawn. The data is then organized into subsections of process safety vulnerabilities, which summarize the threats to which the facility is exposed.
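The generalization step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual pipeline: the tag-number pattern, sample entries, and keyword-to-vulnerability map are all hypothetical assumptions chosen for demonstration.

```python
import re
from collections import defaultdict

# Hypothetical raw PHA entries containing facility-specific equipment tags
raw_entries = [
    "Overpressure of vessel V-101 due to failure of PV-2003",
    "Loss of containment at pump P-205 seal",
    "Overpressure of vessel V-310 due to blocked outlet",
]

# Assumed tag format: 1-3 capital letters, a dash, then 2-5 digits
TAG_PATTERN = re.compile(r"\b[A-Z]{1,3}-\d{2,5}\b")

def generalize(entry: str) -> str:
    """Strip facility-specific tag numbers so entries become comparable."""
    return TAG_PATTERN.sub("<TAG>", entry)

# Assumed keyword map from generalized text to a vulnerability subsection
CATEGORY_KEYWORDS = {
    "overpressure": "Overpressure",
    "loss of containment": "Loss of Containment",
}

def categorize(entry: str) -> str:
    """Assign an entry to a process safety vulnerability subsection."""
    text = entry.lower()
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in text:
            return category
    return "Uncategorized"

# Group the generalized entries into vulnerability subsections
vulnerabilities = defaultdict(list)
for entry in raw_entries:
    general = generalize(entry)
    vulnerabilities[categorize(general)].append(general)

print(dict(vulnerabilities))
```

Once tags are replaced by a placeholder, identical scenarios from different units or facilities collapse onto the same generalized text, which is what makes cross-facility comparison possible.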
From this generalized data, the most critical vulnerabilities across multiple facilities can be extracted for assessment in the next PHA. With the aid of a subject matter expert, it can also be determined whether any vulnerabilities, such as those revealed by prior incidents, were not considered in the PHA, and those findings can be integrated into future work. The result is a more complete PHA, with confidence that all critical scenarios have been analyzed.
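The cross-facility extraction step could be approximated as a simple ranking exercise. The sketch below is an assumption-laden illustration: the facility names, vulnerability labels, and the ranking rule (breadth across facilities first, then total frequency) are hypothetical stand-ins for whatever criticality measure an operating company actually uses.

```python
from collections import Counter

# Hypothetical generalized vulnerability categories recorded per facility
facility_vulnerabilities = {
    "Facility A": ["Overpressure", "Loss of Containment", "Overpressure"],
    "Facility B": ["Overpressure", "Corrosion Under Insulation"],
    "Facility C": ["Loss of Containment", "Overpressure"],
}

# Breadth: in how many facilities does each vulnerability appear?
# Frequency: how many total occurrences across all facilities?
breadth = Counter()
frequency = Counter()
for facility, vulns in facility_vulnerabilities.items():
    frequency.update(vulns)
    breadth.update(set(vulns))  # count each facility at most once

# Rank the vulnerabilities most widespread across facilities first,
# breaking ties by total occurrence count
ranked = sorted(frequency, key=lambda v: (breadth[v], frequency[v]),
                reverse=True)
print(ranked)
```

A ranked list like this gives the next PHA session a starting checklist, and a subject matter expert can then flag gaps, for example a vulnerability known from a prior incident that never appears in the mined data.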
See more of this Group/Topical: Global Congress on Process Safety