Fault diagnosis (FD) is the process of detecting and diagnosing the cause(s) of abnormalities in a large process. This typically involves analyzing a large number of mathematical relationships among all the process variables. A concomitant problem is selecting, from a very large number of possibilities, the measured variables that should participate in the analysis. This sensor placement problem can be posed as an optimization problem in which an objective function, such as the cost of the sensor network, is optimized subject to the mathematical relationships and certain FD metrics. The major difficulties facing sensor placement techniques are that these large processes are described by a large number of partial differential equations (PDEs), which renders the analysis computationally complex, and that the FD metrics are generally ambiguous. Different techniques have therefore been developed to address these difficulties. Cause-and-effect modelling is an interesting technique that simplifies a large process into a graph, with process variables as nodes and arcs representing the relationships between variables. In these representations, faults are modelled as root nodes (nodes with only outgoing arcs) connected to the affected variables through arcs, and meaningful FD metrics can be derived. Graph-based approaches provide qualitative information about the faults. For quantitative information about the faults (such as the extent of catalyst deactivation in a packed-bed reactor), a model-based approach to sensor placement is used, which usually involves state estimation for a large system of linear or nonlinear partial differential algebraic equations (PDAEs) and adds significant computational expense.
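The cause-and-effect graph idea above can be sketched in a few lines. The following is a minimal illustration, not the authors' formulation: the graph, fault names, sensor set, and the two metrics (detectability: every fault reaches at least one sensor; resolvability: faults produce distinct sensor signatures) are all hypothetical assumptions chosen for the example.

```python
from collections import deque

def reachable(graph, start):
    """Return all nodes reachable from `start` via directed arcs (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical cause-and-effect graph: faults F1, F2 are root nodes
# (only outgoing arcs); x1..x4 are process variables; arcs mean "affects".
graph = {
    "F1": ["x1"],
    "F2": ["x3"],
    "x1": ["x2"],
    "x2": ["x4"],
    "x3": ["x4"],
}
faults = ["F1", "F2"]
sensors = {"x2", "x4"}  # one candidate sensor placement

# Signature of each fault: the subset of sensors it can influence.
signatures = {f: frozenset(reachable(graph, f) & sensors) for f in faults}

detectable = all(sig for sig in signatures.values())       # every fault hits >= 1 sensor
resolvable = len(set(signatures.values())) == len(faults)  # signatures are pairwise distinct
```

With this placement, F1 reaches {x2, x4} and F2 reaches {x4}, so both faults are detectable and the two signatures differ, i.e. the faults can be distinguished. A sensor placement optimizer would search over candidate sensor sets while enforcing such graph-derived constraints.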
For large-scale systems or networks where both qualitative and quantitative information about the faults is desired, the sensor placement problem is computationally cumbersome, mainly because of nested operations over a large number of nodes. In addition, the sensors should be placed synergistically, so that the measurement information obtained at one level from the placed sensors can be leveraged at the other level, thus maximizing the efficiency of the measurement network.
In this work, we present a graph partitioning technique that decomposes a large process into subsystems and integrates the solutions derived from these subsystems to obtain the solution for the original large system. In general, naturally partitionable systems decompose into individual non-interacting subsystems. However, many large-scale networks do not possess such natural partitions because of recycles and feedback loops. In such situations, artificial partitions that serve specific diagnosis objectives need to be created. We will describe various approaches for generating efficient partitions of the large-scale system. Each partition has its own level of granularity, which could include PDAE models, graph models and so on. Apart from the set of faults (a subset of the original fault set) present in each partition, we propose that the interactions from the other partitions be handled as pseudo-faults. Based on such decompositions, we derive sensor placement algorithms that provide a solution for the large-scale networked process by assimilating the solutions from the partitioned subsystems in multiple ways. The importance of these ideas for areas such as smart grids, water networks and large-scale fault diagnosis will be described.
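The pseudo-fault idea can be illustrated with a small sketch. This is an assumed construction for exposition only (the partition map, node names, and the `PF_` naming convention are not from the abstract): each arc that crosses from one partition into another is replaced, inside the receiving partition, by a new root node that acts as a pseudo-fault, so the subsystem can be diagnosed in isolation.

```python
def partition_with_pseudo_faults(graph, partition):
    """Split a directed graph {node: [successors]} according to a
    node -> block map. Within-block arcs are kept; each arc arriving
    from another block is re-rooted at a pseudo-fault node 'PF_<src>'
    so every block is a self-contained cause-and-effect graph."""
    blocks = {}
    for src, dsts in graph.items():
        for dst in dsts:
            b_src, b_dst = partition[src], partition[dst]
            if b_src == b_dst:
                blocks.setdefault(b_src, {}).setdefault(src, []).append(dst)
            else:
                # Interaction from another partition becomes a pseudo-fault
                # (a root node) in the receiving block.
                blocks.setdefault(b_dst, {}).setdefault(f"PF_{src}", []).append(dst)
    return blocks

# Hypothetical chain F1 -> x1 -> x2 -> x3 -> x4 split across two blocks.
graph = {"F1": ["x1"], "x1": ["x2"], "x2": ["x3"], "x3": ["x4"]}
partition = {"F1": 0, "x1": 0, "x2": 0, "x3": 1, "x4": 1}
blocks = partition_with_pseudo_faults(graph, partition)
# Block 1 sees the upstream influence of x2 only as the pseudo-fault PF_x2.
```

Sensor placement can then be run per block against its local faults and pseudo-faults, and the block-level solutions assimilated into a network-wide placement.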
See more of this Group/Topical: Computing and Systems Technology Division