Dynamic multiple fault diagnosis (DMFD) is a challenging problem due to coupling effects among component states and imperfect test outcomes that manifest themselves as missed detections and false alarms. The objective of the DMFD problem is to determine the most likely temporal evolution of fault states, the one that best explains the observed test outcomes over time. Here, we discuss four formulations of the DMFD problem. These include the deterministic situation corresponding to perfectly-observed coupled Markov decision processes; several partially-observed factorial hidden Markov models, ranging from the case where the imperfect test outcomes are functions of tests only to the case where they are functions of both faults and tests; and the case where false alarms are associated with the nominal (fault-free) state only. All these formulations are intractable NP-hard combinatorial optimization problems. We solve each of the DMFD problems by decomposing them into separable subproblems, one for each component state sequence. Our solution scheme can be viewed as a two-level coordinated solution framework for the DMFD problem. At the top (coordination) level, we update the Lagrange multipliers (coordination variables, dual variables) using the subgradient method. The top level facilitates coordination among the subproblems, and can thus reside in a vehicle-level diagnostic control unit. At the bottom level, we use a dynamic programming technique (specifically, Viterbi decoding, or the max-sum algorithm) to solve each subproblem. The key advantage of our approach is that it provides an approximate duality gap, which is a measure of the suboptimality of the DMFD solution. Interestingly, the perfectly-observed DMFD problem leads to a dynamic set covering problem, which can be approximately solved via Lagrangian relaxation and Viterbi decoding. Computational results on real-world problems are presented.
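The bottom-level step above is standard Viterbi decoding on a per-component hidden Markov model. As a minimal sketch (not the paper's implementation), the snippet below decodes the most likely fault-state sequence for a single two-state component ("ok"/"fault") from imperfect pass/fail test outcomes; all probability values and state names are illustrative assumptions, and the Lagrange-multiplier terms from the coordination level would simply be added to the log-scores.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Max-sum (Viterbi) decoding in log space: returns the most
    likely hidden-state sequence for an observation sequence."""
    # delta[s] = best log-score of any path ending in state s
    delta = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []  # back-pointers, one dict per time step after the first
    for o in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            # choose the best predecessor state for s
            best = max(states, key=lambda p: prev[p] + log_trans[p][s])
            ptr[s] = best
            delta[s] = prev[best] + log_trans[best][s] + log_emit[s][o]
        back.append(ptr)
    # backtrack from the best final state
    path = [max(states, key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Illustrative single-component model: faults persist, tests are
# imperfect (false-alarm and missed-detection probabilities).
lg = math.log
states = ("ok", "fault")
log_start = {"ok": lg(0.95), "fault": lg(0.05)}
log_trans = {"ok": {"ok": lg(0.9), "fault": lg(0.1)},
             "fault": {"ok": lg(0.05), "fault": lg(0.95)}}
log_emit = {"ok": {"pass": lg(0.98), "fail": lg(0.02)},
            "fault": {"pass": lg(0.10), "fail": lg(0.90)}}
print(viterbi(["pass", "fail", "fail"], states, log_start, log_trans, log_emit))
# → ['ok', 'fault', 'fault']
```

In the two-level scheme, one such decode is run per component at each subgradient iteration, which is what makes the relaxed problem separable.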
One of the common ways to perform data-driven fault diagnosis is to employ statistical models, which can classify the data into a nominal (healthy) class and a fault class, or distinguish among different fault classes. The former is termed fault (anomaly) detection, and the latter is termed fault isolation (classification, diagnosis). Traditionally, statistical classifiers are trained using data from faulty and nominal behaviors in a batch mode. However, it is difficult to anticipate, a priori, all the possible ways in which failures can occur, especially when a new vehicle model is introduced. Therefore, it is imperative that diagnostic algorithms adapt to new cases on an ongoing basis. In this paper, a unified methodology to incrementally learn new information from evolving databases is presented. The performance of adaptive (or incremental learning) classification techniques is discussed when: 1) the new data have the same fault classes and the same features and 2) the new data have new fault classes, but with the same set of observed features. The proposed methodology is demonstrated on data sets derived from an automotive electronic throttle control subsystem.
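To make the incremental-learning setting concrete, here is a toy sketch (not the paper's methodology): a nearest-centroid classifier that keeps a running mean per class, so new batches refine existing classes and a previously unseen fault class can be absorbed without retraining from scratch. The class and label names are hypothetical.

```python
class IncrementalCentroidClassifier:
    """Toy incremental classifier: one running-mean centroid per class;
    a sample is assigned to the nearest centroid. New fault classes are
    admitted simply by observing labelled samples for them."""

    def __init__(self):
        self.counts = {}     # class label -> number of samples seen
        self.centroids = {}  # class label -> running mean feature vector

    def partial_fit(self, X, y):
        for x, label in zip(X, y):
            if label not in self.centroids:       # new fault class appears
                self.counts[label] = 0
                self.centroids[label] = [0.0] * len(x)
            self.counts[label] += 1
            n, c = self.counts[label], self.centroids[label]
            # incremental mean update: c <- c + (x - c) / n
            self.centroids[label] = [ci + (xi - ci) / n
                                     for ci, xi in zip(c, x)]

    def predict(self, X):
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return [min(self.centroids,
                    key=lambda lbl: dist2(x, self.centroids[lbl]))
                for x in X]

clf = IncrementalCentroidClassifier()
clf.partial_fit([[0.1, 0.2], [0.0, 0.1]], ["nominal", "nominal"])
clf.partial_fit([[2.0, 2.1]], ["fault_A"])
# a later batch introduces a brand-new fault class
clf.partial_fit([[-2.0, 1.9]], ["fault_B"])
print(clf.predict([[0.05, 0.1], [1.9, 2.0], [-2.1, 2.0]]))
# → ['nominal', 'fault_A', 'fault_B']
```

This covers both cases from the abstract: the first two `partial_fit` calls correspond to case 1 (same classes, same features), and the `fault_B` batch to case 2 (a new fault class with the same observed features).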