SUMMARY

The paper deals with the sensitivity optimization of detection filters for linear time-varying (LTV) systems subject to multiple simultaneous faults and disturbances. The robust fault detection filter design problem is cast as a scaled $H_\infty$ filtering problem, and the effect of two different input scaling approaches on the optimization process is investigated. The objective is to provide the smallest scaled $L_2$ gain from the unknown inputs of the system that is guaranteed to be less than a prespecified level, i.e., to produce a filter with optimal disturbance suppression capability while still maintaining sufficient sensitivity to the failure modes. It is shown how bounds on the scaled $L_2$ gain can be obtained by transforming the standard $H_\infty$ filtering problem into a convex feasibility problem, specifically a structured linear matrix inequality (LMI). Numerical examples demonstrating the effect of the scaled optimization relative to conventional $H_\infty$ filtering are presented.
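As an illustrative aside, the reduction of an $H_\infty$-type gain bound to LMI feasibility can be sketched in a time-invariant special case via the bounded real lemma: the $L_2$ gain from $w$ to $z$ of $\dot{x} = Ax + Bw$, $z = Cx + Dw$ is below $\gamma$ if and only if a symmetric $P \succ 0$ satisfies the block LMI checked below. The following is a minimal sketch using cvxpy; the system matrices, the level $\gamma$, and the LTI simplification are assumptions for illustration only, not the paper's LTV, scaled, structured construction.

```python
import numpy as np
import cvxpy as cp

# Hypothetical LTI data standing in for the (time-varying) filtering error system.
A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

gamma = 1.5                     # prespecified attenuation level (assumed)
n, m = B.shape
p = C.shape[0]

P = cp.Variable((n, n), symmetric=True)

# Bounded real lemma: feasibility of this LMI with P > 0 certifies
# that the L2 gain from w to z is strictly below gamma.
lmi = cp.bmat([
    [A.T @ P + P @ A, P @ B,              C.T],
    [B.T @ P,         -gamma * np.eye(m), D.T],
    [C,               D,                  -gamma * np.eye(p)],
])
lmi = (lmi + lmi.T) / 2         # symmetrize so cvxpy accepts the definiteness constraint

eps = 1e-8                      # small margin to enforce strict inequalities
constraints = [P >> eps * np.eye(n),
               lmi << -eps * np.eye(n + m + p)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("gain bound certified:", prob.status == cp.OPTIMAL)
```

Because the LMI is affine in both $P$ and $\gamma$, the smallest certified level can also be found by minimizing $\gamma$ as a decision variable (or by bisection over feasibility tests), which mirrors in miniature the scaled optimization described above.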