In this paper, we propose a new temporal filter design method for robust recognition of noisy and reverberant speech, based on a minimum KL divergence criterion. The main idea is to optimize the filter parameters by minimizing the KL divergence between two distributions: the feature distribution in the test environment and the feature distribution represented by the acoustic model. Minimizing the KL divergence reduces the mismatch between the acoustic model and the test data. Experimental results on the Aurora-5 task show that the new filter design significantly outperforms other filter design methods in noisy and reverberant test conditions. In addition, the proposed filtering of feature trajectories is shown to be complementary to linear transformation of feature vectors, a popular feature-processing technique.
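The core idea can be illustrated with a minimal numerical sketch. This is not the authors' actual method: it assumes univariate Gaussian feature distributions, a synthetic white-noise feature trajectory, and a simple finite-difference gradient descent; the filter taps `h`, learning rate, and reference statistics are all illustrative choices.

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    """KL divergence KL(N(mu1, var1) || N(mu2, var2)) for univariate Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def filtered_stats(h, x):
    """Mean and variance of a feature trajectory x after FIR filtering with taps h."""
    y = np.convolve(x, h, mode="valid")
    return y.mean(), y.var()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=5000)   # synthetic "noisy" test-feature trajectory
mu_ref, var_ref = 0.0, 1.0            # feature distribution of the acoustic model (assumed)

h = np.array([0.2, 0.6, 0.2])         # initial temporal filter taps (illustrative)
lr, eps = 0.05, 1e-4
for _ in range(200):                  # finite-difference gradient descent on the KL
    base = kl_gauss(*filtered_stats(h, x), mu_ref, var_ref)
    grad = np.zeros_like(h)
    for i in range(len(h)):
        hp = h.copy()
        hp[i] += eps
        grad[i] = (kl_gauss(*filtered_stats(hp, x), mu_ref, var_ref) - base) / eps
    h -= lr * grad

final_kl = kl_gauss(*filtered_stats(h, x), mu_ref, var_ref)
```

After optimization, the filtered test-feature statistics move toward the model's reference distribution, i.e. `final_kl` is driven close to zero, which is the mismatch-reduction effect the abstract describes.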