Kernel techniques have been used in support vector machines (SVMs), feature spaces, etc. In kernel methods, the well-known kernel trick is used to implicitly map the input data to a higher-dimensional feature space. If all terms can be written as a kernel function, one can use the data in the higher-dimensional space without actually computing the higher-dimensional features or knowing the mapping function Φ. In this paper, we address kernel distortion-invariant filters (DIFs). Standard DIFs are synthesized in a linear feature space (in the image or Fourier domain); they are fast because they use FFT-based correlations. If the data is mapped to a higher-dimensional feature space before filter synthesis and before performing correlations, kernel filters result and performance can be improved. Kernel versions of several DIFs (OTF, SDF, and MACE) have been presented in prior work. However, several key issues were ignored in all prior work: the unrealistic assumption of centered data in tests; the significantly larger storage and on-line computation time required; and the proper type of energy minimization in filter synthesis, needed to reduce false peaks when the filters are applied to target scenes, which has yet to be done. In addition, prior kernel DIF work used test-set data to select the value of the kernel parameter. In this paper, we analyze these issues, present supporting test results on two face databases, and present several improvements to prior kernel DIF work.
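As a minimal sketch of the kernel trick described above (not the paper's filters), consider a homogeneous second-degree polynomial kernel K(x, y) = (x·y)²: the explicit map Φ and the input vectors below are illustrative choices, but they show that evaluating the kernel on the inputs equals the inner product of the explicitly mapped features, so Φ never needs to be computed in practice.

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map for 2-D input v = (v1, v2):
    # phi(v) = (v1^2, sqrt(2)*v1*v2, v2^2), so that
    # phi(x) . phi(y) = (x . y)^2 for all x, y.
    return np.array([v[0] ** 2, np.sqrt(2.0) * v[0] * v[1], v[1] ** 2])

x = np.array([1.0, 2.0])  # illustrative input vectors
y = np.array([3.0, 4.0])

k_explicit = phi(x) @ phi(y)  # inner product in the 3-D feature space
k_trick = (x @ y) ** 2        # same value via the kernel, no phi needed

print(k_explicit, k_trick)  # both equal (1*3 + 2*4)^2 = 121
```

The same identity is what lets kernel DIFs replace every feature-space inner product with a kernel evaluation on the original images.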