“…In [5], however, it has been pointed out that minimizing (8) does not guarantee a high SIR for certain combined channel and shortener responses. To overcome this problem, our contribution is to generalize a lag-hopping version of SLAM, in which the lag parameter in (8) is chosen at random from the range v + 1, …, Lc, with equal probability of selecting any one lag, to the case of randomly but uniquely selecting any number of lags between 1 and Lc − v, so that on average the cost is identical to (5) when implemented in an adaptive learning algorithm.…”
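The lag-selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `choose_hopping_lags` and its parameters are hypothetical, with `v` standing for the target shortened response length and `Lc` for the combined channel length, as in the excerpt. It draws a random, unique (non-repeating) subset of lags from the range v + 1, …, Lc, each lag being equally likely, which is the generalization of single-lag hopping that the passage proposes.

```python
import random


def choose_hopping_lags(v, Lc, num_lags, rng=random):
    """Uniformly select `num_lags` distinct lags from {v+1, ..., Lc}.

    With num_lags == 1 this reduces to the single-lag hopping scheme;
    with num_lags == Lc - v it covers every lag, matching the full
    sum-squared-autocorrelation cost on average.
    """
    if not 1 <= num_lags <= Lc - v:
        raise ValueError("num_lags must lie between 1 and Lc - v")
    candidates = range(v + 1, Lc + 1)
    # random.sample draws without replacement, so the lags are unique
    # and each subset of the given size is equally probable.
    return rng.sample(candidates, num_lags)
```

Averaged over many adaptive iterations, every lag in v + 1, …, Lc is visited with equal frequency, which is why the expected cost matches the full multi-lag criterion referred to as (5) in the excerpt.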