“…To enable neighbor selection, an ℓ1-norm regularizer is added to the objective so that the weights of the neighbors to be dropped are driven to zero:

min_ω J(ω) = ‖N_k ω − x̂‖₂² + λ₁‖ω‖₂² + λ₂‖ω‖₁,  ω ∈ ℝ^k,  λ₁, λ₂ ∈ ℝ⁺  (10)

By combining the ℓ2-norm regularizer with the ℓ1-norm regularizer, not only is a unique correlation vector obtained for each test sample, but the most relevant neighbors among the k nearest neighbors of the test sample are also selected. In statistical learning, objective function (10) is known as elastic net regression. Since the least-squares loss is convex, the ℓ1-norm and ℓ2-norm regularizers are convex, and a sum of convex functions is convex, objective function (10) is convex.…”
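As a sketch of how elastic-net neighbor selection of this kind can be carried out in practice (the neighbor matrix, test sample, and regularization strengths below are hypothetical; note also that scikit-learn's `ElasticNet` uses an `alpha`/`l1_ratio` parameterization rather than the separate λ₁, λ₂ of (10), so the penalties correspond only up to rescaling):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
k, d = 10, 20                      # k nearest neighbors, feature dimension
N = rng.normal(size=(d, k))        # columns of N: the k nearest neighbors
x_hat = N[:, 0] + 0.5 * N[:, 3]    # test sample close to neighbors 0 and 3

# scikit-learn's ElasticNet minimizes
#   (1/(2d)) * ||x_hat - N w||_2^2
#   + alpha * l1_ratio * ||w||_1
#   + 0.5 * alpha * (1 - l1_ratio) * ||w||_2^2,
# so alpha and l1_ratio jointly play the roles of lambda_1 and lambda_2.
model = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False)
model.fit(N, x_hat)

w = model.coef_                              # correlation vector for x_hat
selected = np.flatnonzero(np.abs(w) > 1e-8)  # neighbors kept by l1 sparsity
print("selected neighbors:", selected)
```

The ℓ1 term zeroes out the weights of irrelevant neighbors, so `selected` typically contains only a subset of the k candidates, while the ℓ2 term keeps the solution unique when neighbors are correlated.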