“…This theorem, in its extended version [42], is the foundation for the majority of kernel-based methods for machine learning, including regression, radial-basis functions, and support-vector machines [23,44,48]. There is also a whole line of work generalizing the concept to reproducing kernel Banach spaces (RKBS) [55-57]. More recently, motivated by the success of ℓ1 and total-variation regularization for compressed sensing [11,14,19], researchers have derived alternative representer theorems in order to explain the sparsifying effect of such penalties and their robustness to missing data [8,26,28,52].…”
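For context, the classical RKHS form of the theorem (not spelled out in the excerpt) states that regularized empirical-risk minimization over a reproducing kernel Hilbert space is solved by a finite kernel expansion around the data points. A minimal sketch of the standard statement, with notation ($\mathcal{H}$, $k$, $E$, $\lambda$, $a_n$) chosen here for illustration rather than taken from the source:

$$
\min_{f \in \mathcal{H}} \; \sum_{n=1}^{N} E\bigl(y_n, f(x_n)\bigr) \;+\; \lambda \, \|f\|_{\mathcal{H}}^{2}
\qquad\Longrightarrow\qquad
f^{\star}(x) \;=\; \sum_{n=1}^{N} a_n \, k(x, x_n), \quad a_n \in \mathbb{R}.
$$

It is this reduction of an infinite-dimensional search space to $N$ data-dependent coefficients that the RKBS and sparsity-oriented representer theorems cited above aim to extend to ℓ1- and total-variation-type penalties.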