We consider learning algorithms under a general source condition, with polynomially decaying eigenvalues of the integral operator, in the vector-valued function setting. We discuss upper bounds on the convergence rates of the Tikhonov regularizer under a general source condition given by a monotonically increasing index function. Convergence for general regularization schemes is studied in the minimax setting using the concept of operator-monotone index functions. We further address the minimum possible error achievable by any learning algorithm.
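In the scalar-valued case, the Tikhonov regularizer studied in the abstract reduces to kernel ridge regression: the coefficient vector solves (K + nλI)α = y. The following is a minimal illustrative sketch, not the paper's algorithm; the Gaussian kernel, bandwidth, and test data are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # k(x, z) = exp(-||x - z||^2 / (2 sigma^2)); an assumed choice of kernel
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def tikhonov_regularizer(X, y, lam, sigma=1.0):
    # Tikhonov regularization in the RKHS: in coordinates,
    # alpha = (K + n * lam * I)^{-1} y,  f(x) = sum_i alpha_i k(x, x_i)
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda Z: gaussian_kernel(Z, X, sigma) @ alpha

# Toy regression problem (illustrative data, not from the paper)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.sin(3 * X[:, 0])
f = tikhonov_regularizer(X, y, lam=1e-2)
```

The source condition in the abstract quantifies how well the (unknown) regression function is approximated by such regularized solutions; stronger smoothness yields faster decay of the bias term as λ is tuned to the sample size.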
We study the linear ill-posed inverse problem with noisy data in the statistical learning setting. Approximate reconstructions from random noisy data are sought via general regularization schemes in a Hilbert scale. We discuss the rates of convergence of the regularized solution under prior assumptions and a certain link condition, expressing the error in terms of certain distance functions. For regression functions whose smoothness is given in terms of source conditions, the error bound can then be established explicitly.
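A standard formulation of the Hilbert-scale setting the abstract refers to is sketched below; the notation is illustrative (the paper's own symbols and constants may differ), and the rate shown is the classical deterministic one, which the statistical setting refines in terms of sample size.

```latex
% Hilbert scale generated by an unbounded, self-adjoint, strictly
% positive operator B, with norms \|x\|_s := \|B^s x\|.
% Link condition relating the forward operator A to the scale (a > 0):
\[
  m \,\|x\|_{-a} \;\le\; \|A x\| \;\le\; M \,\|x\|_{-a}.
\]
% Source condition encoding the smoothness of the true solution:
\[
  x^{\dagger} \in \operatorname{range}\bigl(B^{-s}\bigr), \qquad s > 0,
\]
% under which the classical deterministic convergence rate is
\[
  \bigl\| x_{\alpha}^{\delta} - x^{\dagger} \bigr\|
  \;=\; O\!\bigl(\delta^{\,s/(a+s)}\bigr).
\]
```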
Manifold regularization is an approach that exploits the geometry of the marginal distribution. The main goal of this paper is to analyze the convergence of such regularization algorithms in learning theory. We propose a more general multi-penalty framework and establish optimal convergence rates under a general smoothness assumption. We give a theoretical analysis of the performance of multi-penalty regularization over a reproducing kernel Hilbert space, and discuss error estimates for the regularization schemes under prior assumptions on the joint probability measure on the sample space. We analyze the convergence rates of the learning algorithms measured both in the norm of the reproducing kernel Hilbert space and in the norm of the Hilbert space of square-integrable functions. Convergence is established in a probabilistic sense via exponential tail inequalities. To optimize the regularization functional, a crucial issue is the choice of the regularization parameters, which must ensure good performance of the solution. We propose a new parameter choice rule, the "penalty balancing principle," based on augmented Tikhonov regularization. The superiority of multi-penalty regularization over single-penalty regularization is demonstrated on an academic example and the two-moons data set.
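A common concrete instance of such a multi-penalty scheme combines an RKHS-norm penalty with a graph-Laplacian penalty (as in Laplacian-regularized least squares). The sketch below is an assumed illustration of that idea, not the paper's exact scheme; the kernel, graph construction, and parameter values are all hypothetical choices.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Assumed Gaussian kernel for both the RKHS and the graph weights
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def graph_laplacian(X, sigma=1.0):
    # Heat-kernel weight graph on the sample; unnormalized L = D - W
    W = gaussian_kernel(X, X, sigma)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def multi_penalty_regularizer(X, y, lam1, lam2, sigma=1.0):
    # Minimizes (1/n)||f(X) - y||^2 + lam1 ||f||_K^2 + lam2 <f(X), L f(X)>
    # over f = sum_i alpha_i k(., x_i).  Setting the gradient to zero gives
    # the linear system  (K + n*lam1*I + n*lam2*L K) alpha = y.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    L = graph_laplacian(X, sigma)
    A = K + n * lam1 * np.eye(n) + n * lam2 * (L @ K)
    alpha = np.linalg.solve(A, y)
    return lambda Z: gaussian_kernel(Z, X, sigma) @ alpha

# Toy data (illustrative; the paper's experiments use an academic
# example and the two-moons data set)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.sin(3 * X[:, 0])
f = multi_penalty_regularizer(X, y, lam1=1e-3, lam2=1e-5)
```

Setting lam2 = 0 recovers single-penalty Tikhonov regularization; the penalty balancing principle proposed in the paper addresses how to choose the pair (lam1, lam2) in a data-driven way.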