We introduce a constructive approach to least squares algorithms with generalized K-norm regularization. Unlike previous studies, a stepping-stone function with adjustable parameters is constructed in the error decomposition. This makes the analysis flexible and potentially extensible to other algorithms. Using a projection technique for the sample error and the spectral theorem for the integral operator in the regularization error, we derive a learning rate.
In this paper, we consider the least squares regression algorithm with a generalized coefficient regularization term. A novel error decomposition involving a constructively defined stepping-stone function is introduced. By choosing appropriate parameters for this function, we derive a satisfactory learning rate under conditions on the target function and the capacity of the hypothesis space.
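To make the setting concrete, the following is a minimal sketch of one instance of coefficient-regularized least squares: an l2 penalty on the coefficient vector over a Gaussian kernel dictionary. The kernel choice, penalty, and parameter values here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix between sample sets X and Y.
    d = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d / (2 * sigma**2))

def coefficient_regularized_ls(X, y, lam=1e-3, sigma=1.0):
    # One instance of coefficient regularization: penalize the coefficient
    # vector alpha directly,
    #   min_alpha (1/m) ||K alpha - y||^2 + lam ||alpha||^2,
    # whose normal equations give a closed-form solution.
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K.T @ K / m + lam * np.eye(m), K.T @ y / m)
    return alpha, K

# Usage: fit a noisy sine curve from 50 samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=50)
alpha, K = coefficient_regularized_ls(X, y)
pred = K @ alpha  # fitted values at the training points
```

A "generalized" regularizer would replace `lam * ||alpha||^2` with other penalties on `alpha`; only the solver changes, not the structure of the problem.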
A standard assumption in the learning theory literature is that samples are drawn independently from an identical distribution with uniformly bounded output. This excludes common cases such as Gaussian noise. In this paper we relax these assumptions to a more general setting: samples are drawn from a sequence of unbounded and non-identical probability distributions. Using a drift error analysis and a Bennett inequality for unbounded random variables, we derive a satisfactory learning rate for the ERM algorithm.
Convex risk minimization is a widely used setting in learning theory. In this paper, we first give a perturbation analysis for such algorithms and then apply the result to differentially private learning algorithms. Our analysis requires only that the objective function be strongly convex, which extends our previous analysis to non-differentiable loss functions when constructing differentially private algorithms. Finally, an error analysis is provided to guide the selection of the parameters.
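The standard way strong convexity enters such constructions is through output perturbation: strong convexity bounds the sensitivity of the minimizer, which sets the noise scale. The sketch below illustrates this idea for regularized least squares; the sensitivity constant and noise distribution are assumptions for illustration, not the paper's exact calibration.

```python
import numpy as np

def dp_erm_output_perturbation(X, y, lam, eps, rng):
    # Output perturbation for a lam-strongly-convex ERM problem:
    #   min_w (1/n) ||X w - y||^2 + lam ||w||^2.
    # Strong convexity bounds the sensitivity of the minimizer by
    # O(1/(n * lam)); Laplace noise at scale sensitivity/eps then
    # yields eps-differential privacy (assuming bounded features/labels).
    n, d = X.shape
    w = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
    sensitivity = 2.0 / (n * lam)  # illustrative constant
    noise = rng.laplace(scale=sensitivity / eps, size=d)
    return w + noise

# Usage: a small private regression; large eps means little noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=200)
w_priv = dp_erm_output_perturbation(X, y, lam=0.1, eps=1e6, rng=rng)
```

The trade-off the error analysis captures: larger `lam` shrinks the sensitivity (less noise for the same `eps`) but biases the estimator away from the unregularized solution.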
Online learning algorithms are attractive because they rely on efficient iterative updates rather than solving a full optimization problem. In this paper, online learning with privacy protection is considered. A perturbation term is added to the classical online algorithm to obtain the differential privacy property. First, the distribution of the perturbation term is derived; then an error analysis of the new algorithm establishes convergence and a learning rate. From this error analysis, a theoretical choice of the parameters for differential privacy can be made.
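The noise-injection idea can be sketched as online gradient descent on the squared loss with a random perturbation added at each update. The Laplace perturbation, step size, and noise scale below are placeholders; the paper's contribution is deriving the correct perturbation distribution and parameter choices.

```python
import numpy as np

def private_online_least_squares(stream, eta=0.1, noise_scale=0.05, seed=0):
    # Online gradient descent on (w.x - y)^2 / 2, perturbing each
    # gradient step with noise (a sketch of the privacy mechanism).
    rng = np.random.default_rng(seed)
    w = None
    for x, y in stream:
        if w is None:
            w = np.zeros_like(x)
        grad = (w @ x - y) * x  # gradient of the squared loss at (x, y)
        w = w - eta * (grad + rng.laplace(scale=noise_scale, size=w.shape))
    return w

# Usage: recover a linear target from a noisy sample stream.
rng = np.random.default_rng(42)
w_true = np.array([1.0, -2.0])
stream = [(x, float(x @ w_true) + 0.1 * rng.normal())
          for x in rng.uniform(-1, 1, size=(500, 2))]
w_hat = private_online_least_squares(stream)
```

Each update touches one sample once, which is what makes the per-step perturbation a natural place to enforce differential privacy.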