In this paper, we propose a general method to devise maximum likelihood
penalized (regularized) algorithms with positivity constraints. Moreover, we
explain how to obtain ‘product forms’ of these algorithms. The algorithmic
method is based on Kuhn–Tucker first-order optimality conditions. Its application
domain is not restricted to the cases considered in this paper: it can be
applied to any convex objective function with linear constraints. It is
especially well suited to objective functions with a bounded domain
that completely encloses the domain of the (linear) constraints. The
Poisson noise case, typical of this last situation, and the additive Gaussian
noise case are both considered, each combined with various forms of
regularization functions, mainly quadratic and entropy terms. The algorithms are
applied to the deconvolution of synthetic images blurred by a realistic point
spread function similar to that of the Hubble Space Telescope operating in
the far-ultraviolet and corrupted by noise. The effect of relaxation
on the convergence speed of the algorithms is analysed. The particular
behaviour of the algorithms corresponding to different forms of regularization
functions is described. We show that the ‘prior’ image plays a key role in
the regularization and that the best results are obtained with Tikhonov
regularization with a Laplacian operator. The analyses of the Poisson and
additive Gaussian noise cases lead to similar conclusions. We bring to
the fore the close relationship between Tikhonov regularization using
derivative operators, and regularization by a distance to a ‘default image’
introduced by Horne (Horne K 1985 Mon. Not. R. Astron. Soc. 213 129–41).
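To make the idea of a positivity-preserving ‘product form’ concrete, the following is a minimal sketch of a multiplicative, Richardson–Lucy-type iteration for the Poisson noise case with a quadratic (Tikhonov) penalty on a discrete Laplacian of the image. It is a 1-D illustration under assumed periodic boundary conditions and a normalized point spread function; the function names and the particular gradient splitting are illustrative choices, not the exact algorithm of the paper.

```python
import numpy as np

def blur(x, psf):
    # circular convolution with the PSF (assumption: periodic boundaries)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))

def rl_tikhonov(y, psf, lam=0.01, n_iter=200):
    """Multiplicative (product-form) update for the Poisson likelihood with
    a quadratic penalty on the image Laplacian. Positivity is preserved
    because each iterate is the previous one times a non-negative factor
    (the penalty gradient is split into positive and negative parts)."""
    psf_flip = np.roll(psf[::-1], 1)        # adjoint of circular convolution
    x = np.full_like(y, y.mean())           # flat, strictly positive start
    lap = np.array([1.0, -2.0, 1.0])        # discrete Laplacian stencil
    for _ in range(n_iter):
        ratio = y / np.maximum(blur(x, psf), 1e-12)
        grad_pen = 2.0 * lam * np.convolve(
            np.convolve(x, lap, 'same'), lap, 'same')
        # split gradient: negative part feeds the numerator, positive part
        # the denominator, so both factors stay non-negative
        num = blur(ratio, psf_flip) + np.maximum(-grad_pen, 0.0)
        den = 1.0 + np.maximum(grad_pen, 0.0)   # H^T 1 = 1 (normalized PSF)
        x = x * num / den
    return x
```

With `lam = 0` this reduces to the classical Richardson–Lucy iteration; the penalty term biases the fixed point towards smooth images while the multiplicative structure keeps every iterate non-negative without an explicit projection.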