The linear minimum mean squared error (LMMSE) estimator is the best linear estimator for a Bayesian linear inverse problem with respect to the mean squared error. It arises as the solution operator of a Tikhonov-type regularized inverse problem with a particular quadratic discrepancy term and a particular quadratic regularization operator. To evaluate the LMMSE estimator, one must know the forward operator and the first two statistical moments of both the prior and the noise. If such knowledge is not available, one may approximate the LMMSE estimator from given samples. In this work, we investigate, in a finite-dimensional setting, how many samples are needed to approximate the LMMSE estimator reliably, in the sense that, with high probability, the mean squared error of the approximation is smaller than a given multiple of the mean squared error of the LMMSE estimator.
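The sample-based approximation mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's method: the dimensions, the forward operator `A`, the Gaussian prior and noise, and the sample size are all hypothetical choices made for the example. Given joint samples of the unknown `x` and the data `y`, the estimator's means and (cross-)covariances are replaced by their empirical counterparts in the standard formula x_hat(y) = m_x + C_xy C_yy^{-1} (y - m_y).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-dimensional setup: y = A x + e.
d, k, n = 3, 2, 10_000          # dim of unknown, dim of data, number of samples
A = rng.standard_normal((k, d))  # hypothetical forward operator
x = rng.standard_normal((n, d))              # samples from the prior
e = 0.1 * rng.standard_normal((n, k))        # samples of the noise
y = x @ A.T + e                              # corresponding data samples

# Empirical means and (cross-)covariances.
mx, my = x.mean(axis=0), y.mean(axis=0)
Xc, Yc = x - mx, y - my
C_xy = Xc.T @ Yc / (n - 1)       # sample cross-covariance of x and y, shape (d, k)
C_yy = Yc.T @ Yc / (n - 1)       # sample covariance of y, shape (k, k)

def lmmse(y_obs):
    """Sample-based LMMSE estimate: mx + C_xy C_yy^{-1} (y_obs - my)."""
    return mx + C_xy @ np.linalg.solve(C_yy, y_obs - my)
```

Applied to the data samples themselves, the empirical estimator should achieve a smaller mean squared error than the trivial estimate given by the prior mean, which is the kind of MSE comparison the sample-complexity question above is about.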