We study parameter estimation in linear Gaussian covariance models, which are p-dimensional Gaussian models with linear constraints on the covariance matrix. Maximum likelihood estimation for this class of models leads to a non-convex optimization problem which typically has many local maxima. Using recent results on the asymptotic distribution of extreme eigenvalues of the Wishart distribution, we provide sufficient conditions for any hill-climbing method to converge to the global maximum. Although we are primarily interested in the case in which n ≫ p, the proofs of our results utilize large-sample asymptotic theory under the scheme n/p → γ > 1. Remarkably, our numerical simulations indicate that our results remain valid for p as small as 2. An important consequence of this analysis is that for sample sizes n ≥ 14p, maximum likelihood estimation for linear Gaussian covariance models behaves as if it were a convex optimization problem.

1. Introduction. In many statistical analyses, the covariance matrix possesses a specific structure and must satisfy certain constraints. We refer to [Pou11] for a comprehensive review of covariance estimation in general and a discussion of numerous specific covariance matrix constraints. In this paper, we study Gaussian models with linear constraints on the covariance matrix. Simple examples of such models are correlation matrices and covariance matrices with prescribed zeros.

Linear Gaussian covariance models appear in various applications. They were introduced to study repeated time series [And70] and are used in various engineering problems [ZC05, Die07]. In Section 2, we describe in detail Brownian motion tree models, a particular class of linear Gaussian covariance models, and their applications to phylogenetics and network tomography.

To define Gaussian models with linear constraints on the covariance matrix, let S^p be the set of symmetric p × p matrices considered as a subset of