Semidefinite programs (SDPs) are powerful theoretical tools that have been studied for over two decades, but their practical use remains limited due to computational difficulties in solving large-scale, realistic-sized problems. In this paper, we describe a modified interior-point method for the efficient solution of large-and-sparse low-rank SDPs, which find applications in graph theory, approximation theory, control theory, sum-of-squares, etc. Given that the problem data is large-and-sparse, conjugate gradients (CG) can be used to avoid forming, storing, and factoring the large and fully-dense interior-point Hessian matrix, but the resulting convergence rate is usually slow due to ill-conditioning. Our central insight is that, for a rank-$k$, size-$n$ SDP, the Hessian matrix is ill-conditioned only due to a rank-$nk$ perturbation, which can be explicitly computed using a size-$n$ eigendecomposition. We construct a preconditioner to "correct" the low-rank perturbation, thereby allowing preconditioned CG to solve the Hessian equation in a few tens of iterations. This modification is incorporated within SeDuMi, and used to reduce the solution time and memory requirements of large-scale matrix-completion problems by several orders of magnitude.

Assumption 1 (Nondegeneracy). We assume:
1) (Slater's condition) There exist $X \succ 0$, $y$, and $S \succ 0$, such that $A_i \bullet X = b_i$ and $\sum_i y_i A_i + S = C$.
2) (Strict complementarity) $\operatorname{rank}(X^\star) + \operatorname{rank}(S^\star) = n$.

These are generic properties of SDPs, and are satisfied by almost all instances [10]. Note that Slater's condition is satisfied in solvers like SeDuMi [11] and MOSEK [12] using the homogeneous self-dual embedding technique [13].

We further assume that the data matrices $A_1, \dots, A_m$ are structured in a way that allows certain matrix-implicit operations to be performed efficiently.

Assumption 2 (Sparsity). Define the matrix $\mathbf{A} \triangleq [\operatorname{vec} A_1, \dots, \operatorname{vec} A_m]$.
We assume that matrix-vector products with $\mathbf{A}$, $\mathbf{A}^T$, and $(\mathbf{A}^T \mathbf{A})^{-1}$ may each be applied in $O(m)$ flops and memory.
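As a concrete (and hypothetical) instance of Assumption 2, suppose each data matrix $A_i$ has $O(1)$ nonzero elements, so $\mathbf{A}$ has $O(m)$ nonzeros in total. The sketch below, which is illustrative and not taken from the paper (the dimensions `n`, `m` and the construction of the $A_i$ are invented for the example), shows the three matrix-implicit operations using sparse matrix-vector products and a one-time sparse factorization:

```python
# Illustrative sketch of Assumption 2 (names and dimensions are assumptions,
# not from the paper): each A_i is sparse with O(1) nonzeros, so the
# n^2-by-m matrix A = [vec A_1, ..., vec A_m] has O(m) nonzeros in total.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 50, 200
rng = np.random.default_rng(0)

# Sample m *distinct* index pairs (i, j), i <= j, so the columns of A have
# disjoint supports and A^T A is guaranteed nonsingular.
pairs = [(i, j) for i in range(n) for j in range(i, n)]
chosen = rng.choice(len(pairs), size=m, replace=False)

# Each column of A is the vectorization of a sparse symmetric A_i.
cols = []
for k in chosen:
    i, j = pairs[k]
    Ai = sp.coo_matrix(([1.0, 1.0], ([i, j], [j, i])), shape=(n, n))
    cols.append(Ai.reshape((n * n, 1)))
A = sp.hstack(cols, format="csc")        # n^2-by-m with O(m) nonzeros

# Products with A and A^T are ordinary sparse mat-vecs: O(m) flops each.
y = rng.standard_normal(m)
v = A @ y                                 # A y
w = A.T @ v                               # A^T (A y)

# (A^T A)^{-1} applied through a precomputed sparse factorization; for
# structured data the factorization is cheap and each solve is fast.
AtA = (A.T @ A).tocsc()
solve_AtA = spla.factorized(AtA)          # factor once, reuse per solve
x = solve_AtA(w)                          # (A^T A)^{-1} w
```

The key design point is that $\mathbf{A}$ is never densified: all three operations reduce to sparse mat-vecs plus one small reusable factorization.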
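The abstract's central insight, that CG stalls only because of a low-rank perturbation which a preconditioner can "correct", can be illustrated on a toy problem. The sketch below is not the paper's actual preconditioner: it builds a matrix $H = D + UU^T$ (a well-conditioned diagonal $D$ plus a conditioning-destroying rank-$r$ term, with all sizes and scalings invented for the example) and corrects the low-rank term via the Woodbury identity:

```python
# Toy illustration (not the paper's method): CG on H = D + U U^T is slow
# because of the rank-r perturbation U U^T; a Woodbury-based preconditioner
# that corrects exactly that perturbation restores fast convergence.
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n, r = 500, 5

d = rng.uniform(1.0, 10.0, size=n)        # well-conditioned diagonal D
U = 100.0 * rng.standard_normal((n, r))   # low-rank term that wrecks conditioning
H = np.diag(d) + U @ U.T                  # ill-conditioned by a rank-r perturbation
b = rng.standard_normal(n)

iters = {"plain": 0, "pcg": 0}
def counter(key):
    def cb(xk): iters[key] += 1
    return cb

# Plain CG: convergence is slow due to ill-conditioning.
x1, _ = spla.cg(H, b, maxiter=5000, callback=counter("plain"))

# Preconditioner via the Woodbury identity:
#   (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1},
# requiring only a small r-by-r inverse and mat-vecs with U.
Ut_Dinv = U.T / d                                    # U^T D^{-1}
K = np.linalg.inv(np.eye(r) + Ut_Dinv @ U)           # small r-by-r inverse
def apply_Pinv(v):
    return v / d - Ut_Dinv.T @ (K @ (Ut_Dinv @ v))
P = spla.LinearOperator((n, n), matvec=apply_Pinv)

# Preconditioned CG: the corrected system converges almost immediately.
x2, _ = spla.cg(H, b, maxiter=5000, M=P, callback=counter("pcg"))
```

In this toy setting the preconditioner is the exact inverse, so PCG converges in one iteration; in the paper's setting the correction is only approximate, which is consistent with the reported "few tens" of PCG iterations.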