The use of a sparsity-promoting convex penalty on high-order linear prediction coefficients and residuals has been shown to improve the modeling of speech and other signals, as it addresses inherent limitations of standard linear prediction methods. However, this formulation is computationally more demanding, which may limit its use, in particular in embedded signal processing. This paper analyzes the algorithmic and computational aspects of the matrix structures associated with an alternating direction method of multipliers (ADMM) algorithm for solving the convex high-order sparse linear prediction problem. The paper also analyzes the inherent trade-off between solution accuracy and prediction gain, the objective performance measure, and shows that a few iterations suffice to achieve results similar to those of computationally more expensive interior-point methods.
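For concreteness, the sketch below shows one common convex formulation of sparse linear prediction, minimizing ||x - Xa||_1 + gamma*||a||_1 over the prediction coefficients a, and a plain ADMM iteration for it. This is a minimal illustration, not the paper's exact algorithm: the function and parameter names (sparse_lp_admm, gamma, rho, iters) are assumptions, and the structure-exploiting solvers and stopping criteria analyzed in the paper are not reproduced here.

```python
# Minimal ADMM sketch for the l1/l1 sparse linear prediction problem
#   minimize_a  ||x - X a||_1 + gamma * ||a||_1
# (illustrative formulation; names and parameter values are assumptions).
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def sparse_lp_admm(x, X, gamma=0.1, rho=1.0, iters=50):
    """Solve min_a ||x - X a||_1 + gamma*||a||_1 via ADMM.

    Stacks the problem as min ||C a - d||_1 with C = [X; gamma*I], d = [x; 0],
    then splits z = C a - d.  The a-update is a fixed least-squares solve, so
    C'C = X'X + gamma^2*I is factored once and the factor is reused in every
    iteration; exploiting its Toeplitz-like structure is where the further
    savings discussed in the paper would come in.
    """
    n, m = X.shape
    C = np.vstack([X, gamma * np.eye(m)])
    d = np.concatenate([x, np.zeros(m)])
    L = np.linalg.cholesky(C.T @ C)          # factor once, reuse per iteration
    z = np.zeros(n + m)
    u = np.zeros(n + m)
    for _ in range(iters):
        # a-update: least-squares solve with the cached Cholesky factor
        rhs = C.T @ (d + z - u)
        a = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        r = C @ a - d
        # z-update: proximal step (soft-thresholding)
        z = soft_threshold(r + u, 1.0 / rho)
        # dual update
        u = u + r - z
    return a
```

Once the factorization is cached, each iteration costs only two matrix-vector products and an elementwise shrinkage, which is consistent with the abstract's observation that a few cheap iterations can match the accuracy that matters in practice, while an interior-point method pays for a new factorization at every step.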