Abstract-Sparse approximation addresses the problem of approximately fitting a linear model with a solution having as few non-zero components as possible. While most sparse estimation algorithms rely on suboptimal formulations, this work studies the performance of exact optimization of ℓ0-norm-based problems through Mixed-Integer Programs (MIPs). Nine different sparse optimization problems are formulated, based on ℓ1, ℓ2, or ℓ∞ data misfit measures and involving either constrained or penalized formulations. For each problem, MIP reformulations allow exact optimization, with optimality proof, for moderate-size yet difficult sparse estimation problems. The algorithmic efficiency of all formulations is evaluated on sparse deconvolution problems. This study promotes error-constrained minimization of the ℓ0 norm as the most efficient choice when associated with ℓ1 and ℓ∞ misfits, whereas the ℓ2 misfit is more efficiently optimized with sparsity-constrained and sparsity-penalized problems. Exact ℓ0-norm optimization is then shown to outperform classical methods in terms of solution quality, both for over- and underdetermined problems. Finally, numerical simulations emphasize the relevance of the different ℓp fitting possibilities as a function of the statistical distribution of the noise. Such exact approaches are shown to be an efficient alternative, in moderate dimension, to classical (suboptimal) sparse approximation algorithms with the ℓ2 data misfit. They also provide an algorithmic solution to less common sparse optimization problems based on ℓ1 and ℓ∞ misfits. For each formulation, simulated test problems are proposed where optima have been successfully computed. Data and optimal solutions are made available as potential benchmarks for evaluating other sparse approximation methods.
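To illustrate the kind of MIP reformulation the abstract refers to, the following is a minimal sketch of a sparsity-constrained ℓ1-misfit problem, min ||y - Ax||_1 subject to ||x||_0 ≤ K, cast as a mixed-integer linear program via the standard big-M device (binary b_i activates x_i, auxiliary t_j bounds |y_j - (Ax)_j|). The solver (`scipy.optimize.milp`), the problem sizes, and the big-M value are illustrative assumptions, not choices taken from the paper.

```python
# Big-M MILP sketch of  min ||y - A x||_1  s.t.  ||x||_0 <= K.
# Solver, dimensions, and M are illustrative assumptions.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
m, n, K, M = 10, 8, 2, 10.0           # rows, cols, sparsity level, big-M
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[1, 5]] = [3.0, -2.0]          # ground-truth 2-sparse signal
y = A @ x_true                        # noiseless observations

# Decision vector z = [x (n continuous), b (n binary), t (m residual bounds)]
c = np.concatenate([np.zeros(2 * n), np.ones(m)])   # objective: sum of t

rows = np.vstack([
    np.hstack([ A, np.zeros((m, n)), -np.eye(m)]),            #  Ax - t <= y
    np.hstack([-A, np.zeros((m, n)), -np.eye(m)]),            # -Ax - t <= -y
    np.hstack([ np.eye(n), -M * np.eye(n), np.zeros((n, m))]),#  x <= M b
    np.hstack([-np.eye(n), -M * np.eye(n), np.zeros((n, m))]),# -x <= M b
    np.hstack([np.zeros((1, n)), np.ones((1, n)),
               np.zeros((1, m))]),                            # sum(b) <= K
])
ub = np.concatenate([y, -y, np.zeros(2 * n), [K]])
cons = LinearConstraint(rows, -np.inf, ub)

integrality = np.concatenate([np.zeros(n), np.ones(n), np.zeros(m)])
bounds = Bounds(
    np.concatenate([-M * np.ones(n), np.zeros(n), np.zeros(m)]),
    np.concatenate([ M * np.ones(n), np.ones(n), np.full(m, np.inf)]),
)

res = milp(c, constraints=cons, integrality=integrality, bounds=bounds)
x_hat = res.x[:n]
print(np.flatnonzero(np.abs(x_hat) > 1e-6))   # recovered support
```

Because the observations are noiseless and overdetermined here, the exact solver recovers the true 2-sparse support with zero misfit; on noisy or underdetermined instances, the same formulation returns a certified global optimum rather than a greedy approximation.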