Fast matrix multiplication algorithms may be useful, provided that their running time is good in practice. In particular, the leading coefficient of their arithmetic complexity needs to be small. Many subcubic algorithms have large leading coefficients, rendering them impractical. Karstadt and Schwartz (SPAA'17, JACM'20) demonstrated how to reduce these coefficients by sparsifying an algorithm's bilinear operator. Unfortunately, the problem of finding optimal sparsifications is NP-Hard. We obtain three new methods to this end, and apply them to existing fast matrix multiplication algorithms, thus improving their leading coefficients. These methods have an exponential worst-case running time, but run fast in practice and improve the performance of many fast matrix multiplication algorithms. Two of the methods are guaranteed to produce leading coefficients that, under some assumptions, are optimal.
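To illustrate what the leading coefficient measures, consider a recursive bilinear algorithm that multiplies 2x2 block matrices using t block multiplications and q block additions/subtractions per level. Its arithmetic cost follows the recurrence F(n) = t*F(n/2) + q*(n/2)^2, whose solution is F(n) = (1 + q/(t-4))*n^(log2 t) - (q/(t-4))*n^2, so the leading coefficient is 1 + q/(t-4). A minimal sketch (the operation counts 18, 15, and 12 for Strassen, Winograd, and the Karstadt-Schwartz sparsified variant are the well-known values; the helper names are ours):

```python
# Leading coefficient of a recursive <2,2,2>-bilinear algorithm that uses
# t block multiplications and q block additions/subtractions per level.
# Recurrence: F(n) = t*F(n/2) + q*(n/2)^2, F(1) = 1, which solves to
# F(n) = (1 + q/(t-4)) * n^(log2 t) - (q/(t-4)) * n^2.

def leading_coefficient(t: int, q: int) -> float:
    """Coefficient of the n^(log2 t) term (illustrative helper, not from the paper)."""
    return 1 + q / (t - 4)

def flops(t: int, q: int, n: int) -> int:
    """Unroll the recurrence directly as a sanity check (n a power of 2)."""
    if n == 1:
        return 1
    return t * flops(t, q, n // 2) + q * (n // 2) ** 2

if __name__ == "__main__":
    # Strassen (q=18): coefficient 7; Winograd (q=15): 6;
    # Karstadt-Schwartz sparsified basis (q=12): 5.
    for q in (18, 15, 12):
        print(q, leading_coefficient(7, q))
```

Sparsifying the bilinear operator reduces q without changing t, which is exactly why it lowers the leading coefficient while leaving the exponent log2(t) untouched.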