We refactor the universal law for the tensor product to express matrix multiplication as the product M · N of two matrices M and N, thus making it possible to use such matrix products to encode and transform algorithms performing matrix multiplication using techniques from linear algebra. We explore this possibility and show two stepwise refinements transforming the composition M · N into the Naïve and Strassen's matrix multiplication algorithms.

Inspection of the stepwise transformation of the composition M · N into the Naïve matrix multiplication algorithm shows that the steps of the transformation correspond to applying Gaussian elimination to the columns of M and to the rows of N, therefore providing explicit evidence on why "Gaussian elimination is not optimal", the aphorism serving as the title of the succinct paper introducing Strassen's matrix multiplication algorithm.

Although the end results are equations involving matrix products, our exposition builds upon previous work on the category of matrices (and the related category of finite vector spaces), which we extend by showing: why the direct sum (⊕, 0) monoid is not closed, a biproduct encoding of Gaussian elimination, and how to further apply it in the derivation of linear algebra algorithms.
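For concreteness, the two target algorithms of the refinements above can be sketched as follows. This is a minimal illustration, not the paper's categorical derivation: the Naïve algorithm is the standard triple summation, and Strassen's scheme is shown in its familiar seven-multiplication form for 2×2 matrices.

```python
def naive_mult(M, N):
    """Naïve matrix multiplication: C[i][j] = sum over k of M[i][k] * N[k][j]."""
    rows, inner, cols = len(M), len(N), len(N[0])
    return [[sum(M[i][k] * N[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def strassen_2x2(M, N):
    """Strassen's scheme for 2x2 matrices: seven multiplications
    instead of the eight used by the Naïve algorithm."""
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

# Both algorithms compute the same product M . N:
M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]
assert naive_mult(M, N) == strassen_2x2(M, N) == [[19, 22], [43, 50]]
```

Applied recursively to block matrices, the seven-multiplication scheme yields Strassen's sub-cubic asymptotic complexity.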