Sparse matrix operations are widely used in computational science and engineering applications such as quantum chemistry and finite element analysis, as well as in modern machine learning scenarios such as social network analysis and compressed deep neural networks. In the well-known article 'A View of the Parallel Computing Landscape', Asanovic et al. (2009) of the University of California, Berkeley listed sparse matrix computations as one of the most important parallel computing patterns. In recent decades, how to exploit massively parallel computing platforms for highly scalable, high-performance, and highly practical sparse matrix computations has remained a challenging open problem.

For this special issue, eight invited papers were selected through a peer-review procedure; they cover several different aspects of the architecture, algorithms, and applications of high-performance sparse matrix computations mentioned above.

The first part of the special issue focuses on exploring new architectural and compilation techniques for matrix computations. Its two papers propose a fast matrix multiplication architecture on field-programmable gate arrays (FPGAs) and several compilation optimizations for sparse tensor algebra, respectively.
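To make the topic concrete for readers less familiar with sparse matrix computations, the following is a minimal illustrative sketch of sparse matrix-vector multiplication (SpMV) in the widely used Compressed Sparse Row (CSR) format; the function and variable names are illustrative conventions only and are not drawn from any paper in this issue.

```python
# Minimal CSR (Compressed Sparse Row) sparse matrix-vector multiply.
# Illustrative sketch only: csr_spmv and the array names (indptr,
# indices, data) follow common convention, not any specific paper.

def csr_spmv(indptr, indices, data, x):
    """Compute y = A @ x for a matrix A stored in CSR format.

    indptr[i]..indptr[i+1] delimits the nonzeros of row i;
    indices holds their column positions, data their values.
    """
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Accumulate only the stored (nonzero) entries of row i.
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Example matrix (zeros are not stored):
# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data = [4.0, 1.0, 2.0, 3.0, 5.0]
print(csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

The irregular, data-dependent memory accesses visible in the inner loop (indexing x through indices) are exactly what makes SpMV and related kernels hard to scale on parallel hardware, and are the common thread behind the architectural, algorithmic, and compilation work collected here.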