Intelligent applications such as handwritten digit recognition suffer from idle resources, low efficiency, and high power consumption during data training and model inference. This paper addresses these problems by optimizing the LSTM model. The optimization consists of two parts: an improvement to the matrix-vector multiplication of the LSTM model and an improvement to the pruning algorithm. The improved matrix-vector multiplication reduces the computational load of the arithmetic unit and increases the recognition speed of the digit recognition system. The improved pruning algorithm reduces the storage consumed by the model parameters, so the system can process more images and achieve a higher recognition rate. Results from training and deploying the LSTM network show that the system runs about 250 times faster than digit recognition on the microcontroller core alone, and about 7.5 times faster than a general-purpose CPU.
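The paper does not give its kernels, but the two optimizations can be illustrated with a small NumPy sketch (the function names, dimensions, and the 90% sparsity level are illustrative assumptions, not the paper's actual design): the four LSTM gate weight matrices are stacked so that a single fused matrix-vector multiply serves all gates, and magnitude pruning zeroes the smallest weights so they need not be stored or multiplied.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero the smallest-magnitude entries of W (illustrative pruning).

    The zeroed weights need not be stored, shrinking the parameter
    memory footprint; a hardware design can also skip them during
    multiplication."""
    k = int(sparsity * W.size)                  # number of weights to drop
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step with the four gate matrices fused.

    W and U stack the input/forget/cell/output gate weights row-wise,
    so one matrix-vector product per input replaces four separate ones."""
    n = h.size
    z = W @ x + U @ h + b                       # single fused multiply
    i = 1 / (1 + np.exp(-z[0:n]))               # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))             # forget gate
    g = np.tanh(z[2*n:3*n])                     # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:4*n]))           # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 28, 64                            # e.g. one MNIST image row per step
W = magnitude_prune(rng.standard_normal((4 * n_hid, n_in)), 0.9)
U = magnitude_prune(rng.standard_normal((4 * n_hid, n_hid)), 0.9)
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
```

In a pruned model, only the surviving nonzero weights and their indices are kept, which is where the storage savings described above come from.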