With the impact of the COVID‐19 pandemic, demand for masked face recognition technology has increased. Masked face recognition suffers from obvious problems such as limited feature information and poor robustness to environmental variation. Current masked face recognition models do not extract features adequately, produce large errors on highly similar faces, and fail to cluster categories during detection, resulting in poor classification of masked faces and weak adaptation to changing environments. To address these problems, this paper designs a new masked face recognition model that takes an improved Single Shot Multibox Detector (SSD) as the face detector and replaces the VGG16 backbone of SSD with a Deep Residual Network (ResNet) to enlarge the receptive field. To better adapt ResNet to the network, we adjust its convolution kernel sizes. In addition, we fine‐tune the Xception network by designing a new fully connected layer, which shortens the training cycle. The weights of the three input samples (anchor, positive, and negative) are shared in a triplet network and the embeddings are clustered together to improve recognition accuracy. Meanwhile, we tune the alpha (margin) parameter of the triplet loss; a larger alpha improves recognition accuracy. We further classify and predict the face feature vectors with a multi‐layer perceptron (MLP) whose three hidden layers contain a total of 60 neural nodes, yielding higher classification accuracy. Moreover, the MFDD, RMFRD, and SMFRD datasets are fused to obtain high‐quality images of different scenes, and data augmentation and face alignment are applied during preprocessing, effectively reducing interference from the external environment during recognition.
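The triplet loss mentioned above can be sketched as follows: it pulls the anchor embedding toward the positive sample and pushes it away from the negative sample by at least the margin alpha. This is a minimal NumPy illustration of the standard formulation; the default margin of 0.2 is an assumption, not the value tuned in the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Standard triplet loss on embedding vectors.

    L = max(||a - p||^2 - ||a - n||^2 + alpha, 0)

    A larger alpha forces a wider gap between positive and
    negative pairs, which the abstract reports improves accuracy.
    The default alpha=0.2 here is illustrative only.
    """
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    return np.maximum(pos_dist - neg_dist + alpha, 0.0)

# When the negative is already farther away than the margin, the loss is zero:
print(triplet_loss(np.zeros(2), np.zeros(2), np.array([1.0, 0.0])))  # 0.0
```

In training, the three embeddings come from the same shared-weight network applied to the anchor, positive, and negative face images, as the abstract describes.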
According to the experimental results, the accuracy of masked face recognition reaches 98.3%, outperforming other mainstream models. In addition, a hyper‐parameter tuning experiment is carried out to improve the utilization of computing resources, and the tuned model shows better results than the compared networks across the evaluated indicators.
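The MLP classification head described above can be sketched with scikit-learn. The abstract states only that the three hidden layers contain 60 nodes in total, so the even 20/20/20 split, the 128‐dimensional embedding size, and the synthetic two‐class data below are all illustrative assumptions standing in for the real face feature vectors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for face embeddings from the backbone network:
# two well-separated clusters, one per identity class (assumed setup).
rng = np.random.default_rng(0)
emb_dim = 128
X = np.vstack([rng.normal(0.0, 0.3, (50, emb_dim)),
               rng.normal(1.0, 0.3, (50, emb_dim))])
y = np.array([0] * 50 + [1] * 50)

# Three hidden layers totaling 60 nodes; the 20/20/20 split is an assumption.
clf = MLPClassifier(hidden_layer_sizes=(20, 20, 20),
                    max_iter=500, random_state=0)
acc = clf.fit(X, y).score(X, y)
print(acc)
```

In the actual pipeline the inputs would be the triplet-trained face embeddings rather than random vectors, with one output class per enrolled identity.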