Deep learning models have become state-of-the-art in many areas, ranging from computer vision to marine and agricultural research. However, concerns have been raised regarding the transparency of their decisions, especially in the image domain. In this regard, Explainable Artificial Intelligence has been gaining popularity in recent years. The ProtoPNet model, which breaks down an image into prototypes and uses evidence gathered from those prototypes to classify the image, represents an appealing approach. Still, questions about its effectiveness arise when the application domain shifts from real-world natural images to gray-scale medical images. This work explores the applicability of prototypical part learning in medical imaging by experimenting with ProtoPNet on a breast mass classification task. We evaluated the applicability of this approach along two axes: classification performance and the validity of the explanations. We searched for the optimal hyperparameter configuration via random search and trained the model in a supervised five-fold cross-validation framework, using mammogram images cropped around the lesions together with ground-truth benign/malignant labels. We then compared ProtoPNet's performance metrics to those of the corresponding base architecture, ResNet18, trained under the same framework. In addition, an experienced radiologist provided a clinical viewpoint on the quality of the learned prototypes, the patch activations, and the global explanations. Our experiments achieved a Recall of 0.769 and an area under the receiver operating characteristic curve of 0.719. Although these results are not yet sufficient for clinical practice, the radiologist found ProtoPNet's explanations very intuitive, reporting a high level of satisfaction.
Therefore, we believe that prototypical part learning offers a reasonable and promising trade-off between classification performance and the quality of the related explanations.
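The evaluation protocol described above (stratified five-fold cross-validation with Recall and ROC-AUC as metrics) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the synthetic data and the logistic-regression stand-in replace the actual mammogram crops and the ProtoPNet/ResNet18 models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic binary-classification data standing in for benign/malignant crops.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

recalls, aucs = [], []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    # A simple stand-in classifier; in the paper this would be ProtoPNet
    # or the ResNet18 baseline trained on the same folds.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    recalls.append(recall_score(y[test_idx], clf.predict(X[test_idx])))
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

# Metrics are averaged over the five folds, as in the described framework.
mean_recall, mean_auc = float(np.mean(recalls)), float(np.mean(aucs))
print(f"Recall: {mean_recall:.3f}  ROC-AUC: {mean_auc:.3f}")
```

Averaging the per-fold metrics, rather than pooling predictions, keeps the comparison between the two architectures paired fold by fold.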