Deep neural networks (DNNs) have become essential for solving diverse complex problems and have achieved considerable success in computer vision tasks. However, DNNs are vulnerable to human-imperceptible adversarial distortion/noise patterns that can detrimentally impact safety-critical applications such as autonomous driving. In this paper, we introduce a novel robust-by-design deep learning approach, Sim-DNN, that detects adversarial attacks through an inner defense mechanism based on the degree of similarity between new data samples and autonomously chosen prototypes. The approach exploits the abrupt drop in the similarity score to detect concept changes caused by distorted/noisy data when comparing their similarities against the set of prototypes. Owing to the feed-forward prototype-based architecture of Sim-DNN, no re-training or adversarial training is required. To evaluate the robustness of the proposed method, we considered the recently introduced ImageNet-R dataset and different adversarial attack methods such as FGSM, PGD, and DDN. Different DNN methods were also considered in the analysis. Results show that the proposed Sim-DNN detects adversarial attacks with better performance than its mainstream competitors. Moreover, as Sim-DNN requires no adversarial training, its performance on clean and robust images is more stable than that of competitors which require an external defense mechanism to improve their robustness.
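The prototype-similarity idea behind the detection mechanism can be illustrated with a minimal sketch. Note that the choice of cosine similarity, the fixed threshold, and the function names below are illustrative assumptions; the paper's actual similarity measure, prototype-selection procedure, and decision rule are not reproduced here.

```python
import numpy as np

def max_prototype_similarity(feature, prototypes):
    """Cosine similarity between a feature vector and its closest prototype.

    `prototypes` is a (num_prototypes, dim) array of prototype feature vectors.
    """
    feature = feature / np.linalg.norm(feature)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return float(np.max(protos @ feature))

def is_adversarial(feature, prototypes, threshold=0.7):
    """Flag a sample whose similarity to every prototype drops below a threshold.

    An abrupt drop in similarity to all prototypes signals a concept change,
    e.g. one caused by an adversarial perturbation. The threshold here is an
    assumed fixed constant, purely for illustration.
    """
    return max_prototype_similarity(feature, prototypes) < threshold

# Toy example: three orthogonal prototypes in a 3-D feature space.
prototypes = np.eye(3)
clean = np.array([0.9, 0.1, 0.0])      # close to the first prototype
shifted = np.array([1.0, 1.0, 1.0])    # far from every prototype
```

A feature vector near one of the prototypes yields a high similarity score and passes, while a vector far from all prototypes triggers the detector without any re-training or adversarial training.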