Deep learning models are widely used in healthcare systems, yet they are themselves vulnerable to attacks. Notably, the black-box nature of deep learning models makes such attacks difficult to detect. Furthermore, because of the sensitivity of medical data, adversarial attacks in healthcare systems pose serious security and privacy threats. In this paper, we provide a comprehensive analysis of adversarial attacks on medical image analysis, covering two attack methods, FGSM and PGD, applied either to the entire image or to a part of it. The partial attacks use regions of various sizes, applied both individually and in combination. We use three medical datasets to examine the impact of these attacks on model accuracy and robustness. Finally, we provide a complete implementation of the attacks and discuss the results. Our results reveal the weaknesses and robustness of four deep learning models and show how varying perturbations influence model behaviour with respect to specific image regions and critical features.
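
As a rough illustration of the attack setup described above, the following is a minimal sketch of a one-step FGSM perturbation with an optional spatial mask for the partial-image variant. The toy model, placeholder image, label, region, and epsilon value here are assumptions for demonstration, not the paper's actual models, datasets, or parameters; PGD would iterate a similar step with a projection back into the epsilon-ball.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in classifier; the paper's four deep learning models are not reproduced here.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
model.eval()

def fgsm_attack(x, y, epsilon, mask=None):
    """One-step FGSM; a 0/1 `mask` restricts the perturbation to a
    region of the image for the partial-attack variant."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    perturbation = epsilon * x_adv.grad.sign()
    if mask is not None:
        perturbation = perturbation * mask
    return (x_adv + perturbation).clamp(0, 1).detach()

# Example: perturb only a central 16x16 patch of a 64x64 placeholder "scan".
x = torch.rand(1, 1, 64, 64)   # placeholder image, not real medical data
y = torch.tensor([1])          # placeholder label
mask = torch.zeros_like(x)
mask[..., 24:40, 24:40] = 1.0
x_adv = fgsm_attack(x, y, epsilon=0.03, mask=mask)
```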