Expertise in auditing AI systems in the medical domain is only now being accumulated. Conformity assessment procedures will require AI systems: i) to be transparent, ii) not to base decisions solely on algorithms, and iii) to include safety assurance cases in the documentation to facilitate technical audits. Here, we are interested in obtaining transparency for machine learning (ML) applied to the classification of retinal conditions. Achieving high performance metrics with ML has become common practice. However, in the medical domain, algorithmic decisions need to be supported by explanations. We aim to build a support tool for ophthalmologists that can: i) explain algorithmic decisions to the human agent by automatically extracting rules from the learned ML models; ii) keep the ophthalmologist in the loop by formalising expert rules and incorporating this expert knowledge into the argumentation machinery; iii) build safety cases by creating assurance argument patterns for each diagnosis.

Methods: For the learning task, we used a dataset consisting of 699 OCT images: 126 of the Normal class, 210 with Diabetic Retinopathy (DR), and 363 with Age-Related Macular Degeneration (AMD). The dataset comprises patients from the Ophthalmology Department of the County Emergency Hospital of Cluj-Napoca. All ethical norms and procedures, including anonymisation, were followed. We applied three machine learning algorithms: decision tree A.
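To make the rule-extraction aim concrete, the following is a minimal Python sketch assuming scikit-learn: it trains a shallow decision tree on synthetic data with the dataset's class distribution and exports the learned decision path as IF-THEN style rules that an ophthalmologist could inspect. The feature names and the random data are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of rule extraction from a trained decision tree.
# Feature names and data below are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical per-image features derived from OCT scans
# (e.g. retinal layer thickness statistics).
feature_names = ["rnfl_thickness", "drusen_area", "fluid_volume"]
X = rng.normal(size=(699, 3))

# Class distribution mirroring the dataset:
# 126 Normal (0), 210 DR (1), 363 AMD (2).
y = np.array([0] * 126 + [1] * 210 + [2] * 363)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the tree as human-readable IF-THEN rules,
# which is one way to surface the model's decision logic.
print(export_text(clf, feature_names=feature_names))
```

A shallow tree is used deliberately: limiting the depth keeps the extracted rule set small enough for a clinician to review, which is the point of the transparency requirement above.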