“…Typically, a large number of images is required to train an algorithm to distinguish between images with small but potentially significant differences in features. 8 In contrast to previous studies, which trained algorithms on large numbers of images, Thanellas et al 6 trained their algorithm on pixel-level annotations rather than slice-level-annotated images, adapting the approach to a smaller sample size. 9 As a result, the algorithm classifies every pixel in the axial images presented to it and predicts the presence of SAH at both the patient level and the slice level.…”
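The pixel-to-slice-to-patient aggregation described above can be sketched as follows. This is a minimal illustrative example, not the authors' actual model: the per-pixel probabilities are simulated (in practice they would come from a trained segmentation network), and the thresholds `PIXEL_THRESHOLD` and `MIN_POSITIVE_PIXELS` are hypothetical parameters chosen only to demonstrate the aggregation logic.

```python
import numpy as np

# Hypothetical per-pixel SAH probabilities for axial slices (H x W each).
# Simulated here to show the pixel -> slice -> patient aggregation only.
rng = np.random.default_rng(0)
slices = [rng.random((4, 4)) * 0.3 for _ in range(3)]  # three low-probability slices
slices.append(np.full((4, 4), 0.9))                    # one high-probability slice

PIXEL_THRESHOLD = 0.5    # a pixel counts as SAH if its probability exceeds this
MIN_POSITIVE_PIXELS = 2  # a slice counts as positive if enough pixels are positive

def slice_positive(pixel_probs: np.ndarray) -> bool:
    """Slice-level prediction derived from pixel-level classifications."""
    return int((pixel_probs > PIXEL_THRESHOLD).sum()) >= MIN_POSITIVE_PIXELS

slice_preds = [slice_positive(s) for s in slices]
patient_pred = any(slice_preds)  # patient-level: any slice predicted positive

print(slice_preds)   # [False, False, False, True]
print(patient_pred)  # True
```

Classifying at the pixel level means each slice yields many labeled training examples rather than one, which is one way a smaller set of scans can still provide enough supervision.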