IMPORTANCE Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. OBJECTIVE To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. DESIGN, SETTING, AND PARTICIPANTS In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. MAIN OUTCOMES AND MEASUREMENTS Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve and algorithm specificity compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated. RESULTS Overall, 144 231 screening mammograms from 85 580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166 578 examinations from 68 008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). 
Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. CONCLUSIONS AND RELEVANCE While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine (continued)
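The abstract above reports that blending algorithm scores with the radiologist's recall decision improved specificity at matched sensitivity. The exact aggregation is not given in the abstract, so the sketch below uses a simple convex blend of the mean algorithm score and the binary recall decision; the `weight` parameter and the example scores are illustrative assumptions, not values from the study.

```python
import numpy as np

def ensemble_score(algorithm_scores, radiologist_recall, weight=0.5):
    """Blend the mean of several algorithm scores (each in [0, 1]) with a
    radiologist's binary recall decision (0 = no recall, 1 = recall).
    The convex-blend form and the weight are assumptions for illustration;
    the study's actual ensembling method is not described in the abstract."""
    algo = float(np.mean(algorithm_scores))
    return (1.0 - weight) * algo + weight * float(radiologist_recall)

# Example: three hypothetical algorithm scores for one mammogram that the
# radiologist chose to recall.
combined = ensemble_score([0.62, 0.55, 0.71], radiologist_recall=1)
```

A threshold on `combined` would then be tuned on a validation set to match the radiologists' sensitivity operating point before comparing specificity.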
Automated classification of medical images is an increasingly important tool for physicians in their daily work. However, due to its computational complexity, this task remains one of the major open challenges in content-based image retrieval (CBIR). In this paper, a medical image classification approach is proposed. The method comprises two main phases. The first is a pre-processing step in which a texture- and shape-based feature vector is extracted; a feature selection approach is then applied using a Genetic Algorithm (GA). The proposed GA uses the kNN classification error as its fitness function, enabling it to find a combination of features that yields optimal accuracy. In the second phase, classification is performed using a Random Forest classifier and a supervised multi-class classifier based on the support vector machine (SVM) to classify X-ray images.
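The GA-based feature selection with a kNN-error fitness function can be sketched as follows. Bit-string individuals encode feature subsets, and fitness is one minus the leave-one-out kNN error on the selected features, as the abstract describes; the population size, generation count, mutation rate, tournament scheme, and toy data are illustrative guesses, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_error(X, y, mask, k=3):
    """Leave-one-out kNN classification error on the features selected by
    the binary mask. An empty mask is penalized with the worst error."""
    Xs = X[:, mask.astype(bool)]
    if Xs.shape[1] == 0:
        return 1.0
    errors = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                       # leave the query point out
        nn = np.argsort(d)[:k]
        errors += np.bincount(y[nn]).argmax() != y[i]
    return errors / len(y)

def ga_select(X, y, pop=20, gens=15, p_mut=0.1):
    """Tiny GA over bit-string feature masks; fitness is 1 - kNN error.
    Truncation selection, single-point crossover, bit-flip mutation."""
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        fitness = np.array([1.0 - knn_error(X, y, ind) for ind in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)        # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut    # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        population = np.array(children)
    fitness = np.array([1.0 - knn_error(X, y, ind) for ind in population])
    return population[fitness.argmax()]

# Toy data: only the first two of six features are informative.
X = rng.normal(size=(60, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
mask = ga_select(X, y)
```

The returned `mask` would then feed the second-phase Random Forest and SVM classifiers, which are not shown here.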
The increasing number of medical images stored every day presents a unique opportunity for content-based medical image retrieval (CBMIR) systems. In this paper, we propose a content-based medical image retrieval method for annotating liver CT scan images in order to generate a structured report. To this end, we use the Bidimensional Empirical Mode Decomposition (BEMD) and then apply the Gabor wavelet transform to extract the mean and standard deviation as feature descriptors. Finally, a proposed similarity distance is employed to retrieve the training images most similar to the query image, and a majority voting scheme is used to select the annotations for an unannotated image. We used the IMAGECLEF 2015 annotation dataset and obtained a score of 88.9%.
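The retrieval-and-voting step can be sketched as below. The abstract does not specify its proposed similarity distance, so plain Euclidean distance stands in for it here; the two-dimensional descriptors and the labels in the example are entirely made up for illustration.

```python
import numpy as np
from collections import Counter

def retrieve_and_vote(query, train_feats, train_labels, k=5):
    """Rank training images by descriptor distance to the query and return
    the majority label among the top-k matches. Euclidean distance is a
    stand-in for the paper's proposed similarity measure."""
    d = np.linalg.norm(train_feats - query, axis=1)
    top = np.argsort(d)[:k]
    return Counter(train_labels[i] for i in top).most_common(1)[0][0]

# Hypothetical descriptors (e.g. mean/std of Gabor responses on BEMD modes)
# for five annotated training images, with made-up annotation labels.
train_feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                        [5.0, 5.0], [5.1, 5.0]])
train_labels = np.array(["cyst", "cyst", "cyst", "lesion", "lesion"])
annotation = retrieve_and_vote(np.array([0.05, 0.05]), train_feats,
                               train_labels, k=3)
```

In the full system, the winning annotations would be assembled into the structured report for the query scan.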