ABSTRACT

Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state of the art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state of the art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements in the field of automatic classification in digital pathology.
Accurately identifying and categorizing cancer structures/sub-types in histological images is an important clinical task involving a considerable workload and a specific subspecialty of pathologists. Digitizing pathology is a current trend that provides large amounts of visual data, allowing a faster and more precise diagnosis through the development of automatic image-analysis techniques. Recent studies have shown promising results for the automatic analysis of cancer tissue using deep learning strategies that automatically extract and organize the discriminative information in the data. This paper explores deep learning methods for the automatic analysis of hematoxylin and eosin stained histological images of breast cancer and lymphoma. In particular, a deep learning approach is proposed for two different use cases: the detection of invasive ductal carcinoma in breast histological images and the classification of lymphoma subtypes. Both use cases are addressed by adopting a residual convolutional neural network that is part of a convolutional autoencoder network (i.e., FusionNet). Performance was evaluated on public datasets of digital histological images and compared with that of different deep neural networks (UNet and ResNet), as well as with state-of-the-art deep learning approaches. The experimental results show an improvement of 5.06% in F-measure for the detection task and of 1.09% in accuracy for the classification task.

INDEX TERMS: Histological images, deep learning, multi-classification, detection.
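The residual building block underlying the networks mentioned above (output = input + learned residual) can be sketched in a minimal, framework-free form. This is an illustrative assumption, not the paper's FusionNet implementation: the `residual_block` function, the use of plain matrix products as stand-ins for convolutions, and the random weights are all hypothetical.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear unit."""
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-skip residual block: output = x + F(x).
    F is a toy two-layer transform; matrix products stand in
    for the 1x1 convolutions of a real residual CNN."""
    h = relu(x @ w1)
    return x + h @ w2

# Toy forward pass on a batch of 4 feature vectors of width 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8)
```

The skip connection means that with all-zero weights the block reduces to the identity, which is what makes very deep residual networks trainable in practice.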
New technological advances in automated microscopy have given rise to large volumes of data, making human-based analysis infeasible and heightening the need for automatic systems in high-throughput microscopy applications. In particular, in the field of fluorescence microscopy, automatic image-analysis tools make an essential contribution to increasing the statistical power of the cell-analysis process. The development of these automatic systems is a difficult task, due both to the diversification of the staining patterns and to the local variability of the images. In this paper, we present an unsupervised approach for automatic cell segmentation and counting, namely CSC, in high-throughput microscopy images. The segmentation is performed by dividing the whole image into square patches that undergo gray-level clustering followed by adaptive thresholding. Subsequently, cell labeling is obtained by detecting the centers of the cells, using both the distance transform and curvature analysis, and by applying a region-growing process. The advantages of CSC are manifold. The foreground-detection process works on gray levels rather than on individual pixels, so it proves to be very efficient. Moreover, the combination of distance transform and curvature analysis makes the counting process very robust to clustered cells. A further strength of the CSC method is the limited number of parameters that must be tuned; indeed, two versions of the method have been considered, CSC-7 and CSC-3, depending on the number of parameters to be tuned. The CSC method has been tested on several publicly available datasets of real and synthetic images. Results in terms of standard metrics and spatially-aware measures show that CSC outperforms state-of-the-art techniques.
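The counting pipeline summarized above (thresholding, distance transform to locate cell centers, then labeling) can be sketched on a synthetic image. This is a simplified illustration, not the CSC method itself: the global threshold and connected-component labeling below are stand-ins for CSC's adaptive thresholding and region growing, and the synthetic "cells" are assumptions for demonstration.

```python
import numpy as np
from scipy import ndimage

# Synthetic image with two circular "cells" (hypothetical stand-in
# for a fluorescence-microscopy patch).
img = np.zeros((64, 64), dtype=float)
yy, xx = np.mgrid[:64, :64]
for cy, cx in [(20, 20), (44, 44)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 8 ** 2] = 1.0

# Step 1: foreground detection (a simple global threshold here,
# in place of CSC's gray-level clustering + adaptive thresholding).
mask = img > 0.5

# Step 2: distance transform; its local maxima indicate cell centers.
dist = ndimage.distance_transform_edt(mask)

# Step 3: label connected regions (stand-in for region growing) and count.
labels, n_cells = ndimage.label(mask)

# One center per labeled region: the distance-transform maximum.
centers = [
    np.unravel_index(np.argmax(np.where(labels == i, dist, 0)), dist.shape)
    for i in range(1, n_cells + 1)
]
print(n_cells)  # 2
```

In a real CSC-style pipeline the distance-transform maxima would be combined with curvature analysis to split touching cells, which simple connected-component labeling cannot do.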