Background: Bone marrow cytology is required to make a hematological diagnosis, influencing critical clinical decision points in hematology. However, bone marrow cytology is tedious, limited to experienced reference centers, and associated with inter-observer variability. This may lead to a delayed or incorrect diagnosis, leaving an unmet need for innovative supporting technologies. Methods: We developed an end-to-end deep learning-based system for automated bone marrow cytology. Starting with a bone marrow aspirate digital whole slide image, our system rapidly and automatically detects suitable regions for cytology, and subsequently identifies and classifies all bone marrow cells in each region. This collective cytomorphological information is captured in a representation called the Histogram of Cell Types (HCT), which quantifies the probability distribution of bone marrow cell classes and acts as a cytological patient fingerprint. Results: Our system achieves high accuracy in region detection (0.97 accuracy, 0.99 ROC AUC) and in cell detection and classification (0.75 mean average precision, 0.78 average F1-score, 0.31 log-average miss rate). Conclusions: The HCT has the potential to support more efficient and accurate diagnosis in hematology, advancing AI-enabled computational pathology.
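The HCT described above is essentially a slide-level aggregation of per-cell classifier outputs. Below is a minimal Python sketch of that idea, assuming each detected cell yields a class-probability vector from an upstream classifier; the class names and the mean-aggregation rule are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a Histogram of Cell Types (HCT): aggregate per-cell
# class probabilities into one slide-level distribution. Class names and
# the averaging rule are assumptions for illustration only.
import numpy as np

CELL_CLASSES = ["blast", "neutrophil", "lymphocyte", "erythroblast"]  # hypothetical subset

def histogram_of_cell_types(cell_probs: np.ndarray) -> np.ndarray:
    """Aggregate per-cell class probabilities (n_cells x n_classes)
    into a single slide-level probability distribution."""
    hct = cell_probs.mean(axis=0)   # average the soft predictions
    return hct / hct.sum()          # renormalize to a distribution

# Example: 3 detected cells, 4 candidate classes
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.6, 0.1, 0.1],
                  [0.1, 0.1, 0.7, 0.1]])
print(dict(zip(CELL_CLASSES, histogram_of_cell_types(probs).round(3))))
```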
Chest radiography has become the modality of choice for diagnosing pneumonia. However, analyzing chest X-ray images can be tedious and time-consuming, and requires expert knowledge that may not be available in less-developed regions; therefore, computer-aided diagnosis systems are needed. Recently, many classification systems based on deep learning have been proposed. Despite their success, the high development cost of deep networks remains a hurdle to deployment. Deep transfer learning (or simply transfer learning) reduces development cost by borrowing architectures from trained models, followed by slight fine-tuning of some layers. Nevertheless, whether deep transfer learning is more effective than training from scratch in the medical setting remains an open research question for many applications. In this work, we investigate the use of deep transfer learning to classify pneumonia in chest X-ray images. Three models, ResNet-50, Inception V3, and DenseNet121, were trained separately through transfer learning and from scratch. Experimental results demonstrate that, with slight fine-tuning, deep transfer learning brings a performance advantage over training from scratch: the former achieved a 4.1% to 52.5% larger area under the curve (AUC) than the latter, suggesting the effectiveness of deep transfer learning for classifying pneumonia in chest X-ray images.
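For readers unfamiliar with the fine-tuning setup described above, the following is a minimal PyTorch sketch of transfer learning with a pretrained ResNet-50: the ImageNet backbone is frozen and only a new two-class head is trained. The layer choices, learning rate, and dummy batch are illustrative assumptions, not the study's actual configuration.

```python
# Sketch of transfer learning for two-class pneumonia screening, assuming
# a torchvision ResNet-50 pretrained on ImageNet. Hyperparameters and the
# frozen-backbone choice are illustrative, not those of the study.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the backbone; fine-tune only the new classification head.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # normal vs. pneumonia

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of chest X-rays.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```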
Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on digital images from The Cancer Genome Atlas (TCGA), or use it as a validation source. This study shows that there are tissue source site (TSS)-specific patterns in TCGA images that can be used to identify contributing institutions without any explicit training. Furthermore, a model trained for cancer subtype classification was observed to exploit such TSS-specific patterns within digital slides to classify cancer types. Digital scanner configuration and noise, tissue stain variation and artifacts, and source-site patient demographics are among the factors that likely account for the observed bias. Researchers should therefore be cautious of such bias when using histopathology datasets to develop and train deep networks.
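One simple way to probe for such site-specific patterns is to fit a linear classifier on frozen deep features and measure how well it predicts the tissue source site; high held-out accuracy indicates that the features encode site identity. The sketch below illustrates this with synthetic placeholder features and labels, since the actual TCGA features are not reproduced here.

```python
# Illustrative probe for site-specific signal: a linear classifier fit on
# frozen deep features to predict the tissue source site. Feature matrix
# and site labels here are synthetic placeholders, not TCGA data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 1024))  # stand-in for deep features
sites = rng.integers(0, 10, size=1000)    # stand-in for TSS labels

X_tr, X_te, y_tr, y_te = train_test_split(
    features, sites, stratify=sites, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("site-prediction accuracy:", probe.score(X_te, y_te))
```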
Significance: The firefly enzyme luciferase has been used in a wide range of biological assays, including bioluminescence imaging of adenosine triphosphate (ATP). The biosensor Syn-ATP uses subcellular targeting of luciferase to nerve terminals for optical measurement of ATP in this compartment. Manual analysis of Syn-ATP signals is challenging due to signal heterogeneity and cellular motion during long imaging sessions. Here, we leveraged machine learning tools to develop a method for analyzing bioluminescence images. Aim: Our goal was to create a semiautomated pipeline for the analysis of bioluminescence imaging to improve measurements of ATP content in nerve terminals. Approach: We developed an image analysis pipeline that applies machine learning toolkits to distinguish neurons from background signals and exclude neural cell bodies, while also incorporating user input. Results: A side-by-side comparison of manual and semiautomated image analysis demonstrated that the latter improves the precision and accuracy of ATP measurements. Conclusions: Our method streamlines data analysis and reduces user-introduced bias, thus enhancing the reproducibility and reliability of quantitative ATP imaging in nerve terminals.
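The paper's exact toolkits are not specified in this abstract; as a rough stand-in, the sketch below shows the kind of semiautomated region extraction described, using classical scikit-image operations to keep bright punctate signals while excluding large blobs that would correspond to cell bodies. The size thresholds are assumed values for illustration.

```python
# Rough sketch of semiautomated ROI extraction: keep bright regions above
# an Otsu threshold, drop noise specks, and exclude large blobs (assumed
# cell bodies). The thresholds are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects

def segment_terminals(img: np.ndarray, max_area: int = 500) -> np.ndarray:
    """Return a boolean mask of putative nerve terminals."""
    mask = img > threshold_otsu(img)
    mask = remove_small_objects(mask, min_size=5)  # drop noise specks
    labeled = label(mask)
    for region in regionprops(labeled):
        if region.area > max_area:                 # assumed soma-size cutoff
            mask[labeled == region.label] = False
    return mask
```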
Traces of bias appearing in deep networks are a serious reliability issue that can raise significant ethical and generalization concerns. Recent studies report that deep features extracted from the histopathology images of The Cancer Genome Atlas (TCGA), the largest publicly available archive, can accurately classify whole slide images (WSIs) by acquisition site, even though these features were extracted primarily to discriminate cancer types. This is clear evidence that the deep neural networks (DNNs) unexpectedly detect patterns specific to the source site, i.e., the hospital of origin, rather than histomorphologic patterns, a biased behavior that degrades trust and generalization. This observation motivated us to propose a method to alleviate the destructive impact of hospital bias through a novel feature selection process. To this end, we propose an evolutionary strategy to select a small set of optimal features that not only accurately represent the histological patterns of tissue samples but also exclude the features contributing to internal bias toward the institution. The objective function for the optimal subset selection is to minimize the accuracy of a model classifying the source institutions, which serves as a bias indicator. In the conducted experiments, features selected from those extracted by a state-of-the-art network trained on TCGA images (KimiaNet) considerably decreased the institutional bias while improving the quality of the features for discriminating cancer types. In addition, the selected features significantly improved external validation results compared to the entire feature set, which had been negatively affected by bias. The proposed scheme is model-independent and can be employed whenever a bias indicator can be defined as a participating objective in a feature selection process, even with unknown bias sources.
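As a toy illustration of this bias-aware selection idea, the sketch below evolves binary feature masks with a simple genetic algorithm, scoring each mask by cancer-type accuracy minus source-site accuracy (the bias indicator to be minimized). The data, fitness weighting, and GA settings are all illustrative assumptions, not the authors' exact evolutionary strategy.

```python
# Toy genetic algorithm for bias-aware feature selection: reward features
# that predict cancer type, penalize features that predict the source site.
# Data, weights, and GA settings are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))           # stand-in deep features
y_cancer = rng.integers(0, 3, size=300)  # cancer-type labels
y_site = rng.integers(0, 5, size=300)    # acquisition-site labels

def fitness(mask: np.ndarray) -> float:
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return -np.inf
    clf = LogisticRegression(max_iter=500)
    acc_cancer = cross_val_score(clf, X[:, cols], y_cancer, cv=3).mean()
    acc_site = cross_val_score(clf, X[:, cols], y_site, cv=3).mean()
    return acc_cancer - acc_site         # reward utility, penalize bias

pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(10):                      # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # keep the fittest half
    children = parents.copy()
    flips = rng.random(children.shape) < 0.05
    children[flips] ^= 1                 # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best).size)
```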
Background: Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on digital images from The Cancer Genome Atlas (TCGA), or use it as a validation source. One crucial factor that seems to have been widely ignored is the internal bias originating from the institutions that contributed whole slide images (WSIs) to the TCGA dataset, and its effect on models trained on this dataset. Methods: 8,579 paraffin-embedded, hematoxylin and eosin stained digital slides were selected from the TCGA dataset. More than 140 medical institutions (acquisition sites) contributed to this dataset. Two deep neural networks (DenseNet121 and KimiaNet) were used to extract deep features at 20× magnification. DenseNet121 was pre-trained on non-medical objects; KimiaNet has the same architecture but was trained for cancer type classification on TCGA images. The extracted deep features were later used to detect each slide's acquisition site, and also for slide representation in image search. Results: DenseNet121's deep features could distinguish acquisition sites with 70% accuracy, whereas KimiaNet's deep features revealed acquisition sites with more than 86% accuracy. These findings suggest that there are acquisition-site-specific patterns that deep neural networks can pick up. It was also shown that these medically irrelevant patterns can interfere with other applications of deep learning in digital pathology, namely image search. Summary: This study shows that there are acquisition-site-specific patterns that can be used to identify tissue acquisition sites without any explicit training. Furthermore, a model trained for cancer subtype classification was observed to exploit such medically irrelevant patterns to classify cancer types. Digital scanner configuration and noise, tissue stain variation and artifacts, and source-site patient demographics are among the factors that likely account for the observed bias. Researchers should therefore be cautious of such bias when using histopathology datasets to develop and train deep networks.
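The image-search use of deep features mentioned above can be pictured as nearest-neighbour retrieval over slide-level feature vectors: if the features encode the acquisition site, retrieved matches will skew toward same-site slides. A minimal sketch with synthetic placeholder vectors:

```python
# Sketch of deep-feature image search: slides are represented by feature
# vectors and queried by nearest neighbours. Vectors here are synthetic
# placeholders, not actual KimiaNet or DenseNet121 features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
slide_features = rng.normal(size=(500, 1024))  # one vector per WSI
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(slide_features)

query = slide_features[0:1]                    # query slide (matches itself first)
distances, matches = index.kneighbors(query)
print("top matches:", matches[0])
```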