Microscopic analysis of breast tissue is necessary for a definitive diagnosis of breast cancer, the most common cancer among women. Pathology examination requires time-consuming scanning through tissue images under different magnification levels to find clinical assessment clues and produce a correct diagnosis. Advances in digital imaging techniques offer the possibility of assessing pathology images with computer vision and machine learning methods, which could automate some of the tasks in the diagnostic pathology workflow. Such automation could provide fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independently of their magnification using convolutional neural networks (CNNs). We propose two different architectures: a single-task CNN that predicts malignancy, and a multi-task CNN that predicts both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on the BreaKHis dataset. Experimental results show that our magnification-independent CNN approach improves on the performance of the magnification-specific model. With this limited set of training data, our results are comparable with previous state-of-the-art results obtained with hand-crafted features. However, unlike previous methods, our approach can directly benefit from additional training data, and such data could be captured at the same or different magnification levels as the existing data.
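A minimal sketch of the multi-task idea, assuming a PyTorch implementation with hypothetical layer sizes: a shared convolutional trunk feeds two classification heads, one for malignancy and one for the BreaKHis magnification level (40x, 100x, 200x, 400x). This is an illustrative sketch, not the architecture reported above.

```python
# Illustrative multi-task CNN sketch (layer sizes are assumptions, not the paper's model).
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self, num_magnifications=4):
        super().__init__()
        # Shared convolutional trunk producing one feature vector per image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # Task-specific heads share the same features.
        self.malignancy_head = nn.Linear(64, 2)                      # benign vs. malignant
        self.magnification_head = nn.Linear(64, num_magnifications)  # 40x, 100x, 200x, 400x

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.malignancy_head(z), self.magnification_head(z)

# Training would minimize the sum of two cross-entropy losses (the weighting is an assumption).
model = MultiTaskCNN()
mal_logits, mag_logits = model(torch.randn(8, 3, 224, 224))
```

The single-task variant simply drops the magnification head and keeps only the malignancy output.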
The microscopic image of a specimen in the absence of staining appears colorless and textureless; microscopic inspection of tissue therefore requires chemical staining to create contrast. Hematoxylin and eosin (H&E) is the most widely used chemical staining technique in histopathology. However, such staining creates obstacles for automated image analysis systems. Because of different chemical formulations, scanners, section thicknesses, and lab protocols, similar tissues can differ greatly in appearance. This large variability is one of the main challenges in designing robust and resilient automated image analysis systems. Moreover, the staining process is time-consuming, and its chemical effects deform the structures of specimens. In this work, we develop a method to virtually stain unstained specimens. Our method combines dimension reduction with conditional generative adversarial networks (cGANs), which build highly non-linear mappings between input and output images. The ability of conditional GANs to handle very complex functions and high-dimensional data enables transforming unstained hyperspectral tissue images into their H&E equivalents, which exhibit highly diverse appearances. In the long term, such virtual digital H&E staining could automate some of the tasks in the diagnostic pathology workflow, which could speed up sample processing, reduce costs, prevent the adverse effects of chemical stains on tissue specimens, reduce observer variability, and increase objectivity in disease diagnosis.
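A minimal pix2pix-style sketch, assuming a PyTorch implementation in which the hyperspectral input has already been reduced to a small number of channels; the channel count, layer sizes, and PatchGAN-style discriminator are illustrative assumptions rather than the model described above.

```python
# Illustrative conditional GAN sketch for virtual staining (all sizes are assumptions).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a dimension-reduced hyperspectral patch to a 3-channel H&E-like image."""
    def __init__(self, in_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),  # RGB output
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Conditional discriminator: scores (hyperspectral input, H&E image) pairs."""
    def __init__(self, in_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise realness map
        )

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = Generator(), Discriminator()
hyperspectral = torch.randn(2, 8, 64, 64)   # toy batch of reduced hyperspectral patches
fake_he = G(hyperspectral)
realness = D(hyperspectral, fake_he)
```

In a pix2pix-style setup, the generator loss typically combines the adversarial term with an L1 reconstruction term against the chemically stained reference image.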
Objective: The purposes of this study were to investigate 1) the effect of region-of-interest (ROI) placement on texture analysis of subchondral bone in knee radiographs, and 2) the ability of several texture descriptors to distinguish between knees with and without radiographic osteoarthritis (OA). Design: Bilateral posterior-anterior knee radiographs were analyzed from the baselines of the Osteoarthritis Initiative (OAI) and Multicenter Osteoarthritis Study (MOST) datasets. A fully automatic method was developed to locate the most informative region of subchondral bone using adaptive segmentation. We used an oversegmentation strategy to partition knee images into compact regions that follow natural texture boundaries. Local Binary Patterns (LBP), Fractal Dimension (FD), Haralick features, Shannon entropy, and Histogram of Oriented Gradients (HOG) descriptors were computed within the standard ROI and within the proposed adaptive ROIs. Subsequently, we built logistic regression models to identify and compare the performance of each texture descriptor and each ROI placement method in a 5-fold cross-validation setting. Importantly, we also investigated the generalizability of our approach by training the models on the OAI dataset and testing them on the MOST dataset. We used the area under the receiver operating characteristic (ROC) curve (AUC) and the average precision (AP) obtained from the precision-recall (PR) curve to compare the results. Results: We found that the adaptive ROI improves the classification performance (OA vs. non-OA) over the commonly used standard ROI (up to a 9% increase in AUC). We also observed that, of all the texture descriptors, LBP yielded the best performance in all settings, with the best AUC of 0.840 [0.825, 0.852] and an associated AP of 0.804 [0.786, 0.820]. Conclusion: Compared to current state-of-the-art approaches, our results suggest that the proposed adaptive ROI approach to texture analysis of subchondral bone can increase the diagnostic performance for detecting the presence of radiographic OA.
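A minimal sketch of one possible pipeline, under stated assumptions: SLIC superpixels stand in for the adaptive oversegmentation step, pooled uniform-LBP histograms serve as the texture descriptor, and logistic regression with 5-fold cross-validation and ROC AUC scoring mirrors the evaluation. The region pooling, parameter values, and synthetic data are all hypothetical.

```python
# Illustrative texture-analysis sketch (SLIC + LBP + logistic regression; not the exact method above).
import numpy as np
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def adaptive_roi_lbp(image, n_segments=200, P=8, R=1):
    """Oversegment a grayscale patch and pool uniform-LBP histograms over the regions."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    # channel_axis=None tells SLIC the input is single-channel (scikit-image >= 0.19).
    segments = slic(image, n_segments=n_segments, channel_axis=None)
    hists = [np.histogram(codes[segments == s], bins=P + 2, range=(0, P + 2), density=True)[0]
             for s in np.unique(segments)]
    return np.mean(hists, axis=0)  # averaging region histograms is an assumption

# Hypothetical example with synthetic patches and labels (OA = 1, non-OA = 0).
rng = np.random.default_rng(0)
X = np.stack([adaptive_roi_lbp((rng.random((128, 128)) * 255).astype(np.uint8))
              for _ in range(60)])
y = rng.integers(0, 2, size=60)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```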