Chest radiography, one of the most common diagnostic imaging examinations in medicine, is used for screening, diagnostic work-up, and monitoring of various thoracic diseases (1,2). One of its major objectives is the detection of pulmonary nodules, because pulmonary nodules are often the initial radiologic manifestation of lung cancer (1,2). To date, however, pulmonary nodule detection on chest radiographs has not been fully satisfactory, with reported sensitivities ranging from 36% to 84%, varying widely according to tumor size and study population (2-6). Indeed, chest radiography has been shown to be prone to many reading errors, with low interobserver and intraobserver agreement, because of its limited spatial resolution, noise from overlapping anatomic structures, and the variable perceptual ability of radiologists. Recent work shows that 19%-26% of lung cancers visible on chest radiographs were in fact missed at the first reading (6,7), although in hindsight such lesions are easy to find once one knows where to look. For this reason, there has been increasing reliance on chest CT over chest radiography for pulmonary nodule detection. However, even a low-dose CT scan requires approximately 50-100 times the radiation dose of a single-view chest radiographic examination (8,9).
Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment by image analysis focused only on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs), and an automatic method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the TUmor Proliferation Assessment Challenge 2016 (TUPAC16) on prediction of tumor proliferation scores from WSIs. The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. To ensure fair and independent evaluation, only the ground truth for the training dataset was provided to the challenge participants. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method of assessing tumor proliferation by a pathologist. The second task was to predict the gene-expression-based PAM50 proliferation score from the WSI. The best-performing automatic method for the first task achieved a quadratic-weighted Cohen's kappa of κ = 0.567, 95% CI [0.464, 0.671], between the predicted scores and the ground truth. For the second task, the predictions of the top method had a Spearman's correlation coefficient of r = 0.617, 95% CI [0.581, 0.651], with the ground truth. This was the first comparison study to investigate tumor proliferation assessment from WSIs. The achieved results are promising given the difficulty of the tasks and the weakly labeled nature of the ground truth. However, further research is needed to improve the practical utility of image analysis methods for this task.
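For reference, the quadratic-weighted Cohen's kappa used as the first-task metric can be computed directly from its definition. The following is a minimal NumPy sketch (function and variable names are our own, not from the challenge code):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic-weighted Cohen's kappa between two integer score vectors."""
    # observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # expected matrix from the marginals, scaled to the same total count
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # quadratic disagreement weights: w_ij = (i - j)^2 / (n - 1)^2
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# perfect agreement gives kappa = 1
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # 1.0
```

Unlike the unweighted kappa, distant disagreements (e.g., predicting score 3 for a true score 1) are penalized quadratically more than adjacent ones, which matches the ordinal nature of mitotic scores.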
Recent advances in deep learning have achieved remarkable performance in various challenging computer vision tasks. In object localization especially, deep convolutional neural networks outperform traditional approaches by extracting data- and task-driven features instead of handcrafted features. Although location information for regions of interest (ROIs) provides a good prior for object localization, it requires heavy human annotation effort. Thus a weakly supervised framework for object localization has been introduced; the term "weakly" means that the framework uses only image-level labels to train a network. With the help of transfer learning, which adopts the weight parameters of a pre-trained network, the weakly supervised learning framework for object localization performs well because the pre-trained network already contains well-trained class-specific features. However, those approaches cannot be used in applications that lack pre-trained networks or large-scale, well-localized images. Medical image analysis is representative of such applications because it is impossible to obtain such pre-trained networks. In this work, we present a "fully" weakly supervised framework for object localization ("semi"-weakly is the counterpart that uses pre-trained filters for weakly supervised localization) named self-transfer learning (STL). It jointly optimizes both classification and localization networks simultaneously. By controlling the supervision level of the localization network, STL helps the localization network focus on correct ROIs without any type of prior. We evaluate the proposed STL framework on two medical image datasets, chest X-rays and mammograms, and achieve significantly better localization performance than previous weakly supervised approaches.
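To illustrate the kind of localization map such weakly supervised frameworks produce, a class activation map (a standard weakly supervised localization technique, shown here for illustration; it is not the STL method itself) weights each convolutional feature map by the image-level classifier weight for the target class and sums over channels:

```python
import numpy as np

def class_activation_map(features, weights):
    """CAM-style localization map from image-level supervision.

    features: (C, H, W) activations before global average pooling
    weights:  (C,) classifier weights for the target class
    Returns an (H, W) heatmap highlighting class-discriminative regions.
    """
    # contract the channel axis: sum_c weights[c] * features[c]
    return np.tensordot(weights, features, axes=1)

rng = np.random.default_rng(0)
feats = rng.random((4, 6, 6))  # toy feature maps
w = np.ones(4)                 # toy class weights
cam = class_activation_map(feats, w)
print(cam.shape)  # (6, 6)
```

In practice the heatmap is upsampled to the input resolution and thresholded to obtain a bounding region, all without any box-level annotation.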
Abstract. We present a unified framework to predict tumor proliferation scores from breast histopathology whole-slide images. Our system offers a fully automated solution to predicting both a molecular-data-based and a mitosis-counting-based tumor proliferation score. The framework integrates three modules, each fine-tuned to maximize overall performance: an image processing component for handling whole-slide images, a deep-learning-based mitosis detection network, and a proliferation score prediction module. We achieved a quadratic-weighted Cohen's kappa of 0.567 in mitosis-counting-based score prediction and an F1-score of 0.652 in mitosis detection. On Spearman's correlation coefficient, which evaluates predictive accuracy on the molecular-data-based score, the system obtained 0.6171. Our approach won first place in all three tasks of the Tumor Proliferation Assessment Challenge 2016, a MICCAI grand challenge.
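The Spearman correlation used to evaluate the molecular-score predictions is simply the Pearson correlation of the ranks. A minimal sketch follows (ties are not handled here; real evaluations would assign average ranks to tied values, as `scipy.stats.spearmanr` does):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Assumes no ties; argsort of argsort converts values to 0-based ranks.
    """
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# any monotone relationship, even a nonlinear one, gives rho = 1
print(spearman_rho([1, 2, 3, 4], [10, 20, 90, 400]))  # 1.0
```

Because it depends only on ranks, the metric rewards getting the relative ordering of proliferation scores right even when the predicted values are on a different scale than the gene-expression ground truth.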
We introduce an accurate lung segmentation model for chest radiographs based on deep convolutional neural networks. Our model uses atrous convolutional layers to increase the field of view of the filters efficiently. To further improve segmentation performance, we also propose a multi-stage training strategy, network-wise training, in which the current-stage network is fed both the input images and the outputs of the previous-stage network. We show that this strategy reduces falsely predicted labels and produces smooth boundaries of the lung fields. We evaluate the proposed model on a common benchmark dataset, JSRT, and achieve state-of-the-art segmentation performance with far fewer model parameters.
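An atrous (dilated) convolution spaces the kernel taps `rate` samples apart, so the receptive field grows with the rate while the number of parameters stays fixed. A 1-D NumPy sketch of the idea (names are illustrative, not from the paper's implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution, 'valid' padding.

    Kernel taps are spaced `rate` samples apart, so a k-tap kernel
    covers an effective receptive field of (k - 1) * rate + 1 samples.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(8, dtype=float)
# rate=1 is an ordinary convolution; rate=2 doubles the field of view
print(atrous_conv1d(x, [1.0, 1.0, 1.0], rate=2))  # [ 6.  9. 12. 15.]
```

With rate 2, each output sums x[i], x[i+2], and x[i+4]: three parameters see a five-sample window, which is why stacking atrous layers captures broad lung-field context cheaply.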
Virtual metrology (VM) technology is an efficient and effective method of online, wafer-to-wafer process monitoring. It is realized by constructing a prediction model between real-time equipment sensor data and the quality characteristics of wafers that would otherwise have to be measured. The most commonly employed prediction method for VM is the neural network (NN) approach, owing to its flexibility and fast computation time. However, it can easily suffer from overfitting and is affected by naturally occurring outlying observations in the data. Moreover, it does not provide prediction intervals for future observations that can be used to detect abnormal process behavior. In this paper, an advanced prediction model for VM is developed to resolve these issues. The proposed method is a robust regression model based on the relevance vector machine. It reduces the effect of outliers through a weighting strategy: given a prior distribution over the weights, we show that the weight values can be determined probabilistically and computed automatically during training. We employ variational inference to estimate the posterior distribution over model parameters, so no validation dataset is needed to control model complexity; that is, the complexity of the proposed method is self-adjusted during training. Based on the posterior distribution, we obtain not only point estimates but also useful statistical information, such as probabilistic intervals, about the current status of a manufacturing process. If the actual metrology value falls outside these intervals, it can serve as a signal alerting engineers to the need for preventive maintenance or VM model adjustment. A real plasma etching process in semiconductor manufacturing is presented as a case study comparing the predictive performance of our proposed method with that of conventional VM prediction models. The experimental results demonstrate that the proposed method improves VM prediction accuracy compared with other methods.
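The alerting logic described above can be sketched as a simple interval check on a Gaussian posterior predictive distribution. The function names and the metrology numbers below are illustrative assumptions, not taken from the actual VM system:

```python
def prediction_interval(mean, std, confidence=0.95):
    """Two-sided Gaussian predictive interval from a posterior mean/std."""
    # z-scores for two-sided intervals under a normality assumption
    z = {0.95: 1.96, 0.99: 2.576}[confidence]
    return mean - z * std, mean + z * std

def is_abnormal(actual, mean, std, confidence=0.95):
    """Alert when the measured metrology value falls outside the interval."""
    lo, hi = prediction_interval(mean, std, confidence)
    return not (lo <= actual <= hi)

# hypothetical etch-depth prediction: mean 50.0 nm, predictive std 0.5 nm
print(prediction_interval(50.0, 0.5))
print(is_abnormal(51.5, 50.0, 0.5))  # True: 51.5 lies above the upper bound
```

The point of deriving the interval from the posterior, rather than using a fixed tolerance, is that the alert threshold widens automatically for wafers where the model itself is uncertain.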