Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positive (FP) rates per patient. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate-generation system at sensitivities of ∼100%, but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process to reject difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes and colonic polyps, respectively.
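The second-tier idea above — sample many random views of a candidate, score each with the classifier, and average the per-view probabilities — can be sketched as follows. This is a minimal, dependency-free illustration, not the paper's implementation: rotations are restricted to 90-degree steps, translation is approximated by a cyclic row shift, scale transformations are omitted, and `dummy_convnet` is a hypothetical stand-in for the trained ConvNet.

```python
import math
import random

random.seed(0)

def rot90(patch):
    """Rotate a 2D patch (list of rows) by 90 degrees."""
    return [list(row) for row in zip(*patch[::-1])]

def sample_random_views(patch, n_views=20, max_shift=2):
    """Generate randomly transformed views of one candidate ROI.

    The paper samples views via scale transformations, random
    translations and rotations; this sketch uses only 90-degree
    rotations and a cyclic row shift as a stand-in for translation.
    """
    views = []
    for _ in range(n_views):
        v = patch
        for _ in range(random.randrange(4)):        # random rotation
            v = rot90(v)
        s = random.randint(-max_shift, max_shift)   # random "translation"
        v = v[s:] + v[:s]
        views.append(v)
    return views

def classify_candidate(views, predict_proba):
    """Second-tier decision: average the per-view class probabilities
    into a single per-candidate classification probability."""
    return sum(predict_proba(v) for v in views) / len(views)

# Hypothetical stand-in classifier: a logistic score on mean intensity.
def dummy_convnet(view):
    m = sum(sum(row) for row in view) / (len(view) * len(view[0]))
    return 1.0 / (1.0 + math.exp(-m))

patch = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]
score = classify_candidate(sample_random_views(patch), dummy_convnet)
```

Averaging over many random views makes the final score robust to the exact pose of the lesion in the patch, which is what lets this tier reject difficult false positives without sacrificing sensitivity.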
Despite tremendous progress in computer vision, effective learning on very large-scale (> 100K patients) medical image databases has been vastly hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system. Instead of using full 3D medical volumes, we focus on a collection of representative ∼216K 2D key images/slices (selected by clinicians for diagnostic reference) with text-driven scalar and vector labels. Our system interleaves between unsupervised learning (e.g., latent Dirichlet allocation, recurrent neural net language models) on document- and sentence-level texts to generate semantic labels and supervised learning via deep convolutional neural networks (CNNs) to map from images to label spaces. Disease-related key words can be predicted for radiology images in a retrieval manner. We have demonstrated promising quantitative and qualitative results. The large-scale datasets of extracted key images and their categorization, embedded vector labels and sentence descriptions can be harnessed to alleviate the deep learning "data-hungry" obstacle in the medical domain.
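The text-driven labeling step — turning free-text reports into scalar labels that can supervise an image classifier — can be illustrated with a toy sketch. The actual system derives labels with latent Dirichlet allocation and recurrent-net language models; everything here (the five-term vocabulary, the negation list, the example report) is a hypothetical simplification of that idea.

```python
import re
from collections import Counter

# Hypothetical disease vocabulary; the paper learns topics from the
# corpus rather than matching against a fixed keyword list.
DISEASE_TERMS = {"adenopathy", "effusion", "nodule", "cyst", "metastasis"}
NEGATIONS = ("no ", "without", "negative for")

def text_labels(report, top_k=2):
    """Map a radiology report to scalar image labels: count disease
    terms in non-negated sentences and keep the top_k most frequent."""
    counts = Counter()
    for sent in re.split(r"\.\s*", report.lower()):
        if any(neg in sent for neg in NEGATIONS):
            continue  # crude sentence-level negation handling
        for tok in re.findall(r"[a-z]+", sent):
            if tok in DISEASE_TERMS:
                counts[tok] += 1
    return [term for term, _ in counts.most_common(top_k)]

report = ("Small pulmonary nodule in the right upper lobe. "
          "No pleural effusion. Stable nodule compared to prior exam.")
labels = text_labels(report)  # 'effusion' is negated, so only 'nodule'
```

The negation filter matters: a naive keyword match would label "No pleural effusion" as an effusion finding, which is exactly the kind of noise that motivates learning labels from sentence-level language models instead.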
Automated classification of human anatomy is an important prerequisite for many computer-aided diagnosis systems. The spatial complexity and variability of anatomy throughout the human body makes classification difficult. "Deep learning" methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. In this work, we present a method for organ- or body-part-specific anatomical classification of medical images acquired using computed tomography (CT) with ConvNets. We train a ConvNet, using 4,298 separate axial 2D key-images to learn 5 anatomical classes. Key-images were mined from a hospital PACS archive, using a set of 1,675 patients. We show that a data augmentation approach can help to enrich the data set and improve classification performance. Using ConvNets and data augmentation, we achieve an anatomy-specific classification error of 5.9% and an average area-under-the-curve (AUC) value of 0.998 in testing. We demonstrate that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis.
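The data-augmentation idea — enriching each key-image into several training samples — can be sketched in a few lines. This is a generic illustration of the technique (mirroring plus random crops), not the paper's exact augmentation recipe; image sizes and crop parameters here are arbitrary.

```python
import random

random.seed(0)

def augment(image, n_crops=2, crop=28):
    """Turn one key-image (a 2D list of pixel rows) into several
    training samples: the original, its horizontal mirror, and
    n_crops random sub-windows of size crop x crop."""
    h, w = len(image), len(image[0])
    samples = [image, [row[::-1] for row in image]]  # original + mirror
    for _ in range(n_crops):
        y = random.randrange(h - crop + 1)
        x = random.randrange(w - crop + 1)
        samples.append([row[x:x + crop] for row in image[y:y + crop]])
    return samples

# A toy 32x32 "key-image" with a simple intensity gradient.
key_image = [[(i + j) % 7 for j in range(32)] for i in range(32)]
samples = augment(key_image)  # 4 training samples from 1 image
```

Each pass over the training set then sees slightly different versions of every key-image, which is what lets a modest archive of 4,298 images support a reliable 5-class classifier.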
Despite tremendous progress in computer vision, there has been no attempt at machine learning on very large-scale medical image databases. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's Picture Archiving and Communication System. With natural language processing, we mine a collection of representative ∼216K two-dimensional key images selected by clinicians for diagnostic reference, and match the images with their descriptions in an automated manner. Our system interleaves between unsupervised learning and supervised learning on document- and sentence-level text collections, to generate semantic labels and to predict them given an image. Given an image of a patient scan, semantic topics at the document and sentence levels are predicted, and associated key-words are generated. Also, a number of frequent disease types are detected as present or absent, to provide more specific interpretation of a patient scan. This shows the potential of large-scale learning and prediction in electronic patient records available in most modern clinical institutions.
Background Although there is evidence linking smoking and heart failure (HF), the association between lifetime smoking exposure and HF in older adults, and the strength of this association among current and past smokers, is not well known. Methods We examined the association between smoking status, pack-years of exposure, and incident HF risk in 2,125 participants of the Health, Aging, and Body Composition Study (age 73.6±2.9 years; 69.7% females; 54.2% whites) using proportional hazards models. Results At inception, 54.8% of participants were non-smokers, 34.8% were past smokers, and 10.4% were current smokers. During follow-up (median, 9.4 years), HF incidence was 11.4 per 1000 person-years in non-smokers, 15.2 in past smokers (hazard ratio [HR] vs. non-smokers 1.33; 95% confidence interval [CI] 1.01, 1.76; p=0.045), and 21.9 in current smokers (HR 1.93; 95% CI 1.30, 2.84; p=0.001). After adjusting for HF risk factors, incident coronary events, and the competing risk of death, a dose-effect association between pack-years of exposure and HF risk was observed (HR 1.09; 95% CI 1.05, 1.14; p<0.001 per 10 pack-years). HF risk was not modulated by pack-years of exposure in current smokers. In past smokers, the HR for HF was 1.05 (95% CI 0.64, 1.72) for 1–11 pack-years; 1.23 (95% CI 0.82, 1.83) for 12–35 pack-years; and 1.64 (95% CI 1.11, 2.42) for >35 pack-years of exposure in fully adjusted models (p<0.001 for trend) compared to non-smokers. Conclusions In older adults, both current and past cigarette smoking increase HF risk. In current smokers, this risk is high irrespective of pack-years of exposure, whereas in past smokers there was a dose-effect association.
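The reported dose effect (HR 1.09 per 10 pack-years) implies a per-pack-year log-hazard coefficient, from which model-implied hazard ratios at any exposure follow by exponentiation. The sketch below shows that arithmetic; note it assumes the log-hazard is linear in pack-years, an assumption the abstract itself says did not hold for current smokers, so this is an extrapolation illustration, not a clinical estimate.

```python
import math

# Reported dose effect from the fully adjusted model: HR 1.09 per
# 10 pack-years of exposure.
HR_PER_10_PACK_YEARS = 1.09
beta = math.log(HR_PER_10_PACK_YEARS) / 10  # log-hazard per pack-year

def hazard_ratio(pack_years):
    """Model-implied HR vs. a never-smoker under the proportional-
    hazards model, assuming log-linearity in pack-years."""
    return math.exp(beta * pack_years)

# At 40 pack-years the implied HR is 1.09**4 ~= 1.41, in the same
# range as the categorical estimate for past smokers with >35
# pack-years (HR 1.64), though below its point estimate.
hr_40 = hazard_ratio(40)
```

Comparing the continuous extrapolation against the categorical estimates (1.05, 1.23, 1.64 across the three exposure bands) is one informal check of how well the log-linear dose-effect assumption fits.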