Purpose
Radiation therapy (RT) is a common treatment option for head and neck (HaN) cancer. An important step in RT planning is the delineation of organs‐at‐risk (OARs) on HaN computed tomography (CT) images. However, manually delineating OARs is time‐consuming, as each slice of the CT must be individually examined and a typical scan consists of hundreds of slices. Automating OAR segmentation can both reduce the time and improve the quality of RT planning. Existing anatomy autosegmentation algorithms are primarily atlas‐based; they require sophisticated atlas creation and cannot adequately account for anatomical variation among patients. In this work, we propose an end‐to‐end, atlas‐free three‐dimensional (3D) convolutional deep learning framework for fast and fully automated whole‐volume HaN anatomy segmentation.
Methods
Our deep learning model, called AnatomyNet, segments OARs from head and neck CT images in an end‐to‐end fashion, receiving whole‐volume HaN CT images as input and generating masks of all OARs of interest in one shot. AnatomyNet is built upon the popular 3D U‐net architecture, but extends it in three important ways: (a) a new encoding scheme that allows autosegmentation of whole‐volume CT images rather than local patches or subsets of slices, (b) 3D squeeze‐and‐excitation residual blocks in the encoding layers for better feature representation, and (c) a new loss function combining Dice scores and focal loss to facilitate training of the neural model. These features are designed to address two main challenges in deep learning‐based HaN segmentation: (a) segmenting small anatomies (e.g., the optic chiasm and optic nerves) that occupy only a few slices, and (b) training with inconsistent data annotations in which ground truth is missing for some anatomical structures.
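The combined loss in (c) can be sketched as follows. This is an illustrative, framework‐agnostic version in plain Python operating on flattened lists of predicted voxel probabilities and binary labels; the function names and the weighting parameter lam are our own, not taken from the paper.

```python
import math

def dice_loss(probs, labels, eps=1e-6):
    """Soft Dice loss: 1 - 2*|P.G| / (|P| + |G|), computed on probabilities."""
    inter = sum(p * g for p, g in zip(probs, labels))
    denom = sum(probs) + sum(labels)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def focal_loss(probs, labels, gamma=2.0):
    """Focal loss: cross-entropy with easy voxels down-weighted by (1 - p_t)^gamma."""
    total = 0.0
    for p, g in zip(probs, labels):
        p_t = p if g == 1 else 1.0 - p          # probability of the true class
        p_t = min(max(p_t, 1e-7), 1.0 - 1e-7)   # clip for numerical stability
        total += -((1.0 - p_t) ** gamma) * math.log(p_t)
    return total / len(probs)

def combined_loss(probs, labels, lam=0.5):
    """Weighted sum of the two terms; lam is a hypothetical trade-off weight."""
    return dice_loss(probs, labels) + lam * focal_loss(probs, labels)
```

The Dice term directly targets the overlap metric used for evaluation (helpful for small structures, where voxel‐wise losses are dominated by the background), while the focal term concentrates the gradient on hard, misclassified voxels.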
Results
We collected 261 HaN CT images to train AnatomyNet and used the MICCAI 2015 Head and Neck Auto Segmentation Challenge dataset as a benchmark to evaluate its performance. The objective is to segment nine anatomies: brain stem, chiasm, mandible, left and right optic nerves, left and right parotid glands, and left and right submandibular glands. Compared to the previous state‐of‐the‐art results from the MICCAI 2015 competition, AnatomyNet increases the Dice similarity coefficient by 3.3% on average. AnatomyNet takes about 0.12 s to fully segment a head and neck CT image of dimension 178 × 302 × 225, significantly faster than previous methods. In addition, the model processes whole‐volume CT images and delineates all OARs in one pass, requiring little pre‐ or postprocessing.
Conclusion
Deep learning models offer a feasible solution to the problem of delineating OARs from CT images. We demonstrate that our proposed model can improve segmentation accuracy and simplify the autosegmentation pipeline. With this method, it is possible to delineate OARs of a head and neck CT within a fraction of a second.
In this proof-of-concept study, we have demonstrated the feasibility of using 3D-printed tissue-mimicking phantoms to quantitatively assess post-transcatheter aortic valve replacement (TAVR) aortic root strain in vitro. A novel indicator of post-TAVR annular strain unevenness, the annular bulge index, outperformed other established variables and achieved a high level of accuracy in predicting post-TAVR paravalvular leak (PVL) in terms of its occurrence, severity, and location.
Coronary computed tomography angiography (CTA) allows coronary artery visualization and the detection of coronary stenoses. In addition, it has been suggested as a novel, noninvasive modality for coronary atherosclerotic plaque detection, characterization, and quantification. Emerging data show that coronary CTA-based semiquantitative plaque characterization and quantification are sufficiently reproducible for clinical purposes, and fully quantitative approaches may be appropriate for use in clinical trials. Furthermore, several lines of investigation have validated plaque imaging by coronary CTA against other imaging modalities such as intravascular ultrasound/"virtual histology" and optical coherence tomography, and there are emerging data using biochemical modalities such as near-infrared spectroscopy. Finally, clinical validation in patients with acute coronary syndrome and in the outpatient setting has shown incremental value of CTA-based plaque characterization for the prediction of major cardiovascular events. With recent developments in image acquisition and reconstruction technologies, coronary CTA can be performed with relatively low radiation exposure. With further technological innovation and clinical research, coronary CTA may become an important tool in the quest to identify vulnerable plaques and the at-risk patient.
This is the first validation that standardized, three-dimensional, quantitative measurements of coronary plaque correlate with intravascular ultrasound/virtual histology (IVUS/VH). Mean differences are small, whereas limits of agreement are wide. Low-density noncalcified plaque correlates with necrotic core plus fibrofatty tissue on IVUS/VH.
Real-time road traffic congestion monitoring is an important and challenging problem. Most existing monitoring approaches require the deployment of infrastructure sensors or large-scale probe vehicles; their installation is often expensive and their temporal-spatial coverage is limited. Probe vehicle data are oftentimes noisy on urban arterials and therefore insufficient to provide accurate congestion estimation. This paper presents a novel social media-based approach to traffic congestion monitoring, in which pedestrians, drivers, and passengers are treated as human sensors and their posted tweets in Twitter as observations of nearby ongoing traffic conditions. There are three technical challenges for road traffic monitoring based on Twitter, namely: 1) language ambiguity in the usage of traffic-related terms; 2) uncertainty and low resolution of geographic location mentions; and 3) interactions between traffic-related events such as accidents and congestion. We propose a topic modeling based language model to address the first challenge and a collaborative inference model based on probabilistic soft logic (PSL) to address the second and third challenges. We present a unified statistical framework that combines those two models based on hinge-loss Markov random fields (HLMRFs). In order to address the computational challenges incurred by the non-analytical integral of latent variables (factors) and the MAP estimation of a large number of location-dependent traffic congestion variables, we propose a fast approximate inference algorithm based on maximization expectation (ME) and the alternating direction method of multipliers (ADMM). Extensive evaluations over a variety of metrics on real-world Twitter and INRIX probe speed datasets in two major U.S. cities demonstrate the efficiency and effectiveness of our proposed approach.
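The HLMRF MAP inference step described above can be illustrated with a toy example. The sketch below (our own simplification, not the paper's model) encodes one observation potential and one rule potential — "if a road link is congested, an adjacent downstream link tends to be congested" — as squared hinge-loss terms over continuous congestion variables in [0, 1], and finds the MAP estimate by projected gradient descent rather than the paper's ADMM solver, since the objective is the same convex energy either way.

```python
def hinge(x):
    """Hinge function max(0, x), the building block of HLMRF potentials."""
    return max(0.0, x)

def energy(y, obs, w_obs=1.0, w_rule=1.0):
    """Toy HLMRF energy over two road-link congestion variables y = [y1, y2].
    Squared hinge potentials: y1 should match the tweet-derived observation obs,
    and congestion should propagate from link 1 to adjacent link 2 (y1 -> y2)."""
    y1, y2 = y
    e = w_obs * hinge(obs - y1) ** 2      # evidence pushes y1 up toward obs
    e += w_obs * hinge(y1 - obs) ** 2     # ...and pins it from above
    e += w_rule * hinge(y1 - y2) ** 2     # rule: congested(link1) => congested(link2)
    return e

def map_inference(obs, steps=2000, lr=0.05):
    """MAP estimate by projected gradient descent on the convex energy,
    with variables clipped to [0, 1] after each step."""
    y = [0.5, 0.5]
    for _ in range(steps):
        h = 1e-5
        grad = []
        for i in range(2):          # central-difference gradient
            yp, ym = list(y), list(y)
            yp[i] += h
            ym[i] -= h
            grad.append((energy(yp, obs) - energy(ym, obs)) / (2 * h))
        y = [min(1.0, max(0.0, yi - lr * g)) for yi, g in zip(y, grad)]
    return y
```

With a strong congestion observation on link 1 (e.g., obs = 0.9), the rule potential drags the unobserved adjacent link up as well; with a weak observation, the rule is vacuously satisfied and the neighbor is unconstrained. The real system solves this kind of problem jointly over many location-dependent variables, which is why a scalable ADMM-based solver is needed.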
In this work, we address a shortcoming of current deep learning-based organ segmentation systems: they often produce results that fail to capture the overall shape of the target organ and lack smoothness. Since there is a rigorous mapping between the Signed Distance Map (SDM) calculated from object boundary contours and the binary segmentation map, we explore the feasibility of learning the SDM directly from medical scans. By converting the segmentation task into predicting an SDM, we show that our proposed method retains superior segmentation performance and yields better smoothness and continuity in shape. To leverage the complementary information in traditional segmentation training, we introduce an approximated Heaviside function to train the model by predicting SDMs and segmentation maps simultaneously. We validate our proposed models by conducting extensive experiments on a hippocampus segmentation dataset and the public MICCAI 2015 Head and Neck Auto Segmentation Challenge dataset with multiple organs. While our carefully designed backbone 3D segmentation network improves the Dice coefficient by more than 5% compared to the current state of the art, the proposed model with SDM learning produces smoother segmentation results with smaller Hausdorff distance and average surface distance, thus proving the effectiveness of our method.
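The two ingredients above — the mapping from a binary mask to its signed distance map, and an approximated Heaviside function recovering a soft segmentation map from a predicted SDM — can be sketched in plain Python. This is an illustrative version using brute-force distances on a tiny 2D grid (the paper works on 3D volumes with efficient distance transforms), and the smoothing constant k is our own illustrative choice.

```python
import math

def signed_distance_map(mask):
    """Signed distance to the object boundary: negative inside the object,
    positive outside (sign conventions vary). Brute force; fine for tiny grids."""
    h, w = len(mask), len(mask[0])
    # Boundary = foreground pixels with at least one background 4-neighbor.
    boundary = []
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                        boundary.append((i, j))
                        break
    sdm = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            d = min(math.hypot(i - bi, j - bj) for bi, bj in boundary)
            sdm[i][j] = -d if mask[i][j] else d
    return sdm

def approx_heaviside(x, k=1500.0):
    """Smooth step mapping SDM values to a soft segmentation map:
    inside (x < 0) -> ~1, outside (x > 0) -> ~0, boundary (x = 0) -> 0.5.
    The steepness k is an illustrative constant."""
    z = min(50.0, max(-50.0, k * x))   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(z))
```

Because the Heaviside approximation is differentiable, a network predicting SDMs can be trained with a segmentation loss applied to approx_heaviside(sdm) at the same time as an SDM regression loss, which is how the two objectives are combined.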