Purpose
Metabolic phenotyping has provided important biomarker findings that, unfortunately, are rarely replicated across different sample sets because of variations in the analytical and clinical protocols used across studies. To date, very few metabolic hallmarks of a given cancer type have been confirmed and validated by a metabolomic approach together with other clinical modalities. Here, we report a metabolomics study that identifies metabolite biomarkers of colorectal cancer with potential theranostic value.
Experimental Design
Gas chromatography–time-of-flight mass spectrometry (GC–TOFMS)–based metabolomics was used to analyze 376 surgical specimens collected from four independent cohorts of patients with colorectal cancer at three hospitals in China and at the City of Hope Comprehensive Cancer Center in the United States. Differential metabolites were identified and evaluated as potential prognostic markers. A targeted transcriptomic analysis of 29 colorectal cancer tissues and 27 adjacent nontumor tissues was performed to measure the expression of genes encoding key enzymes associated with these shared metabolites.
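As a rough illustration of the differential-metabolite screening step, the sketch below applies a univariate fold-change filter and Welch's t-test with Benjamini–Hochberg correction to a tumor-versus-adjacent-tissue intensity matrix. The data layout, thresholds, and the univariate approach itself are assumptions for illustration; the study may equally have relied on multivariate models such as OPLS-DA.

```python
# A minimal sketch of univariate differential-metabolite screening on a
# GC-TOFMS intensity matrix; thresholds and data layout are assumptions.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def screen_metabolites(tumor, normal, names, fc_cut=1.5, q_cut=0.05):
    """tumor, normal: arrays of shape (n_samples, n_metabolites)."""
    log_fc = np.log2(tumor.mean(axis=0) / normal.mean(axis=0))
    _, pvals = ttest_ind(tumor, normal, axis=0, equal_var=False)  # Welch's t-test
    _, qvals, _, _ = multipletests(pvals, method="fdr_bh")        # FDR correction
    keep = (np.abs(log_fc) >= np.log2(fc_cut)) & (qvals <= q_cut)
    return [(n, f, q) for n, f, q, k in zip(names, log_fc, qvals, keep) if k]
```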
Results
A panel of 15 significantly altered metabolites was identified that predicts the rate of recurrence and survival of patients after surgery and chemotherapy. The targeted transcriptomic analysis suggests that the altered levels of these metabolites result from robust metabolic adaptations in cancer cells to increased oxidative stress as well as to the demand for energy and macromolecular substrates required for cell growth and proliferation.
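The abstract does not specify the survival model used; a common way to assess the prognostic value of such a panel is to form a composite risk score and test it with a Cox proportional-hazards model and a log-rank comparison of high- versus low-risk groups. The sketch below (using lifelines) assumes the risk-score construction and the column names; it is not the authors' described analysis.

```python
# A hedged sketch of evaluating a metabolite panel as a prognostic marker;
# the risk-score construction and column names are assumptions.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def evaluate_panel(df, panel_cols, time_col="months", event_col="recurrence"):
    # Composite risk score: mean of z-scored metabolite levels in the panel.
    z = (df[panel_cols] - df[panel_cols].mean()) / df[panel_cols].std()
    df = df.assign(risk_score=z.mean(axis=1))

    # Cox proportional-hazards model on the composite score.
    cph = CoxPHFitter()
    cph.fit(df[[time_col, event_col, "risk_score"]],
            duration_col=time_col, event_col=event_col)

    # Log-rank test between high- and low-risk groups (median split).
    hi = df["risk_score"] >= df["risk_score"].median()
    lr = logrank_test(df.loc[hi, time_col], df.loc[~hi, time_col],
                      event_observed_A=df.loc[hi, event_col],
                      event_observed_B=df.loc[~hi, event_col])
    return cph.summary, lr.p_value
```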
Conclusions
These patients with colorectal cancer, despite their varied genetic backgrounds, mutations, pathologic stages, and geographic locations, shared a metabolic signature of considerable prognostic and therapeutic potential.
This paper presents an unsupervised, distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. To restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and on minimum local area energy are chosen to fuse the wavelet coefficients of the low-frequency band and the high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image; it incorporates spatial context in a novel fuzzy way to enhance the changed information and reduce the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio and mean-ratio operators and achieves better performance. The change detection results obtained with the improved fuzzy clustering algorithm exhibit lower error than those of existing methods.
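To make the fusion strategy concrete, the sketch below builds the mean-ratio and log-ratio difference images and fuses them in the wavelet domain: averaging for the approximation (low-frequency) band and selection by minimum local energy for the detail (high-frequency) bands. The window size, wavelet choice, and decomposition depth are assumptions, and the fuzzy clustering step is not shown.

```python
# A minimal sketch of the fused difference-image idea; window size, wavelet,
# and decomposition level are assumptions, not the authors' exact settings.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fused_difference_image(img1, img2, wavelet="db2", level=2, win=3):
    """Fuse mean-ratio and log-ratio difference images in the wavelet domain."""
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)

    # Mean-ratio difference image from local means over a small window.
    mu1 = uniform_filter(img1, size=win)
    mu2 = uniform_filter(img2, size=win)
    di_mean = 1.0 - np.minimum(mu1, mu2) / (np.maximum(mu1, mu2) + 1e-12)

    # Log-ratio difference image.
    di_log = np.abs(np.log(img2 + 1.0) - np.log(img1 + 1.0))

    c_mean = pywt.wavedec2(di_mean, wavelet, level=level)
    c_log = pywt.wavedec2(di_log, wavelet, level=level)

    # Low-frequency band: simple averaging of approximation coefficients.
    fused = [(c_mean[0] + c_log[0]) / 2.0]

    # High-frequency bands: keep the coefficient with the smaller local energy
    # (intended to suppress speckle-induced detail).
    for dm, dl in zip(c_mean[1:], c_log[1:]):
        band = []
        for cm, cl in zip(dm, dl):
            e_m = uniform_filter(cm ** 2, size=win)
            e_l = uniform_filter(cl ** 2, size=win)
            band.append(np.where(e_m < e_l, cm, cl))
        fused.append(tuple(band))

    return pywt.waverec2(fused, wavelet)
```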
Because of poor lighting conditions at night, visible images are often fused with corresponding infrared (IR) images to enhance the context of night-vision scenes. In this paper, we present a novel night-vision context enhancement algorithm based on IR and visible image fusion with the guided filter. First, to enhance the visibility of poorly illuminated details in the visible image before fusion, an adaptive enhancement method is developed that combines dynamic range compression and contrast restoration based on the guided filter. Then, a hybrid multi-scale decomposition based on the guided filter is introduced to inject the IR image information into the visible image through a multi-scale fusion approach. Moreover, a perceptual regularization parameter selection method is used to determine the relative amount of injected IR spectral features by comparing the perceptual saliency of the IR and visible image information. This fusion method successfully transfers the important IR image information into the fused image while preserving the details and background scenery of the input visible image. Experimental results show that the proposed algorithm achieves better context enhancement in night vision.
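To illustrate the decomposition-and-injection idea, here is a greatly simplified two-scale sketch built around a plain guided filter: each image is split into a base layer and a detail layer, and the IR detail is injected into the visible image with a weight lam. The single decomposition level, the fixed injection weight, and the absence of the adaptive enhancement and perceptual weighting steps are all simplifying assumptions relative to the described algorithm.

```python
# A simplified two-scale sketch of guided-filter-based IR/visible fusion;
# the single decomposition level and fixed injection weight are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (standard guided filter)."""
    win = 2 * r + 1
    mean_I = uniform_filter(I, win)
    mean_p = uniform_filter(p, win)
    cov_Ip = uniform_filter(I * p, win) - mean_I * mean_p
    var_I = uniform_filter(I * I, win) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, win) * I + uniform_filter(b, win)

def fuse_ir_visible(vis, ir, lam=0.5):
    """vis, ir: float images in [0, 1] of the same size."""
    base_vis = guided_filter(vis, vis)   # base layer of the visible image
    base_ir = guided_filter(ir, ir)
    detail_vis = vis - base_vis          # detail layers
    detail_ir = ir - base_ir
    # Keep the visible base and details; inject a weighted share of IR detail.
    fused = base_vis + detail_vis + lam * detail_ir
    return np.clip(fused, 0.0, 1.0)
```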
Object detection in optical remote sensing images is an important and challenging task. In recent years, methods based on convolutional neural networks have made good progress. However, because of the large variation in object scale, aspect ratio, and arbitrary orientation, detection performance is difficult to improve further. In this paper, we discuss the role of discriminative features in object detection and then propose a Critical Feature Capturing Network (CFC-Net) to improve detection accuracy from three aspects: building powerful feature representations, refining preset anchors, and optimizing label assignment. Specifically, we first decouple the classification and regression features and then construct robust critical features adapted to the respective tasks through the Polarization Attention Module (PAM). With the extracted discriminative regression features, the Rotation Anchor Refinement Module (R-ARM) performs localization refinement on preset horizontal anchors to obtain superior rotation anchors. Next, the Dynamic Anchor Learning (DAL) strategy is introduced to adaptively select high-quality anchors based on their ability to capture critical features. The proposed framework creates more powerful semantic representations for objects in remote sensing images and achieves high-performance real-time object detection. Experimental results on three remote sensing datasets, HRSC2016, DOTA, and UCAS-AOD, show that our method achieves superior detection performance compared with many state-of-the-art approaches. Code and models are available at https://github.com/ming71/CFC-Net.
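The sketch below only illustrates the general idea of task-decoupled feature attention: a generic channel-plus-spatial attention applied separately to the classification and regression branches. It is not the paper's PAM (whose internal design, along with R-ARM and DAL, is not reproduced here), and the module and parameter names are placeholders.

```python
# Illustrative task-decoupled attention in PyTorch; a generic stand-in,
# not the PAM described in CFC-Net.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Channel re-weighting followed by spatial re-weighting for one task branch."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_fc(x)       # channel attention
        return x * self.spatial_conv(x)  # spatial attention

class DecoupledHeadFeatures(nn.Module):
    """Produce separate classification and regression feature maps from one input."""
    def __init__(self, channels):
        super().__init__()
        self.cls_attn = TaskAttention(channels)
        self.reg_attn = TaskAttention(channels)

    def forward(self, feat):
        return self.cls_attn(feat), self.reg_attn(feat)
```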
This letter presents a novel method based on wavelet fusion for change detection in synthetic aperture radar (SAR) images. The proposed approach generates the difference image (DI) by using complementary information from mean-ratio and log-ratio images. To restrain the background (unchanged areas) information and enhance the information of changed regions in the fused DI, fusion rules based on weighted averaging and minimum standard deviation are chosen to fuse the wavelet coefficients of the low- and high-frequency bands, respectively. Experiments on real SAR images confirm that the proposed approach outperforms the mean-ratio, log-ratio, and Rayleigh-distribution-ratio operators.
Index Terms: Change detection, log-ratio image, mean-ratio image, synthetic aperture radar (SAR) image, wavelet fusion.
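The letter stops at comparing difference images; to turn a fused DI into a binary change map, a simple and commonly used baseline is an Otsu threshold, as sketched below. This post-processing step is an assumption for illustration, not part of the letter's method.

```python
# A minimal sketch: derive a binary change map from a fused difference image
# with an Otsu threshold; this step is an assumed baseline, not the letter's
# own procedure.
import numpy as np
from skimage.filters import threshold_otsu

def change_map(fused_di):
    """fused_di: 2-D array where higher values indicate more likely change."""
    t = threshold_otsu(fused_di)
    return (fused_di > t).astype(np.uint8)  # 1 = changed, 0 = unchanged
```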