Photoacoustic tomography has emerged as a promising alternative to MRI and X-ray scans in the clinical setting due to its ability to afford high-resolution images at depths in the cm range. However, its utility has not been established in the basic research arena owing to a lack of analyte-specific photoacoustic probes. To this end, we have developed acoustogenic probes for copper(II)-1 and -2 (APC-1 and APC-2, the latter a water-soluble congener) for the chemoselective visualization of Cu(II), a metal ion which plays a crucial role in chronic neurological disorders such as Alzheimer's disease. To detect Cu(II), we have equipped both APCs with a 2-picolinic ester sensing module that is readily hydrolyzed in the presence of Cu(II) but not by other divalent metal ions. Additionally, we designed APC-1 and APC-2 explicitly for ratiometric photoacoustic imaging by using an aza-BODIPY dye scaffold exhibiting two spectrally resolved NIR absorbance bands, which correspond to the 2-picolinic ester capped and uncapped phenoxide forms. The normalized ratiometric turn-on responses for APC-1 and APC-2 were 89- and 101-fold, respectively.
Tip gauge, tip length, temperature, and time substantially affect RF lesion size.
Objective Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion Big data analytics is able to address -omic and EHR data challenges for a paradigm shift towards precision medicine. Significance Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has a long-lasting societal impact.
Accurate reporting of causes of death on death certificates is essential for national health-protection institutions such as the Centers for Disease Control and Prevention (CDC) to formulate appropriate disease control, prevention, and emergency responses. In this study, we utilize publicly available expert-formulated rules for cause-of-death reporting to determine the extent of discordance between the death certificates in national mortality data and the expert knowledge base. We also report the invalid causal pairs that physicians most commonly enter on death certificates. We use sequence rule mining to find the patterns that occur most frequently on death certificates and compare them with the rules from the expert knowledge base. Based on our results, 20.1% of the common patterns derived from death-certificate entries were discordant. The most probable causes of this discordance are missing intermediate steps and non-specific ICD-10 codes on the death certificates.
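The core of the mining step described above can be sketched as counting consecutive cause pairs across certificates and flagging frequent pairs absent from the expert rule base. The certificate sequences, ICD-10 codes, and the rule set below are synthetic illustrations chosen for the example, not the study's actual data or rules:

```python
from collections import Counter

def mine_pairs(certificates, min_support=2):
    """Count consecutive (antecedent, consequent) cause pairs across
    certificates and keep those meeting the support threshold."""
    counts = Counter()
    for causes in certificates:  # causes ordered from immediate to underlying
        for a, b in zip(causes, causes[1:]):
            counts[(a, b)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

def find_discordant(frequent_pairs, expert_rules):
    """Frequent pairs not sanctioned by the expert knowledge base."""
    return {pair: n for pair, n in frequent_pairs.items()
            if pair not in expert_rules}

certificates = [
    ("I21", "I25"),   # MI due to chronic ischemic heart disease
    ("I21", "I25"),
    ("J18", "C34"),   # pneumonia due to lung cancer
    ("J18", "C34"),
    ("R99", "I10"),   # ill-defined cause listed as due to hypertension
    ("R99", "I10"),
]
expert_rules = {("I21", "I25"), ("J18", "C34")}  # hypothetical valid pairs

frequent = mine_pairs(certificates)
discordant = find_discordant(frequent, expert_rules)
```

Here the (R99, I10) pair is frequent in the data yet absent from the rule base, which is exactly the kind of non-specific-code pattern the abstract identifies as a source of discordance.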
Cardiac allograft rejection is a major limitation on long-term survival for patients with heart transplants. Endomyocardial biopsy is the gold standard for screening rejection after heart transplantation; however, manual identification of rejection is expensive and time-consuming. With the development of image processing techniques and machine learning tools, automatic prediction of rejection from whole-slide images is a promising approach to improving the care of heart-transplant patients. In this paper, we first develop a histopathological whole-slide image processing pipeline to extract features automatically. Then, we construct deep neural networks with and without regularization and dropout to classify patients as rejection or non-rejection. Our results show that neural networks with regularization and dropout can significantly reduce overfitting and achieve more stable accuracy.
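The two regularizers named in this abstract, an L2 weight penalty added to the loss and dropout applied to a hidden layer, can be sketched in a few lines of NumPy. The network shape, feature count, and data below are synthetic stand-ins, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, W2, drop_p=0.5, train=True):
    """One hidden layer with ReLU and inverted dropout."""
    h = np.maximum(0.0, X @ W1)
    if train:
        # drop units at rate drop_p; rescale so expected activation is unchanged
        mask = (rng.random(h.shape) >= drop_p) / (1.0 - drop_p)
        h = h * mask
    logits = h @ W2
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid: rejection probability

def loss(y_hat, y, W1, W2, l2=1e-3):
    """Binary cross-entropy plus an L2 penalty on the weights."""
    eps = 1e-12
    bce = -np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
    return bce + l2 * (np.sum(W1**2) + np.sum(W2**2))

X = rng.standard_normal((8, 16))            # 8 slides, 16 extracted features
y = rng.integers(0, 2, size=(8, 1)).astype(float)
W1 = rng.standard_normal((16, 32)) * 0.1
W2 = rng.standard_normal((32, 1)) * 0.1

train_loss = loss(forward(X, W1, W2), y, W1, W2)
```

At inference time dropout is disabled (`train=False`), which is why the inverted-dropout rescaling is applied during training rather than at test time.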
The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
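The contrast drawn here between majority vote and stacked generalization can be illustrated with a toy two-level setup: base-model probability columns stand in for the image- and RNA-seq-based predictors, and a least-squares combiner stands in for the meta-learner (a simplification; a real stacker would typically fit its meta-model on held-out base predictions). All data below are synthetic:

```python
import numpy as np

def majority_vote(probs):
    """Hard vote over per-model probability columns (n_samples, n_models)."""
    votes = (probs >= 0.5).astype(int)
    return (votes.sum(axis=1) > probs.shape[1] / 2).astype(int)

def fit_stacker(probs, y):
    """Meta-level of stacking: learn per-model weights (plus bias)
    from the base models' predictions by least squares."""
    A = np.hstack([probs, np.ones((probs.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def stacked_predict(probs, w):
    A = np.hstack([probs, np.ones((probs.shape[0], 1))])
    return (A @ w >= 0.5).astype(int)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=50).astype(float)
probs = np.column_stack([
    np.clip(y + rng.normal(0, 0.2, 50), 0, 1),  # strong modality
    np.clip(y + rng.normal(0, 0.6, 50), 0, 1),  # weaker modality
    rng.random(50),                             # uninformative modality
])

w = fit_stacker(probs, y)
vote_acc = np.mean(majority_vote(probs) == y)
stack_acc = np.mean(stacked_predict(probs, w) == y)
```

The learned weights `w` play the role the abstract highlights: they expose how much each modality contributes to the final prediction, whereas a majority vote treats every modality equally.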
Automated processing of digital histopathology slides has the potential to streamline patient care and provide new tools for cancer classification and grading. Before automatic analysis is possible, quality control procedures are applied to ensure that each image can be read consistently. One important quality control step is color normalization of the slide image, which adjusts for color variation (batch effects) caused by differences in stain preparation and image acquisition equipment. Color batch effects alter color-based features and reduce the performance of supervised color segmentation algorithms on images acquired separately. To identify an optimal normalization technique for histopathological color segmentation applications, five color normalization algorithms were compared in this study using 204 images from four image batches. Among the normalization methods, two global color normalization methods normalized colors from all stains simultaneously, and three stain color normalization methods normalized colors from individual stains extracted using color deconvolution. Stain color normalization methods performed significantly better than global color normalization methods in 11 of 12 cross-batch experiments (p<0.05). Specifically, the stain color normalization method using k-means clustering was found to be the best choice because of high stain segmentation accuracy and low computational complexity.
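The color deconvolution step that the stain-specific methods rely on can be sketched with Beer-Lambert optical densities: convert RGB intensities to optical density and unmix them with the pseudo-inverse of a stain matrix, yielding per-stain concentration channels that can then be normalized independently. The H&E stain vectors below are illustrative values, not calibrated measurements from the study:

```python
import numpy as np

def rgb_to_od(rgb, bg=255.0):
    """Beer-Lambert: optical density = -log10(intensity / background)."""
    return -np.log10(np.clip(rgb, 1, None) / bg)

def deconvolve(od, stain_matrix):
    """Recover per-pixel stain concentrations from OD vectors by
    unmixing with the pseudo-inverse of the stain matrix."""
    return od @ np.linalg.pinv(stain_matrix)

# Rows: unit OD vectors for hematoxylin and eosin (illustrative values).
M = np.array([[0.65, 0.70, 0.29],
              [0.07, 0.99, 0.11]])
M = M / np.linalg.norm(M, axis=1, keepdims=True)

# Synthesize a pixel with known stain concentrations, then recover them.
c_true = np.array([0.8, 0.3])
od = c_true @ M
c_est = deconvolve(od, M)
```

Because the synthetic OD vector lies exactly in the row space of `M`, the pseudo-inverse recovers the original concentrations; on real slides, residual error and stain-vector estimation (e.g., via k-means clustering in OD space, as in the best-performing method above) are where the normalization algorithms differ.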