Transfer learning, or inductive transfer, refers to the transfer of knowledge from a source task to a target task. In the context of convolutional neural networks (CNNs), transfer learning can be implemented by transplanting the learned feature layers from one CNN (derived from the source task) to initialize another (for the target task). Previous research has shown that the choice of the source CNN impacts the performance of the target task. In the current literature, there is no principled way of selecting a source CNN for a given target task, despite the increasing availability of pre-trained source CNNs. In this paper, we investigate the possibility of automatically ranking source CNNs prior to utilizing them for a target task. In particular, we present an information theoretic framework to understand the source-target relationship and use this as a basis to derive an approach to automatically rank source CNNs in an efficient, zero-shot manner. The practical utility of the approach is thoroughly evaluated using the PlacesMIT dataset, the MNIST dataset, and a real-world MRI database. Experimental results demonstrate the efficacy of the proposed ranking method for transfer learning.
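The zero-shot ranking idea can be sketched with a simple proxy. The snippet below is illustrative only, not the paper's actual information-theoretic criterion: it assumes each pre-trained source model has been run on target samples to produce softmax outputs, and ranks sources by ascending mean prediction entropy (the model names and the entropy proxy are hypothetical).

```python
import numpy as np

def prediction_entropy(probs):
    """Mean Shannon entropy (in nats) of a batch of softmax outputs."""
    eps = 1e-12  # guard against log(0)
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

def rank_sources(source_probs):
    """Rank candidate source models by ascending mean entropy on target data.

    source_probs: dict mapping a source-model name to an (N, C) array of
    softmax outputs that the model produces on N target samples.
    Returns the names sorted from most to least promising under this proxy.
    """
    scores = {name: prediction_entropy(p) for name, p in source_probs.items()}
    return sorted(scores, key=scores.get)

# Toy example: a source that is confident on target data vs. a near-uniform one.
confident = np.array([[0.9, 0.05, 0.05]] * 4)
uniform = np.full((4, 3), 1.0 / 3.0)
ranking = rank_sources({"srcA": confident, "srcB": uniform})
```

A lower-entropy source is treated as better matched to the target distribution under this sketch; any other transferability score could be substituted for `prediction_entropy` without changing the ranking interface.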
Purpose MRI-based cell tracking has emerged as a useful tool for identifying the location of transplanted cells, and even their migration. Magnetically labeled cells appear as dark contrast in T2*-weighted MRI, with sensitivity at the level of individual cells. One key hurdle to the widespread use of MRI-based cell tracking is the inability to determine the number of transplanted cells based on this contrast feature. In the case of single cell detection, manual enumeration of spots in 3D MRI is in principle possible; however, it is a tedious and time-consuming task that is prone to subjectivity and inaccuracy on a large scale. This research presents the first comprehensive study on how a computer-based, intelligent, automatic, and accurate cell-quantification approach can be designed for spot detection in MRI scans. Methods Magnetically labeled mesenchymal stem cells (MSCs) were transplanted into rats using an intracardiac injection, accomplishing single cell seeding in the brain. T2*-weighted MRI of these rat brains was performed, where labeled MSCs appeared as spots. Using machine learning and computer vision paradigms, approaches were designed to systematically explore the possibility of automatic detection of these spots in MRI. Experiments were validated against known in vitro scenarios. Results Using the proposed deep convolutional neural network (CNN) architecture, an in vivo accuracy of up to 97.3% and an in vitro accuracy of up to 99.8% were achieved for automated spot detection in MRI data. Conclusion The proposed approach for automatic quantification of MRI-based cell tracking will facilitate the use of MRI in large-scale cell therapy studies.
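Before any classifier can score spots, candidates must be located in the volume. The sketch below is a minimal, hypothetical stand-in for that candidate-generation step (not the paper's CNN): since labeled cells appear as dark, hypointense spots in T2*-weighted MRI, it flags voxels that are both below an intensity threshold and the local minimum of a small 3D window.

```python
import numpy as np

def detect_dark_spots(volume, threshold, patch=3):
    """Flag voxels that are local intensity minima below `threshold`.

    volume: 3D numpy array of MRI intensities.
    A voxel is reported when it is darker than `threshold` and is the
    minimum of the surrounding patch x patch x patch window.
    """
    d, h, w = volume.shape
    r = patch // 2
    spots = []
    for z in range(r, d - r):
        for y in range(r, h - r):
            for x in range(r, w - r):
                v = volume[z, y, x]
                if v >= threshold:
                    continue  # not dark enough to be a candidate
                win = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                if v == win.min():
                    spots.append((z, y, x))
    return spots
```

In a full pipeline, patches around each candidate would then be fed to a trained classifier (the paper's CNN) to separate true cell spots from dark artifacts such as vessels.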
Researchers in the areas of regenerative medicine and tissue engineering have great interest in understanding how different sets of culturing conditions and applied mechanical stimuli relate to the behavior of mesenchymal stem cells (MSCs). However, it is challenging to design a tool to perform automatic cell image analysis due to the diverse morphologies of MSCs. Therefore, as a primary step towards developing the tool, we propose a novel approach for accurate cell image segmentation. We collected three MSC datasets cultured on different surfaces and exposed to diverse mechanical stimuli. By analyzing existing approaches on our data, we chose to substantially extend the binarization-based extraction of alignment score (BEAS) approach by extracting novel discriminating features and developing an adaptive threshold estimation model. Experimental results on our data show that our approach is superior to seven conventional techniques. We also define three quantitative measures to analyze the characteristics of images in our datasets. To the best of our knowledge, this is the first study that applied automatic segmentation to live MSCs cultured on different surfaces with applied stimuli.
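The binarization step at the core of such a segmentation pipeline needs a per-image threshold. As a hedged illustration only (the paper's adaptive threshold estimation model is learned from its data; the classic Otsu criterion below is a simplified stand-in), a data-driven threshold can be chosen by maximizing between-class variance of the intensity histogram:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Pick the intensity threshold maximizing between-class variance.

    A classic histogram-based criterion, used here as a simplified
    stand-in for an adaptive, per-image threshold estimator.
    """
    hist, edges = np.histogram(image, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0  # class means
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```

Binarizing the cell image against this threshold yields the foreground mask from which alignment and shape features can then be extracted.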
Due to recent advances in cell-based therapies, non-invasive monitoring of in vivo cells in MRI is gaining enormous interest. However, to date, the monitoring and analysis process is conducted manually and is extremely tedious, especially in the clinical arena. Therefore, this paper proposes a novel computer vision-based learning approach that creates superpixel-based 3D models for candidate spots in MRI, extracts a novel set of superfern features, and utilizes a partition-based Bayesian classifier ensemble to distinguish spots from non-spots. Unlike traditional ferns that utilize pixel-based differences, superferns exploit superpixel averages when computing difference-based features, despite the absence of any order in the superpixel arrangement. To evaluate the proposed approach, we develop the first labeled database with a total of more than 16,000 labels on five in vivo and four in vitro MRI scans. Experimental results show the superiority of our approach in comparison to the two most relevant baselines. To the best of our knowledge, this is the first study to utilize a learning-based methodology for in vivo cell detection in MRI.
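The superfern idea can be sketched in a few lines. Everything below is illustrative (the pair selection, shapes, and helper names are assumptions, not the paper's implementation): where a classic fern compares intensities of pixel pairs, a superfern compares the mean intensities of fixed pairs of superpixels, and the resulting bit vector indexes a fern leaf for a naive-Bayes-style ensemble.

```python
import numpy as np

def superfern_features(superpixel_means, pairs):
    """Binary difference features over superpixel average intensities.

    superpixel_means: 1D array, mean intensity per superpixel.
    pairs: fixed list of (i, j) superpixel index pairs defining one fern.
    Each feature is 1 when superpixel i is brighter than superpixel j.
    """
    return np.array([int(superpixel_means[i] > superpixel_means[j])
                     for i, j in pairs])

def fern_index(bits):
    """Pack the binary feature vector into an integer leaf index,
    usable as a lookup key into per-fern class-likelihood tables."""
    return int(sum(int(b) << k for k, b in enumerate(bits)))

# Toy example: three superpixels, one fern of three comparisons.
means = np.array([0.2, 0.8, 0.5])
bits = superfern_features(means, [(1, 0), (2, 1), (2, 0)])
```

Each fern in the ensemble would hold its own pair list and likelihood table; combining the per-fern posteriors naively-Bayes style yields the spot/non-spot decision.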
Cercospora leaf spot (CLS) is the most serious disease in sugar beet plants and significantly reduces the sugar yield throughout the world. Therefore, the current focus of researchers in the agricultural domain is to find sugar beet cultivars that are highly resistant to CLS. To measure their resistance, CLS is manually observed and rated across a large variety of sugar beet cultivars by different human experts over a period of a few months. Unfortunately, this procedure is laborious and subjective. Therefore, we propose a novel computer vision system, CLS Rater, to automatically and accurately rate CLS of plant images in the real field on the "USDA scale" of 0 to 10. Given a set of plant images captured by a tractor-mounted camera, CLS Rater extracts multiscale superpixels, where at each scale a novel histogram-of-importances feature representation is proposed to encode both the within-superpixel local and across-superpixel global appearance variations. These features at different superpixel scales are then fused for learning a bagging M5P regressor that estimates the rating for each plant image. We test our system on field data collected over a period of two months under different day lighting and weather conditions. Experimental results show CLS Rater to be highly consistent, with a rating error of 0.65, which is more consistent than the human experts' rating standard deviation of 1.31.
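The feature construction can be sketched as follows; the per-superpixel "importance" scores and function names here are hypothetical placeholders, not the paper's actual definitions. Per-superpixel scores are binned into a normalized histogram (capturing global appearance variation across superpixels), and histograms from several superpixel scales are concatenated before regression.

```python
import numpy as np

def histogram_of_importances(importances, bins=8, rng=(0.0, 1.0)):
    """Encode per-superpixel importance scores as a normalized histogram.

    importances: 1D array of scores in [0, 1], one per superpixel
    (e.g. a lesion-likeness score; illustrative only).
    """
    hist, _ = np.histogram(importances, bins=bins, range=rng)
    return hist / max(hist.sum(), 1)  # normalize; guard empty input

def fuse_scales(per_scale_histograms):
    """Concatenate the histograms from several superpixel scales into a
    single feature vector for a downstream regressor (a bagged M5P model
    in the paper; any regressor fits this interface)."""
    return np.concatenate(per_scale_histograms)
```

The fused vector would then be paired with the expert USDA-scale rating to train the regressor, one training example per plant image.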