For actively developing tissues, a computational platform capable of automatically registering, segmenting, and tracking cells is critical to obtaining high-throughput, quantitative measurements of a range of cell behaviors, and can lead to a better understanding of the underlying dynamics of morphogenesis. In this work, we present an automated landmark-based registration method to register images of the Arabidopsis thaliana shoot apical meristem obtained through confocal laser scanning microscopy. The proposed landmark-based registration method uses a local graph-based approach to automatically find corresponding landmark pairs. The registration algorithm, combined with an existing tracking method, is tested on multiple datasets and significantly improves the accuracy of cell lineages and division statistics.
Technologically advanced imaging techniques have allowed us to capture the internal structure of a tissue over time through serial optical images containing spatio-temporal slices of hundreds of tightly packed cells. Image registration of such live-imaging datasets of developing multicellular tissues is an essential component of any image analysis pipeline. In this paper, we present a fully automated 4D (X-Y-Z-T) registration method for live-imaging stacks that corrects both temporal and spatial misalignments. We present a novel landmark selection methodology designed for images in which the shape features of individual cells are of low quality and not highly distinguishable. The proposed registration method first finds the best image-slice correspondence between consecutive image stacks to account for vertical growth of the tissue and for discrepancies in the choice of the starting focal plane. It then uses a local graph-based approach to automatically find corresponding landmark pairs, and finally the estimated registration parameters are used to register the entire image stack. The proposed registration algorithm, combined with an existing tracking method, is tested on multiple image stacks of tightly packed cells of the Arabidopsis shoot apical meristem, and the results show that it significantly improves the accuracy of cell lineages and division statistics.
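The final step described above, estimating registration parameters from automatically matched landmark pairs, can be illustrated with a minimal sketch. The abstract does not specify the transform model, so this example assumes a 2D rigid (rotation + translation) fit per slice, solved in the least-squares sense with the standard Kabsch/Procrustes method; the function name `rigid_from_landmarks` and the synthetic landmarks are illustrative, not from the paper.

```python
import numpy as np

def rigid_from_landmarks(src, dst):
    """Estimate a 2D rigid transform (R, t) mapping landmark points `src`
    onto their correspondences `dst` in the least-squares sense.
    Both arrays have shape (n, 2); returns R (2x2) and t (2,)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    H = src_c.T @ dst_c                        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic check: recover a known rotation/translation from matched landmarks.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([3.0, -1.5])
src = np.random.default_rng(0).uniform(0, 100, size=(20, 2))
dst = src @ R_true.T + t_true                  # perfectly matched landmark pairs
R_est, t_est = rigid_from_landmarks(src, dst)
```

Once estimated on one slice pair, the same `(R, t)` would be applied to every slice of the stack, which is what makes a small set of reliable landmark correspondences sufficient to register the whole 4D dataset.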
Pattern formation in developmental fields involves precise spatial arrangement of different cell types in a dynamic landscape wherein cells exhibit a variety of behaviors, such as cell division, cell expansion, and cell migration [Reddy (Curr Opin Plant Biol 11:88-93, 2008) and Meyerowitz (Cell 88:299-308, 1997)]. Information is exchanged between multiple cell layers through cell-cell communication processes to regulate gene expression and cell behaviors in specifying distinct cell types. Therefore, a quantitative and dynamic understanding of the spatial and temporal organization of gene expression and cell behavioral patterns within multilayered and actively growing developmental fields is crucial to modeling the process of development. The quantification of spatiotemporal dynamics of cell behaviors requires computational tools in image analysis, statistical modeling, pattern recognition, machine learning, and dynamical system identification. Here, we give a brief account of recently developed methods for analyzing both local and global growth patterns in Arabidopsis shoot apical meristems. The computational toolkit can be used to gain new insights into causal relationships among cell growth, cell division, changes in gene expression patterns, and organ development by analyzing various mutants that affect these processes. This may allow us to develop function-space models that capture variations in several growth parameters both at the local/single-cell level and at the global/organ level. In the long run, this may enable clustering of molecular pathways that mediate distinct cell behaviors.
Sensor technology that captures information from a user's neck region can enable a range of new possibilities, including less intrusive mobile software interfaces. In this work, we investigate the feasibility of using a single inexpensive flex sensor mounted at the neck to capture information about head gestures, mouth movements, and the presence of audible speech. Different sensor sizes and positions on the neck are experimentally evaluated. With data collected from experiments on the finalized prototype, we achieve classification accuracies of 91% in differentiating common head gestures, 63% in differentiating mouth movements, and 83% in speech detection.
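The classification task described above can be sketched in miniature: simple statistical features of a flex-sensor window fed to a nearest-centroid classifier. The feature set, the two-gesture setup (nod vs. shake), and the synthetic signals below are all illustrative assumptions; the paper does not specify its features or classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(window):
    """Illustrative statistical features of one sensor window:
    mean level, variability, and peak-to-peak bend range."""
    return np.array([window.mean(), window.std(), window.max() - window.min()])

def synth(gesture, n=200):
    """Synthetic stand-in for a flex-sensor reading of a head gesture."""
    t = np.linspace(0, 2 * np.pi, n)
    if gesture == "nod":       # larger, slower bend of the sensor
        return 2.0 * np.sin(t) + 0.1 * rng.standard_normal(n)
    else:                      # "shake": smaller, faster oscillation
        return 0.5 * np.sin(4 * t) + 0.1 * rng.standard_normal(n)

# Train a nearest-centroid classifier: one mean feature vector per gesture.
centroids = {g: np.mean([extract_features(synth(g)) for _ in range(20)], axis=0)
             for g in ("nod", "shake")}

def classify(window):
    feats = extract_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(feats - centroids[g]))
```

A real pipeline would segment the continuous sensor stream into windows and likely use a richer feature set, but the structure, windowed features plus a lightweight classifier, is the standard shape of this kind of wearable-sensing system.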