In response to colonization by rhizobia bacteria, legumes form nitrogen-fixing nodules in their roots, allowing the plants to grow efficiently in nitrogen-depleted environments. Legumes regulate nodulation through a complex, long-distance signaling pathway that involves signals in both roots and shoots. We measured the transcriptional response to rhizobia treatment in both the shoots and roots of Medicago truncatula over a 72-h time course. To detect temporal shifts in gene expression, we developed GeneShift, a novel computational statistics and machine learning workflow that addresses the replicate-averaging problem in time-series analysis when detecting gene expression pattern shifts between conditions. We identified both known and novel genes that are dynamically regulated in both tissues during early nodulation, including leginsulin, defensins, root transporters, nodulin-related, and circadian-clock genes. We validated over 70% of the expression patterns that GeneShift discovered using an independent M. truncatula RNA-Seq study. GeneShift facilitated the discovery of condition-specific, temporally differentially expressed genes in the symbiotic nodulation biological system. In principle, GeneShift should work for time-series gene expression profiling studies from other systems.
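The replicate-averaging problem mentioned above can be illustrated with a minimal sketch (this is not GeneShift itself, whose workflow is not detailed here): classify each replicate's time course into a pattern rather than averaging replicates first, then call a shift when the dominant pattern differs between conditions. The function names, slope threshold, and expression values below are all invented for illustration.

```python
# Toy sketch of replicate-aware pattern-shift detection (not GeneShift).
# Pattern = sign of the least-squares slope of each replicate's time course;
# a gene "shifts" if the majority pattern differs between conditions.
import numpy as np

def trajectory_pattern(expr, times, threshold=0.02):
    """Classify one replicate's time course as 'up', 'down', or 'flat'
    by the sign of a least-squares slope (threshold is arbitrary here)."""
    slope = np.polyfit(times, expr, 1)[0]
    if abs(slope) < threshold:
        return "flat"
    return "up" if slope > 0 else "down"

def pattern_shift(cond_a, cond_b, times):
    """cond_a, cond_b: arrays of shape (replicates, timepoints).
    Returns (pattern_a, pattern_b, shifted?) using the majority pattern
    across each condition's replicates."""
    def majority(reps):
        pats = [trajectory_pattern(r, times) for r in reps]
        return max(set(pats), key=pats.count)
    pa, pb = majority(cond_a), majority(cond_b)
    return pa, pb, pa != pb

times = np.array([0, 12, 24, 48, 72])           # hours post-inoculation
control = np.array([[1.0, 1.1, 1.0, 0.9, 1.0],  # three replicates, ~flat
                    [1.2, 1.0, 1.1, 1.0, 1.1],
                    [0.9, 1.0, 1.0, 1.1, 1.0]])
treated = np.array([[1.0, 1.8, 2.5, 3.9, 5.0],  # three replicates, rising
                    [1.1, 2.0, 2.8, 4.1, 5.3],
                    [0.9, 1.7, 2.4, 3.8, 4.9]])
print(pattern_shift(control, treated, times))   # ('flat', 'up', True)
```

Averaging the three control replicates with the three treated replicates before fitting would blur exactly this distinction, which is the failure mode replicate-aware approaches avoid.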
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder characterized by challenges in social communication as well as repetitive or restrictive behaviors. Many genetic associations with ASD have been identified, but most occur in only a fraction of the ASD population. Here, we searched for eQTL-associated DNA variants with significantly different allele distributions between ASD-affected individuals and controls. We identified thirty significant DNA variants associated with 174 tissue-specific eQTLs in ASD individuals from the SPARK project. Several significant variants fell within brain-specific regulatory regions or had been associated with a significant change in gene expression in the brain. These eQTLs are a new class of biomarkers that could underlie the myriad of brain and non-brain phenotypic traits seen in ASD-affected individuals.
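The core comparison described above, testing whether a variant's allele distribution differs between cohorts, can be sketched as a chi-square test on a 2x2 allele-count table. The counts below are invented for illustration and are not from the SPARK project; the study's actual statistical procedure may differ.

```python
# Hedged sketch: does a variant's allele distribution differ between
# ASD-affected and control cohorts? Pearson chi-square on a 2x2 table
# [[a, b], [c, d]] of reference vs. alternate allele counts per cohort.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 dof, no continuity correction)
    and its p-value; p for 1 dof equals erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical allele counts: rows = cohort, columns = ref/alt allele.
chi2, p = chi2_2x2(820, 380,   # ASD-affected: ref, alt
                   900, 300)   # control: ref, alt
print(f"chi2={chi2:.2f}, p={p:.2e}")
```

In practice, a genome-wide scan like this would also require multiple-testing correction (e.g. Bonferroni or FDR) before calling any variant significant.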
<div class="section abstract"><div class="htmlview paragraph">Image segmentation has historically been a technique for analyzing terrain for military autonomous vehicles. One weakness of image segmentation from camera data is that it lacks depth information and can be affected by environmental lighting. Light detection and ranging (LiDAR) is an emerging technology in image segmentation that can estimate distances to the objects it detects. One advantage of LiDAR is its ability to gather accurate distances regardless of day, night, shadows, or glare. This study examines the fusion of LiDAR and camera image segmentation to improve an advanced driver-assistance systems (ADAS) algorithm for off-road autonomous military vehicles. The volume of points generated by LiDAR provides the vehicle with distance and spatial data about its surroundings. Processing these point clouds with semantic segmentation is computationally intensive, requiring fusion of camera and LiDAR data so that the neural network can process depth and image data simultaneously. We create fused RGB-Depth images by projecting LiDAR points onto the camera images. A neural network is trained to segment the fused data from RELLIS-3D, a multi-modal data set for off-road robotics that contains both LiDAR point clouds and corresponding RGB images for training. The labels from the data set are grouped as objects, traversable terrain, non-traversable terrain, and sky to balance underrepresented classes. A modified version of DeepLabv3+ with a ResNet-18 backbone achieves an overall accuracy of 93.989 percent.</div></div>
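The projection step that turns LiDAR points into a depth channel aligned with the camera image can be sketched as a standard pinhole projection. The intrinsic matrix and points below are invented for illustration; RELLIS-3D ships its own calibration, and the paper's exact projection pipeline may differ.

```python
# Hedged sketch of LiDAR-to-camera projection for building an RGB-Depth
# image: each 3-D point (already in the camera frame) is projected through
# pinhole intrinsics, and its range fills the matching pixel of a depth map.
import numpy as np

def project_to_depth(points_cam, K, height, width):
    """points_cam: (N, 3) LiDAR points in the camera frame.
    K: 3x3 intrinsic matrix. Returns an (H, W) depth image in meters,
    keeping the nearest point when several land on the same pixel."""
    depth = np.full((height, width), np.inf)
    z = points_cam[:, 2]
    valid = z > 0                       # keep points in front of the camera
    uvw = K @ points_cam[valid].T       # homogeneous pixel coordinates
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # nearest point wins
    return depth

K = np.array([[500.0, 0.0, 64.0],   # toy intrinsics: fx, fy, cx, cy
              [0.0, 500.0, 48.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0],   # straight ahead, 10 m
                [1.0, 0.5, 20.0]])  # offset point, 20 m
depth = project_to_depth(pts, K, height=96, width=128)
print(depth[48, 64])  # 10.0
```

Concatenating this depth map with the three RGB channels yields the four-channel RGB-Depth input described in the abstract.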
<div class="section abstract"><div class="htmlview paragraph">Semantic segmentation is an integral component of many autonomous vehicle systems, used for tasks like path identification and scene understanding. Autonomous vehicles must make decisions quickly enough to react to their surroundings; therefore, they must be able to segment the environment at high speed. There has been a fair amount of research on semantic segmentation, but most of it focuses on achieving higher accuracy, measured by mean intersection over union (mIoU), rather than higher inference speed. Moreover, most semantic segmentation models are trained and evaluated on urban scenes rather than off-road environments. As a result, little is known about semantic segmentation models for off-road unmanned ground vehicles. In this research, SwiftNet, a semantic segmentation deep learning model designed for high inference speed and accuracy on large images, was implemented and evaluated for the inference speed of semantic segmentation in off-road environments. SwiftNet was pre-trained on the ImageNet dataset and then trained on 70% of the labeled images from the Rellis-3D dataset, an extensive off-road dataset designed for semantic segmentation that contains 6234 labeled 1920x1200 images. SwiftNet was evaluated on the remaining 30% of the Rellis-3D images and achieved an average inference speed of 24 frames per second (FPS) and an mIoU score of 73.8% on a Titan RTX GPU.</div></div>
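The mIoU metric used in both abstracts above has a compact definition: per-class IoU is true positives over the union of predicted and ground-truth pixels for that class, averaged over classes. A minimal sketch (the tiny label maps are invented for illustration):

```python
# Hedged sketch of the mean intersection over union (mIoU) metric used to
# evaluate semantic segmentation: average per-class IoU over the classes
# that appear in either the prediction or the ground truth.
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Per-class IoU = |pred==c AND truth==c| / |pred==c OR truth==c|;
    classes absent from both maps are skipped to avoid 0/0."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 2, 2]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],   # one pixel of class 0 mislabeled as 1
                  [2, 2, 2, 2]])
print(round(mean_iou(pred, truth, num_classes=3), 3))  # 0.85
```

Because IoU penalizes false positives and false negatives equally per class, a single mislabeled pixel drags down the scores of both affected classes, which is why mIoU is stricter than overall pixel accuracy.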
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.