With the rapid development of modern genotyping technology, it is becoming commonplace to genotype densely spaced genetic markers such as single nucleotide polymorphisms (SNPs) along the genome. This development has inspired a strong interest in using multiple markers located in the target region for the detection of association. We introduce a principal components (PCs) regression method for candidate gene association studies, where multiple SNPs from the candidate region tend to be correlated. In this approach, the total variance in the original genotype scores is decomposed into parts that correspond to uncorrelated PCs. The PCs with the largest variances are then used as regressors in a multiple regression. Simulation studies suggest that this approach can have higher power than some popular methods. An application to CHI3L2 gene expression data confirms a significant association, previously reported by others, between CHI3L2 gene expression levels and SNPs from this gene.
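A minimal sketch of the PC regression idea on toy genotype data (0/1/2 minor-allele counts); the simulated data, the 80% variance cutoff and the plain least-squares fit are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_snps = 200, 10
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # SNP scores
phenotype = genotypes[:, 0] * 0.5 + rng.normal(size=n_subjects)          # toy trait

# 1. Decompose the genotype variance into uncorrelated PCs.
X = genotypes - genotypes.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# 2. Keep the PCs with the largest variances (here: enough to explain 80%).
k = int(np.searchsorted(np.cumsum(explained), 0.80)) + 1
pcs = X @ Vt[:k].T

# 3. Regress the phenotype on the retained PCs (a joint F-test would follow).
design = np.column_stack([np.ones(n_subjects), pcs])
beta, *_ = np.linalg.lstsq(design, phenotype, rcond=None)
print("retained PCs:", k, "coefficients:", np.round(beta, 3))
```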
Figure 1. Synthetic virtual scenes generated by our method (bedrooms, living rooms, offices, bathrooms). Our model can generate a large variety of such scenes, as well as complete partial scenes, in under two seconds per scene. This performance is enabled by a pipeline of multiple deep convolutional generative models which analyze a top-down representation of the scene.

Abstract: We present a new, fast and flexible pipeline for indoor scene synthesis that is based on deep convolutional generative models. Our method operates on a top-down image-based representation and inserts objects iteratively into the scene by predicting their category, location, orientation and size with separate neural network modules. Our pipeline naturally supports automatic completion of partial scenes as well as synthesis of complete scenes. Our method is significantly faster than the previous image-based method and generates results that outperform state-of-the-art generative scene models in terms of faithfulness to training data and perceived visual quality.
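A minimal sketch of the iterative insertion loop described above; the module names, the "<stop>" convention and the dummy stand-ins are illustrative assumptions rather than the authors' exact networks or API.

```python
import numpy as np

def synthesize_scene(top_down_image, modules, max_objects=20):
    """Insert objects one at a time until the category module predicts '<stop>'."""
    scene = top_down_image.copy()                      # top-down image-based representation
    placed = []
    for _ in range(max_objects):
        category = modules["category"](scene)          # which object to add next
        if category == "<stop>":                       # model decides the scene is complete
            break
        location = modules["location"](scene, category)
        orientation = modules["orientation"](scene, category, location)
        size = modules["size"](scene, category, location, orientation)
        scene = modules["render"](scene, category, location, orientation, size)
        placed.append((category, location, orientation, size))
    return placed

# Dummy modules so the sketch runs end to end; the real ones would be CNNs.
dummy = {
    "category":    lambda s: "chair" if s.sum() < 5 else "<stop>",
    "location":    lambda s, c: (32, 32),
    "orientation": lambda s, c, l: 0.0,
    "size":        lambda s, c, l, o: (1.0, 1.0),
    "render":      lambda s, c, l, o, sz: s + 1.0,     # stand-in for rasterizing the object
}
print(synthesize_scene(np.zeros((64, 64)), dummy))
```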
This paper presents a novel garbage pickup robot which operates on grass. The robot is able to detect garbage accurately and autonomously by using a deep neural network for garbage recognition. In addition, based on ground segmentation with a deep neural network, a novel navigation strategy is proposed to guide the robot as it moves around. With the garbage recognition and automatic navigation functions, the robot can clean garbage on the ground in places like parks or schools efficiently and autonomously. Experimental results show that the garbage recognition accuracy can reach as high as 95%, and even without path planning, the navigation strategy can reach almost the same cleaning efficiency as traditional methods. Thus, the proposed robot can serve as a good assistant to relieve sanitation workers of physical labor on garbage-cleaning tasks.
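A hedged sketch of how garbage detection and ground segmentation could be combined into a steering rule; the class names, thresholds and steering logic are assumptions for illustration, not the paper's actual navigation strategy.

```python
import numpy as np

def choose_action(frame, detector, ground_segmenter):
    """Return a steering command ('forward', 'left', 'right') from one camera frame."""
    detections = detector(frame)                 # assumed format: [(label, confidence, (x, y)), ...]
    ground_mask = ground_segmenter(frame)        # assumed boolean HxW mask of drivable grass

    garbage = [d for d in detections if d[0] == "garbage" and d[1] > 0.5]
    if garbage:
        # Head toward the most confident piece of garbage if it lies on grass.
        _, _, (x, y) = max(garbage, key=lambda d: d[1])
        if ground_mask[y, x]:
            center = frame.shape[1] // 2
            if abs(x - center) < 40:
                return "forward"
            return "left" if x < center else "right"
    # Otherwise keep to the side with more drivable ground.
    h, w = ground_mask.shape
    return "left" if ground_mask[:, : w // 2].mean() > ground_mask[:, w // 2:].mean() else "right"

# Tiny demo with hard-coded stand-ins for the two networks.
frame = np.zeros((120, 160, 3))
demo_detector = lambda f: [("garbage", 0.9, (150, 60))]
demo_segmenter = lambda f: np.ones((120, 160), dtype=bool)
print(choose_action(frame, demo_detector, demo_segmenter))  # -> "right"
```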
In this paper, we propose a transformer-based architecture, called the two-stage transformer neural network (TSTNN), for end-to-end speech denoising in the time domain. The proposed model is composed of an encoder, a two-stage transformer module (TSTM), a masking module and a decoder. The encoder maps the input noisy speech into a feature representation. The TSTM exploits four stacked two-stage transformer blocks to efficiently extract local and global information from the encoder output stage by stage. The masking module creates a mask that is multiplied with the encoder output. Finally, the decoder uses the masked encoder features to reconstruct the enhanced speech. Experimental results on the benchmark dataset show that the TSTNN outperforms most state-of-the-art models in the time or frequency domain while having significantly lower model complexity.
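A minimal PyTorch-style sketch of the encoder → TSTM → mask → decoder flow described above; the layer sizes and the use of standard transformer encoder layers as stand-ins for the two-stage transformer blocks are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TSTNNSketch(nn.Module):
    def __init__(self, channels=64, n_blocks=4):
        super().__init__()
        # Encoder: waveform -> feature representation.
        self.encoder = nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4)
        # Stand-in for the four stacked two-stage transformer blocks (TSTM).
        self.tstm = nn.Sequential(*[
            nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
            for _ in range(n_blocks)
        ])
        # Masking module: produces a mask in [0, 1] per feature.
        self.mask = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())
        # Decoder: masked features -> enhanced waveform.
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, noisy):                        # noisy: (batch, 1, samples)
        feat = self.encoder(noisy)                   # (batch, C, frames)
        refined = self.tstm(feat.transpose(1, 2)).transpose(1, 2)
        masked = self.mask(refined) * feat           # mask multiplied with encoder output
        return self.decoder(masked)                  # reconstructed enhanced speech

print(TSTNNSketch()(torch.randn(2, 1, 16000)).shape)  # -> torch.Size([2, 1, 16000])
```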
White matter hyperintensity (WMH) is associated with various aging and neurodegenerative diseases. In this paper, we proposed and validated a fully automatic system which integrates classical image processing and a deep neural network for segmenting WMH from fluid-attenuated inversion recovery (FLAIR) and T1 magnetic resonance (MR) images. In this system, a novel skip-connection U-net (SC U-net) was proposed. In addition, an atlas-based method was introduced in the preprocessing stage to remove non-brain tissues (namely, skull stripping) and thus improve the segmentation accuracy. The effectiveness of the proposed system was validated on a dataset of 60 paired images using cross-scanner validation. Our experimental results revealed the effectiveness of the skull-stripping strategy. More importantly, compared to two existing state-of-the-art methods for segmenting WMH, including a U-net-like method and another deep learning method, the proposed SC U-net had faster convergence, lower loss and higher segmentation accuracy. Both quantitative and qualitative analyses (via visual examination) revealed the superior performance of our proposed SC U-net. The mean Dice score of the proposed SC U-net was 78.36%, much higher than those of the U-net-like method (74.99%) and the alternative deep learning method (74.80%). The software environment and model of the proposed system were made publicly accessible on Docker Hub.
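A short sketch of the Dice score used to report the segmentation accuracies above; the toy masks are illustrative.

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64)); pred[20:40, 20:40] = 1     # toy predicted WMH mask
truth = np.zeros((64, 64)); truth[25:45, 25:45] = 1   # toy reference WMH mask
print(round(dice_score(pred, truth), 4))              # -> 0.5625
```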