Background: With the development of DNA sequencing technology, large amounts of sequencing data have become available in recent years, providing unprecedented opportunities for advanced association studies between somatic point mutations and cancer types/subtypes, which may contribute to more accurate somatic point mutation based cancer classification (SMCC). In existing SMCC methods, however, high data sparsity, small sample sizes, and the use of simple linear classifiers are major obstacles to improving classification performance.

Results: To address these obstacles, we propose DeepGene, an advanced deep neural network (DNN) based classifier that consists of three steps. First, clustered gene filtering (CGF) concentrates the gene data by mutation occurrence frequency, filtering out the majority of irrelevant genes. Second, indexed sparsity reduction (ISR) converts the gene data into the indices of its non-zero elements, thereby significantly suppressing the impact of data sparsity. Finally, the data processed by CGF and ISR are fed into a DNN classifier, which extracts high-level features for accurate classification. Experimental results on our curated TCGA-DeepGene dataset, a reformulated subset of the TCGA dataset containing 12 selected cancer types, show that CGF, ISR and the DNN each contribute to improving overall classification performance. We further compare DeepGene with three widely adopted classifiers and demonstrate that DeepGene achieves at least a 24% improvement in testing accuracy.

Conclusions: Based on deep learning and somatic point mutation data, we devise DeepGene, an advanced cancer type classifier that addresses the obstacles in existing SMCC studies. Experiments indicate that DeepGene outperforms three widely adopted existing classifiers, which is mainly attributed to its deep learning module's ability to extract high-level features from combinatorial somatic point mutations.
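The ISR step, as described, replaces a long and mostly-zero gene-mutation vector with the indices of its non-zero entries. The sketch below illustrates that idea only; the function name, the sentinel padding value, and the fixed output length are illustrative assumptions, not the paper's implementation:

```python
def indexed_sparsity_reduction(mutation_vector, max_indices):
    """Convert a sparse binary mutation vector into a fixed-length
    list of the indices of its non-zero elements.

    Padding with a sentinel (-1) keeps the output a constant size, as
    a fixed-size neural network input layer would require.
    """
    indices = [i for i, v in enumerate(mutation_vector) if v != 0]
    indices = indices[:max_indices]              # truncate if too many
    indices += [-1] * (max_indices - len(indices))  # pad if too few
    return indices

# A 6-gene sample with mutations in genes 2 and 4 collapses from a
# sparse vector of length 6 to a dense index list of length 4.
sample = [0, 0, 1, 0, 1, 0]
print(indexed_sparsity_reduction(sample, 4))  # -> [2, 4, -1, -1]
```

The appeal of this representation is that the input size no longer scales with the total number of genes, only with the (much smaller) number of mutations per sample.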
The recent popularity of consumer-grade virtual reality devices, such as the Oculus Rift and the HTC Vive, has enabled household users to experience highly immersive virtual environments. We take advantage of the commercial availability of these devices to provide an immersive and novel virtual reality training approach, designed to teach individuals how to survive earthquakes in common indoor environments. Our approach uses virtual environments realistically populated with furniture objects for training. During a training session, a virtual earthquake is simulated. The user navigates within, and interacts with, the virtual environment to avoid getting hurt, while learning the observation and self-protection skills needed to survive an earthquake. We demonstrate our approach on common scene types such as offices, living rooms and dining rooms. To test its effectiveness, we conducted an evaluation in which users trained in several rooms of a given scene type and were then tested in a new room of the same type. Evaluation results show that our virtual reality training approach is effective: participants trained by our approach performed better, on average, than those trained by alternative approaches in their ability to avoid physical harm and to detect potentially dangerous objects.
The segmentation of skin lesions in dermoscopic images is a fundamental step in the automated computer-aided diagnosis of melanoma. Conventional segmentation methods, however, have difficulties when the lesion borders are indistinct and when the contrast between the lesion and the surrounding skin is low. They also perform poorly when the background is heterogeneous or the lesion touches the image boundaries, resulting in under- or over-segmentation of the skin lesion. We suggest that saliency detection using the reconstruction errors derived from a sparse representation model, coupled with a novel background detection scheme, can more accurately discriminate the lesion from surrounding regions. We further propose a Bayesian framework that better delineates the shape and boundaries of the lesion. We evaluated our approach on two public datasets comprising 1100 dermoscopic images and compared it with conventional and state-of-the-art unsupervised (i.e., no training required) lesion segmentation methods, as well as state-of-the-art unsupervised saliency detection methods. Our results show that our approach segments lesions more accurately and robustly than the other methods. We also discuss the general extension of our framework as a saliency optimization algorithm for lesion segmentation.
This study discusses the research efforts in developing virtual reality (VR) technology for different training applications. To begin with, we describe how VR training experiences are typically created and delivered using current software and hardware. We then discuss the challenges and solutions of applying VR training to different application domains, such as first responder training, medical training, military training, workforce training, and education. Furthermore, we discuss the common assessment tests and evaluation methods used to validate VR training effectiveness. We conclude the article by discussing possible future directions for leveraging VR technology advances to develop novel training experiences.
In recent saliency detection research, many graph-based algorithms have applied boundary priors as background queries, which may generate completely "reversed" saliency maps if the salient objects lie on the image boundaries. Moreover, these algorithms usually depend heavily on pre-processed superpixel segmentation, which may lead to notable degradation of image detail features. In this paper, a novel saliency detection method is proposed to overcome these issues. First, we propose a saliency reversion correction process, which locates and removes boundary-adjacent foreground superpixels, thereby increasing the accuracy and robustness of boundary prior-based saliency estimation. Second, we propose a regularized random walk ranking model, which propagates the prior saliency estimation to every pixel in the image by taking both region-level and pixel-level image features into account, thus producing pixel-detailed, superpixel-independent saliency maps. Experiments conducted on four well-recognized data sets indicate the superiority of our proposed method over 14 state-of-the-art methods and demonstrate its general extensibility as a saliency optimization algorithm. We further evaluate our method on a new data set of images containing salient objects adjacent to the image boundaries, on which our method performs better than the comparison methods.
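The reversion correction step described above amounts to removing likely-foreground superpixels from the boundary set before it is used as a background query. A toy sketch of that filtering idea, under assumed inputs (superpixel IDs and a coarse prior saliency score per superpixel; the function name and threshold are illustrative, not the paper's actual procedure):

```python
def filter_boundary_queries(boundary_ids, prior_saliency, threshold=0.5):
    """Drop boundary superpixels that a coarse prior already marks as
    likely foreground, so they do not poison the background query set
    used by boundary prior-based saliency estimation."""
    return [sp for sp in boundary_ids if prior_saliency[sp] < threshold]

# Superpixel 0 touches the boundary but looks salient (score 0.9),
# so only superpixels 1 and 2 remain as background queries.
prior = {0: 0.9, 1: 0.1, 2: 0.2}
print(filter_boundary_queries([0, 1, 2], prior))  # -> [1, 2]
```

Without a step like this, a salient object touching the image border would be treated as background, producing exactly the "reversed" saliency maps the abstract describes.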