Interaction with a computer has been a center of innovation ever since the advent of input devices. From simple punch cards to keyboards, a number of novel modes of interaction with computers have shaped the user experience. Communicating through gestures is perhaps one of the most natural forms of interaction. Gesture recognition as a tool for interpreting signs constitutes a pivotal area of research, where the accuracy of the algorithm and its ease of use determine the effectiveness of the system. Introducing gesture-based interaction in virtual reality applications has not only helped solve problems commonly reported in traditional virtual reality systems, but also gives the user a more natural and enriching experience. This paper compares different such systems, identifying their similarities, differences, advantages and drawbacks, which can play a key role in designing a system using these technologies.
Detecting divergence between oncogenic tumors plays a pivotal role in cancer diagnosis and therapy. This work focused on designing a computational strategy to predict the class of lung cancer tumors from the structural and physicochemical properties (1497 attributes) of protein sequences obtained from genes identified by microarray analysis. The proposed methodology used hybrid feature selection techniques (gain ratio and correlation-based subset evaluators with Incremental Feature Selection) followed by Bayesian Network prediction to discriminate lung cancer tumors into Small Cell Lung Cancer (SCLC), Non-Small Cell Lung Cancer (NSCLC) and COMMON classes. This methodology also eliminated the need for extensive data cleansing of the protein properties and revealed the optimal, minimal set of features contributing to lung cancer tumor classification, with improved accuracy over previous work. We also attempted to predict, via supervised clustering, the possible clusters in the lung tumor data; our results revealed that supervised clustering algorithms performed poorly in differentiating the lung tumor classes. Hybrid feature selection identified the distributions of solvent accessibility, polarizability and hydrophobicity as the highest-ranked features, with Incremental Feature Selection and Bayesian Network prediction yielding an optimal jack-knife cross-validation accuracy of 87.6%. Precise categorization of the oncogenic genes causing SCLC and NSCLC based on the structural and physicochemical properties of their protein sequences is expected to unravel the functionality of proteins essential to maintaining the genomic integrity of a cell, and to serve as an informative source for drug design targeting the essential protein properties and composition found in lung cancer tumors.
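The ranking-then-incremental-selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: mutual information stands in for the gain-ratio ranker, GaussianNB stands in for the Bayesian Network classifier (scikit-learn has no Bayesian Network), and the data are random placeholders for the 1497 protein-property attributes.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))    # toy stand-in: 40 samples, 20 features
y = rng.integers(0, 3, size=40)  # 3 classes: SCLC / NSCLC / COMMON

# 1. Rank all features by relevance to the class (gain-ratio stand-in).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

# 2. Incremental Feature Selection: grow the feature set in rank order and
#    keep the prefix with the best jack-knife (leave-one-out) accuracy.
best_k, best_acc = 1, 0.0
for k in range(1, len(ranking) + 1):
    acc = cross_val_score(GaussianNB(), X[:, ranking[:k]], y,
                          cv=LeaveOneOut()).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"optimal prefix size: {best_k}, jack-knife accuracy: {best_acc:.3f}")
```

On real data the reported 87.6% corresponds to the accuracy of the best such prefix; with random placeholder data the number printed here is meaningless and serves only to show the mechanics.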
Software defect prediction using classification algorithms has been advocated by many researchers. Moreover, a classifier ensemble can effectively improve classification performance compared to a single classifier. Research on defect prediction using classifier ensemble methods is motivated by the fact that such methods have not been fully exploited, and software defects lead to the failure of many defense systems. A comparative study of various classification methods was performed to classify software defects. The methods include Random Tree, Random Forest, Bayesian Network, Naive Bayes, K-Nearest Neighbour and Instance Based Classifier. Random Forest was found to give more accurate predictions than the other classifiers. To enhance classification accuracy, a new algorithm, "Improved Random Forest", is proposed. It incorporates a feature selection algorithm into Random Forest: Correlation-based Feature Subset Selection (CFS) selects the optimal subset of features, which is then fed to the Random Forest classifier to improve accuracy in software defect prediction. Six optimal features were selected for the PC1 dataset; these features, chosen by CFS, are used by Random Forest to improve on the accuracy of the existing Random Forest. The experiments were carried out on the public NASA datasets of the PROMISE repository.

Keywords: Software Defect Prediction, Feature Selection, Classification, Classifier Evaluation.

INTRODUCTION

Data mining is the task of investigating data from various perspectives and organizing it into relevant and meaningful information [1]. There are numerous data mining tasks, such as classification, regression, association and clustering, used in software quality analysis. This paper uses a feature selection and classification approach for the prediction of defective software [2].
Feature selection is the method of choosing a subset of important and relevant features for building reliable learning models. It makes training and using a classifier more efficient by reducing the size of the effective training set. Moreover, feature selection often increases classification accuracy by removing noisy features. A classification approach divides data samples into target classes; for example, a software module can be categorized as "defective" or "non-defective". A defect in a software module arises from a source code error that produces wrong output and leads to poor-quality software products. Defective software modules are also responsible for high development and maintenance costs and customer dissatisfaction. The NASA Space Network, also referred to as the Tracking and Data Relay Satellite System, consists of nine on-orbit telecommunications satellites stationed in geosynchronous positions. In this paper, we apply classification algorithms to publicly available datasets from the NASA PROMISE repository in order to classify software modules as defective or non-defective. The da...
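The CFS-then-Random-Forest pipeline described above can be sketched as follows. This is a hedged illustration, not the paper's code: the greedy search below uses the standard CFS merit heuristic (average feature-class correlation over average feature-feature correlation), and the data are random toy placeholders rather than the PC1 software metrics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def cfs_greedy(X, y, max_feats=6):
    """Greedy forward search maximising the CFS merit heuristic."""
    n = X.shape[1]
    # |correlation| of each feature with the class, and between features.
    rcf = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n)])
    rff = np.abs(np.corrcoef(X, rowvar=False))
    chosen = []
    while len(chosen) < max_feats:
        best_j, best_merit = None, -1.0
        for j in range(n):
            if j in chosen:
                continue
            S = chosen + [j]
            k = len(S)
            avg_cf = rcf[S].mean()
            avg_ff = (rff[np.ix_(S, S)][np.triu_indices(k, 1)].mean()
                      if k > 1 else 0.0)
            # CFS merit: reward class correlation, penalise redundancy.
            merit = k * avg_cf / np.sqrt(k + k * (k - 1) * avg_ff)
            if merit > best_merit:
                best_j, best_merit = j, merit
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 12))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # toy defective / non-defective label

feats = cfs_greedy(X, y, max_feats=6)    # mirrors the 6 features kept for PC1
acc = cross_val_score(RandomForestClassifier(random_state=0),
                      X[:, feats], y, cv=5).mean()
print(f"selected features: {feats}, CV accuracy: {acc:.3f}")
```

In the paper the selected subset size (six for PC1) falls out of the CFS search itself; here `max_feats` is fixed only to keep the sketch short.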
Web usage mining is a technique for identifying user needs from web logs. Discovering hidden patterns in such logs is an emerging research area. Association rules play an important role in many web mining applications for detecting interesting patterns; however, they generate an enormous number of rules, forcing researchers to spend considerable time and expertise discovering the really interesting ones. This paper works on the server logs of the MSNBC dataset for the month of September 1999. The research aims at predicting the probable subsequent page among the web pages listed in this data, based on users' navigation behaviour, using the Apriori prefix tree (PT) algorithm. The generated rules were ranked by the support, confidence and lift evaluation measures. The final predictions revealed that the interestingness of pages depended mainly on the support and lift measures, whereas confidence assumed a uniform value across all pages: the system guaranteed 100% confidence at a support of 1.3E−05. The pages Front page, On-air, News, Sports and BBS attracted more interested subsequent users than Travel, MSN-News and MSN-Sports, which were of less interest.
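The three measures used to rank the rules can be illustrated on a toy log. The sessions and page names below are invented placeholders, not the MSNBC data, and the exhaustive pairwise loop stands in for the Apriori prefix-tree search, which prunes rather than enumerates.

```python
from itertools import combinations

# Toy click sessions: each set is the pages one visitor viewed.
sessions = [
    {"frontpage", "news", "sports"},
    {"frontpage", "news"},
    {"frontpage", "on-air"},
    {"news", "sports"},
    {"frontpage", "news", "on-air"},
]
n = len(sessions)

def support(items):
    """Fraction of sessions containing every page in `items`."""
    return sum(items <= s for s in sessions) / n

# Mine pairwise rules A -> B and compute support, confidence and lift.
rules = []
pages = {p for s in sessions for p in s}
for a, b in combinations(pages, 2):
    for ante, cons in ((a, b), (b, a)):
        supp = support({ante, cons})
        if supp == 0:
            continue
        conf = supp / support({ante})   # P(cons | ante)
        lift = conf / support({cons})   # >1 means positive association
        rules.append((ante, cons, supp, conf, lift))

for ante, cons, supp, conf, lift in sorted(rules, key=lambda r: -r[4])[:3]:
    print(f"{ante} -> {cons}: support={supp:.2f} "
          f"confidence={conf:.2f} lift={lift:.2f}")
```

The uniform-confidence observation in the abstract is why lift matters: confidence alone rewards popular consequent pages, while lift corrects for the consequent's baseline frequency.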