Abstract - This paper presents a new deterministic Q-learning algorithm with presumed knowledge of the distance from the current state to both the next state and the goal. This knowledge is used to update each entry in the Q-table only once, by exploiting four derived properties of Q-learning, instead of updating entries repeatedly as in classical Q-learning. Consequently, the proposed algorithm has a much smaller time complexity than its classical counterpart. Further, the proposed algorithm stores the Q-value only for the best possible action at each state, and thus saves significant storage. Experiments undertaken on simulated mazes and real platforms confirm that the Q-table obtained by the proposed algorithm, when used for path planning of mobile robots, outperforms both classical and extended Q-learning with respect to three metrics: traversal time, number of states traversed, and number of 90° turns required. Reducing the number of 90° turns lowers energy consumption, and is therefore of importance in the robotics literature.
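The classical update that the abstract contrasts against can be sketched as follows; the learning rate, discount factor, and maze size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Classical Q-learning: each (state, action) entry is revised repeatedly
    # over many episodes -- the repeated-update cost that the proposed
    # deterministic variant avoids by writing each entry once.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

Q = np.zeros((16, 4))                     # e.g. a 4x4 maze, 4 moves per state
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
```

The proposed algorithm instead uses the known state-to-goal distances to fill each entry once, and keeps only the best action's Q-value per state rather than the full row.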
Brain-computer interfaces (BCIs) have immense potential to improve human lifestyles, including those of people with disabilities. BCIs have possible applications in next-generation human-computer and human-robot interfaces, and in prosthetic/assistive devices for rehabilitation. The dataset used in this study was obtained from the BCI Competition II (2003) databank provided by the Graz University of Technology. After pre-processing of the signals from the C3 and C4 electrodes, the wavelet coefficients, the power spectral density of the alpha and central beta bands, and the average power of the respective bands were employed as features for classification. This paper presents a comparative study of different classification methods, including linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), the k-nearest neighbor (KNN) algorithm, linear support vector machine (SVM), radial basis function (RBF) SVM, and naive Bayes classifiers, in classifying the EEG data into the associated left/right hand movements. Classification performance is studied using both the original features and a reduced feature set obtained with principal component analysis (PCA). It is observed that the RBF-kernel SVM achieves the highest accuracy of 82.14% with both the original and reduced feature sets. Experimental results further indicate that all the other classifiers provide better accuracy on the reduced feature set than on the original one. It is also noted that the KNN classifier improves its classification accuracy by 5% when the reduced features are used instead of the original.
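The PCA-based feature reduction mentioned above can be sketched with a minimal NumPy implementation; the feature matrix and component count below are placeholder assumptions, not the paper's EEG data:

```python
import numpy as np

def pca_reduce(X, n_components):
    # Mean-centre the trials-by-features matrix, then project it onto the
    # top principal directions found by SVD of the centred data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))      # placeholder: 20 trials x 6 EEG features
Z = pca_reduce(X, 3)              # reduced feature set fed to the classifiers
```

In the study, such reduced features are what allowed most classifiers (notably KNN) to improve over the full feature set.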
In this paper, we discuss the development of a cost-effective, wireless, wearable vibrotactile haptic device for stiffness perception during interaction with virtual objects. Our experimental setup consists of a haptic device with five vibrotactile actuators and a virtual reality environment built in Unity 3D, integrating the Oculus Rift head-mounted display (HMD) and the Leap Motion controller. The virtual environment captures touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup, stimulating the fingertips with ERM vibrotactile actuators. The amplitude and frequency of vibration are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study was conducted to compare stiffness discrimination on a virtual linear spring in three sensory modalities: visual-only feedback, tactile-only feedback, and their combination. The two-alternative forced choice (2AFC) approach, a common psychophysics method, was used for quantitative analysis based on the just noticeable difference (JND) and Weber fractions (WFs). According to the psychometric results, the average Weber fraction of 0.39 for visual-only feedback improved to 0.25 when tactile feedback was added.
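The proportional modulation described above can be sketched as a simple linear mapping; the force range, amplitude bounds, and frequency band below are illustrative assumptions, not the device's calibrated values:

```python
def modulate(force, force_max=10.0, amp_range=(0.0, 1.0), freq_range=(50.0, 250.0)):
    # Clamp the rendered interaction force to [0, force_max], then scale the
    # ERM drive amplitude and vibration frequency linearly with it, so a
    # stiffer (higher-force) contact feels like a stronger, faster vibration.
    x = max(0.0, min(force / force_max, 1.0))
    amp = amp_range[0] + x * (amp_range[1] - amp_range[0])
    freq = freq_range[0] + x * (freq_range[1] - freq_range[0])
    return amp, freq

amp, freq = modulate(5.0)   # mid-range force -> mid-range drive
```

In the actual setup this mapping would be evaluated at the 500 Hz force-rendering rate for each of the five fingertip actuators.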
Facial expressions of a person representing the same emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion show wide variations. In the presence of two or more facial features, their joint variation makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, and it is addressed here in two steps using type-2 fuzzy sets. First, a type-2 fuzzy face space is constructed from background knowledge of the facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined from the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) model involves primary membership functions for m facial features obtained from n subjects, each having l instances of facial expression for a given emotion. The GT2FS model, in addition to employing the primary membership functions mentioned above, also involves secondary memberships for each primary membership curve, obtained here by formulating and solving an optimization problem. The optimization problem attempts to minimize the difference between two decoded signals: the first is the type-1 defuzzification of the average primary membership function obtained from the n subjects, while the second is the type-2 defuzzified signal for a given primary membership function with the secondary memberships as unknowns. The uncertainty management policy adopted using GT2FS results in a classification accuracy of 98.333%, compared to 91.667% obtained by its interval type-2 counterpart.
A small improvement (approximately 2.5%) in the classification accuracy of the IT2FS model is attained by pre-processing the measurements using the well-known interval approach.
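An interval type-2 footprint of uncertainty of the kind the IT2FS model relies on can be sketched as follows; fitting one Gaussian primary membership per instance is an assumption for illustration, not necessarily the paper's exact construction:

```python
import numpy as np

def it2_footprint(instances, x):
    # Fit one type-1 Gaussian membership curve per instance of the emotion,
    # then take the pointwise min/max across curves as the lower/upper
    # membership bounds (the footprint of uncertainty).
    curves = []
    for inst in instances:                    # inst: measurements of one feature
        m, s = np.mean(inst), np.std(inst) + 1e-9
        curves.append(np.exp(-0.5 * ((x - m) / s) ** 2))
    curves = np.array(curves)
    return curves.min(axis=0), curves.max(axis=0)

x = np.linspace(0.0, 1.0, 101)                # normalised feature axis
instances = [[0.40, 0.45, 0.50],              # l = 3 instances (placeholder data)
             [0.42, 0.48, 0.55],
             [0.38, 0.44, 0.52]]
lower, upper = it2_footprint(instances, x)
```

The GT2FS model described above would additionally assign a secondary membership over the interval [lower(x), upper(x)] at each point, obtained by the optimization described in the abstract.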