Electronic commerce (e-commerce) is an increasingly popular trend in the modern economy, concomitant with the development of the Internet. E-commerce has developed considerably, making Vietnam one of the fastest-growing markets in the world. However, its growth rate has not matched its potential, raising the question of how online retailers could improve their practices and thus contribute to the sustainable development of emerging markets such as Vietnam. Therefore, with the goal of providing online retailers with methods to improve their online shopping services, this study examined the direct and indirect influence of the dimensions of online shopping convenience on repurchase intention through customer-perceived value. A survey of 230 Vietnamese customers was conducted to test the theoretical model, and a structural equation model was used for data analysis. The results identified five dimensions of online shopping convenience: access, search, evaluation, transaction, and possession/post-purchase convenience. All dimensions have a direct impact on perceived value and repurchase intention. The results also show the important role of perceived value as a factor that both directly influences repurchase intention and mediates the relationship between convenience and repurchase intention.
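The mediation structure described above (convenience → perceived value → repurchase intention) can be illustrated with a simple regression-based mediation analysis on synthetic data. This is an illustrative sketch only, not the study's actual structural equation model; all variable names, coefficients, and noise levels below are assumptions.

```python
import numpy as np

# Synthetic data mimicking the mediation structure:
# convenience -> perceived value (path a) -> repurchase intention (path b),
# plus a direct effect of convenience on intention (path c').
rng = np.random.default_rng(1)
n = 230  # sample size matching the survey
convenience = rng.normal(size=n)
value = 0.6 * convenience + rng.normal(scale=0.5, size=n)
intention = 0.4 * convenience + 0.5 * value + rng.normal(scale=0.5, size=n)

def ols(predictors, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([convenience], value)[1]                        # convenience -> value
c_prime, b = ols([convenience, value], intention)[1:]   # direct and value -> intention
indirect = a * b                                        # mediated (indirect) effect
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, indirect={indirect:.2f}")
```

A nonzero `indirect` product alongside a nonzero `c_prime` corresponds to the partial mediation pattern the abstract describes, where perceived value both carries part of the effect and coexists with a direct path.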
We present an evaluation of two well-known, low-cost consumer-grade EEG devices: the Emotiv EPOC and the Neurosky MindWave. Problems with using consumer-grade EEG devices (BCI illiteracy, poor technical characteristics, and adverse EEG artefacts) are discussed. An experimental evaluation of the devices, performed with 10 subjects asked to perform concentration/relaxation and blinking recognition tasks, is given. The results of statistical analysis show that both devices exhibit high variability and non-normality of attention and meditation data, which makes them difficult to use as an input for control tasks. BCI illiteracy may be a significant problem, as may setting up a proper experimental environment. The blinking recognition results show that the Neurosky device achieved a recognition accuracy of less than 50%, while the Emotiv device achieved a recognition accuracy of more than 75%; for tasks that require concentration and relaxation of subjects, the Emotiv EPOC device performed better (as measured by recognition accuracy) by ∼9%. Therefore, the Emotiv EPOC device may be more suitable than the Neurosky MindWave for control tasks using the attention/meditation level or eye blinking.
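As a rough illustration of how blink events can be counted from a 1-D signal, the sketch below detects amplitude threshold crossings with a refractory period. This is a deliberate simplification under assumed parameters; the devices' actual blink-detection algorithms are proprietary and not described here.

```python
def count_blinks(signal, threshold, refractory=5):
    """Count blink events as upward threshold crossings, ignoring any sample
    within `refractory` samples of the previous detected blink (so one broad
    artefact peak is not counted multiple times)."""
    blinks = 0
    last = -refractory  # allow a detection at the very first sample
    for i, value in enumerate(signal):
        if value > threshold and i - last >= refractory:
            blinks += 1
            last = i
    return blinks
```

In practice the threshold would be calibrated per subject, which is one reason the abstract's reported accuracies differ so much between devices.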
Virtual reality exposure therapy (VRET) can have a significant impact on the assessment and potential treatment of various anxiety disorders. One of the main strengths of VRET systems is that they allow a psychologist to interact with virtual 3D environments and change therapy scenarios according to the individual patient’s needs. However, to do this efficiently, the patient’s anxiety level should be tracked throughout the VRET session. Therefore, to make full use of the advantages provided by a VRET system, a mental stress detection system is needed. The patient’s physiological signals can be collected with wearable biofeedback sensors. Signals such as blood volume pulse (BVP), galvanic skin response (GSR), and skin temperature can be processed and used to train anxiety level classification models. In this paper, we combine VRET with mental stress detection and highlight potential uses of this kind of VRET system. We discuss and present a framework for anxiety level recognition, which is part of our cloud-based VRET system. Physiological signals of 30 participants were collected during VRET-based public speaking anxiety treatment sessions. The acquired data were used to train a four-level anxiety recognition model (where the levels ‘low’, ‘mild’, ‘moderate’, and ‘high’ refer to levels of anxiety rather than to separate classes of anxiety disorder). We achieved 80.1% cross-subject accuracy (using leave-one-subject-out cross-validation) and 86.3% accuracy (using 10 × 10-fold cross-validation) with a signal fusion-based support vector machine (SVM) classifier.
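Leave-one-subject-out cross-validation, as used for the cross-subject accuracy estimate above, holds out every sample from one subject per fold so the model is always tested on an unseen person. The sketch below is a minimal illustration with a nearest-centroid stand-in classifier and synthetic features, not the paper's actual SVM pipeline.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each test sample to the class with the closest training centroid."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def loso_accuracy(X, y, subjects):
    """Leave-one-subject-out CV: hold out all samples of one subject per fold,
    train on the rest, and average the per-fold accuracies."""
    accuracies = []
    for s in np.unique(subjects):
        test = subjects == s
        pred = nearest_centroid_predict(X[~test], y[~test], X[test])
        accuracies.append(float((pred == y[test]).mean()))
    return float(np.mean(accuracies))
```

Grouping folds by subject rather than by random sample is what makes the estimate "cross-subject": no subject's physiology appears in both training and test data.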
Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods capable of classifying multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel deep saliency segmentation method, which uses a custom convolutional neural network (CNN) of ten layers. The generated heat map is converted into a binary image using a thresholding function. The segmented color lesion images are then used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified using a kernel extreme learning machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance is evaluated on the HAM10000 dataset, achieving an accuracy of 90.67%. To demonstrate the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
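The heat-map-to-binary-mask step can be illustrated with a fixed-threshold function. This is a minimal sketch; the paper's actual thresholding function is not specified here, and the 0.5 cutoff is an assumption.

```python
import numpy as np

def binarize_heat_map(heat, threshold=0.5):
    """Convert a saliency heat map with values in [0, 1] into a binary lesion
    mask: pixels at or above the threshold become 1, the rest become 0."""
    return (np.asarray(heat, dtype=float) >= threshold).astype(np.uint8)
```

The resulting mask is what selects the lesion region from the color image before features are extracted.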
This study investigates the processing of voice signals for detecting Parkinson's disease, one of the neurological disorders that most commonly affect people worldwide. The approach evaluates eighteen feature extraction techniques and four machine learning methods to classify data obtained from sustained phonation and speech tasks. The phonation task involves voicing the vowel /a/, and the speech task involves pronouncing a short sentence in the Lithuanian language. The audio tasks were recorded using two microphone channels, an acoustic cardioid (AC) microphone and a smartphone (SP), allowing the performance of different types of microphones to be evaluated. Five metrics were employed to analyze classification performance: equal error rate (EER) and area under the curve (AUC) measures from detection error tradeoff (DET) and receiver operating characteristic (ROC) curves, accuracy, specificity, and sensitivity. We compare this approach with other approaches that use the same data set and show that the phonation task was more effective than the speech task in detecting the disease. The best performance for the AC channel achieved an accuracy of 94.55%, an AUC of 0.87, and an EER of 19.01%; for the SP channel, we achieved an accuracy of 92.94%, an AUC of 0.92, and an EER of 14.15%.
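The equal error rate reported above is the operating point on the DET curve where the false-acceptance rate equals the false-rejection rate. A minimal sketch of computing it from classifier scores follows; this is illustrative only, not the study's evaluation code.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: the error rate at the threshold where the false-acceptance rate
    (FAR, negatives scored at or above threshold) meets the false-rejection
    rate (FRR, positives scored below threshold)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.sort(np.unique(scores))
    far = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    frr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # closest crossing on the DET curve
    return (far[i] + frr[i]) / 2
```

Unlike accuracy, the EER is threshold-independent, which is why it is commonly reported alongside AUC when comparing detection systems.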
Using gestures can help people with certain disabilities communicate with other people. This paper proposes a lightweight model based on the YOLO (You Only Look Once) v3 and DarkNet-53 convolutional neural networks for gesture recognition without additional preprocessing, image filtering, or image enhancement. The proposed model achieved high accuracy even in a complex environment and successfully detected gestures even in low-resolution picture mode. The model was evaluated on a labeled dataset of hand gestures in both Pascal VOC and YOLO formats. By extracting features from the hand, the proposed YOLOv3-based model recognized hand gestures with an accuracy, precision, recall, and F1 score of 97.68%, 94.88%, 98.66%, and 96.70%, respectively. Furthermore, we compared our model with the Single Shot Detector (SSD) and Visual Geometry Group (VGG16) models, which achieved accuracies between 82% and 85%. The trained model can be used for real-time detection, both for static hand images and dynamic gestures recorded on video.
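Detection models such as the YOLOv3 variant above are typically scored by the intersection-over-union (IoU) between predicted and ground-truth boxes: a prediction counts as correct when its IoU with a ground-truth box exceeds a chosen cutoff. A minimal sketch (illustrative, not the paper's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes, each given as
    (x1, y1, x2, y2) with (x1, y1) the top-left and (x2, y2) the bottom-right."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Both the Pascal VOC and YOLO annotation formats mentioned above ultimately describe such boxes, differing only in coordinate convention.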
The Internet of Things (IoT) aims to extend the Internet to real-world objects, connecting smart and sensing devices into a global network infrastructure by linking physical and virtual objects. The IoT has the potential to increase the quality of life of inhabitants and users of intelligent ambient assisted living (AAL) environments. This paper reviews and discusses IoT technologies and their foreseen impacts and challenges for the AAL domain. The results of this review are summarized as an IoT-based gerontechnology acceptance model for the assisted living domain. The model focuses on the acceptance of new technologies by older people and underscores the need for the adoption of the IoT in the AAL domain.