This paper proposes a novel prosthetic hand control method that incorporates spatial information about target objects, obtained with an RGB-D sensor, into a myoelectric control procedure. The RGB-D sensor provides not only two-dimensional (2D) color information but also depth information as spatial cues about target objects, and this information is used to classify objects in terms of shape features. The shape features are then used to determine an appropriate grasp strategy/motion for control of a prosthetic hand. This paper uses a two-channel image format for classification, containing grayscale and depth information of objects, and the image data are classified with a deep convolutional neural network (DCNN). Compared with previous studies based only on 2D color images, the spatial information is expected to improve classification accuracy, and consequently to yield better grasping decisions and prosthetic control. In this study, a dataset of image pairs, consisting of grayscale images and their corresponding depth images, was created to validate the proposed method. This database includes images of simple three-dimensional (3D) solid objects from six categories: triangular prism, triangular pyramid, quadrangular prism, rectangular pyramid, cone, and cylinder. Image classification experiments were conducted with this database. The experimental results indicate that spatial information has strong potential for classifying the shape features of objects.
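The two-channel input described above can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing pipeline: the helper name, the channel-first layout, and the per-channel min-max normalization are assumptions made for the example.

```python
import numpy as np

def make_two_channel_input(gray, depth):
    """Stack a grayscale image and its depth map into one 2-channel array.

    gray, depth: 2-D arrays of identical height and width.
    Returns an array of shape (2, H, W), channel-first as is common
    for CNN inputs. Each channel is rescaled to [0, 1] so that the
    grayscale and depth ranges are comparable (an assumed choice;
    the paper does not specify its normalization).
    """
    gray = np.asarray(gray, dtype=np.float64)
    depth = np.asarray(depth, dtype=np.float64)
    if gray.shape != depth.shape:
        raise ValueError("grayscale and depth images must have the same shape")

    def rescale(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return np.stack([rescale(gray), rescale(depth)], axis=0)
```

A batch of such arrays, with shape (N, 2, H, W), could then be fed directly to a standard 2-D convolutional network whose first layer accepts two input channels.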
Blood samples are easily damaged in traditional bloodstain detection and identification. In complex scenes with interfering objects, bloodstain identification may be inaccurate, with low detection rates and false-positive results. To meet these challenges, we propose a bloodstain detection and identification method based on hyperspectral imaging and mixed convolutional neural networks, which enables fast, efficient, and non-destructive identification of bloodstains. In this study, we apply visible/near-infrared reflectance hyperspectral imaging in the 380-1000 nm spectral region to analyze the shape, structure, and biochemical characteristics of bloodstains. Hyperspectral images of bloodstains on different substrates and of six bloodstain analogs are experimentally obtained. The acquired spectral pixels are pre-processed by principal component analysis (PCA). For bloodstains and the different bloodstain analogs, regions of interest are selected from each substance to obtain pixels, which are then used in convolutional neural network (CNN) modeling. After the mixed CNN model is trained, pixels are selected from the hyperspectral images as a test set of bloodstains and bloodstain analogs. Finally, the bloodstain recognition ability of the mixed 2D-3D CNN model is evaluated by analyzing the kappa coefficient and classification accuracy. The experimental results show that the accuracy of the constructed CNN bloodstain identification model reaches 95.4%. Compared with other methods, the bloodstain identification method proposed in this study achieves higher efficiency and accuracy in complex scenes. The results of this study provide a reference for the future development of online bloodstain detection systems.
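The PCA pre-processing step mentioned above can be sketched as follows: each hyperspectral pixel is a reflectance spectrum (one value per band), and PCA projects these spectra onto a few leading components before CNN modeling. This is a generic illustration, assuming a pixels-by-bands matrix; the band count, component count, and function name are not from the paper.

```python
import numpy as np

def pca_reduce(pixels, n_components):
    """Project per-pixel spectra onto their leading principal components.

    pixels: (n_pixels, n_bands) array, one reflectance spectrum per row.
    Returns (n_pixels, n_components) component scores. Implemented via
    SVD of the mean-centered data, a standard way to compute PCA.
    """
    X = np.asarray(pixels, dtype=np.float64)
    X = X - X.mean(axis=0)               # center each spectral band
    # Rows of Vt are the principal axes, ordered by explained variance
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T       # scores on the leading axes
```

The reduced scores (rather than raw band values) would then form the input features for the mixed 2D-3D CNN, shrinking the spectral dimension while retaining most of the variance.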