Inspection of rice seeds is a crucial task for plant nurseries and farmers, since it ensures seed quality when growing seedlings. Conventionally, this process is performed by expert inspectors who manually screen large samples of rice seeds to identify their species and assess the cleanness of the batch. In the quest to automate the screening process through machine vision, a variety of approaches utilise appearance-based features extracted from RGB images, while others utilise the spectral information acquired using Hyperspectral Imaging (HSI) systems. Most of the literature on this topic benchmarks the performance of new discrimination models using only a small number of species. Hence, it is unclear whether observed differences in model performance confirm the effectiveness of the proposed algorithms and features, or whether they can simply be attributed to the inter-class/intra-class variations of the dataset itself. In this paper, a novel method to automatically screen and classify rice seed samples is proposed, using a combination of spatial and spectral features extracted from high-resolution RGB and hyperspectral images. The proposed system is evaluated on a large dataset of 8,640 rice seeds sampled from 90 different species. The dataset is made publicly available to facilitate robust comparison and benchmarking of existing and newly proposed techniques going forward. Experimental results on this large dataset show that the algorithm effectively eliminates impure species by combining spatial features extracted from high-spatial-resolution images with spectral features from hyperspectral data cubes.

INDEX TERMS Hyperspectral imaging, rice seed variety, spatio-temporal feature fusion.

JINCHANG REN (Senior Member, IEEE) received the B.E. degree in computer software, the M.Eng. degree in image processing, and the D.Eng. degree in computer vision from Northwestern Polytechnical University, Xi'an, China, and the Ph.D. degree in electronic imaging and media communication from the
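The abstract above describes fusing spatial descriptors from RGB images with spectral descriptors from hyperspectral data cubes. As an illustration only (not the authors' actual pipeline), a minimal sketch of such a fusion might concatenate simple shape features from a segmented seed mask with the seed's mean spectrum; the functions and the toy 8-band cube below are hypothetical:

```python
import numpy as np

def spatial_features(mask):
    # mask: boolean segmentation of one seed in the RGB image.
    # Two simple morphological descriptors: area and aspect ratio.
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return np.array([area, width / height])

def spectral_features(cube, mask):
    # cube: (H, W, B) hyperspectral data cube; mean spectrum over the seed pixels.
    return cube[mask].mean(axis=0)

def fuse(mask, cube):
    # Concatenate spatial and spectral descriptors into one feature vector,
    # which could then feed any off-the-shelf classifier.
    return np.concatenate([spatial_features(mask), spectral_features(cube, mask)])

# Toy example: a 10x10 scene containing a 4x6 "seed" and an 8-band cube.
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 2:8] = True
cube = np.random.default_rng(0).random((10, 10, 8))
vec = fuse(mask, cube)
print(vec.shape)  # (10,) = 2 spatial + 8 spectral features
```

The fused vector simply places both modalities in a common feature space; in practice the spatial and spectral parts would be normalised before classification.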
This research was funded by the Vietnam Ministry of Science and Technology under grant number DTDLCN-16/18, ''Automated Respiration Symptoms Monitoring and Abnormal Human Activity Detection Using the Internet of Things''.

ABSTRACT Recently, the advancement of deep learning, with its capacity for automatic high-level feature extraction, has achieved promising performance for sensor-based human activity recognition (HAR). Among deep learning methods, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) have been widely adopted. However, the scalar outputs and pooling operations of a CNN capture only invariance, not equivariance. Capsule networks (CapsNet), with vector outputs and routing by agreement, are able to capture equivariance. In this paper, we propose a method for recognizing human activity from wearable sensors based on a capsule network named SensCapsNet. The architecture of SensCapsNet is designed to suit the spatial-temporal data coming from wearable sensors. Experimental results show that the proposed network outperforms CNN and LSTM methods. The performance of the proposed CapsNet architecture is assessed by altering the dynamic routing between capsule layers. With one routing iteration, SensCapsNet yields improved F1-scores of 77.7% and 70.5% on two testing datasets, compared with the baseline CNN and LSTM methods, which yield F1-scores of 67.7% and 69.2% on the first dataset and 65.3% and 67.6% on the second dataset, respectively. Moreover, even though several human activity datasets are available, privacy invasion and obtrusiveness concerns have not been carefully taken into consideration in their construction. Toward building a non-obtrusive sensing-based human activity recognition method, in this paper a dataset named 19NonSens is designed and collected from twelve subjects wearing e-Shoes and a smartwatch while performing 19 activities under multiple contexts.
This dataset will be made publicly available. Finally, thanks to the promising results obtained by the proposed method, we develop a life-logging application that achieves real-time computation and an accuracy greater than 80% for 5 common upper-body activities.

INDEX TERMS Human activity recognition, capsule network, wearable sensors.
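The abstract above contrasts CapsNet's vector outputs and routing by agreement with CNN pooling. For illustration only (a generic sketch of the standard dynamic routing idea, not the SensCapsNet architecture itself), the routing step between two capsule layers can be written in a few lines of numpy; the shapes and iteration count below are hypothetical:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule non-linearity: preserves vector orientation while
    # shrinking the norm into [0, 1), so norm can encode probability.
    n2 = np.sum(s * s, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iterations=1):
    # u_hat: (num_in, num_out, dim) prediction vectors from lower capsules.
    num_in, num_out, dim = u_hat.shape
    b = np.zeros((num_in, num_out))                            # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum
        v = squash(s)                                          # output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)                 # agreement update
    return v

# Toy example: 6 lower capsules routed to 3 output capsules of dimension 4.
v = dynamic_routing(np.random.default_rng(1).random((6, 3, 4)), iterations=1)
print(v.shape)  # (3, 4)
```

Increasing `iterations` strengthens the couplings for which lower-level predictions agree with the output capsule, which is the mechanism the abstract varies when assessing SensCapsNet.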