Fatigue is a common human state characterized by reduced consciousness and alertness. Recognizing fatigue and sleepiness has therefore become indispensable in many alertness-dependent situations, such as driving on public roads, performing demanding workplace tasks, or monitoring intensive care unit patients. This study proposes a method based on novel multi-feature fusion that detects fatigue and sleepiness using traditional image processing and heart rate variability (HRV). The proposed method first extracts features with InceptionV3, a convolutional neural network (CNN); a long short-term memory (LSTM) network then makes a second decision from the features collected by InceptionV3, processing the video sequence so that recognition is coherent and precise over time and avoids static, single-frame distortions. The final decision is made by the blood volume pulse vector (PBV) method after the features are fused. Because fatigue recognition is most often employed to monitor driver fatigue, we verified the feasibility of our method by testing its ability to recognize driver fatigue. Following the experiments, we compared the individual steps of the proposed method with those of existing methods, selecting four other methods for the comparison tests and training all networks on the same videos. Compared with state-of-the-art methods, our method in its entirety achieved an average improvement of 5% in both accuracy and stability.
INDEX TERMS: fatigue driving, LSTM network, convolutional neural network, blood volume pulse vector, blood volume pulse, heart rate variability.
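As a minimal sketch of the sequence-level decision and fusion stages described above (not the authors' implementation: the window size, fusion weights, threshold, and function names are all assumptions), per-frame CNN fatigue scores can be smoothed over a sliding window, mimicking how a sequence model suppresses static single-frame distortions, and then fused with an HRV-based score:

```python
from collections import deque

def smooth_scores(frame_scores, window=5):
    """Average per-frame fatigue scores over a sliding window so that
    a single anomalous frame cannot dominate the sequence decision."""
    buf = deque(maxlen=window)
    smoothed = []
    for s in frame_scores:
        buf.append(s)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

def fuse_decision(video_score, hrv_score, w_video=0.6, w_hrv=0.4, threshold=0.5):
    """Weighted fusion of the video-based and HRV-based scores
    (the weights and threshold here are illustrative only)."""
    return w_video * video_score + w_hrv * hrv_score >= threshold

# A spurious single-frame spike (0.9) is damped by temporal smoothing,
# so the fused decision does not flag fatigue from one bad frame.
scores = smooth_scores([0.1, 0.1, 0.9, 0.1, 0.1])
fatigued = fuse_decision(max(scores), hrv_score=0.2)
```

An actual LSTM learns this temporal behavior rather than hard-coding it, but the averaging above captures why sequence models are more stable than per-frame classifiers.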
Wide field small aperture telescopes are workhorses for fast sky surveys, and transient discovery is one of their main tasks. Classifying candidate transient images as real sources or artifacts with high accuracy is an important step in transient discovery. In this paper, we propose two transient classification methods based on neural networks. The first uses a convolutional neural network without pooling layers to classify transient images with a low sampling rate. The second treats transient images as one-dimensional signals and is based on a recurrent neural network with long short-term memory and a leaky ReLU activation function in each detection layer. Testing on real observation data, we find that although both methods achieve more than 94% classification accuracy, they have different classification properties for different targets. Based on this result, we propose an ensemble learning method that further increases the classification accuracy to more than 97%.
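A hedged sketch of the ensemble step described above (the function name, weights, and threshold are assumptions, not the paper's implementation): because the two base classifiers have complementary strengths on different targets, their probabilities can be combined by soft voting before thresholding:

```python
def ensemble_predict(p_cnn, p_rnn, w_cnn=0.5, w_rnn=0.5, threshold=0.5):
    """Soft-voting ensemble: combine the CNN's and RNN's probabilities
    that a candidate image is a real source, then threshold the result."""
    p = w_cnn * p_cnn + w_rnn * p_rnn
    return p, p >= threshold

# The two base models disagree on this candidate; the ensemble
# sides with the more confident prediction.
p, is_real = ensemble_predict(0.92, 0.40)
```

Weighted averaging is only one ensembling choice; stacking a meta-classifier on the two outputs is another common option when the base models' error patterns differ by target type.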
Hospitals must invest considerable manpower in manually entering the contents of medical invoices (nearly 300,000,000 invoices a year) into their medical systems. To help hospitals save money and stabilize work efficiency, this paper presents a system that automates this work using a Gaussian blur and smoothing–convolutional neural network combined with a recurrent neural network (GBS-CR). Gaussian blur and smoothing (GBS) is a novel preprocessing method that repairs breakpoint (broken-stroke) fonts in medical invoices, and the combination of a convolutional neural network (CNN) and a recurrent neural network (RNN) raises the recognition rate for such fonts, with the RNN serving as the semantic revision module. When building our dataset, we added a certain proportion of breakpoint fonts (breakpoint to original fonts in a 3:7 proportion) to optimize the Alexnet–Adam–CNN (AA-CNN) model, which is better suited to recognizing breakpoint fonts than the traditional CNN model. For identification, we not only adopt the optimized AA-CNN but also use the RNN to semantically revise the CNN's results, further improving the recognition rate for medical invoices. Experimental results show that, compared with state-of-the-art invoice recognition methods, the proposed method improves the recognition rate by 10 to 15 percentage points on average.
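To illustrate why blurring can repair a breakpoint font, here is a toy one-dimensional sketch (the kernel, threshold, and function names are assumptions; the paper's GBS works on 2-D invoice images): convolving a broken stroke with a small Gaussian kernel spreads ink into the gap, and re-binarizing with a permissive threshold then closes it without dilating the background:

```python
def gaussian_blur_1d(row, kernel=(0.25, 0.5, 0.25)):
    """Convolve a binary stroke profile with a small Gaussian kernel
    (zero padding at the edges), spreading ink into nearby gaps."""
    k = len(kernel) // 2
    padded = [0.0] * k + list(row) + [0.0] * k
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(row))]

def fix_breakpoints(row, threshold=0.3):
    """Blur, then re-binarize with a permissive threshold: a one-pixel
    break inside a stroke gets filled, while pixels far from any
    stroke stay below threshold and remain background."""
    return [1 if v >= threshold else 0 for v in gaussian_blur_1d(row)]

broken_stroke = [1, 1, 1, 0, 1, 1, 1]   # a stroke with one breakpoint
repaired = fix_breakpoints(broken_stroke)
```

The threshold controls the trade-off: too low and neighboring characters merge, too high and the break survives, which is why the blur radius and threshold would be tuned to the invoice font.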
Coronary artery calcification affects the arteries that supply the heart with blood, and percutaneous coronary intervention (PCI) is a direct and effective procedure for alleviating this condition. In this paper, we propose a framework that judges whether a patient requires surgery based on cardiac computed tomography scans. We adopt a generative adversarial network to segment the calcified areas from slices; this architecture provides an environment in which the generator learns jointly from ground-truth images and the high-resolution discriminator. We test our method on images reconstructed with two types of filters, achieving F1 scores of 96.1% and 85.0% for the soft and sharp filters, respectively. In addition, we explored different recurrent neural networks for the final decision, ultimately using long short-term memory to handle the calcium score normalized by age and a score threshold. Using the soft reconstruction images as input, the whole framework achieved an accuracy of 76.6%. These results show that our method can precisely locate arterial lesions and make a reasonable risk assessment for PCI.
INDEX TERMS: generative adversarial network, low-dose cardiac CT, recurrent neural network, percutaneous coronary intervention, coronary calcium scoring.
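As a heavily simplified sketch of the scoring-and-decision stage described above (every constant below is illustrative, not clinical, and the function names are assumptions; the paper uses an LSTM for this step and the real Agatston score also weights by CT attenuation): the segmented mask yields a calcium burden, which is compared against a threshold that shifts with patient age:

```python
def calcium_score(mask, intensity_weight=1.0):
    """Toy calcium burden: count segmented calcified pixels.
    (A clinical Agatston score also weights each region by its
    peak CT attenuation; that weighting is omitted here.)"""
    return intensity_weight * sum(sum(row) for row in mask)

def needs_pci(score, age, base_threshold=100.0, per_year=1.0):
    """Hypothetical age-normalized decision rule: the score threshold
    grows linearly with age. Both constants are illustrative only."""
    return score >= base_threshold + per_year * age

mask = [[0, 1, 1],
        [1, 1, 0],
        [0, 0, 0]]                       # toy GAN segmentation output
score = calcium_score(mask, intensity_weight=40.0)
decision = needs_pci(score, age=50)      # fires under these toy constants
```

The point of the age normalization is that the same absolute score carries different risk at different ages, which is why a fixed threshold alone is insufficient.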
Wide field small aperture telescopes (WFSATs) are commonly used for fast sky surveys. Telescope arrays composed of several WFSATs can scan the sky several times per night, producing huge amounts of data that must be processed immediately. In this paper, we propose ARGUS (Astronomical taRGets detection framework for Unified telescopes) for real-time transient detection. ARGUS uses a deep-learning-based astronomical detection algorithm, implemented on embedded devices in each WFSAT, to detect astronomical targets. The position of each detection and the probability that it is an astronomical target are sent to a trained ensemble learning algorithm, which outputs information on celestial sources. After matching these sources against a star catalog, ARGUS directly outputs the types and positions of transient candidates. We test the performance of ARGUS with simulated data and find that it robustly improves the performance of WFSATs in transient detection tasks.
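The final catalog-matching step can be sketched as follows (a hedged toy version, not ARGUS itself: the match radius, the flat-plane distance approximation, and the function names are assumptions): detections with no catalog counterpart within the match radius are kept as transient candidates:

```python
import math

def cross_match(detections, catalog, radius=3.0 / 3600):
    """Flag detections with no catalog source within `radius` degrees
    as transient candidates. Uses a flat-plane distance approximation,
    which is adequate only for small fields away from the poles."""
    candidates = []
    for ra, dec, prob in detections:
        matched = any(math.hypot(ra - cra, dec - cdec) < radius
                      for cra, cdec in catalog)
        if not matched:
            candidates.append((ra, dec, prob))
    return candidates

catalog = [(10.000, 20.000), (10.010, 20.010)]        # known stars (deg)
detections = [(10.0001, 20.0001, 0.99),               # matches first star
              (10.500, 20.500, 0.95)]                 # no counterpart
candidates = cross_match(detections, catalog)
```

A production pipeline would use proper spherical matching (e.g., a k-d tree over unit vectors) rather than this linear scan, but the logic of "unmatched source implies transient candidate" is the same.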