Obstructive sleep apnea-hypopnea syndrome (OSAHS) is extremely harmful to the human body: it may cause neurological and endocrine dysfunction, damaging multiple organs and systems and negatively affecting the cardiovascular system, the kidneys, and mental health. Clinically, doctors usually rely on standard polysomnography (PSG) to assist diagnosis; PSG determines whether a person has apnea syndrome from multidimensional data such as brain waves, heart rate, and blood oxygen saturation. In this paper, we present a method of recognizing OSAHS that is convenient for patients to monitor themselves in daily life and avoid delayed treatment. Firstly, we theoretically analyze the differences between the snoring sounds of healthy people and OSAHS patients in the time and frequency domains. Secondly, snoring sounds related to apnea events and non-apnea-related snoring sounds are classified by deep learning, and the severity of OSAHS symptoms is then recognized. In the proposed algorithm, snoring features are extracted with three methods: MFCC, LPCC, and LPMFCC. We adopt CNN and LSTM models for classification. The experimental results show that the combination of MFCC features and the LSTM model achieves the highest accuracy, 87%, for binary classification of snoring data. Moreover, the algorithm estimates the patient's AHI value, from which the severity of OSAHS can be determined.
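The final step of this pipeline maps an AHI value to a severity grade. A minimal sketch of that mapping, using the commonly cited clinical AHI thresholds (below 5 normal, 5-15 mild, 15-30 moderate, 30 and above severe); the function names are illustrative, not taken from the paper:

```python
def apnea_hypopnea_index(event_count: int, sleep_hours: float) -> float:
    """AHI = number of apnea/hypopnea events per hour of sleep."""
    if sleep_hours <= 0:
        raise ValueError("sleep_hours must be positive")
    return event_count / sleep_hours

def osahs_severity(ahi: float) -> str:
    """Map an AHI value to the commonly used clinical severity grades."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

For example, 200 detected events over an 8-hour night gives an AHI of 25, which falls in the moderate range.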
Burns are a common traumatic injury with high morbidity and mortality. Their treatment requires accurate and reliable diagnosis of the burn wound and burn depth, which in some cases can save lives. However, because burn wounds are complex, early burn diagnosis lacks accuracy and reliability. We therefore use deep learning to automate and standardize burn diagnosis, reducing human error. First, a burn dataset with detailed burn-area segmentation and burn-depth labelling is created. Then, an end-to-end deep learning framework for burn-area segmentation and burn-depth diagnosis is proposed. The framework first segments the burn area in burn images; on this basis, the percentage of the burn area in the total body surface area (TBSA) is computed by extending the network's output structure and the labels of the burn dataset. The framework then segments multiple burn-depth areas. The network achieves its best result, an IOU of 0.8467, for segmenting burn versus non-burn areas; for segmentation of multiple burn-depth areas, the best average IOU is 0.5144.
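Both reported metrics rest on two simple pixel-level quantities: intersection-over-union between a predicted and a ground-truth mask, and the burn-to-body pixel ratio behind the TBSA percentage. A minimal sketch of both computations on binary masks (the function names and the empty-union convention are assumptions, not from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

def tbsa_percentage(burn_mask: np.ndarray, body_mask: np.ndarray) -> float:
    """Percentage of body-surface pixels covered by the burn mask."""
    body = body_mask.astype(bool)
    burn = np.logical_and(burn_mask.astype(bool), body)
    return 100.0 * burn.sum() / max(body.sum(), 1)
```

Note that image-plane pixel ratios only approximate true TBSA, since they ignore body-surface curvature; the paper's extended labels presumably account for this mapping.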
Diabetic retinopathy (DR) is one of the most common complications of diabetes and a leading cause of blindness. Disease progression can be prevented by early diagnosis of DR, but because of the uneven distribution of medical resources and the low efficiency of manual screening, the optimal window for diagnosis and treatment is often missed, resulting in impaired vision. Using neural network models to classify and diagnose DR can improve efficiency and reduce costs. In this work, an improved loss function and three hybrid model structures, Hybrid-a, Hybrid-f, and Hybrid-c, are proposed to improve the performance of DR classification models. EfficientNetB4, EfficientNetB5, NASNetLarge, Xception, and InceptionResNetV2 CNNs were chosen as the base models and trained with the enhanced cross-entropy loss and the standard cross-entropy loss, respectively. The outputs of the base models were used to train the hybrid model structures. Experiments showed that the enhanced cross-entropy loss effectively accelerates training of the base models and improves their performance under various evaluation metrics. The proposed hybrid model structures also improve DR classification performance: compared with the best-performing base model, accuracy improved from 85.44% to 86.34%, sensitivity from 98.48% to 98.77%, specificity from 71.82% to 74.76%, precision from 90.27% to 91.37%, and the F1 score from 93.62% to 93.9%.
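The abstract does not spell out how Hybrid-a, Hybrid-f, and Hybrid-c combine the base-model outputs. The simplest baseline for combining several classifiers' outputs is soft voting, i.e. averaging their per-class probabilities; the sketch below illustrates that baseline only and is not the paper's hybrid architecture:

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-class probabilities from several base models
    (each array has shape [n_samples, n_classes]) and return the
    predicted class index for each sample."""
    stacked = np.stack(prob_list)      # [n_models, n_samples, n_classes]
    mean_probs = stacked.mean(axis=0)  # [n_samples, n_classes]
    return mean_probs.argmax(axis=1)
```

Trainable hybrid structures like those in the paper go one step further: instead of a fixed average, a small model learns how to weight or combine the base-model outputs.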
Thyroid nodules are among the most common lesions of the thyroid, and their incidence is at its highest level in the past thirty years. X-ray computed tomography (CT) plays an increasingly important role in the diagnosis of thyroid diseases. Nonetheless, because of artifacts and the high complexity of thyroid CT images, traditional machine learning methods are difficult to apply to CT image processing. In this paper, an end-to-end CNN-based system for automatic recognition and classification of thyroid nodules is designed. An improved Eff-Unet segmentation network segments thyroid nodules as regions of interest (ROIs); an image processing algorithm then refines the ROI region and separates the individual nodules. A classification network, CNN-F, which fuses low-level and high-level features, is proposed to classify nodules as benign or malignant. With these modules connected in series, each nodule can be classified automatically. Experimental results demonstrate that the proposed end-to-end system performs excellently in diagnosing thyroid diseases: on the test set, the segmentation IOU reaches 0.855 and the classification accuracy reaches 85.92%.
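The abstract names low-level/high-level feature fusion as the core idea of CNN-F without detailing the architecture. One common realization is to pool feature maps from a shallow and a deep layer and concatenate the resulting vectors before the classifier head; the sketch below shows only that generic pattern, with hypothetical names, not the paper's actual network:

```python
import numpy as np

def fuse_features(low_level: np.ndarray, high_level: np.ndarray) -> np.ndarray:
    """Global-average-pool two feature maps of shape (C, H, W) and
    concatenate the pooled vectors, a common way to fuse low- and
    high-level CNN features before a classifier head."""
    low_vec = low_level.mean(axis=(1, 2))    # (C_low,)
    high_vec = high_level.mean(axis=(1, 2))  # (C_high,)
    return np.concatenate([low_vec, high_vec])
```

The intuition is that shallow layers keep fine texture and edge detail (useful for nodule margins) while deep layers encode semantics; concatenation lets the classifier see both.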
Accurate identification of radar operating modes is an important prerequisite for threat-level assessment and jamming decisions, but overlap of pulse description word (PDW) parameters between different radar working modes seriously degrades recognition accuracy. A discrete process neural network (DPNN) trained by particle swarm optimization (PSO) is proposed to recognize radar working modes. Firstly, a radar syntactic modeling method is proposed to extract radar phrases as descriptions of operating-mode characteristics. Then, an appropriate DPNN structure is built and trained via PSO. Finally, the working modes of unknown radar phrases are recognized by the trained DPNN. Unlike traditional machine learning methods based on a single sampling of radar signals, this method performs recognition by accumulating the radar pulse sequence, making full use of the temporal variation of radar signals. Simulation results show that, compared with traditional machine learning methods such as LSSVM and BPNN, the working-mode recognition rate of the proposed method increases significantly under conditions of severe parameter overlap.
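PSO, the training method named above, is a gradient-free optimizer: each particle remembers its personal best position while being pulled toward the swarm's global best. A minimal, self-contained sketch of plain PSO minimizing a test function (the DPNN objective and hyperparameters are not given in the abstract, so the values below are illustrative defaults):

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: velocities blend inertia (w),
    attraction to each particle's personal best (c1), and attraction to
    the swarm's global best (c2). Returns (best_position, best_value)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

To train a network this way, `objective` would evaluate the DPNN's classification loss with the particle's position interpreted as the flattened weight vector, avoiding the gradient computation that the discrete process-neuron structure complicates.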