Images and videos captured by optical devices are usually degraded by turbid media such as haze, smoke, fog, rain, and snow. Haze is the most common problem in outdoor scenes because of atmospheric conditions. This paper proposes a novel single-image dehazing framework to remove haze artifacts from images, built on two novel image priors: the pixel-based dark channel prior and the pixel-based bright channel prior. Based on these two priors and the haze optical model, we estimate the atmospheric light via haze density analysis. We then estimate the transmission map and refine it with a bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity than the state-of-the-art approach based on the patch-based dark channel prior.
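The pixel-based priors and the transmission estimate described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the interpretation of "pixel-based" as a per-pixel channel minimum/maximum (no spatial patch) is an assumption from the abstract, and `omega` is the usual retained-haze parameter from the classic dark-channel formulation.

```python
import numpy as np

def pixel_dark_channel(img):
    # img: H x W x 3 float array in [0, 1].
    # Pixel-based dark channel: per-pixel minimum over the RGB channels
    # (no patch minimum, unlike the classic patch-based prior).
    return img.min(axis=2)

def pixel_bright_channel(img):
    # Pixel-based bright channel: per-pixel maximum over the RGB channels.
    return img.max(axis=2)

def transmission(img, A, omega=0.95):
    # Haze optical model I = J * t + A * (1 - t), which gives the
    # standard estimate t ≈ 1 - omega * dark(I / A).
    # A: atmospheric light, one value per color channel.
    return 1.0 - omega * pixel_dark_channel(img / A)
```

A bilateral filter (e.g. from OpenCV or scikit-image) would then be applied to the raw transmission map to suppress blocking while preserving depth edges.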
Entropy images, which represent the complexity of the original fundus photographs, may strengthen the contrast between diabetic retinopathy (DR) lesions and unaffected areas. The aim of this study is to compare the detection performance for severe DR between original fundus photographs and entropy images using deep learning. A sample of 21,123 interpretable fundus photographs obtained from a publicly available data set was expanded to 33,000 images by rotation and flipping. All photographs were transformed into entropy images using a block size of 9 and downsized to a standard resolution of 100 × 100 pixels. The stages of DR are classified into five grades based on the International Clinical Diabetic Retinopathy Disease Severity Scale: Grade 0 (no DR), Grade 1 (mild nonproliferative DR), Grade 2 (moderate nonproliferative DR), Grade 3 (severe nonproliferative DR), and Grade 4 (proliferative DR). Of these 33,000 photographs, 30,000 were randomly selected as the training set, and the remaining 3,000 were used as the testing set. Both the original fundus photographs and the entropy images were used as inputs to a convolutional neural network (CNN), and the results of detecting referable DR (Grades 2–4) from the two data sets were compared. The detection accuracy, sensitivity, and specificity for the original fundus photograph data set were 81.80%, 68.36%, and 89.87%, respectively; for the entropy image data set, the figures significantly increased to 86.10%, 73.24%, and 93.81%, respectively (all p values <0.001). The entropy image quantifies the amount of information in the fundus photograph and efficiently accelerates the generation of feature maps in the CNN. The results support the conclusion that entropy transformation of fundus photographs can increase the detection accuracy, sensitivity, and specificity of referable DR for a deep learning-based system.
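The "entropy image" transform with block size 9 can be sketched as a local Shannon-entropy filter over a 9 × 9 neighborhood of each pixel. This is a hypothetical, deliberately unoptimized sketch (the abstract does not specify padding or the exact windowing); libraries such as scikit-image provide an equivalent rank filter.

```python
import numpy as np

def local_entropy(gray, block=9):
    # gray: 2-D uint8 image. Returns the Shannon entropy (in bits) of the
    # intensity histogram inside the block x block window around each pixel.
    pad = block // 2
    padded = np.pad(gray, pad, mode="reflect")  # reflect padding is an assumption
    h, w = gray.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + block, j:j + block].ravel()
            counts = np.bincount(win, minlength=256)
            p = counts[counts > 0] / win.size
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

Homogeneous regions yield entropy near zero while lesion boundaries yield high entropy, which is the contrast-strengthening effect the abstract describes.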
Three-dimensional printing is a versatile technique for generating large quantities of polymer structures in a wide variety of shapes and sizes. The aim of this study is to develop functionalized 3D-printed poly(lactic acid) (PLA) scaffolds, using a mussel-inspired surface coating and Xu Duan (XD) immobilization to regulate adhesion, proliferation, and differentiation of human bone-marrow mesenchymal stem cells (hBMSCs). We prepared PLA scaffolds and coated them with polydopamine (PDA). The chemical composition and surface properties of PLA/PDA/XD were characterized by XPS. PLA/PDA/XD controlled hBMSC responses in several ways. First, adhesion and proliferation of hBMSCs cultured on PLA/PDA/XD were significantly enhanced relative to those on PLA. In addition, focal adhesion kinase (FAK) expression was increased, and the promotion of cell attachment depended on the XD content. In the osteogenesis assay, the osteogenesis markers of hBMSCs cultured on PLA/PDA/XD were significantly higher than those of cells cultured on pure PLA/PDA scaffolds. Moreover, hBMSCs cultured on PLA/PDA/XD showed up-regulation of the ang-1 and vWF proteins associated with angiogenic differentiation. Our results demonstrate that the bio-inspired coating of the synthetic PLA polymer can be used as a simple technique to render the surfaces of synthetic scaffolds active, thus enabling them to direct specific hBMSC responses.
Non-uniform blind deblurring of dynamic scenes has long been a challenging problem in image processing because of the diversity of blurring sources. Traditional methods based on energy minimization cannot estimate the blur kernel accurately, so some high-frequency details cannot be fully recovered. Recently, many methods based on convolutional neural networks (CNNs) have been proposed to improve overall performance. Following this trend, we first propose a two-stage deblurring module that restores blurred images of dynamic scenes based on high-frequency residual image learning. The first stage performs initial deblurring with a blur kernel estimated from the salient structure. The second stage computes the difference between the input image and the initially deblurred image, referred to as the residual image, and adopts an encoder-decoder network to refine it. Finally, we combine the refined residual image with the input blurred image to obtain the latent image. To further increase deblurring performance, we propose a coarse-to-fine framework based on this deblurring module: it applies the module repeatedly in a multi-scale manner, gradually restoring sharp edge details at different scales. Experiments conducted on three benchmark datasets demonstrate that the proposed method achieves performance competitive with state-of-the-art methods.
INDEX TERMS: Image deblurring, dynamic blur, non-uniform blind deblurring, deep learning.
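The data flow of the two-stage module can be sketched as below. This is a schematic only: the two callables stand in for the kernel-based initial deblurring and the encoder-decoder refinement network, and the final subtraction is one plausible reading of "combine the refined residual image with the input blurred image".

```python
import numpy as np

def two_stage_deblur(blurred, initial_deblur, refine):
    # Stage 1: coarse restoration using the estimated blur kernel
    # (initial_deblur is a placeholder for that step).
    coarse = initial_deblur(blurred)
    # Residual image: difference between the blurred input and the
    # coarse estimate, carrying the high-frequency detail to recover.
    residual = blurred - coarse
    # Stage 2: refine the residual (an encoder-decoder network in the
    # paper; any callable here).
    refined = refine(residual)
    # Combine the refined residual with the blurred input to obtain
    # the latent image estimate.
    return blurred - refined
```

In the coarse-to-fine framework this module would be applied at each level of an image pyramid, with each scale's output upsampled and fed to the next finer scale.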
With Internet of Things (IoT) sensors, the challenge is how to extract potentially valuable information from the collected data to support decision making. This paper proposes a machine learning method to predict the long-cycle maintenance time of wind turbines for efficient management by the power company. Long-cycle maintenance time prediction allows the power company to operate wind turbines as cost-effectively as possible to maximize profit. Sensor data, including operation data, maintenance time data, and event codes, are collected from 31 wind turbines in two wind farms. Data aggregation is performed to filter out errors and extract significant information from the data. A hybrid network is then built to train the predictive model, based on a convolutional neural network (CNN) and a support vector machine (SVM). The experimental results show that the proposed method reaches high prediction accuracy, which helps drive up the efficiency of wind turbine maintenance.
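A common way to build such a CNN + SVM hybrid is to use the CNN as a feature extractor and the SVM as the final predictor. The sketch below is purely illustrative and not the paper's model: the "CNN" is stubbed out as a fixed random projection with ReLU, the SVM is a minimal linear hinge-loss classifier, and the data are synthetic stand-ins for aggregated turbine sensor windows.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(x, w):
    # Stand-in for the convolutional feature extractor: ReLU(x @ w).
    # In the paper this would be the trained CNN's learned features.
    return np.maximum(x @ w, 0.0)

def svm_fit(f, y, lr=0.05, lam=0.01, epochs=300):
    # Minimal linear SVM trained by hinge-loss subgradient descent.
    # Labels y must be in {-1, +1}.
    w = np.zeros(f.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (f @ w + b)
        mask = margins < 1.0  # samples violating the margin
        grad_w = lam * w - (f[mask] * y[mask, None]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def svm_predict(f, w, b):
    return np.where(f @ w + b >= 0.0, 1, -1)

# Toy stand-in for aggregated sensor windows (n_samples x 32) and
# binary labels (+1: long-cycle maintenance due soon, -1: not).
X = rng.normal(size=(200, 32))
y = np.where(X[:, 0] > 0, 1, -1)
W = rng.normal(size=(32, 16))

feats = cnn_features(X, W)
w, b = svm_fit(feats, y)
pred = svm_predict(feats, w, b)
```

In practice the projection would be replaced by a CNN trained on the aggregated sensor sequences, with its penultimate-layer activations fed to the SVM.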
AIM: To examine the accuracy of machine learning in relating particulate matter (PM) 2.5 and PM10 concentrations to upper respiratory tract infections (URIs). METHODS: Daily nationwide and regional outdoor PM2.5 and PM10 concentrations collected over 30 consecutive days, obtained from the Taiwan Environment Protection Administration, were the inputs for machine learning, using a multilayer perceptron (MLP), to relate to the subsequent one-week outpatient visits for URIs. The URI data were obtained from the Centers for Disease Control datasets in Taiwan between 2009 and 2016. Testing used the middle-month dataset of each season (January, April, July, and October), and training used the other months' datasets. The weekly URI cases were classified by tertile as high, moderate, and low volumes. RESULTS: Both PM concentrations and URI cases peak in winter and spring. In the nationwide data analysis, MLP machine learning can accurately relate the URI volumes of the elderly (89.05% and 88.32%, respectively) and the overall population (81.75% and 83.21%, respectively) with the PM2.5 and PM10 concentrations. In the regional data analyses, greater accuracy is found for PM2.5 than for PM10 for the elderly, particularly in the Central region (78.10% and 74.45%, respectively), whereas greater accuracy is found for PM10 than for PM2.5 for the overall population, particularly in the Northern region (73.19% and 63.04%, respectively). CONCLUSION: Short-term PM2.5 and PM10 concentrations were accurately related to the subsequent occurrence of URIs by using machine learning. Our findings suggest that the effects of PM2.5 and PM10 on URI may differ by age, and the mechanism needs further evaluation.
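The tertile labeling of weekly URI volumes described in the methods can be sketched directly; the function name is hypothetical and the quantile interpolation follows numpy's default.

```python
import numpy as np

def tertile_labels(weekly_cases):
    # Classify weekly URI case counts into low / moderate / high volume
    # by tertile, as in the study's target definition.
    lo, hi = np.quantile(weekly_cases, [1 / 3, 2 / 3])
    # 0 = low, 1 = moderate, 2 = high
    return np.digitize(weekly_cases, [lo, hi])
```

These three-class labels would then serve as the MLP's prediction target, with the preceding 30 days of PM2.5 or PM10 concentrations as inputs.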