The reliable detection of water and ice on road surfaces is important for improving traffic safety and reducing road-maintenance costs, especially in winter. A low-cost capacitive sensor for estimating road conditions is studied. A simulation model was developed to investigate the capacitance of the sensor when air, water, or ice covers its surface, and to assess the effect of variations in environmental temperature or in the thickness of the water or ice layer. Based on the simulation results, which indicated that the time derivative of the estimated capacitance provides the most informative quantity, an algorithm was developed to estimate the state of the sensor (dry, wet, or icy). The accuracy and reliability of the sensor's estimates were assessed in laboratory experiments: several sensors were placed in a climatic chamber, and the estimated states and the timing of the identified wet-to-icy and icy-to-wet transitions were examined. All the sensors provided reliable estimates, with transition times dispersed over a few minutes. The sensor was also investigated in the field. Two sensors (one of which was bituminized) were embedded in a road pavement to monitor the road surface condition continuously for a month. Both sensors provided indications consistent with the environmental conditions, correctly identifying icy conditions and indicating a wet road state during both rain and fog. The sensor is therefore suggested as a feasible tool for monitoring road conditions, supporting information systems that improve safety and enable efficient road maintenance during winter.
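The state-estimation rule described above can be sketched as a simple threshold classifier on the capacitance and its time derivative. The thresholds and the dry-reference capacitance below are illustrative placeholders, not the paper's calibrated values:

```python
def estimate_state(c_prev, c_curr, dt, c_dry=10.0, slope_thresh=0.5):
    """Classify the sensor state from two successive capacitance readings
    (arbitrary units) taken dt seconds apart.

    Water and ice have markedly different permittivities at the sensor's
    operating frequency, so a film that freezes or melts changes the
    measured capacitance over time; the abstract reports that the time
    derivative of the capacitance is the most informative quantity.
    All numeric thresholds here are hypothetical, for illustration only.
    """
    dc_dt = (c_curr - c_prev) / dt
    # Near the dry reference capacitance: nothing covers the sensor.
    if abs(c_curr - c_dry) < 1.0:
        return "dry"
    # A strong negative/positive slope flags a phase transition in progress.
    if dc_dt < -slope_thresh:
        return "freezing"  # wet -> icy transition
    if dc_dt > slope_thresh:
        return "melting"   # icy -> wet transition
    # Otherwise classify the steady covering layer by its capacitance level.
    return "wet" if c_curr > 5 * c_dry else "icy"
```

In a deployment, the slope would be estimated over a smoothed window of readings rather than a single pair, but the decision structure is the same.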
The manufacturing of nanomaterials by electrospinning requires accurate and meticulous inspection of Scanning Electron Microscope (SEM) images of the electrospun nanofibers to ensure that no structural defects are produced. The presence of anomalies is known to make the nanofibrous material useless for practical nanotechnology applications. Hence, automatic monitoring and quality control of nanomaterials has become an important challenge in the context of Industry 4.0. In this paper, we propose a novel automatic classification system for homogeneous (anomaly-free) and non-homogeneous (defective) nanofibers that avoids processing the redundant full SEM image. Specifically, the image to be analyzed is partitioned into sub-images (nanopatches) that are then used as input to a hybrid unsupervised/supervised machine learning system. An Autoencoder (AE) is first trained with unsupervised learning to generate a code representing the input image with a small number of relevant features. Next, a Multilayer Perceptron (MLP), trained with supervised learning, uses the extracted features to classify non-homogeneous nanofiber (NH-NF) and homogeneous nanofiber (H-NF) materials. The resulting AE-MLP system is shown to outperform other standard machine learning models and recent state-of-the-art techniques, with accuracy rates up to 92.5%. In addition, the proposed approach achieves a significant reduction in model complexity with respect to other deep learning strategies such as Convolutional Neural Networks (CNN). The promising performance achieved in this benchmark study should stimulate the application of the proposed framework to a range of challenging industrial manufacturing tasks.
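The first stage of the pipeline, partitioning the SEM image into nanopatches, can be sketched as follows. The patch size and stride are illustrative; the paper's actual patch dimensions are not given here:

```python
def make_patches(image, patch, stride=None):
    """Split a 2-D image (a list of equal-length rows) into square
    sub-images ("nanopatches") that would then be fed to the AE-MLP
    pipeline. A stride smaller than the patch size yields overlapping
    patches; by default patches tile the image without overlap.
    """
    stride = stride or patch
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            # Extract a patch x patch window starting at (r, c).
            patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches
```

Each patch would be flattened and passed through the trained AE encoder to obtain the compact feature code that the MLP classifies.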
Wireless sensor networks (WSN) have become an invaluable technology in many applications. Their adoption, however, is hindered by a number of technical difficulties, especially the limited energy available to the sensors. To mitigate this problem, we propose a smart reduction in the amount of data communicated by the sensors: with such a reduction, the components of a sensor, including its radio, can be turned off most of the time without noticeably affecting network operation, so the sensors can stay idle for longer and power is saved. The key idea in devising such a solution is to minimize the correlation between the communicated data. To reduce the number of measurements, we present a data prediction method based on neural networks that performs adaptive, data-driven, non-uniform sampling. Evidently, the achievable reduction in the number of required samples is bounded by the extent to which the sensed data are stationary. The proposed method is validated on simulated and experimental data. The results show that it considerably reduces the number of required samples (and hence also saves power) while still providing a good approximation of the data.
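The adaptive, non-uniform sampling idea can be sketched with a minimal send-on-delta scheme, in which a trivial last-value predictor stands in for the paper's neural-network predictor. The error threshold `eps` is an illustrative parameter:

```python
def adaptive_sample(readings, eps=0.5):
    """Return the indices of the readings a sensor would actually transmit.

    A reading is sent only when the receiver's predictor (here a simple
    last-value hold; the paper uses a neural network) would err by more
    than eps. Both sides run the same predictor, so the sink can
    reconstruct the signal within eps from the transmitted samples alone.
    """
    sent = [0]          # the first reading is always transmitted
    last = readings[0]  # the value the receiver currently predicts
    for i, x in enumerate(readings[1:], 1):
        if abs(x - last) > eps:
            sent.append(i)
            last = x    # transmitting updates the shared prediction
    return sent
```

The fraction `len(sent) / len(readings)` is the communication duty cycle: the more stationary the sensed signal, the fewer transmissions are needed, which is exactly the bound noted in the abstract.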
<p>A Brain-Computer Interface (BCI) provides an alternative communication channel between the human brain and a computer. Electroencephalogram (EEG) signals are acquired and processed, and machine learning algorithms are then applied to extract useful information. During EEG acquisition, artifacts induced by involuntary eye movements or eye blinks adversely affect system performance. The aim of this research is to predict eye states from EEG signals using deep learning architectures and to present improved classifier models. Recent studies show that deep neural networks are among the state-of-the-art machine learning approaches. The present work therefore implements a Deep Belief Network (DBN) and Stacked AutoEncoders (SAE) as classifiers, with encouraging accuracy. One of the designed SAE models outperforms the DBN and the models presented in existing research, achieving an error rate of 1.1% on the test set (98.9% accuracy). The findings of this study may contribute toward state-of-the-art performance on the problem of EEG-based eye state classification.</p>
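The final supervised classification stage of such a pipeline can be illustrated with a single logistic neuron trained by stochastic gradient descent, a deliberately minimal stand-in for the paper's deep SAE/DBN classifiers, shown here on toy one-dimensional features:

```python
import math


def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit one sigmoid neuron to binary labels with per-sample SGD.

    This is the same learning rule that drives the supervised fine-tuning
    layer of an SAE or DBN classifier, reduced to a single unit; the toy
    data below are hypothetical, not EEG features.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b


def predict(w, b, x):
    """Hard 0/1 decision at the 0.5 probability threshold."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
```

In the full architecture, `x` would be the feature code produced by the pretrained stack of autoencoders (or the DBN's hidden layers) rather than a raw signal value.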
Background: Acute Kidney Injury (AKI), a frequent complication in patients in the Intensive Care Unit (ICU), is associated with a high mortality rate. Early prediction of AKI is essential to trigger preventive care actions. Methods: The aim of this study was to ascertain the accuracy of two mathematical analysis models in producing a predictive score for AKI development. A deep learning model based on urine output trends was compared with logistic regression for predicting stage 2 and 3 AKI (defined as the simultaneous increase of serum creatinine and decrease of urine output, according to the Acute Kidney Injury Network (AKIN) guidelines). Two retrospective datasets including 35,573 ICU patients were analyzed. Urine output data were used to train and test both the logistic regression and the deep learning model. Results: The deep learning model achieved an area under the curve (AUC) of 0.89 (± 0.01), with sensitivity = 0.80 and specificity = 0.84, higher than the logistic regression analysis. The deep learning model was able to predict 88% of AKI cases more than 12 h before their onset: of every 6 patients flagged as at risk of AKI by the deep learning model, 5 experienced the event; conversely, of every 12 patients not flagged as at risk, 2 developed AKI. Conclusion: By using urine output trends, the deep learning model was able to predict AKI episodes more than 12 h in advance, and with higher accuracy than the classical urine output thresholds. We suggest that this algorithm could be integrated into the ICU setting to better manage, and potentially prevent, AKI episodes.
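The risk ratios quoted in the results correspond to the model's positive and negative predictive values, which follow directly from confusion counts. The counts below mirror the stated ratios (5 of 6 flagged patients had AKI; 2 of 12 unflagged patients did), not the cohort's raw numbers:

```python
def predictive_values(tp, fp, tn, fn):
    """Compute PPV (precision) and NPV from confusion-matrix counts.

    PPV = TP / (TP + FP): fraction of flagged patients who truly had AKI.
    NPV = TN / (TN + FN): fraction of unflagged patients who stayed AKI-free.
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv


# Illustrative counts reproducing the abstract's 5-of-6 and 10-of-12 ratios.
ppv, npv = predictive_values(tp=5, fp=1, tn=10, fn=2)
```

Both ratios work out to roughly 0.83, consistent with the reported sensitivity of 0.80 and specificity of 0.84 at the cohort's AKI prevalence.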