Cardiotocography is the recording of fetal heart rate and uterine contractions; it is widely used during labor as a screening tool to assess fetal wellbeing. Visual interpretation of cardiotocography signals by practitioners, even when following common guidelines, is subject to high interobserver variability, and the efficiency of cardiotocography monitoring is still debated. Since the 1990s, researchers and practitioners have worked on designing reliable computer-aided systems to assist practitioners in cardiotocography interpretation during labor. Several systems, mostly based on the guidelines, are integrated into monitoring devices, but they have not yet clearly demonstrated their usefulness. In the last decade, the availability of large clinical databases as well as the emergence of machine learning and deep learning methods in healthcare have led to a surge of studies applying these methods to cardiotocography signal analysis. State-of-the-art systems perform well at detecting fetal hypoxia when evaluated on retrospective cohorts, but several challenges remain to be tackled before they can be used in clinical practice. First, large, open, anonymized, multicentric databases of perinatal and intrapartum cardiotocography data must be developed and shared in order to build more accurate systems. The systems must also produce interpretable indicators along with the predicted risk of fetal hypoxia in order to be adopted and trusted by practitioners. Finally, common standards should be built and agreed on to evaluate and compare these systems on retrospective cohorts and to validate their use in clinical practice.
The use of low-cost sensors in air quality monitoring networks is still a much-debated topic among practitioners: they are much cheaper than the traditional air quality monitoring stations set up by public authorities (a few hundred dollars compared to a few tens of thousands of dollars), at the cost of lower accuracy and robustness. This paper presents a case study of using low-cost sensor measurements in an air quality prediction engine. The engine jointly predicts PM 2.5 and PM 10 (particles whose diameters are below 2.5 µm and 10 µm respectively) concentrations in the United States at a very high resolution, on the order of a few tens of meters. It is fed with the measurements provided by official air quality monitoring stations, the measurements provided by a network of low-cost sensors across the country, and traffic estimates. We show that the use of the low-cost sensors' measurements improves the engine's accuracy very significantly. In particular, we derive a strong link between the density of low-cost sensors and the predictions' accuracy: the more low-cost sensors there are in an area, the more accurate the predictions. As an illustration, in areas with the highest density of low-cost sensors, the low-cost sensors' measurements bring 25% and 15% improvements in PM 2.5 and PM 10 prediction accuracy respectively. In cities with the most low-cost sensors, such as Los Angeles and San Francisco, this improvement in prediction accuracy is very clearly reflected in air quality maps. Another strong conclusion is that in some areas with a high density of low-cost sensors, the engine performs better when fed with low-cost sensors' measurements only than when fed with official monitoring stations' measurements only. This suggests that an air quality monitoring network composed of low-cost sensors is effective in monitoring air quality, a very important result since such a monitoring network is much cheaper to set up.
CCS CONCEPTS: • Applied computing → Environmental sciences; • Computing methodologies → Neural networks.
This paper presents an engine able to jointly forecast the concentrations of the main pollutants harming people's health: nitrogen dioxide (NO 2 ), ozone (O 3 ) and particulate matter (PM 2.5 and PM 10, the particles whose diameters are below 2.5 µm and 10 µm respectively). The forecasts are performed on a regular grid (the results presented in the paper are produced with a 0.5° resolution grid over Europe and the United States) by a neural network whose architecture includes convolutional LSTM blocks. The engine is fed with the most recent air quality monitoring station measurements available, weather forecasts, and air quality physical and chemical model (AQPCM) outputs. The engine can be used to produce air quality forecasts with long time horizons, and the experiments presented in this paper show that the 4-day forecasts beat simple benchmarks very significantly. A valuable advantage of the engine is that it does not need much computing power: the forecasts can be built in a few minutes on a standard GPU. Thus, they can be updated very frequently, as soon as new air quality measurements are available (generally every hour), which is not the case for the AQPCMs traditionally used for air quality forecasting. The engine described in this paper relies on the same principles as a prediction engine deployed and used by Plume Labs in several products aiming at providing air quality data to individuals and businesses. CCS CONCEPTS: • Applied computing → Environmental sciences; • Computing methodologies → Neural networks.
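The abstract above mentions convolutional LSTM blocks, which keep the spatial grid structure of the pollution maps inside the recurrent state. As a hedged illustration (not the paper's actual architecture, whose channel counts, kernel sizes and training setup are not given here), a minimal ConvLSTM cell can be sketched in plain numpy: each LSTM gate is computed with a convolution over the input grid and the hidden grid instead of a dense layer.

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    H, W = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Toy ConvLSTM cell: the four LSTM gates (input, forget, output, candidate)
    are computed with convolutions, so hidden and cell states stay (C, H, W) grids."""
    def __init__(self, c_in, c_hid, k=3, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # One stacked weight tensor per source (input x, hidden h) covering all 4 gates.
        self.w_x = rng.normal(0, 0.1, (4 * c_hid, c_in, k, k))
        self.w_h = rng.normal(0, 0.1, (4 * c_hid, c_hid, k, k))

    def step(self, x, h, c):
        z = conv2d(x, self.w_x) + conv2d(h, self.w_h)
        i, f, o, g = np.split(z, 4, axis=0)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g          # cell state update
        h = o * np.tanh(c)         # hidden state, still a spatial grid
        return h, c
```

Stepping such a cell over a sequence of hourly pollution grids yields a hidden grid that can be decoded into the next grid, which is the general idea behind using ConvLSTM blocks for gridded forecasts.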
This paper presents an engine able to jointly forecast the concentrations of the main pollutants harming people's health: nitrogen dioxide (NO 2 ), ozone (O 3 ) and particulate matter (PM 2.5 and PM 10, the particles whose diameters are below 2.5 µm and 10 µm respectively). The engine is fed with air quality monitoring stations' measurements, weather forecasts, physical models' outputs and traffic estimates to produce forecasts up to 24 hours ahead. The forecasts are produced at several spatial resolutions, from a few tens of meters to tens of kilometers, fitting several use cases requiring air quality data. We introduce the Scale-Unit block, which seamlessly integrates all available inputs at a given resolution and returns forecasts at that same resolution. The engine is then based on a U-Net architecture built with several of these blocks, giving it the ability to process inputs and output predictions at different resolutions. We have implemented and evaluated the engine on the largest cities in Europe and the United States, and it clearly outperforms other prediction methods. In particular, the out-of-sample accuracy remains high, meaning that the engine can be used in cities which are not included in the training dataset. A valuable advantage of the engine is that it does not need much computing power: the forecasts can be built in a few minutes on a standard CPU. Thus, they can be updated very frequently, as soon as new air quality monitoring stations' measurements are available (generally every hour), which is not the case for the physical models traditionally used for air quality forecasting. CCS CONCEPTS: • Applied computing → Environmental sciences; • Computing methodologies → Neural networks.
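The U-Net arrangement mentioned above processes a grid at progressively coarser resolutions and then restores the fine resolution, with skip connections carrying fine-scale detail across. The abstract does not define the internals of the Scale-Unit block, so the following is only a toy sketch of the multi-resolution data flow (average-pool encoder, nearest-neighbour decoder, additive skips), not the paper's actual block.

```python
import numpy as np

def downsample(x):
    """Coarsen a (H, W) grid by 2x2 average pooling (H and W assumed even)."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Double the resolution of a (H, W) grid by nearest-neighbour repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def unet_pass(grid, depth=2):
    """Toy U-Net data flow: encode by pooling, decode by upsampling,
    with additive skip connections re-injecting fine-scale detail."""
    skips = []
    x = grid
    for _ in range(depth):          # encoder: coarser and coarser grids
        skips.append(x)
        x = downsample(x)
    for _ in range(depth):          # decoder: back up, merging saved detail
        x = upsample(x) + skips.pop()
    return x
```

In the real engine, each resolution level would apply learned layers (the Scale-Unit blocks) rather than plain pooling, but the shape bookkeeping is the same: the output grid has the same resolution as the input grid, which is what lets the engine accept inputs and emit predictions at several resolutions.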