Spatiotemporally continuous global river discharge estimates across the full spectrum of stream orders are vital to a range of hydrologic applications, yet they remain poorly constrained. Here we present a carefully designed modeling effort (the Variable Infiltration Capacity land surface model coupled with the Routing Application for Parallel computatIon of Discharge river routing model) to estimate global river discharge at very high resolution. The precipitation forcing is a recently published 0.1° global product that optimally merges gauge-, reanalysis-, and satellite-based data. To constrain runoff simulations, we use a set of machine learning-derived global runoff characteristics maps (i.e., runoff at various exceedance probability percentiles) for grid-by-grid model calibration and bias correction. To support spaceborne discharge studies, the river flowlines are defined at their true geometry and location as much as possible: approximately 2.94 million vector flowlines (median length 6.8 km) and their unit catchments are derived from a high-accuracy global digital elevation model at 3-arcsec resolution (~90 m), which serves as the underlying hydrography for river routing. Our 35-year daily and monthly simulations are evaluated against over 14,000 gauges globally. Among them, 35% (64%) have a percentage bias within ±20% (±50%), and 29% (62%) have a monthly Kling-Gupta Efficiency ≥0.6 (≥0.2), demonstrating robustness at the scales at which the model is assessed. This reconstructed discharge record can serve as a priori information for the Surface Water and Ocean Topography satellite mission's discharge product, hence its name: "Global Reach-level A priori Discharge Estimates for Surface Water and Ocean Topography". It can also support other hydrologic applications requiring spatially explicit estimates of global river flows.
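The grid-by-grid bias correction against runoff exceedance percentiles amounts to quantile mapping. The sketch below is illustrative only (not the authors' implementation): it builds a piecewise-linear transfer function from simulated quantiles to reference (e.g., ML-derived) quantiles, using made-up gamma-distributed runoff.

```python
import numpy as np

def quantile_bias_correct(sim, ref_quantiles, exc_probs):
    """Quantile-mapping bias correction: remap a simulated runoff series so
    its exceedance percentiles match reference values for that grid cell.

    sim           : simulated runoff series (1-D array)
    ref_quantiles : reference runoff at the given exceedance probabilities
    exc_probs     : exceedance probabilities (e.g., 0.98 = low flow, 0.02 = high flow)
    """
    q_levels = 1.0 - np.asarray(exc_probs)        # convert to non-exceedance levels
    order = np.argsort(q_levels)
    sim_q = np.quantile(sim, q_levels[order])     # simulated quantile anchors
    ref_q = np.asarray(ref_quantiles)[order]      # matching reference anchors
    # Piecewise-linear transfer function; values beyond the outermost anchors
    # are clamped to the end anchors by np.interp.
    return np.interp(sim, sim_q, ref_q)
```

In practice the reference quantiles would come from the ML-derived runoff characteristics maps, one set per grid cell.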
With the development of technologies such as big data, cloud computing, and the Internet of Things (IoT), the digital twin is moving from concept to practice in industry as a precision simulation technology. Simulation also plays an important role in healthcare, especially in research on medical pathway planning, medical resource allocation, and medical activity prediction. Combining the digital twin with healthcare offers a new and efficient way to provide more accurate and faster services for elderly healthcare. However, how to achieve personal health management throughout the entire lifecycle of elderly patients, and how to converge the medical physical world and the virtual world to realize truly smart healthcare, remain two key challenges in the era of precision medicine. In this paper, a framework for a cloud healthcare system based on digital twin healthcare (CloudDTH) is proposed. This is a novel, generalized, and extensible framework in the cloud environment for monitoring, diagnosing, and predicting aspects of individual health using, for example, wearable medical devices, toward the goal of personal health management, especially for the elderly. CloudDTH aims to achieve interaction and convergence between medical physical and virtual spaces. Accordingly, a novel concept of digital twin healthcare (DTH) is proposed and discussed, and a DTH model is implemented. Next, a reference framework of CloudDTH based on DTH is constructed, and its key enabling technologies are explored. Finally, the feasibility of some application scenarios and a case study of real-time supervision are demonstrated.
INDEX TERMS Digital twin, elderly healthcare, personal health management, cloud computing, precision medicine, interaction, convergence.
I. INTRODUCTION
According to the latest statistics from the United Nations Department of Economic and Social Affairs, the elderly population is forecast to reach 2.1 billion by 2050, with the aging population in developing regions growing faster than in developed regions [1]. In the aging society of the future, it is projected that nearly 50% of medical resources will be
The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increase, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all at orders-of-magnitude lower computational cost. We demonstrate examples learning from soil moisture and streamflow, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve comparable performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
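The core dPL idea, learning one global mapping from static attributes to model parameters by differentiating through the process model, can be shown with a deliberately tiny example. Everything below is invented for illustration (not the paper's models or data): a one-parameter runoff model Q = a·P, and a two-weight sigmoid "parameter network" trained by plain gradient descent with hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: each "grid cell" has a static attribute x (e.g., a soil
# property) that controls a true runoff coefficient a_true = sigmoid(2x - 1).
n_cells, n_steps = 200, 50
x = rng.uniform(-1, 1, n_cells)
a_true = 1 / (1 + np.exp(-(2 * x - 1)))
P = rng.uniform(0, 10, (n_cells, n_steps))   # precipitation forcing
Q_obs = a_true[:, None] * P                  # toy process model: Q = a * P

# dPL: instead of calibrating a separately per cell, learn one global mapping
# a_hat = sigmoid(w0 + w1 * x) shared by all cells, trained end-to-end through
# the differentiable process model.
w = np.zeros(2)
lr = 0.1
for _ in range(3000):
    a_hat = 1 / (1 + np.exp(-(w[0] + w[1] * x)))
    Q_sim = a_hat[:, None] * P
    resid = Q_sim - Q_obs
    # Chain rule: dL/da = mean_t(resid * P), da/dz = a(1-a), dz/dw = [1, x]
    dL_da = (resid * P).mean(axis=1)
    dz = dL_da * a_hat * (1 - a_hat)
    w -= lr * np.array([dz.mean(), (dz * x).mean()])
a_hat = 1 / (1 + np.exp(-(w[0] + w[1] * x)))
```

Because the data are noise-free, the learned weights recover the true mapping (w ≈ [-1, 2]); the same structure scales to neural-network mappings and real process models.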
Evaluating generative adversarial networks (GANs) is inherently challenging. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the problem of how to evaluate the evaluation metrics. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. With a series of carefully designed experiments, we comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbor (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far they are from learning the target distribution.
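The two metrics the paper favors are simple to state. Below are minimal NumPy sketches of the (biased) squared kernel MMD with a Gaussian kernel and the leave-one-out 1-NN two-sample test; the "suitable feature space" step is omitted here, and raw samples are compared directly.

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared kernel MMD between samples X and Y,
    using a Gaussian kernel with bandwidth sigma."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def one_nn_accuracy(X, Y):
    """Leave-one-out 1-NN two-sample test. Accuracy near 0.5 means the two
    samples are indistinguishable; near 1.0 means they are well separated."""
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    pred = labels[d2.argmin(axis=1)]
    return (pred == labels).mean()
```

Note the diagnostic asymmetry: a generator that memorizes training data drives 1-NN accuracy *below* 0.5 on the training set, which is one of the behaviors these tests can expose.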
Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we propose a general framework for blending multiple satellite precipitation datasets using a dynamic Bayesian model averaging (BMA) algorithm. The blending experiment was performed at a daily 0.25° grid scale for 2007–2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, the Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibration sites and then interpolated to the entire plateau using ordinary kriging (OK). The merged data were then produced as weighted sums of the individual members over the plateau. The dynamic BMA approach performed better than the individual members at 15 validation sites, with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833. Moreover, BMA proved more robust with respect to seasonality, topography, and other factors than traditional ensemble methods, including simple model averaging (SMA) and one-outlier-removed (OOR). Error analysis against the state-of-the-art IMERG product for the summer of 2014 further showed that BMA is superior for merging multisatellite precipitation data. This study demonstrates that BMA provides a new solution for blending multiple satellite datasets in regions with limited gauges.
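The EM estimation of BMA weights can be sketched for a simplified Gaussian-kernel case with a single shared variance (a reduced form of the standard BMA formulation, not the study's exact implementation): each member k contributes a normal predictive kernel N(f_k, σ²), and EM alternates between member responsibilities and weight/variance updates.

```python
import numpy as np

def bma_em(F, y, n_iter=200):
    """EM for BMA weights under y ~ sum_k w_k * N(f_k, sigma^2).

    F : (n_obs, n_members) member forecasts (e.g., satellite estimates)
    y : (n_obs,) observations (e.g., gauge precipitation)
    """
    n, K = F.shape
    w = np.full(K, 1.0 / K)                       # start from equal weights
    sigma2 = max(np.var(y - F.mean(axis=1)), 1e-6)
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation i
        lik = np.exp(-0.5 * (y[:, None] - F) ** 2 / sigma2)
        lik /= np.sqrt(2 * np.pi * sigma2)
        z = w * lik
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and shared variance
        w = z.mean(axis=0)
        sigma2 = (z * (y[:, None] - F) ** 2).sum() / n
    return w, sigma2
```

The blended estimate at each grid cell is then the weighted sum `F @ w`; in the study, the site-wise weights are further interpolated over the plateau by ordinary kriging.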
Conventional basin-by-basin approaches to calibrating hydrologic models are limited to gauged basins and typically result in spatially discontinuous parameter fields. Moreover, the resulting low calibration density in space falls far short of the needs of present-day applications such as high-resolution river hydrodynamic modeling. In this study we calibrated three key parameters of the Variable Infiltration Capacity (VIC) model at every 1/8° grid cell using machine learning-based maps of four streamflow characteristics for the conterminous United States (CONUS), with a total of 52,663 grid cells. This new calibration approach, an alternative to parameter regionalization, applies to ungauged regions as well. A key difference here is that we regionalize physical variables (streamflow characteristics) rather than model parameters, whose behavior is often less well understood. The resulting parameter fields no longer present any spatial discontinuities, and their patterns correspond well with climate characteristics such as aridity and runoff ratio. The calibrated parameters were evaluated against observed streamflow from 704/648 (calibration/validation period) small-to-medium-sized catchments used to derive the streamflow characteristics, 3,941/3,809 catchments not used to derive them, as well as five large basins. Comparisons indicated marked improvements in bias and Nash-Sutcliffe efficiency. Model performance remained poor in arid and semiarid regions, mostly due to both model structural and forcing deficiencies. Although the performance gain was limited by the relatively small number of calibrated parameters, this study serves as a proof of concept for a promising new approach to fine-scale hydrologic model calibration.
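Calibrating each grid cell against streamflow characteristics rather than a full hydrograph can be sketched with a toy one-parameter model and a brute-force parameter search. All model forms and numbers below are invented for illustration; VIC itself and the real ML-derived signatures are not used.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.gamma(2.0, 3.0, 365 * 3)           # synthetic daily precip for one cell

def toy_runoff(P, b):
    """Toy rainfall-runoff model: runoff = P times a nonlinear fraction
    controlled by a single calibration parameter b."""
    return P * P**b / (P**b + 5.0)

def signatures(q):
    """Streamflow characteristics: runoff at 90/50/10% exceedance levels."""
    return np.quantile(q, [0.1, 0.5, 0.9])

# The "observed" signatures would come from ML-derived maps at this grid cell;
# here we synthesize them with a known parameter b = 1.4 so the answer is checkable.
target = signatures(toy_runoff(P, 1.4))

# Grid-by-grid calibration: pick the b that minimizes the signature mismatch.
candidates = np.linspace(0.5, 3.0, 251)
errors = [np.abs(signatures(toy_runoff(P, b)) - target).sum() for b in candidates]
b_hat = candidates[int(np.argmin(errors))]
```

Because the target signatures are defined at every grid cell (gauged or not), this per-cell search needs no gauge inside the cell, which is the essence of the approach.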