<p>Standardized processing of eddy covariance data is important for studies that combine data from multiple sites, for validating remote sensing measurements and runs of ecosystem and climate models, and for applications that rely on these flux data to create derived products such as upscaled fluxes. However, maintaining consistency in the processing software while allowing the code to evolve across research networks presents novel challenges in software development. The introduction of the ONEFlux (Open Network-Enabled Flux) eddy covariance data processing pipeline, originally developed in a collaboration among the AmeriFlux Management Project, the European Fluxes Database, and the ICOS Ecosystem Thematic Centre, supported the creation of consistently processed global eddy covariance data products. In particular, ONEFlux codes were used to generate the FLUXNET2015 dataset, which is widely adopted by thousands of eddy covariance data users in research, ranging from soil microbiology to large-scale drought effects, and in education, from basic plant biology to global climate change. We are now more thoroughly instrumenting the code, and the code development process, to better address these challenges; we describe these efforts in this presentation. In particular, we are improving software development practices to streamline collaboration on expanding and contributing to the codebase: adopting planned release cycles for code updates, designing more detailed procedures to incorporate and evaluate new modules, introducing data-centric testing and continuous integration, improving code performance, and applying several other software engineering best practices more widely in the development workflows.
The main goal of these changes is to lower the barriers for regional networks to run ONEFlux on their data, while at the same time better supporting community contributions to the codebase. This will be critical for the continued use of ONEFlux by regional networks to generate updated versions of flux datasets, the components of new global products.</p>
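The data-centric testing mentioned above can be thought of as a regression check: rerun the pipeline on a fixed input and verify that each output variable still matches a stored reference within a numerical tolerance. The sketch below is purely illustrative and is not the ONEFlux API; the variable names (`NEE`, `LE`) and the dict-of-arrays layout are hypothetical stand-ins for pipeline outputs.

```python
import numpy as np

def compare_to_reference(new, ref, rtol=1e-6, atol=1e-9):
    """Data-centric regression check: return the names of output columns
    whose values drifted from the reference beyond a numerical tolerance."""
    drifted = []
    for name in ref:
        if not np.allclose(new[name], ref[name],
                           rtol=rtol, atol=atol, equal_nan=True):
            drifted.append(name)
    return drifted

# Toy 'pipeline outputs' stored as dicts of arrays (hypothetical variables)
ref = {"NEE": np.array([1.0, 2.0, np.nan]), "LE": np.array([10.0, 11.0, 12.0])}
new = {"NEE": np.array([1.0, 2.0, np.nan]), "LE": np.array([10.0, 11.0, 12.5])}
print(compare_to_reference(new, ref))  # → ['LE']
```

Treating gap-filled values (`NaN`) as equal to each other is important for flux data, where gaps are expected and should not be flagged as regressions.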
In this paper, we present SoDa, an irradiance-based synthetic Solar Data generation tool that produces realistic sub-minute solar photovoltaic (PV) output power time series emulating the weather pattern of a given geographical location. Our tool relies on the National Solar Radiation Database (NSRDB) to obtain irradiance and weather data patterns for the site. Irradiance is mapped onto a PV-model estimate of the solar plant's 30-minute power output, based on the configuration of the panel. The working hypothesis for generating high-resolution (e.g., 1-second) solar data is that the conditional distribution of the solar power output time series given the cloud density is the same across locations. We therefore propose a stochastic model whose switching behavior reflects the different weather regimes indicated by the cloud-type label in the NSRDB, and we train the model parameters for the cloudy states on high-resolution solar power measurements from a phasor measurement unit (PMU). In the paper, we introduce the stochastic model and the methodology used to train its parameters. The numerical results show that our tool creates synthetic solar time series at high resolutions that are statistically representative of the measured solar power, and we illustrate how to use the tool to create synthetic data for arbitrary sites within the footprint covered by the NSRDB.
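To make the regime-switching idea concrete, the sketch below simulates a toy version of such a model: a two-state Markov chain (clear vs. cloudy) drives an AR(1) deviation process superimposed on a clear-sky PV profile. All numbers here (transition probabilities, AR coefficients, noise scales, the sinusoidal clear-sky shape) are made up for illustration and are not the trained SoDa parameters.

```python
import numpy as np

# Hypothetical per-second regime transition matrix: state 0 = clear, 1 = cloudy
P = np.array([[0.999, 0.001],
              [0.002, 0.998]])
phi   = np.array([0.90, 0.99])   # AR(1) persistence per regime (illustrative)
sigma = np.array([0.005, 0.05])  # noise scale per regime (illustrative)

def synth_pv(clear_sky, rng):
    """Generate a 1-second synthetic PV trace by modulating a clear-sky
    profile with a Markov-switching AR(1) deviation process."""
    n = len(clear_sky)
    state = 0
    x = 0.0                      # AR(1) relative deviation from clear sky
    out = np.empty(n)
    for t in range(n):
        state = rng.choice(2, p=P[state])              # regime switch
        x = phi[state] * x + rng.normal(0.0, sigma[state])
        out[t] = max(0.0, clear_sky[t] * (1.0 + x))    # PV power is nonnegative
    return out

rng = np.random.default_rng(42)
t = np.arange(3600)                        # one hour at 1-second resolution
clear_sky = np.sin(np.pi * t / 3600)       # toy clear-sky shape (normalized)
trace = synth_pv(clear_sky, rng)
```

The cloudy regime's larger persistence and noise scale produce the sustained, high-variance ramps characteristic of cloud passages, while the clear regime stays close to the clear-sky curve.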
In this work, we introduce Log(v) 3LPF, a linear power flow solver for unbalanced three-phase distribution systems. Log(v) 3LPF uses a logarithmic transform of the voltage phasor to linearize the AC power flow equations around the balanced case. We incorporate the modeling of ZIP loads, transformers, capacitor banks, switches, and their corresponding controls, and express the network equations in matrix-vector form. With scalability in mind, we give special attention to the computation of the inverse of the system admittance matrix, Ybus. We use the Sherman-Morrison-Woodbury identity for an efficient computation of the inverse of a rank-k corrected matrix and compare the performance of this method with traditional LU decomposition methods in terms of FLOPS. We showcase the solver on a variety of network sizes, ranging from tens to thousands of nodes, and compare Log(v) 3LPF with commercial-grade software such as OpenDSS.
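The Sherman-Morrison-Woodbury identity states that for a rank-k correction, (A + UCV)⁻¹ = A⁻¹ − A⁻¹U(C⁻¹ + VA⁻¹U)⁻¹VA⁻¹, so a precomputed A⁻¹ can be reused and only a small k×k system solved. The sketch below is a generic NumPy illustration of the identity, not the Log(v) 3LPF implementation; the toy 4×4 diagonal matrix standing in for an admittance matrix is an assumption for demonstration.

```python
import numpy as np

def smw_inverse(A_inv, U, C, V):
    """Inverse of (A + U @ C @ V) via the Sherman-Morrison-Woodbury
    identity, reusing a precomputed A_inv; only a k x k system is solved."""
    S = np.linalg.inv(C) + V @ A_inv @ U            # k x k capacitance matrix
    return A_inv - A_inv @ U @ np.linalg.solve(S, V @ A_inv)

# Toy usage: rank-1 correction of a 4x4 diagonal matrix (e.g., a topology
# change such as a switch operation modeled as a low-rank update)
A = np.diag([2.0, 3.0, 4.0, 5.0])
U = np.array([[1.0], [0.0], [1.0], [0.0]])
V = np.array([[0.0, 1.0, 0.0, 1.0]])
C = np.array([[1.0]])

direct   = np.linalg.inv(A + U @ C @ V)             # full n x n inversion
woodbury = smw_inverse(np.linalg.inv(A), U, C, V)   # reuses A_inv
assert np.allclose(direct, woodbury)
```

For large n and small k this replaces an O(n³) refactorization with a handful of matrix-vector products and one k×k solve, which is the FLOP advantage the abstract compares against LU decomposition.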