Abstract. The meso-scale chemistry-transport model CHIMERE is used to assess our understanding of the major sources and formation processes leading to the fairly large amounts of organic aerosol – OA, including primary OA (POA) and secondary OA (SOA) – observed in Mexico City during the MILAGRO field project (March 2006). Chemical analyses of submicron aerosols from aerosol mass spectrometers (AMS) indicate that organic particles found in the Mexico City basin contain a large fraction of oxygenated organic species (OOA), which correspond strongly with SOA, and that their production continues actively downwind of the city. SOA formation is modeled here by the one-step oxidation of anthropogenic (i.e., aromatics, alkanes), biogenic (i.e., monoterpenes and isoprene), and biomass-burning SOA precursors and their partitioning into both organic and aqueous phases. Conservative assumptions are made for uncertain parameters to maximize the amount of SOA produced by the model. The near-surface model evaluation shows that predicted OA correlates reasonably well with measurements during the campaign; however, it remains a factor of 2 lower than the measured total OA. Fairly good agreement is found between predicted and observed POA within the city, suggesting that anthropogenic and biomass burning emissions are reasonably captured. Consistent with previous studies in Mexico City, large discrepancies are encountered for SOA, with a factor of 2–10 model underestimate. When only anthropogenic SOA precursors were considered, the model was able to reproduce within a factor of two the sharp increase in OOA concentrations during the late morning at both urban and near-urban locations, but the discrepancy increases rapidly later in the day, consistent with previous results, and is especially obvious when the column-integrated SOA mass is considered instead of the surface concentration.
The increase in the missing SOA mass in the afternoon coincides with the sharp drop in POA, suggesting a tendency of the model to excessively evaporate the freshly formed SOA. Predicted SOA concentrations in our base case were extremely low when photochemistry was not active, especially overnight, as most of the SOA formed on the previous day was quickly advected away from the basin. These nighttime discrepancies were not significantly reduced even when greatly enhanced partitioning to the aerosol phase was assumed. Model sensitivity results suggest that observed nighttime OOA concentrations are strongly influenced by a regional background SOA (~1.5 μg/m3) of biogenic origin which is transported from the coastal mountain ranges into the Mexico City basin. The presence of biogenic SOA in Mexico City was confirmed by SOA tracer-derived estimates that have reported 1.14 (±0.22) μg/m3 of biogenic SOA at T0 and 1.35 (±0.24) μg/m3 at T1, which are of the same order as the model values. Consistent with other recent studies, we find that biogenic SOA does not appear to be underestimated significantly by traditional models, in strong contrast to what is observed for anthropogenic pollution. The relative contribution of biogenic SOA to predicted monthly mean SOA levels (traditional approach) is estimated to be more than 30% within the city and up to 65% at the regional scale, which may help explain the significant amount of modern carbon in the aerosols inside the city during low biomass burning periods. The anthropogenic emissions of isoprene and its nighttime oxidation by NO3 were also found to enhance the mean SOA concentrations within the city by an additional 15%.
Our results confirm the large underestimation of the SOA production by traditional models in polluted regions (estimated as 10–20 tons within the Mexico City metropolitan area during the daily peak), and emphasize for the first time the role of biogenic precursors in this region, indicating that they cannot be neglected in urban modeling studies.
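The partitioning step invoked above, in which semivolatile oxidation products distribute between the gas phase and the absorbing aerosol, follows the standard absorptive-equilibrium relation used by traditional SOA modules: a product with effective saturation concentration C* has particle-phase fraction 1/(1 + C*/C_OA), solved iteratively because the absorbing mass C_OA itself depends on the partitioning. The sketch below is a minimal illustration of that relation (organic phase only, with made-up input values), not the CHIMERE implementation itself.

```python
# Minimal sketch of Pankow-type absorptive gas-particle partitioning.
# All concentrations in ug/m3; inputs are illustrative, not study values.

def partition(c_total, c_star, c_background=0.0, tol=1e-6):
    """Iteratively solve for the total organic-aerosol mass C_OA.

    c_total      : total (gas + particle) concentration of each
                   semivolatile product
    c_star       : effective saturation concentration of each product
    c_background : non-volatile absorbing mass (e.g. POA)
    """
    c_oa = c_background + 0.5 * sum(c_total)  # initial guess
    for _ in range(1000):
        # particle-phase fraction of each product at the current C_OA
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        c_oa_new = c_background + sum(ct * x for ct, x in zip(c_total, xi))
        if abs(c_oa_new - c_oa) < tol:
            return c_oa_new
        c_oa = c_oa_new
    return c_oa

# Two products (C* = 1 and 10 ug/m3) condensing onto 1 ug/m3 of
# pre-existing absorbing mass:
print(partition([2.0, 5.0], [1.0, 10.0], c_background=1.0))
```

Because C* enters the denominator, adding absorbing mass pulls more semivolatile material into the particle phase, which is why assumptions about background OA matter for the nighttime SOA discussed above.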
Two new postprocessing methods are proposed to reduce numerical weather prediction’s systematic and random errors. The first method consists of running a postprocessing algorithm inspired by the Kalman filter (KF) through an ordered set of analog forecasts rather than a sequence of forecasts in time (ANKF). The analog of a forecast for a given location and time is defined as a past prediction that matches selected features of the current forecast. The second method is the weighted average of the observations that verified when the 10 best analogs were valid (AN). ANKF and AN are tested for 10-m wind speed predictions from the Weather Research and Forecasting (WRF) model, with observations from 400 surface stations over the western United States for a 6-month period. Both AN and ANKF predict drastic changes in forecast error (e.g., associated with rapid weather regime changes), a feature lacking in KF and a 7-day running-mean correction (7-Day). The AN almost eliminates the bias of the raw prediction (Raw), while ANKF drastically reduces it with values slightly worse than KF. Both analog-based methods are also able to reduce random errors, therefore improving the predictive skill of Raw. The AN is consistently the best, with average improvements of 10%, 20%, 25%, and 35% with respect to ANKF, KF, 7-Day, and Raw, as measured by centered root-mean-square error, and of 5%, 20%, 25%, and 40%, as measured by rank correlation. Moreover, being a prediction based solely on observations, AN results in an efficient downscaling procedure that eliminates representativeness discrepancies between observations and predictions.
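The AN step described above can be sketched in a few lines: rank past forecasts by similarity to the current forecast, then average the observations that verified with the best analogs. The Euclidean distance over forecast features and the inverse-distance weighting below are illustrative assumptions, not necessarily the metric or weights used in the paper, and the function names are hypothetical.

```python
# Illustrative sketch of an analog-based (AN) forecast for one station
# and lead time. Distance metric and weighting are assumed, not the
# paper's exact choices.
import math

def analog_forecast(current, past_forecasts, past_obs, n_analogs=10):
    """current: feature vector of the raw forecast (e.g. wind speed,
    direction); past_forecasts: past feature vectors at the same
    station/lead time; past_obs: observation that verified for each."""
    dists = [math.dist(current, f) for f in past_forecasts]
    # keep the n_analogs closest past forecasts
    ranked = sorted(zip(dists, past_obs))[:n_analogs]
    # weight each analog's verifying observation by inverse distance
    weights = [1.0 / (d + 1e-9) for d, _ in ranked]
    return sum(w * o for (_, o), w in zip(ranked, weights)) / sum(weights)
```

Because the prediction is built purely from past observations, it inherits the station's local climatology, which is the downscaling property the abstract highlights.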
Investigating the characteristics of model forecast errors using various statistical and object-oriented methods is necessary to provide useful guidance to end users and model developers alike. To this end, the random and systematic errors (i.e., biases) of the 2-m temperature and 10-m wind predictions of the NCAR-AirDat Weather Research and Forecasting (WRF)-based real-time four-dimensional data assimilation (RTFDDA) and forecasting system are analyzed. This system ran operationally over a contiguous United States (CONUS) domain at 4-km grid spacing with four forecast cycles daily from June 2009 to September 2010. As a result, an exceptionally useful forecast dataset was generated and used for studying the error properties of the model forecasts, covering both a longer time period and a broader range of geographic regions than previously studied. Spatiotemporal characteristics of the errors are investigated based on the 24-h forecasts between June 2009 and April 2010, and the 72-h forecasts between May and September 2010. The biases of both wind and temperature forecasts were found to vary greatly seasonally and diurnally, with dependence on forecast length, station elevation, geographical location, and meteorological conditions. The temperature forecasts showed systematic cold biases during the daytime at all station elevations and warm biases during the nighttime above 1,000 m above sea level (ASL), while below 600 m ASL cold biases occurred during the nighttime. The surface wind-speed forecasts exhibited strong positive biases during the nighttime, while negative biases were observed in spring and summer afternoons. Surface wind speed was mostly over-predicted except at stations located between 1,000 and 2,100 m ASL, for which negative biases were identified for most forecast cycles. The highest wind-speed errors were found over high terrain and at near-sea-level stations.
The wind-direction errors were relatively large at the high-terrain elevation in the Rocky and Appalachian mountain ranges and the western coastal areas and the error structure exhibited notable diurnal variability.
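The systematic/random split used throughout this evaluation can be made concrete: the bias is the mean forecast error, the random component is the bias-removed ("centered") RMSE, and the two combine as RMSE² = bias² + CRMSE². The code below is an illustrative sketch of that decomposition, not the study's verification software.

```python
# Decompose forecast error into systematic (bias) and random (centered
# RMSE) parts; illustrative sketch only.
import math

def error_decomposition(forecast, obs):
    errors = [f - o for f, o in zip(forecast, obs)]
    n = len(errors)
    bias = sum(errors) / n                                  # systematic
    crmse = math.sqrt(sum((e - bias) ** 2 for e in errors) / n)  # random
    rmse = math.sqrt(sum(e * e for e in errors) / n)        # total
    # identity: rmse**2 == bias**2 + crmse**2
    return bias, crmse, rmse
```

A postprocessor such as a Kalman-filter correction mainly removes the bias term, which is why the random (centered) part is the harder target for the analog methods discussed earlier.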
ABSTRACT: The Diebold-Mariano test for predictive accuracy has been widely used and adapted for economic forecasts, but has seen little use in weather forecast verification. The technique is applied both to simulated verification sets and to weather data at eight stations in Utah, using a loss function based on dynamic time warping (DTW). Results of the simulation experiment show that the DTW technique can be useful when timing errors are the concern. Real test cases demonstrate the difficulty of automating some of the more advanced methods proposed here, but also show the utility of even the most basic test, which improves on similar tests that do not account for temporal and/or contemporaneous correlation.
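As a minimal sketch of the underlying statistic: given two competing forecasts of the same observations, the Diebold-Mariano test computes the mean loss differential scaled by its standard error, which is asymptotically standard normal under the null of equal accuracy. The version below uses a plain squared-error loss standing in for the paper's DTW-based loss and assumes a one-step horizon, so no autocovariance correction is applied.

```python
# Bare-bones Diebold-Mariano statistic for comparing two forecast error
# series; squared-error loss is an illustrative stand-in for a DTW loss.
import math

def dm_statistic(err1, err2, loss=lambda e: e * e):
    # loss differential between the two forecasts at each time
    d = [loss(a) - loss(b) for a, b in zip(err1, err2)]
    n = len(d)
    dbar = sum(d) / n
    var = sum((x - dbar) ** 2 for x in d) / n  # variance of differential
    return dbar / math.sqrt(var / n)           # ~ N(0, 1) under H0
```

Replacing `loss` with a DTW-based distance is what lets the test reward forecasts whose errors are mainly timing shifts rather than amplitude errors, which is the situation the simulation experiment above targets.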