We propose a new goodness-of-fit test for the Rayleigh distribution based on a distributional fixed-point property of the Stein characterization. The limiting null distribution of the test is derived, and consistency against fixed alternatives is shown. The results of a finite-sample comparison are presented, in which the power performance of the new test is compared to that of a variety of other tests. In addition to existing tests for the Rayleigh distribution, we also exploit the link between the exponential and Rayleigh distributions, which allows us to include in the comparison some powerful tests developed specifically for the exponential distribution. The new test is found to outperform competing tests for many of the alternative distributions. Interestingly, the highest estimated power against all alternative distributions considered is obtained by one of the tests specifically developed for the Rayleigh distribution, and not by any of the exponentiality tests based on the transformed data. The use of the new test is illustrated on a real-world COVID-19 data set.
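The link between the Rayleigh and exponential distributions used above can be sketched as follows: if X is Rayleigh with scale σ, then X² is exponentially distributed with mean 2σ², so squaring the data lets any exponentiality test be applied. A minimal illustration (the specific tests compared in the paper are not given here; a Kolmogorov–Smirnov test with an estimated scale is used only as a stand-in):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.rayleigh(scale=sigma, size=1000)  # Rayleigh sample

# If X ~ Rayleigh(sigma), then Y = X^2 is exponential with mean
# 2*sigma^2, so exponentiality tests can be applied to the squares.
y = x ** 2

# Stand-in exponentiality check (scale estimated by the sample mean;
# a test designed for composite exponentiality is preferable in practice).
res = stats.kstest(y, 'expon', args=(0, y.mean()))
print(res.pvalue)
```

Note that plugging the estimated scale into the KS test makes the nominal p-value conservative; the tests in the comparison above account for parameter estimation properly.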
The need to model proportional data is common across a range of disciplines; however, U- or J-shaped data, with their bimodal nature, present a particular challenge. In this study, two parsimonious mixture models are proposed to accurately characterise such proportional U- and J-shaped data. The proposed models are applied to loss given default data, an application area where specific importance is attached to the accuracy with which the mean is estimated, due to its linear relationship with a bank's regulatory capital. In addition to using standard information criteria, the degree to which bias reduction in the estimation of the distributional mean can be achieved is used as a measure of model performance. The proposed models outperform the benchmark model with reference to the information criteria and yield a reduction in the distance between the empirical and distributional means. Given the special characteristics of the dataset, where a high proportion of observations are close to zero, a methodology for choosing a rounding threshold in an objective manner is developed as part of the data preparation stage. It is shown how the application of this rounding threshold can reduce bias in moment estimation regardless of the model choice.
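The abstract does not specify the proposed mixtures, but the general idea can be illustrated with a generic two-component beta mixture: one component concentrates mass near 0 and the other near 1, producing a U-shaped density on (0, 1) whose mean is a weighted average of the component means (all parameter values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-component beta mixture: Beta(0.5, 5) piles mass near 0,
# Beta(5, 0.5) near 1, so the mixture density is U-shaped on (0, 1).
w = 0.6                          # weight of the first component
a1, b1, a2, b2 = 0.5, 5.0, 5.0, 0.5

n = 20000
comp = rng.random(n) < w         # component membership indicators
x = np.where(comp, rng.beta(a1, b1, n), rng.beta(a2, b2, n))

# Distributional mean: weighted average of the beta component means
mean_theory = w * a1 / (a1 + b1) + (1 - w) * a2 / (a2 + b2)
print(round(mean_theory, 4), round(x.mean(), 4))
```

Comparing the distributional mean to the empirical mean, as above, mirrors the bias-reduction criterion used to assess model performance in the study.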
This work presents an experimental and modelling evaluation of the preferential oxidation of CO (CO PROX) from a H2-rich gas stream typically produced from fossil fuels and ultimately intended for hydrogen fuel cell applications. A microchannel reactor containing a washcoated 8.5 wt.% Ru/Al2O3 catalyst was used to preferentially oxidise CO to form CO2 in a gas stream containing (by vol.%): 1.4% CO, 10% CO2, 18% N2, 68.6% H2, and 2% added O2. CO concentrations in the product gas were as low as 42 ppm (99.7% CO conversion) at reaction temperatures in the range 120–140 °C and space velocities in the range 65.2–97.8 NL gcat−1 h−1. For these conditions, less than 4% of the H2 feed was consumed via its oxidation and reverse water-gas shift. Furthermore, a computational fluid dynamics (CFD) model describing the microchannel reactor for CO PROX was developed. With kinetic parameter estimation and goodness-of-fit calculations, it was determined that the model described the reactor at a confidence level well above 95%. In the temperature range 100–200 °C, the model yielded CO PROX reaction rate profiles, with associated mass transport properties, within the axial dimension of the microchannels, quantities not accessible during the experimental investigation. This work demonstrates that microchannel reactor technology, supporting an active catalyst for CO PROX, is well suited for CO abatement in a H2-rich gas stream at moderate reaction temperatures and high space velocities.
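The reported conversion figure follows directly from the inlet and outlet CO concentrations. A quick check (neglecting any change in total molar flow across the reactor):

```python
# CO conversion implied by the reported feed and product concentrations,
# assuming the total molar flow change across the reactor is negligible.
co_in = 0.014      # 1.4 vol.% CO in the feed
co_out = 42e-6     # 42 ppm CO in the product gas

conversion = 1 - co_out / co_in
print(f"{conversion:.1%}")  # 99.7%
```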
Modelling the outcome after loan default is receiving increasing attention, and survival analysis is particularly suitable for this purpose due to the likely presence of censoring in the data. In this study, we suggest that the time to loan write-off may be influenced by latent competing risks, as well as by common, unobservable drivers, such as the state of the economy. We therefore expand on the promotion time cure model and include a parametric frailty parameter to account for common, unobservable factors and for possible observable covariates not included in the model. We opt for a parametric model due to its interpretability and analytical tractability, which are desirable properties in bank risk management. Both a gamma and inverse Gaussian frailty parameter are considered for the univariate case, and we also consider a shared frailty model. A Monte Carlo study demonstrates that the parameter estimation of the models is reliable, after which they are fitted to a real-world dataset in respect of large corporate loans in the US. The results show that a more flexible hazard function is possible by including a frailty parameter. Furthermore, the shared frailty model shows potential to capture dependence in write-off times within industry groups.
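The mechanics of adding a frailty term to a promotion time cure model can be sketched in standard form: the promotion time model has population survival S(t) = exp(-θF(t)) with cure fraction exp(-θ), and integrating out a mean-one gamma frailty multiplying θ replaces the exponential by the gamma Laplace transform, giving a more flexible (heavier-tailed) survival curve. The baseline distribution and parameter values below are illustrative only, not the paper's fitted model:

```python
import numpy as np
from scipy import stats

# Promotion time cure model: S_pop(t) = exp(-theta * F(t)),
# where F is a proper cdf and exp(-theta) is the cure fraction.
theta = 1.5
F = stats.weibull_min(c=1.4, scale=2.0).cdf  # illustrative baseline cdf

def surv_no_frailty(t):
    return np.exp(-theta * F(t))

# Multiplying theta by a mean-one gamma frailty Z (shape k, rate k)
# and integrating Z out replaces exp(-s) by the gamma Laplace
# transform (1 + s/k)^(-k), yielding a heavier-tailed survival curve.
k = 2.0
def surv_gamma_frailty(t):
    return (1.0 + theta * F(t) / k) ** (-k)

t = np.array([0.5, 1.0, 2.0, 5.0, 50.0])
print(surv_no_frailty(t).round(4))
print(surv_gamma_frailty(t).round(4))
```

By Jensen's inequality the frailty-averaged survival lies above the no-frailty curve at every t, which also raises the implied cure fraction; this extra flexibility in the hazard shape is what the frailty parameter buys.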
A technique known as calibration is often used when a given option pricing model is fitted to observed financial data. This entails choosing the parameters of the model so as to minimise some discrepancy measure between the observed option prices and the prices calculated under the model in question. This procedure does not take the historical values of the underlying asset into account. In this paper, the density function of the log-returns obtained using the calibration procedure is compared to a density estimate of the observed historical log-returns. Three models within the class of geometric Lévy process models are fitted to observed data: the Black-Scholes model, as well as the geometric normal inverse Gaussian and Meixner process models. The numerical results obtained show a surprisingly large discrepancy between the resulting densities when using the latter two models. An adaptation of the calibration methodology is also proposed, based on both option price data and the observed historical log-returns of the underlying asset. The implementation of this methodology limits the discrepancy between the densities in question.
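The calibration step described above can be sketched for the simplest of the three models, Black-Scholes, where the only free parameter is the volatility: choose σ to minimise the squared pricing error against market quotes. The market data below are synthetic stand-ins generated from the model itself, used only to show the mechanics:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Synthetic "observed" market prices (illustrative, generated at sigma=0.25)
S0, r, T = 100.0, 0.02, 0.5
strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
market = bs_call(S0, strikes, T, r, 0.25)

# Calibration: pick sigma minimising the squared pricing discrepancy
def loss(sigma):
    return np.sum((bs_call(S0, strikes, T, r, sigma) - market) ** 2)

res = minimize_scalar(loss, bounds=(0.01, 2.0), method='bounded')
print(round(res.x, 4))
```

Note that, exactly as the abstract points out, this objective involves only option prices; the historical log-returns of the underlying never enter, which is the gap the proposed adapted methodology addresses.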