Electrophilic cyclizations of α,β-alkynic hydrazones with molecular iodine were investigated for the synthesis of 4-iodopyrazoles. α,β-Alkynic hydrazones were readily prepared by the reactions of hydrazines with propargyl aldehydes and ketones. When treated with molecular iodine in the presence of sodium bicarbonate, α,β-alkynic hydrazones underwent electrophilic cyclization to afford 4-iodopyrazoles in good to high yields. The iodocyclization was general for a wide range of α,β-alkynic hydrazones and tolerated aliphatic, aromatic, heteroaromatic, and ferrocenyl moieties bearing both electron-withdrawing and electron-donating substituents.
Homogeneity testing is one of the most important analyses in climate-related studies, as it underpins the reliability of any inferences. Effects not directly related to climate are identified and removed from the meteorological variables, and the resulting homogeneous series are then used to present an enhanced picture of the current situation and to produce realistic forecasts. In this study, we investigate the performance of well-known homogeneity tests and introduce some tests not usually used for testing homogeneity, via Monte Carlo simulation. We generate data from a normally distributed temperature variable and consider both absolute and relative homogeneity tests. Although relative tests are the best-performing homogeneity tests, their results depend strongly on the quality of the reference series; consequently, they must be used together with at least one absolute test in order to detect possible inhomogeneities in the reference series. Our results show that the relative tests, namely the standard normal homogeneity test, the F-test for structural breaks with a reference series, and the multiple change point detection method RHTest, have the best detection rates, whereas the absolute tests, namely the Kruskal–Wallis, Pettitt, and Friedman tests, have the worst performance. The best-performing absolute test is the F-test for structural breaks.
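The Monte Carlo setup described above, a normally distributed temperature series with or without an artificial break, can be sketched with one of the absolute tests mentioned, Pettitt's change-point test. This is an illustrative sketch, not the study's code; the series length, mean, shift size, and seed are assumptions chosen only for demonstration.

```python
import numpy as np

def pettitt_test(x):
    """Pettitt's nonparametric change-point test.

    U_t = sum_{i<=t} sum_{j>t} sign(x_i - x_j); the test statistic is
    K = max_t |U_t|, with the usual approximate two-sided p-value
    p ~ 2 * exp(-6 K^2 / (n^3 + n^2)).
    Returns (most_likely_break_index, approximate_p_value).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    U = np.array([np.sign(x[:t + 1, None] - x[t + 1:]).sum()
                  for t in range(n - 1)])
    K = np.abs(U).max()
    t_hat = int(np.abs(U).argmax())
    p = 2.0 * np.exp(-6.0 * K**2 / (n**3 + n**2))
    return t_hat, min(p, 1.0)

rng = np.random.default_rng(42)
# Homogeneous series: normally distributed "temperatures", no break.
x_hom = rng.normal(15.0, 1.0, 100)
# Inhomogeneous series: a +1.5 degree shift halfway through,
# mimicking e.g. a station relocation.
x_inh = x_hom.copy()
x_inh[50:] += 1.5

t_hom, p_hom = pettitt_test(x_hom)
t_inh, p_inh = pettitt_test(x_inh)
# The break in x_inh should be detected near index 49 with a tiny p-value.
```

An absolute test like this uses only the candidate series itself; the relative tests the abstract favors would additionally compare against a reference series from neighboring stations.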
Bootstrapping is a computer-intensive statistical method that treats the data set as a population and draws samples from it with replacement. This resampling method has wide application, especially in mathematically intractable problems. In this study, it is used to obtain the empirical distributions of the parameters, in order to determine whether they are statistically significant, in a special case of nonparametric regression: conic multivariate adaptive regression splines (CMARS), a statistical machine learning algorithm. CMARS is a modified version of the well-known nonparametric regression model multivariate adaptive regression splines (MARS) that uses conic quadratic optimization. CMARS is at least as complex as MARS, even though it performs better with respect to several criteria. To achieve better performance of CMARS with a less complex model, three different bootstrap regression methods, namely random-X, fixed-X, and wild bootstrap, are applied to four data sets of different sizes and scales. The performances of the resulting models are then compared using various criteria, including accuracy, precision, complexity, stability, robustness, and computational efficiency. The results imply that bootstrap methods give more precise parameter estimates, although they are computationally inefficient, and that, among all, random-X resampling produces better models, particularly for medium-sized, medium-scale data sets.
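The random-X (pairs) bootstrap named above can be sketched in a minimal form: resample (x, y) pairs with replacement, refit the regression each time, and read significance off the empirical distribution of the coefficients. This sketch uses plain OLS rather than CMARS, and the data, sample sizes, and seeds are illustrative assumptions, not the study's.

```python
import numpy as np

def random_x_bootstrap(X, y, n_boot=1000, seed=0):
    """Random-X (pairs) bootstrap for a linear regression.

    Each replicate resamples whole (x_i, y_i) rows with replacement
    and refits OLS; returns an (n_boot, n_params) array holding the
    bootstrap distribution of [intercept, slope, ...].
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])  # design matrix with intercept
    boot = np.empty((n_boot, Xd.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # row indices, drawn with replacement
        beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        boot[b] = beta
    return boot

# Simulated data: y = 2 + 3x + Gaussian noise.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, 200)

boot = random_x_bootstrap(x, y, n_boot=500)
# Percentile confidence interval for the slope; the parameter is
# judged significant when the interval excludes zero.
ci_lo, ci_hi = np.percentile(boot[:, 1], [2.5, 97.5])
```

The fixed-X and wild bootstrap variants differ only in what is resampled: they keep the design points fixed and resample (or sign-flip and rescale) the residuals instead of the rows.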