The menstrual cycle is an essential life rhythm governed by interacting levels of progesterone, estradiol, follicle-stimulating hormone, and luteinizing hormone. To study metabolic changes, biofluids were collected at four timepoints in the menstrual cycle from 34 healthy, premenopausal women. Serum hormones, urinary luteinizing hormone, and self-reported menstrual cycle timing were used for a 5-phase cycle classification. Plasma and urine were analyzed by LC-MS and GC-MS for metabolomics and lipidomics; serum for clinical chemistries; and plasma for B vitamins by HPLC-FLD. Of 397 metabolites and micronutrients tested, 208 changed significantly (p < 0.05) and 71 reached the FDR 0.20 threshold, showing rhythmicity in neurotransmitter precursors, glutathione metabolism, the urea cycle, 4-pyridoxic acid, and 25-OH vitamin D. In total, 39 amino acids and derivatives and 18 lipid species decreased (FDR < 0.20) in the luteal phase, possibly indicative of an anabolic state during the progesterone peak and of recovery during menstruation and the follicular phase. The reduced metabolite levels observed may represent a time of vulnerability to hormone-related health issues such as PMS and PMDD, even in the setting of a healthy, rhythmic state. These results provide a foundation for further research on cyclic differences in nutrient-related metabolites and may form the basis of novel nutrition strategies for women.
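The FDR threshold mentioned above corresponds to a false-discovery-rate correction of the per-metabolite p-values; whether the authors used exactly the Benjamini-Hochberg procedure is not stated, so the sketch below (with made-up p-values) is purely illustrative:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                      # ascending p-values
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(ranked, 0, 1)
    return q

# hypothetical per-metabolite p-values from a cycle-phase comparison
pvals = [0.001, 0.04, 0.20, 0.03, 0.0005]
q = benjamini_hochberg(pvals)
print(q, q < 0.20)                             # FDR 0.20 threshold as in the study
```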
In this paper we combine two important extensions of ordinary least squares regression: regularization and optimal scaling. Optimal scaling (sometimes also called optimal scoring) was originally developed for categorical data; the process finds quantifications for the categories that are optimal for the regression model in the sense that they maximize the multiple correlation. Although the optimal scaling method was developed initially for variables with a limited number of categories, optimal transformations of continuous variables are a special case. We will consider a variety of transformation types; typically we use step functions for categorical variables and smooth (spline) functions for continuous variables. Both types of functions can be restricted to be monotonic, preserving the ordinal information in the data.

In addition to optimal scaling, three popular regularization methods will be considered: Ridge regression, the Lasso, and the Elastic Net. The resulting method will be called ROS Regression (Regularized Optimal Scaling Regression). We will show that the basic OS algorithm provides straightforward and efficient estimation of the regularized regression coefficients, automatically gives the Group Lasso and Blockwise Sparse Regression, and extends them with monotonicity properties. We will also show that Optimal Scaling linearizes nonlinear relationships between predictors and outcome, and improves the condition of the predictor correlation matrix, increasing (on average) the conditional independence of the predictors. Alternative options for regularization of either regression coefficients or category quantifications are mentioned. Extended examples are provided.

The original Lasso algorithm uses a quadratic programming strategy that is complex and computationally demanding; hence it is not feasible for large values of P, and moreover, it cannot be used when P > N. Since the Lasso paper, various less complex and/or more efficient Lasso algorithms have been proposed. For example, Osborne, Presnell, and Turlach (2000a) developed a homotopy method that can handle P > N predictors, but it is still computationally demanding when P is large. The same method was discussed by Efron et al. (2004) in a different framework and became known as the LARS-Lasso. These methods provide efficient algorithms to find the entire Lasso regularization path. The "Grafting" algorithm of Perkins, Lacker, and Theiler (2003), the "Pathseeker" algorithm of Friedman and Popescu (2004), and the "boosting" algorithm of Zhao and Yu (2004) are gradient descent algorithms that can deal with P > N predictors in a computationally less demanding way. However, in the P > N case, none of these Lasso algorithms can select more than N predictors. The Elastic Net algorithm, which is based on the LARS-Lasso algorithm, is capable of selecting more than N predictors due to the use of the additional Ridge penalty. All these methods apply only to linear regression. In this paper, we show how to implement Ridge, Lasso, and Elastic Net penalties i...
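As a rough illustration of how optimal scaling can be combined with a regularized fit, the sketch below alternates between a Lasso fit on the current category quantifications and a nominal re-quantification of each categorical predictor. It uses scikit-learn's Lasso, assumes unordered categorical predictors coded as integers with at least two categories each, and is a simplified illustration of the general idea, not the ROS algorithm described in the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

def optimal_scaling_lasso(X_cat, y, alpha=0.1, n_iter=50):
    """Alternate between (1) a Lasso fit and (2) re-quantifying categories.

    X_cat : (n, p) integer array of category codes, one column per predictor.
    Returns per-predictor category quantifications and the fitted Lasso model.
    Simplified illustration only; not the ROS algorithm from the paper.
    """
    n, p = X_cat.shape
    quantifications = [None] * p
    # start from the raw codes, standardized per column
    Z = X_cat.astype(float)
    Z = (Z - Z.mean(0)) / Z.std(0)
    model = Lasso(alpha=alpha, fit_intercept=True)
    for _ in range(n_iter):
        model.fit(Z, y)
        yhat = model.predict(Z)
        for j in range(p):
            b = model.coef_[j]
            if abs(b) < 1e-12:
                continue                      # predictor dropped by the Lasso
            # part of y not explained by the other predictors, rescaled
            target = (y - (yhat - b * Z[:, j])) / b
            # nominal optimal scaling: quantification = category mean of target
            codes = X_cat[:, j]
            quants = {c: target[codes == c].mean() for c in np.unique(codes)}
            zj = np.array([quants[c] for c in codes])
            Z[:, j] = (zj - zj.mean()) / zj.std()   # keep zero mean, unit variance
            quantifications[j] = quants
    return quantifications, model
```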
Background: Collateral effects of antibiotic resistance occur when resistance to one antibiotic agent leads to increased resistance or increased sensitivity to a second agent, known respectively as collateral resistance (CR) and collateral sensitivity (CS). Collateral effects are relevant for limiting the impact of antibiotic resistance when designing antibiotic treatments. However, methods to detect collateral effects in clinical population surveillance data of antibiotic resistance are lacking.
Objectives: To develop a methodology to quantify collateral effect directionality and effect size from large-scale antimicrobial resistance population surveillance data.
Methods: We propose a methodology to quantify and test collateral effects in clinical surveillance data based on a conditional t-test. The methodology was evaluated using MIC data for 419 Escherichia coli strains, covering 20 antibiotics, obtained from the Pathosystems Resource Integration Center (PATRIC) database.
Results: We demonstrate that the proposed approach identifies several antibiotic combinations that show symmetrical or non-symmetrical CR and CS. For several of these combinations, collateral effects were previously confirmed in experimental studies. We furthermore provide insight into the power of our method for multiple collateral effect sizes and MIC distributions.
Conclusions: Our proposed approach is relevant as a tool for analysis of large-scale population surveillance studies, providing broad systematic identification of collateral effects related to antibiotic resistance, and is made available to the community as an R package. The method can help map CS and CR, which could guide combination therapy and prescribing in the future.
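A minimal sketch of the conditional-t-test idea: isolates are split by resistance to drug A at an assumed breakpoint, and the log2 MIC of drug B is compared between the two groups with Welch's t-test. The breakpoint handling and exact test used in the authors' R package may differ:

```python
import numpy as np
from scipy import stats

def collateral_effect(mic_a, mic_b, breakpoint_a):
    """Conditional t-test sketch for collateral effects.

    mic_a, mic_b : MIC values (mg/L) of drugs A and B for the same isolates.
    breakpoint_a : MIC above which an isolate is called resistant to drug A
                   (assumes both resulting groups are non-empty).
    Returns the mean log2-MIC shift of drug B in A-resistant vs A-sensitive
    isolates (positive suggests CR, negative suggests CS) and the Welch
    t-test p-value.
    """
    mic_a = np.asarray(mic_a, dtype=float)
    mic_b = np.asarray(mic_b, dtype=float)
    resistant = mic_a > breakpoint_a
    b_res = np.log2(mic_b[resistant])
    b_sen = np.log2(mic_b[~resistant])
    shift = b_res.mean() - b_sen.mean()
    _, pvalue = stats.ttest_ind(b_res, b_sen, equal_var=False)
    return shift, pvalue
```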
Metabolomics is emerging as an important field in life sciences. However, a weakness of current mass spectrometry (MS)-based metabolomics platforms is the time-consuming analysis and the occurrence of severe matrix effects in complex mixtures. To overcome these problems, we have developed an automated and fast fractionation module coupled online to MS. The fractionation is realized by three consecutive high-performance solid-phase extraction columns consisting of reversed-phase, mixed-mode anion exchange, and mixed-mode cation exchange sorbent chemistries. The different chemistries resulted in efficient interaction with a wide range of metabolites, based on polarity and charge, and in the separation of important matrix interferences such as salts and phospholipids. The use of short columns and direct solvent switches allowed for fast screening (3 min per polarity). In total, 50 commonly reported diagnostic or explorative biomarkers were validated with a limit of quantification comparable with conventional LC-MS(/MS). In comparison with flow injection analysis without fractionation, ion suppression decreased from 89% to 25%, and sensitivity was 21 times higher. The validated method was used to investigate the effects of circadian rhythm and food intake on several metabolite classes. The significant diurnal changes that were observed stress the importance of standardized sampling times and fasting states when metabolite biomarkers are used. Our method demonstrates a fast approach for global profiling of the metabolome, bringing metabolomics one step closer to implementation in the clinic.
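For reference, ion suppression is commonly expressed as the fraction of analyte signal lost in the sample matrix relative to a neat standard; the sketch below uses that common definition with illustrative numbers, which may differ from the exact calculation used in the paper:

```python
def ion_suppression(signal_in_matrix, signal_in_solvent):
    """Percent ion suppression: signal lost in matrix relative to neat solvent.

    Common definition (the paper's exact calculation may differ):
    suppression = 100 * (1 - signal_in_matrix / signal_in_solvent).
    """
    return 100.0 * (1.0 - signal_in_matrix / signal_in_solvent)

# Illustrative numbers only: a drop from 89% to 25% suppression corresponds to
# the in-matrix signal recovering from 11% to 75% of the neat-solvent signal.
print(ion_suppression(11.0, 100.0), ion_suppression(75.0, 100.0))
```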
In resampling methods, such as bootstrapping or cross-validation, a very similar computational problem (usually an optimization procedure) is solved over and over again for a set of very similar data sets. If it is computationally burdensome to solve this problem once, the whole resampling method can become infeasible. However, because the computational problems and data sets are so similar, the speed of the resampling method may be increased by taking advantage of these similarities in method and data. As a generic solution, we propose to learn the relation between the resampled data sets and their corresponding optima. Using this learned knowledge, we are then able to predict the optima associated with new resampled data sets. First, these predicted optima are used as starting values for the optimization process. Once the predictions become accurate enough, the optimization process may even be omitted completely, greatly decreasing the computational burden. The suggested method is validated using two simple problems (where the results can be verified analytically) and two real-life problems (the bootstrap of a mixed model and of a generalized extreme value distribution). The proposed method led on average to a tenfold increase in the speed of the resampling method.
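A minimal sketch of the warm-start idea, assuming a bootstrap of a Gamma maximum-likelihood fit: the first resamples are solved from a cold start, a linear map from cheap data summaries to the optimum is learned, and later resamples start from the predicted optimum (once predictions are accurate enough, the fit could be skipped entirely). The distribution, summaries, and linear predictor here are illustrative choices, not those from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(0)
data = gamma.rvs(a=2.0, scale=3.0, size=500, random_state=rng)

def neg_loglik(theta, x):
    a, scale = np.exp(theta)                  # log-parameterization keeps a, scale > 0
    return -gamma.logpdf(x, a=a, scale=scale).sum()

def fit(x, start):
    return minimize(neg_loglik, start, args=(x,), method="Nelder-Mead").x

def features(x):                              # cheap summaries of a resampled data set
    return np.array([x.mean(), np.log(x).mean(), 1.0])

n_boot, n_train = 200, 30
cold_start = np.log([1.0, 1.0])

# Phase 1: solve the first resamples from a cold start, record (features, optimum)
F, T = [], []
for _ in range(n_train):
    xb = rng.choice(data, size=data.size, replace=True)
    F.append(features(xb))
    T.append(fit(xb, cold_start))

# Learn a linear map from data-set features to optima
A, *_ = np.linalg.lstsq(np.asarray(F), np.asarray(T), rcond=None)

# Phase 2: use predicted optima as warm starts for the remaining resamples
estimates = []
for _ in range(n_boot - n_train):
    xb = rng.choice(data, size=data.size, replace=True)
    warm_start = features(xb) @ A
    estimates.append(np.exp(fit(xb, warm_start)))
```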