Principal components are a well-established tool in dimension reduction. The extension to principal curves allows for general smooth curves that pass through the middle of a multidimensional data cloud. In this paper, local principal curves are introduced, which are based on the localization of principal component analysis. The proposed algorithm is able to identify closed curves as well as multiple curves that may or may not be connected. To evaluate the performance of principal curves as a tool for data reduction, a measure of coverage is suggested. Using simulated and real data sets, the approach is compared to various alternative principal curve concepts.
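The core building block of a local principal curve can be sketched in a few lines: at the current position, compute a kernel-weighted local mean and the first eigenvector of the kernel-weighted covariance, then step along that eigenvector. The following is a minimal illustration of one such step, with the Gaussian kernel, bandwidth, and step size chosen purely for demonstration (not the authors' exact algorithm, which also handles direction consistency, stopping rules, and branching):

```python
import numpy as np

def local_pc_step(X, x0, h, step):
    """One step of a local principal curve: move from x0 along the first
    principal component of the kernel-weighted local covariance."""
    w = np.exp(-0.5 * np.sum((X - x0) ** 2, axis=1) / h ** 2)  # Gaussian weights
    w = w / w.sum()
    mu = w @ X                                  # local (weighted) mean
    C = (w[:, None] * (X - mu)).T @ (X - mu)    # weighted covariance matrix
    _, eigvec = np.linalg.eigh(C)
    gamma = eigvec[:, -1]                       # first local principal component
    return mu + step * gamma

# Noisy half-circle: a step started at (1, 0) stays close to the data cloud
rng = np.random.default_rng(1)
t = rng.uniform(0, np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.05, size=(400, 2))
p = local_pc_step(X, np.array([1.0, 0.0]), h=0.2, step=0.1)
```

Iterating this step from a starting point (and again in the reverse direction) traces out the curve through the middle of the cloud.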
Blood lactate markers are used as summary measures of the underlying model of an athlete's blood lactate response to increasing work rate. Exercise physiologists use these endurance markers, which typically correspond to a work rate in the region of high curvature of the lactate curve, to predict and compare endurance ability. A short theoretical background of the commonly used markers is given, and algorithms are provided for their calculation. To date, no free software exists that allows the sports scientist to calculate these markers. In this paper, software is introduced for precisely this purpose: it calculates a variety of lactate markers for an individual athlete, for an athlete at different time points (e.g. across a season), and simultaneously for a squad.
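One widely used marker of this kind is the Dmax point: the work rate at which a curve fitted to the lactate measurements lies furthest (in perpendicular distance) from the chord joining its endpoints, i.e. a point of high curvature. The sketch below uses an ordinary cubic polynomial fit and entirely hypothetical stage-test data; it is meant only to illustrate the idea, not to reproduce the fitting choices of the software described in the paper:

```python
import numpy as np

def dmax_marker(work_rate, lactate, degree=3):
    """Dmax-style marker: work rate at which the fitted lactate curve is
    furthest (perpendicular distance) from the chord joining its endpoints."""
    coef = np.polyfit(work_rate, lactate, degree)
    grid = np.linspace(work_rate.min(), work_rate.max(), 500)
    fit = np.polyval(coef, grid)
    p0 = np.array([grid[0], fit[0]])
    p1 = np.array([grid[-1], fit[-1]])
    chord = (p1 - p0) / np.linalg.norm(p1 - p0)   # unit vector along the chord
    pts = np.c_[grid, fit] - p0
    dist = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])  # perp. distance
    return grid[np.argmax(dist)]

# Hypothetical incremental-test data: lactate rises steeply at high work rates
wr = np.array([100, 125, 150, 175, 200, 225, 250], dtype=float)
la = np.array([1.1, 1.2, 1.4, 1.8, 2.6, 4.3, 7.5])
marker = dmax_marker(wr, la)
```

The returned work rate falls in the region where the lactate curve bends sharply upward, which is exactly the region the endurance markers target.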
For speed-flow data, which are intensively discussed in transportation science, common nonparametric regression models of the type y = m(x) + noise turn out to be inadequate, since simple functional models cannot capture the essential relationship between predictor and response. Instead, a more general setting is required, allowing for multifunctions rather than functions. The tool proposed is conditional mode estimation, which yields several branches corresponding to the local modes of the conditional distribution. A simple algorithm for computing the branches, based on a conditional mean shift, is derived and shown to work well in the application considered.
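The conditional mean shift idea can be illustrated compactly: at a fixed predictor value x0, iterate y towards the kernel-weighted mean of the responses, with weights in both the x- and y-direction; the iteration converges to a local mode of the estimated conditional density, and different starting values recover different branches. This is a minimal sketch with Gaussian kernels and ad-hoc bandwidths, not the paper's exact implementation:

```python
import numpy as np

def conditional_mean_shift(x, y, x0, y_start, h1=0.3, h2=0.3, n_iter=50):
    """Iterate the conditional mean shift at fixed x0: y converges to a
    local mode of the kernel estimate of the conditional density f(y | x0)."""
    wx = np.exp(-0.5 * ((x - x0) / h1) ** 2)   # kernel weights in the design space
    yc = y_start
    for _ in range(n_iter):
        wy = np.exp(-0.5 * ((y - yc) / h2) ** 2)
        yc = np.sum(wx * wy * y) / np.sum(wx * wy)
    return yc

# Two-branch data: at every x the response clusters near y = 0 or y = 3
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 600)
y = np.where(rng.random(600) < 0.5, 0.0, 3.0) + rng.normal(scale=0.2, size=600)
lo = conditional_mean_shift(x, y, x0=0.5, y_start=-0.5)   # converges to lower branch
hi = conditional_mean_shift(x, y, x0=0.5, y_start=3.5)    # converges to upper branch
```

Running the iteration over a grid of x0 values and several starting points traces out the branches of the multifunction.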
Oliveira, María and Einbeck, Jochen and Higueras, Manuel and Ainsbury, Elizabeth and Puig, Pedro and Rothkamm, Kai (2016), 'Zero-inflated regression models for radiation-induced chromosome aberration data: a comparative study', Biometrical Journal, 58 (2), pp. 259-279, http://dx.doi.org/10.1002/bimj.201400233.
Within the field of cytogenetic biodosimetry, Poisson regression is the classical approach for modelling the number of chromosome aberrations as a function of radiation dose. However, it is common to find data that exhibit overdispersion. In practice, the assumption of equidispersion may be violated due to unobserved heterogeneity in the cell population, which renders the variance of the observed aberration counts larger than their mean, and/or the frequency of zero counts greater than expected under the Poisson distribution.
This phenomenon is observable for both full- and partial-body exposure, but is more pronounced for the latter. In this work, different methodologies for analysing cytogenetic chromosome aberration datasets are compared, with special focus on zero-inflated Poisson and zero-inflated negative binomial models. A score test for zero-inflation in Poisson regression models under the identity link is also developed.
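The two violations of the Poisson assumption described above are easy to exhibit by simulation. The snippet below generates zero-inflated Poisson (ZIP) counts with illustrative, made-up parameters (it does not use the paper's data or its score test) and checks that the sample variance exceeds the mean and that zeros occur more often than a Poisson fit with the same mean would predict:

```python
import numpy as np

# Zero-inflated Poisson: with probability pi_zero the count is a structural
# zero, otherwise it is Poisson(lam). Parameters are purely illustrative.
rng = np.random.default_rng(42)
n, pi_zero, lam = 10_000, 0.3, 2.0
counts = np.where(rng.random(n) < pi_zero, 0, rng.poisson(lam, size=n))

mean = counts.mean()                 # ZIP mean: (1 - pi_zero) * lam = 1.4
var = counts.var()                   # ZIP variance exceeds the mean (overdispersion)
obs_zero = np.mean(counts == 0)      # observed zero frequency
pois_zero = np.exp(-mean)            # zero frequency implied by a Poisson with this mean
```

Both diagnostics point away from the equidispersed Poisson model, which is what motivates the zero-inflated alternatives compared in the paper.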
We propose weighted repeated median filters and smoothers for robust nonparametric regression in general and for robust signal extraction from time series in particular. The proposed methods make it possible to remove outlying sequences and to preserve discontinuities (shifts) in the underlying regression function (the signal) in the presence of local linear trends. Suitable weighting of the observations according to their distances in the design space reduces the bias arising from non-linearities. It also improves the efficiency of (unweighted) repeated median filters by permitting larger bandwidths, while keeping their ability to distinguish between outlier sequences and long-term shifts. Robust smoothers based on weighted L1-regression are included for comparison.
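The unweighted building block of these filters is Siegel's repeated median: within a window, the slope is estimated as the median over i of the median over j ≠ i of the pairwise slopes, and the signal level is then the median of the slope-corrected observations. A minimal sketch for a single window follows (the weighted variants proposed in the paper would replace these plain medians with weighted ones):

```python
import numpy as np

def repeated_median_level(x, y, t):
    """Repeated median (Siegel) fit in one window: robust slope
    beta = med_i med_{j != i} (y_j - y_i) / (x_j - x_i), then the signal
    level at time t as the median of the slope-corrected observations."""
    n = len(x)
    slopes = np.empty(n)
    for i in range(n):
        dx = np.delete(x, i) - x[i]
        dy = np.delete(y, i) - y[i]
        slopes[i] = np.median(dy / dx)
    beta = np.median(slopes)
    return np.median(y - beta * (x - t))

# Linear trend contaminated by two gross outliers: the estimate is unaffected
x = np.arange(11.0)
y = 2.0 + 0.5 * x
y[3] += 50.0
y[7] -= 40.0
level = repeated_median_level(x, y, t=5.0)   # true level at t = 5 is 4.5
```

Because both the slope and the level are medians, the fit tolerates a large fraction of outlying observations in the window while still following local linear trends.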
Purpose: Reliable dose estimation is an important factor in the appropriate dosimetric triage categorization of exposed individuals, supporting radiation emergency response. Materials and methods: Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. Results: The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and for combining assay results into single dose and uncertainty estimates are discussed further. Conclusions: Finally, it is demonstrated that, whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
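The essence of a Bayesian dose assessment can be sketched very simply: given a (here hypothetical) calibration curve for the expected aberration yield per cell, the posterior over dose is proportional to the Poisson likelihood of the observed count times a prior. The snippet below uses a flat prior on a dose grid and invented calibration coefficients, purely to illustrate the mechanics; it is not the specific methodology compared in the paper:

```python
import numpy as np

def dose_posterior(y, cells, dgrid, c, alpha, beta):
    """Posterior over dose on a grid under a flat prior: the likelihood is
    Poisson with expected aberrations per cell lam(d) = c + alpha*d + beta*d^2."""
    lam = cells * (c + alpha * dgrid + beta * dgrid ** 2)
    loglik = y * np.log(lam) - lam          # Poisson log-likelihood up to a constant
    post = np.exp(loglik - loglik.max())    # stabilise before exponentiating
    return post / (post.sum() * (dgrid[1] - dgrid[0]))  # normalise to a density

# Hypothetical observation: 80 aberrations scored in 500 cells
dgrid = np.linspace(0.01, 5.0, 500)
post = dose_posterior(y=80, cells=500, dgrid=dgrid,
                      c=0.001, alpha=0.02, beta=0.06)  # invented coefficients
d_map = dgrid[np.argmax(post)]              # posterior mode (flat prior => MLE)
```

Credible intervals read off from this posterior are exactly the kind of uncertainty statement that, as the paper argues, should not be ignored during rapid triage categorization.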
Over the last decade, the γ-H2AX focus assay, which exploits the phosphorylation of the H2AX histone following DNA double-strand breaks, has made considerable progress towards acceptance as a reliable biomarker for exposure to ionizing radiation. While the existing literature has convincingly demonstrated a dose-response effect, and has also presented approaches to dose estimation based on appropriately defined calibration curves, more widespread practical use is still hampered by a lack of discussion and agreement on specific dose-response modelling and uncertainty quantification strategies, as well as by the unavailability of software implementations. This manuscript intends to fill these gaps by stating explicitly the statistical models and techniques required for calibration curve estimation and subsequent dose estimation. Accompanying this article, a web applet has been produced which implements the discussed methods.
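The calibrative workflow has two steps: fit a dose-response curve on data with known doses, then invert it at an observed yield to estimate an unknown dose. The stripped-down sketch below assumes a linear curve (foci yields are commonly modelled as linear in dose) fitted by ordinary least squares on invented calibration data; the paper's treatment, with count-data likelihoods and uncertainty quantification, is considerably richer:

```python
import numpy as np

def fit_linear_calibration(dose, yield_):
    """Least-squares fit of a linear calibration curve: yield = c + alpha * dose."""
    A = np.c_[np.ones_like(dose), dose]
    (c, alpha), *_ = np.linalg.lstsq(A, yield_, rcond=None)
    return c, alpha

def estimate_dose(c, alpha, observed_yield):
    """Invert the fitted calibration curve to obtain a dose estimate."""
    return (observed_yield - c) / alpha

# Hypothetical calibration data: background ~0.5 foci/cell, slope ~8 foci/Gy
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
foci = np.array([0.5, 2.6, 4.4, 8.6, 16.4])
c, alpha = fit_linear_calibration(dose, foci)
d_hat = estimate_dose(c, alpha, observed_yield=12.5)
```

In practice the inversion must be accompanied by an uncertainty statement for d_hat, propagating both the calibration-curve uncertainty and the sampling variability of the observed yield, which is precisely what the discussed methods formalise.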
The analysis of high-dimensional data is usually challenging, since many standard modelling approaches tend to break down due to the so-called "curse of dimensionality". Dimension reduction techniques, which reduce the data set (explicitly or implicitly) to a smaller number of variables, make the data analysis more efficient and are furthermore useful for visualization purposes. However, most dimension reduction techniques require fixing the intrinsic dimension of the low-dimensional subspace in advance. The intrinsic dimension can be estimated by fractal dimension estimation methods, which exploit the intrinsic geometry of a data set. The most popular concept from this family of methods is the correlation dimension, which requires estimating the correlation integral for a ball of radius tending to 0. In this paper, we propose approaches to approximate the correlation integral in this limit. Experimental results on real-world and simulated data are used to demonstrate the algorithms and to compare them with other methodology. A simulation study verifying the effectiveness of the proposed methods is also provided.
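As background to the correlation dimension, the standard Grassberger-Procaccia-style estimator computes the correlation integral C(r), the fraction of point pairs within distance r, and reads the dimension off as the slope of log C(r) against log r over small radii. The sketch below implements this textbook version (not the limit approximations proposed in the paper) on points lying on a circle embedded in R^3, whose intrinsic dimension is 1:

```python
import numpy as np

def correlation_dimension(X, radii):
    """Estimate the correlation dimension as the slope of log C(r) vs log r,
    where C(r) is the fraction of point pairs at distance less than r."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    pair_d = d[np.triu_indices(n, k=1)]                # all pairwise distances
    C = np.array([np.mean(pair_d < r) for r in radii]) # correlation integral
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Unit circle embedded in R^3: ambient dimension 3, intrinsic dimension 1
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 800)
X = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
dim = correlation_dimension(X, radii=np.array([0.05, 0.1, 0.2, 0.4]))
```

The difficulty the paper addresses is visible here: the slope must be taken over radii small enough to reflect the limit r → 0, yet large enough that C(r) is estimated from sufficiently many pairs.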