The authors consider the problem of estimating the density g of independent and identically distributed variables Xi from a sample Z1, . . . , Zn such that Zi = Xi + σεi for i = 1, . . . , n, where ε is noise independent of X and σε has a known distribution. They present a model selection procedure that allows one to construct an adaptive estimator of g and to derive nonasymptotic risk bounds. The estimator achieves the minimax rate of convergence in most cases where lower bounds are available. A simulation study illustrates the good practical performance of the method.
Adaptive density deconvolution by penalized contrast. The authors consider the deconvolution problem, that is, the estimation of the density of identically distributed random variables Xi from observations Zi, where Zi = Xi + σεi for i = 1, . . . , n, and where the errors σεi have a known density. Through a model selection procedure that yields nonasymptotic risk bounds, they construct an adaptive estimator of the density of the Xi. The estimator automatically attains the minimax rate in most cases, whether the errors or the density to be estimated are mildly or highly regular. A simulation study illustrates the good practical performance of the method.
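To make the Fourier-domain idea behind such estimators concrete, here is a minimal Python sketch: the empirical characteristic function of the Zi is divided by the known noise characteristic function and inverted on a grid, and the spectral cutoff m is chosen by a penalized contrast. The penalty pen(m) = c·m/n and the constant c are illustrative assumptions, not the paper's calibrated penalty.

```python
# Minimal sketch of Fourier-domain density deconvolution with a
# penalized choice of the spectral cutoff. Penalty shape and constant
# are assumptions, not the paper's calibrated penalty.
import numpy as np

def deconvolve_density(z, noise_cf, x_grid, cutoffs, n_t=512, c=2.0):
    """Estimate the density of X from Z = X + noise, where `noise_cf`
    is the known characteristic function of the noise."""
    n = len(z)
    best = None
    for m in cutoffs:                              # candidate spectral cutoffs
        t = np.linspace(-np.pi * m, np.pi * m, n_t)
        ecf = np.exp(1j * np.outer(t, z)).mean(axis=1)   # empirical c.f. of Z
        ratio = ecf / noise_cf(t)                  # deconvolve in Fourier domain
        f_hat = np.trapz(np.exp(-1j * np.outer(x_grid, t)) * ratio,
                         t, axis=1).real / (2 * np.pi)
        norm2 = np.trapz(f_hat ** 2, x_grid)
        crit = -norm2 + c * m / n                  # penalized contrast (assumed)
        if best is None or crit < best[0]:
            best = (crit, np.clip(f_hat, 0.0, None))   # clip negative lobes
    return best[1]

# usage: X ~ N(0, 1) observed with Laplace(0.3) noise of known density
rng = np.random.default_rng(0)
z = rng.normal(size=500) + rng.laplace(scale=0.3, size=500)
laplace_cf = lambda t: 1.0 / (1.0 + (0.3 * t) ** 2)    # known noise c.f.
grid = np.linspace(-4.0, 4.0, 200)
f_est = deconvolve_density(z, laplace_cf, grid, cutoffs=range(1, 11))
```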
Given an n-sample from some unknown density f on [0, 1], it is easy to construct a histogram of the data based on some given partition of [0, 1], but much less is known about an optimal choice of the partition, especially when the data set is not large, even if one restricts attention to partitions into intervals of equal length. Existing methods are either rules of thumb or based on asymptotic considerations, and often involve some smoothness properties of f. Our purpose in this paper is to give a fully automatic and simple method for choosing the number of bins of the partition from the data. It is based on a nonasymptotic evaluation of the performance of penalized maximum likelihood estimators in some exponential families due to Castellan, together with extensive simulations that allowed us to optimize the form of the penalty function. These simulations show that the method works quite well for sample sizes as small as 25.
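A minimal Python sketch of this data-driven bin choice follows. It assumes the penalty form D - 1 + (log D)^2.5 proposed in this line of work; treat the exponent as a tunable assumption rather than a universal constant.

```python
# Minimal sketch of the penalized-likelihood bin choice for data on
# [0, 1]: maximize the histogram log-likelihood minus a penalty in the
# number of bins D. The penalty D - 1 + (log D)**2.5 is one proposed
# form; the exponent is a tunable assumption.
import numpy as np

def choose_bins(x, d_max=None):
    n = len(x)
    d_max = d_max or max(2, n // 2)
    best_d, best_crit = 1, -np.inf
    for d in range(1, d_max + 1):                 # candidate bin counts
        counts, _ = np.histogram(x, bins=d, range=(0.0, 1.0))
        nz = counts[counts > 0]
        loglik = np.sum(nz * np.log(nz * d / n))  # histogram log-likelihood
        crit = loglik - (d - 1 + np.log(d) ** 2.5)
        if crit > best_crit:
            best_d, best_crit = d, crit
    return best_d

# usage on a Beta(2, 5) sample, which lives on [0, 1]
rng = np.random.default_rng(1)
print(choose_bins(rng.beta(2, 5, size=100)))
```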
We consider a one-dimensional diffusion process (Xt) observed at n + 1 discrete times with regular sampling interval ∆. Assuming that (Xt) is strictly stationary, we propose nonparametric estimators of the drift and diffusion coefficients obtained by a penalized least squares approach. Our estimators belong to a finite-dimensional function space whose dimension is selected by a data-driven method. We provide nonasymptotic risk bounds for the estimators. When the sampling interval tends to zero while the number of observations and the length of the observation time interval tend to infinity, we show that our estimators reach the minimax optimal rates of convergence. Numerical results based on exact simulations of diffusion processes are given for several examples of models and illustrate the quality of our estimation algorithms.
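To illustrate the regression reading of the problem, here is a minimal Python sketch for the drift: the normalized increments (X_{(i+1)∆} - X_{i∆})/∆ are regressed on piecewise-constant functions of X_{i∆}, and the model dimension is selected by a penalized least-squares criterion. The penalty κD/n and the constant κ are illustrative assumptions, not the paper's calibrated constants.

```python
# Minimal sketch of penalized least squares for the drift b of
# dX_t = b(X_t) dt + sigma(X_t) dW_t from discretely sampled data.
import numpy as np

def estimate_drift(x, delta, dims, kappa=2.0):
    y = (x[1:] - x[:-1]) / delta                   # pseudo-regression responses
    x0 = x[:-1]
    a, b = x0.min(), x0.max()
    n = len(y)
    best = None
    for d in dims:                                 # candidate dimensions
        edges = np.linspace(a, b, d + 1)
        idx = np.clip(np.searchsorted(edges, x0, side="right") - 1, 0, d - 1)
        coef = np.array([y[idx == j].mean() if np.any(idx == j) else 0.0
                         for j in range(d)])       # least squares per bin
        crit = np.mean((y - coef[idx]) ** 2) + kappa * d / n   # penalized contrast
        if best is None or crit < best[0]:
            best = (crit, edges, coef)
    _, edges, coef = best
    return lambda u: coef[np.clip(np.searchsorted(edges, u, side="right") - 1,
                                  0, len(coef) - 1)]

# usage: Ornstein-Uhlenbeck with true drift b(x) = -x, Euler simulation
rng = np.random.default_rng(2)
delta, n = 0.01, 5000
x = np.zeros(n + 1)
for i in range(n):
    x[i + 1] = x[i] - x[i] * delta + np.sqrt(delta) * rng.normal()
b_hat = estimate_drift(x, delta, dims=range(1, 21))
print(b_hat(np.array([-1.0, 0.0, 1.0])))           # should be near 1, 0, -1
```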
Background: Circulating tumor DNA (ctDNA) is an approved noninvasive biomarker to test for the presence of EGFR mutations at diagnosis or recurrence of lung cancer. However, studies evaluating ctDNA as a noninvasive "real-time" biomarker providing prognostic and predictive information for treatment monitoring have given inconsistent results, mainly because of methodological differences. We have recently validated a next-generation sequencing (NGS) approach to detect ctDNA. Using this new approach, we evaluated the clinical usefulness of ctDNA monitoring in a prospective observational series of patients with non-small cell lung cancer (NSCLC).
Methods and Findings: We recruited 124 patients with newly diagnosed advanced NSCLC for ctDNA monitoring. The primary objective was to analyze the prognostic value of baseline ctDNA on overall survival. ctDNA was assessed by ultra-deep targeted NGS using our dedicated variant-caller algorithm. Common mutations were validated by digital PCR. Of the 109 patients with at least one follow-up marker mutation, plasma samples were informative at baseline (n = 105), at first evaluation (n = 85), and at tumor progression (n = 66). We found that the presence of ctDNA at baseline was an independent marker of poor prognosis, with a median overall survival of 13.6 versus 21.5 months (adjusted hazard ratio [HR] 1.82, 95% CI 1.01-3.55, p = 0.045) and a median progression-free survival of 4.9 versus 10.4 months (adjusted HR 2.14, 95% CI 1.30-3.67, p = 0.002). Baseline ctDNA was also associated with the presence of bone and liver metastases. At first evaluation (E1) after treatment initiation, residual ctDNA was an early predictor of treatment benefit as judged by best radiological response and progression-free survival. Finally, negative ctDNA at E1 was associated with overall survival independently of Response Evaluation Criteria in Solid Tumors (RECIST) (HR 3.27, 95% CI 1.66-6.40, p < 0.001). Study population heterogeneity, over-representation of EGFR-mutated patients, and heterogeneous treatment types might limit the conclusions of this study, which require future validation in independent populations.
Conclusions: In this study of patients with newly diagnosed NSCLC, we found that ctDNA detection using targeted NGS was associated with poor prognosis. The heterogeneity of lung cancer molecular alterations, particularly at the time of progression, impairs the ability of individual gene testing to accurately detect ctDNA in unselected patients. Further investigations are needed to evaluate the clinical impact of earlier evaluation times, at 1 or 2 weeks. Supporting clinical decisions, such as early treatment switching based on ctDNA positivity at first evaluation, will require dedicated interventional studies.
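The reported hazard ratios come from survival modeling; the following hedged sketch shows the general shape of such an analysis as a Cox proportional-hazards fit using the third-party lifelines package. The data frame, column names, and values below are invented for illustration and are not the study's dataset.

```python
# Hedged sketch of a Cox proportional-hazards analysis of overall
# survival on baseline ctDNA positivity. All values are fabricated
# illustrations, not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "os_months": [13.6, 21.5, 8.2, 30.1, 5.4, 18.9, 10.4, 25.0],  # follow-up (months)
    "death":     [0, 0, 1, 0, 1, 1, 1, 0],                        # 1 = death observed
    "ctdna_pos": [1, 0, 1, 0, 1, 0, 1, 0],                        # baseline ctDNA detected
    "age":       [64, 58, 71, 49, 66, 60, 57, 62],                # adjustment covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% CIs, as reported in the abstract
```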
Attenuation correction in hybrid PET/MR scanners is still a challenging task. This paper describes a methodology for synthesizing a pseudo-CT volume from a single T1-weighted volume, thus allowing us to create accurate attenuation correction maps.
Methods: We propose a fast pseudo-CT volume generation from a patient-specific MR T1-weighted image using a groupwise patch-based approach and an MRI-CT atlas dictionary. For every voxel in the input MR image, we compute the similarity of the patch containing that voxel to the patches of all MR images in the database that lie in a certain anatomic neighborhood. The pseudo-CT volume is obtained as a local weighted linear combination of the CT values of the corresponding patches. The algorithm was implemented on a graphics processing unit (GPU).
Results: We evaluated our method both qualitatively and quantitatively for PET/MR correction. The approach performed successfully in all cases considered. We compared the SUVs of the PET image obtained after attenuation correction using the patient-specific CT volume and using the corresponding computed pseudo-CT volume. The patient-specific correlation between SUVs obtained with both methods was high (R² = 0.9980, P < 0.0001), and the Bland-Altman test showed that the average of the differences was low (0.0006 ± 0.0594). A region-of-interest analysis was also performed. The correlation between SUVmean and SUVmax for every region was high (R² = 0.9989, P < 0.0001, and R² = 0.9904, P < 0.0001, respectively).
Conclusion: The results indicate that our method can accurately approximate the patient-specific CT volume and serves as a potential solution for accurate attenuation correction in hybrid PET/MR systems. The quality of the corrected PET scan using our pseudo-CT volume is comparable to that obtained with a patient-specific CT scan, thus improving on the ultrashort-echo-time-based attenuation correction maps currently used in the scanner. The GPU implementation substantially decreases computation time, making the approach suitable for real applications.
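A minimal sketch of the patch-based synthesis step follows, shown in 2-D with a single registered atlas pair for brevity (the method operates on volumes with an MRI-CT atlas dictionary): each target patch is compared with atlas patches inside a local search window, and the atlas CT values are averaged with Gaussian similarity weights. Patch size, window size, and the bandwidth h are illustrative assumptions.

```python
# Minimal 2-D sketch of groupwise patch-based pseudo-CT synthesis with
# one registered atlas pair; the paper uses volumes and a dictionary.
import numpy as np

def pseudo_ct(mr, atlas_mr, atlas_ct, patch=3, window=5, h=0.1):
    r, w = patch // 2, window // 2
    pad = lambda v: np.pad(v, r + w, mode="edge")
    mr_p, amr_p, act_p = pad(mr), pad(atlas_mr), pad(atlas_ct)
    out = np.zeros_like(mr, dtype=float)
    for i in range(mr.shape[0]):
        for j in range(mr.shape[1]):
            ci, cj = i + r + w, j + r + w
            p = mr_p[ci - r:ci + r + 1, cj - r:cj + r + 1]   # target patch
            weights, values = [], []
            for di in range(-w, w + 1):                      # search window
                for dj in range(-w, w + 1):
                    q = amr_p[ci + di - r:ci + di + r + 1,
                              cj + dj - r:cj + dj + r + 1]
                    d2 = np.mean((p - q) ** 2)               # patch distance
                    weights.append(np.exp(-d2 / h ** 2))     # Gaussian weight
                    values.append(act_p[ci + di, cj + dj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out

# usage with toy slices standing in for registered MR/CT data
rng = np.random.default_rng(5)
atlas_mr = rng.random((32, 32))
atlas_ct = atlas_mr * 2000.0 - 1000.0          # fake Hounsfield-like values
mr = atlas_mr + 0.05 * rng.standard_normal((32, 32))
ct_est = pseudo_ct(mr, atlas_mr, atlas_ct)
```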
Targeted next-generation sequencing analyzed with the base-PER method is a robust, low-cost approach for detecting circulating tumor DNA in patients with cancer.
We introduce a new algorithm that builds an optimal dyadic decision tree (ODT). The method combines guaranteed performance in the learning-theoretic sense with optimal search from the algorithmic point of view. Furthermore, it inherits the explanatory power of tree approaches while improving performance over classical approaches such as CART/C4.5, as shown by experiments on artificial and benchmark data.
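A minimal sketch of the exact search idea for binary classification on [0, 1]^d follows: each cell either stays a leaf or is split at its midpoint along one coordinate, and recursion returns the exact minimizer of misclassifications plus a per-leaf penalty. The additive penalty is an illustrative stand-in for the paper's learning-theoretic one.

```python
# Minimal sketch of exact search over dyadic decision trees on [0, 1]^d
# with binary labels; the per-leaf penalty is an illustrative choice.
import numpy as np

def grow(x, y, lo, hi, depth, penalty=2.0, max_depth=4):
    """Return (cost, tree) for the cell [lo, hi]; tree is a nested dict."""
    counts = np.bincount(y, minlength=2)
    leaf = {"label": int(counts.argmax())}
    leaf_cost = len(y) - counts.max() + penalty   # errors + per-leaf penalty
    if depth == max_depth or len(y) <= 1:
        return leaf_cost, leaf
    best_cost, best_tree = leaf_cost, leaf
    for k in range(x.shape[1]):                   # midpoint split on axis k
        mid = (lo[k] + hi[k]) / 2.0
        left = x[:, k] < mid
        hi_l, lo_r = hi.copy(), lo.copy()
        hi_l[k] = lo_r[k] = mid
        cl, tl = grow(x[left], y[left], lo, hi_l, depth + 1, penalty, max_depth)
        cr, tr = grow(x[~left], y[~left], lo_r, hi, depth + 1, penalty, max_depth)
        if cl + cr < best_cost:
            best_cost = cl + cr
            best_tree = {"axis": k, "mid": mid, "left": tl, "right": tr}
    return best_cost, best_tree

# usage on toy 2-D data whose true boundary is the dyadic cut x0 = 0.5
rng = np.random.default_rng(3)
X = rng.random((200, 2))
Y = (X[:, 0] < 0.5).astype(int)
cost, tree = grow(X, Y, np.zeros(2), np.ones(2), depth=0)
```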
This paper is devoted to the construction of a complete database intended to improve the implementation and evaluation of automated facial reconstruction. This growing database is currently composed of 85 head CT scans of healthy European subjects aged 20-65 years. It also includes the triangulated surfaces of the face and the skull of each subject. These surfaces are extracted from the CT scans using an original combination of image-processing techniques, which are presented in the paper. In addition, a set of 39 referenced anatomical skull landmarks was located manually on each scan. Using the geometrical information provided by the triangulated surfaces, we compute facial soft-tissue depths at each known landmark position. We report the average thickness values at each landmark and compare our measurements to those of the traditional charts of [J. Rhine, C.E. Moore, Facial Tissue Thickness of American Caucasoids, Maxwell Museum of Anthropology, Albuquerque, New Mexico, 1982] and of several recent in vivo studies [M.H. Manhein, G.A. Listi, R.E. Barsley, et al., In vivo facial tissue depth measurements for children and adults, Journal of Forensic Sciences 45 (1) (2000) 48-60; S. De Greef, P. Claes, D. Vandermeulen, et al., Large-scale in vivo Caucasian facial soft tissue thickness database for craniofacial reconstruction, Forensic Science International 159S (2006) S126-S146; R. Helmer, Schädelidentifizierung durch elektronische Bildmischung, Kriminalistik Verlag GmbH, Heidelberg, 1984].
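To make the thickness computation concrete, here is a hedged Python sketch that approximates the soft-tissue depth at each skull landmark as the distance to the nearest vertex of the face surface, found with a k-d tree. The paper's measurements follow anatomical directions, so nearest-vertex distance is a simplifying assumption; the 39-landmark count matches the text, but the coordinates below are fake.

```python
# Hedged sketch: soft-tissue depth approximated as nearest-vertex
# distance from each skull landmark to the face mesh. Data are fake.
import numpy as np
from scipy.spatial import cKDTree

def tissue_depths(landmarks, face_vertices):
    """landmarks: (L, 3) skull landmark coordinates (mm);
    face_vertices: (V, 3) vertices of the face surface mesh (mm)."""
    tree = cKDTree(face_vertices)
    depths, _ = tree.query(landmarks)    # distance to the closest face vertex
    return depths

# usage with fake point clouds standing in for the extracted meshes
rng = np.random.default_rng(4)
face = rng.normal(size=(10000, 3)) * np.array([80.0, 100.0, 90.0])
marks = rng.normal(size=(39, 3)) * np.array([70.0, 90.0, 80.0])
print(tissue_depths(marks, face).mean())
```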