Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable to extraordinarily large data sets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling errors, it also leads to a covariance matrix of the estimators that is typically bounded from below by a term of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, i.e., the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward, and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data.
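As commonly described, the D-optimality-motivated variant of IBOSS cycles through the covariates and, for each one, retains the r rows with the smallest and the r rows with the largest values among the rows not yet selected, with r = k/(2p). The sketch below (names and simulation settings are illustrative, not from the abstract) shows this selection rule followed by ordinary least squares on the subdata only:

```python
import numpy as np

def iboss_d(X, k):
    """D-optimality-motivated IBOSS sketch: for each covariate in turn,
    keep the r rows with the smallest and the r rows with the largest
    values among the rows not yet selected, where r = k / (2p)."""
    n, p = X.shape
    r = k // (2 * p)
    selected = np.zeros(n, dtype=bool)
    for j in range(p):
        avail = np.flatnonzero(~selected)        # rows still available
        order = np.argsort(X[avail, j])          # sort this covariate
        take = np.concatenate([order[:r], order[-r:]])
        selected[avail[take]] = True
    return np.flatnonzero(selected)

# Usage: simulate a large linear-regression data set, select k = 400
# rows out of n = 100,000, and fit OLS on the subdata only.
rng = np.random.default_rng(0)
n, beta = 100_000, np.array([1.0, 2.0, -1.0])    # intercept + 2 slopes
X = rng.normal(size=(n, 2))
y = beta[0] + X @ beta[1:] + rng.normal(size=n)

idx = iboss_d(X, k=400)
Z = np.column_stack([np.ones(idx.size), X[idx]])
beta_hat, *_ = np.linalg.lstsq(Z, y[idx], rcond=None)
```

Because the selected rows sit at the extremes of each covariate, the subdata carries far more information about the slopes than a uniform subsample of the same size, which is the mechanism behind advantage (iii) above.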
We extend the approach in [Ann. Statist. 38 (2010) 2499-2524] for identifying locally optimal designs for nonlinear models. Conceptually the extension is relatively simple, but the consequences in terms of applications are profound. As we will demonstrate, we can obtain results for locally optimal designs under many optimality criteria and for a larger class of models than has been done hitherto. In many cases the results lead to optimal designs with the minimal number of support points.
We propose a new approach for identifying the support points of a locally optimal design for a nonlinear model. In contrast to the commonly used geometric approach, we use an approach based on algebraic tools. Considerations are restricted to models with two parameters, and the general results are applied to often-used special cases, including the logistic, probit, double exponential and double reciprocal models for binary data, a loglinear Poisson regression model for count data, and the Michaelis-Menten model. The approach, which is also of value for multi-stage experiments, works with both constrained and unconstrained design regions and is relatively easy to implement.
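A classical instance of such a two-parameter result is the logistic model with success probability p(x) = 1/(1 + exp(-(b0 + b1 x))): on an unconstrained design region, the locally D-optimal design is known to put equal weight on the two points whose linear predictor equals +-c with c approximately 1.5434. The sketch below verifies this numerically for the illustrative local values b0 = 0, b1 = 1 (the reduction of the determinant to a one-dimensional problem is our own working, not taken from the abstract):

```python
import math

def p(c):                       # logistic success probability at x = c
    return 1.0 / (1.0 + math.exp(-c))

# With two equally weighted symmetric points +-c, the determinant of the
# information matrix is proportional to w(c)^2 * c^2, where w = p(1 - p).
# Setting the derivative of its logarithm to zero gives
#   g(c) = 1 - 2 p(c) + 1/c = 0.
def g(c):
    return 1.0 - 2.0 * p(c) + 1.0 / c

lo, hi = 1.0, 2.0               # g(1) > 0 and g(2) < 0: root is bracketed
for _ in range(60):             # simple bisection for the root of g
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
c_star = 0.5 * (lo + hi)        # logit of the two optimal support points
```

For general local values (b0, b1), the two support points are obtained by solving b0 + b1 x = +-c_star, illustrating how a two-point design with the minimal number of support points arises.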