<p style='text-indent:20px;'>Computing the gradient of a function provides fundamental information about its behavior. This information is essential for applications and algorithms across many fields. Common applications that require gradients are optimization techniques such as stochastic gradient descent, Newton's method, and trust-region methods. However, these methods usually require a numerical computation of the gradient at every iteration, which is prone to numerical errors. We propose a simple limited-memory technique for improving the accuracy of a numerically computed gradient in this gradient-based optimization framework by exploiting (1) a coordinate transformation of the gradient and (2) the history of previously taken descent directions. The method is verified empirically by extensive experimentation on both test functions and real data applications. The proposed method is implemented in the <inline-formula><tex-math id="M1">\begin{document}$\texttt{R} $\end{document}</tex-math></inline-formula> package <inline-formula><tex-math id="M2">\begin{document}$ \texttt{smartGrad}$\end{document}</tex-math></inline-formula> and in C<inline-formula><tex-math id="M3">\begin{document}$ \texttt{++} $\end{document}</tex-math></inline-formula>.</p>
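The two ingredients the abstract names, a coordinate transformation and a history of descent directions, can be illustrated with a minimal Python sketch (the package itself is in R/C++; the function names and the Gram–Schmidt/QR construction below are illustrative assumptions, not the package's actual implementation): finite differences are taken along an orthonormal basis whose leading directions span the recent descent history, and the directional derivatives are mapped back to the canonical coordinates.

```python
import numpy as np

def forward_diff_grad(f, x, h=1e-6, basis=None):
    """Forward-difference gradient of f at x.

    If `basis` is given (an orthonormal matrix whose columns are the
    differencing directions), differences are taken along those directions
    and the result is mapped back to the canonical basis via g = B @ d.
    """
    n = x.size
    B = np.eye(n) if basis is None else basis
    fx = f(x)
    # Directional derivatives d_i ~= (f(x + h b_i) - f(x)) / h
    d = np.array([(f(x + h * B[:, i]) - fx) / h for i in range(n)])
    return B @ d  # since B is orthonormal, g = B d recovers the gradient

def history_basis(directions):
    """Orthonormal basis whose leading columns span the stored descent
    directions (via QR), padded with canonical vectors to full dimension."""
    n = directions[0].size
    M = np.column_stack(list(directions) + [np.eye(n)[:, i] for i in range(n)])
    Q, _ = np.linalg.qr(M)  # reduced QR gives an n-by-n orthonormal Q
    return Q[:, :n]
```

For an orthonormal basis the truncation error of each directional difference is still O(h); the point of aligning the basis with previous descent directions is that the leading coordinates track the directions along which the optimizer is actually moving.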
Background: The parameter uncertainty in the six-dimensional health state short form (SF-6D) value sets is commonly ignored. There are two sources of parameter uncertainty: uncertainty around the estimated regression coefficients and uncertainty around the model’s specification. This study explores these two sources of parameter uncertainty in the value sets using probabilistic sensitivity analysis (PSA) and a Bayesian approach. Methods: We used data from the original UK SF-6D valuation study to evaluate the extent of parameter uncertainty in the value set. First, we re-estimated the Brazier model to replicate the published estimated coefficients. Second, we estimated standard errors around the predicted utility of each SF-6D state to assess the impact of parameter uncertainty on these estimated utilities. Third, we used a Monte Carlo simulation technique to account for the uncertainty in these estimates. Finally, we used a Bayesian approach to quantify parameter uncertainty in the value sets. The extent of parameter uncertainty in SF-6D value sets was also assessed using data from the Hong Kong valuation study. Results: Including parameter uncertainty results in wider confidence/credible intervals and improved coverage probability under both approaches. Using PSA, the mean 95% confidence interval widths for the mean utilities were 0.1394 (range: 0.0565–0.2239) and 0.0989 (0.0048–0.1252) with and without parameter uncertainty, respectively, whilst, using the Bayesian approach, the width was 0.1478 (0.053–0.1665). Upon evaluating the impact of parameter uncertainty on estimates of a population’s mean utility, the true standard error was underestimated by 79.1% (PSA) and 86.15% (Bayesian) when parameter uncertainty was ignored. Conclusions: Parameter uncertainty around the SF-6D value set has a large impact on the predicted utilities and estimated confidence intervals. This uncertainty should be accounted for when using SF-6D utilities in economic evaluations.
Ignoring this additional information could lead to misleading policy decisions.
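The PSA step described in the Methods, propagating uncertainty in the estimated regression coefficients into the predicted utilities via Monte Carlo simulation, can be sketched as follows. All numbers here (a three-coefficient model, its covariance, the design row for one SF-6D state) are made-up placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: estimated coefficients and their covariance matrix
beta_hat = np.array([0.95, -0.05, -0.10])      # intercept + two level dummies
cov_beta = np.diag([0.0004, 0.0001, 0.0001])   # as reported by the fitted model

x_state = np.array([1.0, 1.0, 0.0])            # design row for one health state

# PSA: draw coefficient vectors, propagate each into a predicted utility,
# and summarize the resulting distribution with a percentile interval
draws = rng.multivariate_normal(beta_hat, cov_beta, size=10_000)
utilities = draws @ x_state
lo, hi = np.percentile(utilities, [2.5, 97.5])
```

Ignoring parameter uncertainty amounts to using the single point prediction `beta_hat @ x_state`; the interval `(lo, hi)` shows how much wider the plausible range becomes once coefficient uncertainty is propagated.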
In this paper, we propose a new approach for fitting spatiotemporal models, with application to disease mapping, using the interaction types I, II, III, and IV proposed by [1]. When we account for spatiotemporal interactions in disease-mapping models, inference becomes more useful in revealing unknown patterns in the data. However, when the number of locations and/or the number of time points is large, inference becomes computationally challenging due to the large number of constraints required, and this holds for various inference frameworks including Markov chain Monte Carlo (MCMC) [2] and Integrated Nested Laplace Approximations (INLA) [3]. We reformulate the INLA approach based on dense matrices to fit the intrinsic spatiotemporal models with the four interaction types while accounting for the sum-to-zero constraints, and discuss how the new approach can be implemented in a high-performance computing framework. The computing time of the new approach does not depend on the number of constraints and can be up to 40-fold faster than INLA in realistic scenarios. The approach is verified by a simulation study and a real data application, and it is implemented in the R package INLAPLUS and the Python function inla1234().
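The sum-to-zero constraints mentioned above are linear constraints A x = e on a Gaussian field with precision matrix Q, and a standard way to impose them (used, for example, in INLA under the name "conditioning by kriging") is to correct an unconstrained sample. The dense-matrix sketch below is a generic illustration of that correction, not the paper's reformulated algorithm:

```python
import numpy as np

def condition_on_constraints(x, Q, A, e=None):
    """Correct an unconstrained sample x ~ N(mu, Q^{-1}) so that A x = e.

    Implements the "conditioning by kriging" update
        x_c = x - Q^{-1} A^T (A Q^{-1} A^T)^{-1} (A x - e)
    with dense linear algebra; Q must be symmetric positive definite.
    """
    e = np.zeros(A.shape[0]) if e is None else e
    QinvAt = np.linalg.solve(Q, A.T)            # Q^{-1} A^T
    W = A @ QinvAt                               # A Q^{-1} A^T (k x k)
    return x - QinvAt @ np.linalg.solve(W, A @ x - e)
```

For a sum-to-zero constraint, A is a single row of ones and e = 0; with k constraints, W is only k-by-k, but k grows quickly for the interaction models, which is the computational bottleneck the abstract refers to.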