Many large optimization problems represent a family of models of varying size, corresponding to different discretizations. An example is optimal control problems where the solution is a function that is approximated by its values at finitely many points. We discuss optimization techniques suitable for nonlinear programs of this type, with an emphasis on algorithms that guarantee global convergence. The goal is to exploit the similar structure among the subproblems, using the solutions of smaller subproblems to accelerate the solution of larger, more refined subproblems.
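The abstract's coarse-to-fine strategy can be illustrated with a minimal sketch. The problem below is a hypothetical stand-in (a simple 1-D variational problem, minimizing a discretized energy with fixed boundary values), not the paper's actual formulation; SciPy's L-BFGS-B is used as the inner solver, and the coarse-grid solution is interpolated to warm-start the refined grid.

```python
import numpy as np
from scipy.optimize import minimize

def energy(u_inner, n):
    """Discretized energy for min ∫ (u'^2/2 + u) dx with u(0) = u(1) = 0.

    u_inner holds the n-1 interior nodal values on a uniform grid.
    """
    h = 1.0 / n
    u = np.concatenate(([0.0], u_inner, [0.0]))   # attach boundary values
    grad_term = np.sum(np.diff(u) ** 2) / (2 * h)  # ∑ (u_{i+1}-u_i)^2 / (2h)
    load_term = h * np.sum(u_inner)                # quadrature of ∫ u dx
    return grad_term + load_term

def solve(n, u0=None):
    """Minimize the discretized energy on an n-interval grid."""
    if u0 is None:
        u0 = np.zeros(n - 1)
    return minimize(energy, u0, args=(n,), method="L-BFGS-B").x

# Solve a coarse discretization first ...
n_coarse, n_fine = 8, 64
u_coarse = solve(n_coarse)

# ... then interpolate its solution onto the fine grid as a warm start.
x_coarse = np.linspace(0, 1, n_coarse + 1)
x_fine = np.linspace(0, 1, n_fine + 1)
u_full_coarse = np.concatenate(([0.0], u_coarse, [0.0]))
u0_fine = np.interp(x_fine, x_coarse, u_full_coarse)[1:-1]

u_fine = solve(n_fine, u0_fine)
```

For this model problem the continuous minimizer is u(x) = x(x-1)/2, so the fine-grid iterate started from the interpolated coarse solution begins close to the answer; in practice this is where the savings over a cold start come from.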
Abstract. This paper examines the numerical performance of two methods for large-scale optimization: a limited-memory quasi-Newton method (L-BFGS) and a discrete truncated-Newton method (TN). Various ways of classifying test problems are discussed in order to better understand the types of problems that each algorithm solves well. The L-BFGS and TN methods are also compared with the Polak-Ribière conjugate gradient method.
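The three method families compared in the abstract all have modern counterparts in SciPy, so a small sketch can show what such a comparison looks like in practice. This is an illustration on a standard test function, not the paper's benchmark suite: `L-BFGS-B` is a limited-memory quasi-Newton method, `Newton-CG` is a truncated-Newton method that, given only the gradient, approximates Hessian-vector products by differencing gradients (the "discrete" variant), and SciPy's `CG` implements a Polak-Ribière nonlinear conjugate gradient.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Extended Rosenbrock function as a stand-in test problem.
x0 = np.full(20, -1.2)

# Limited-memory quasi-Newton (L-BFGS).
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")

# Discrete truncated Newton: Hessian-vector products come from
# finite differences of the gradient when no Hessian is supplied.
res_tn = minimize(rosen, x0, jac=rosen_der, method="Newton-CG")

# Polak-Ribière nonlinear conjugate gradient.
res_cg = minimize(rosen, x0, jac=rosen_der, method="CG")

for name, res in [("L-BFGS", res_lbfgs), ("TN", res_tn), ("CG", res_cg)]:
    print(f"{name:7s} f={res.fun:.2e}  nfev={res.nfev}")
```

Comparing the function-evaluation counts (`nfev`) across methods on a range of problems is essentially the kind of classification exercise the abstract describes: which problem features favor cheap quasi-Newton updates versus the more expensive but more accurate Newton-type steps.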