In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence toward stationary points, using the Clarke-Jahn directional derivative. In the second part, we consider inequality constrained optimization problems where both the objective function and the constraints can possibly be nonsmooth. In this case, we first split the constraints into two subsets: difficult general nonlinear constraints and simple bound constraints on the variables. Then, we use an exact penalty function to tackle the difficult constraints, and we prove that the original problem can be reformulated as the bound-constrained minimization of the proposed exact penalty function. Finally, we use the framework developed for the bound-constrained case to solve the penalized problem. Moreover, we prove that, under standard assumptions on the search directions, every accumulation point of the generated sequence of iterates is a stationary point of the original constrained problem. In the last part of the paper, we report extensive numerical results on both bound-constrained and nonlinearly constrained problems, showing that our approach is promising when compared to some state-of-the-art codes from the literature.
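To illustrate the exact-penalty idea described above (a generic ℓ1-type construction, not necessarily the specific penalty function proposed in the paper): a weighted measure of constraint violation is added to the objective, so that minimizing the penalized function subject only to the bound constraints recovers, for a sufficiently small penalty parameter, solutions of the original problem. A minimal Python sketch with a hypothetical objective and constraint:

```python
def l1_penalty(f, gs, eps):
    """Classical l1 exact penalty: P(x) = f(x) + (1/eps) * sum_i max(0, g_i(x)).

    f  : objective function
    gs : list of inequality constraints written as g_i(x) <= 0
    eps: penalty parameter (exactness holds for eps small enough)
    """
    def P(x):
        violation = sum(max(0.0, g(x)) for g in gs)
        return f(x) + violation / eps
    return P

# Hypothetical example: minimize x^2 subject to 1 - x <= 0 (i.e., x >= 1)
f = lambda x: x[0] ** 2
g = lambda x: 1.0 - x[0]
P = l1_penalty(f, [g], eps=0.5)

print(P([2.0]))  # feasible point: penalty term vanishes -> 4.0
print(P([0.0]))  # infeasible point: 0 + (1/0.5) * 1 -> 2.0
```

The penalized function `P` is then minimized over the simple bounds alone, which is exactly the setting handled by the bound-constrained framework of the first part.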
In this paper we consider the evolutionary Particle Swarm Optimization (PSO) algorithm for the minimization of a computationally costly nonlinear function in global optimization frameworks. We study a reformulation of the standard PSO iteration [KE95, CK02] as a linear dynamic system. We carry out our analysis on a generalized PSO iteration (see [M04]), which includes the standard one proposed in the literature. We analyze three issues for the resulting generalized PSO: first, for any particle we give both theoretical and numerical evidence for an efficient choice of the starting point. Then, we study the cases in which either deterministic or uniformly randomly distributed coefficients are considered in the scheme. Finally, some convergence analysis is also provided, along with necessary conditions to avoid diverging trajectories. The results proved in the paper can be immediately applied to the standard PSO iteration.
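For reference, the standard PSO iteration of [KE95] that the generalized scheme includes can be sketched as follows (the coefficient values are common defaults for illustration, not the parameters analyzed in the paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO update for a single particle, per coordinate.

    x, v         : current position and velocity (lists of floats)
    pbest, gbest : the particle's own best and the swarm's best positions
    w, c1, c2    : inertia, cognitive, and social coefficients (illustrative)
    """
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()  # uniform random coefficients
        vj = (w * v[j]
              + c1 * r1 * (pbest[j] - x[j])
              + c2 * r2 * (gbest[j] - x[j]))
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

Fixing `r1` and `r2` to deterministic values yields the deterministic variant, and viewing the pair `(x, v)` as the state vector exposes the linear-dynamic-system structure that the paper's analysis builds on.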
Deterministic optimization algorithms are very attractive when the objective function is computationally expensive and the statistical analysis of the optimization outcomes therefore becomes too costly. Among deterministic methods, deterministic particle swarm optimization (DPSO) has several attractive characteristics, such as the simplicity of the heuristics, the ease of implementation, and its often fairly remarkable effectiveness. The performance of DPSO depends on four main setting parameters: the number of swarm particles, their initialization, the set of coefficients defining the swarm behavior, and (for box-constrained optimization) the method used to handle the box constraints. Here, a parametric study of DPSO is presented, with application to simulation-based design in ship hydrodynamics. The objective is the identification of the most promising setup for both synchronous and asynchronous implementations of DPSO. The analysis is performed under the assumption of limited computational resources and a large computational burden for each objective function evaluation. The analysis is conducted using 100 analytical test functions (with dimensionality from two to fifty) and three performance criteria, varying the swarm size, initialization, coefficients, and the method for handling the box constraints, resulting in more than 40,000 optimizations. The most promising setup is applied to the hull-form optimization of a high-speed catamaran, for resistance reduction in calm water at fixed speed, using a potential-flow solver.
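As an example of the box-constraint handling the study varies, one common approach (illustrative only; not necessarily the method selected as most promising in the study) projects an out-of-bounds position back onto the box and zeroes the corresponding velocity component:

```python
def clamp_to_box(x, v, lb, ub):
    """Project position x onto the box [lb, ub] componentwise and zero the
    velocity of any clamped component (one common handling strategy)."""
    new_x, new_v = [], []
    for xj, vj, l, u in zip(x, v, lb, ub):
        if xj < l:
            new_x.append(l)
            new_v.append(0.0)
        elif xj > u:
            new_x.append(u)
            new_v.append(0.0)
        else:
            new_x.append(xj)
            new_v.append(vj)
    return new_x, new_v

print(clamp_to_box([1.5, -0.2], [0.3, 0.4], [0.0, 0.0], [1.0, 1.0]))
# -> ([1.0, 0.0], [0.0, 0.0])
```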
In this paper we deal with the iterative computation of negative curvature directions of an objective function within large-scale optimization frameworks. In particular, suitable directions of negative curvature of the objective function are an essential tool to guarantee convergence to second-order critical points. However, an "adequate" negative curvature direction is often required to closely resemble an eigenvector corresponding to the smallest eigenvalue of the Hessian matrix, so its computation may be a very difficult task on large-scale problems. Several strategies proposed in the literature compute such a direction by relying on matrix factorizations, so they may be inefficient or even impracticable in a large-scale setting. On the other hand, the iterative methods proposed so far either need to store a large matrix or need to rerun the recurrence. Along these lines, in this paper we propose the use of an iterative method based on a planar Conjugate Gradient scheme. Under mild assumptions, we provide theory for using this method to compute adequate negative curvature directions within optimization frameworks. Our proposal avoids any matrix storage, along with any additional rerun. © 2007 Springer Science+Business Media, LLC
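To make the notion concrete: a direction d is a negative curvature direction for f at x when d^T ∇²f(x) d < 0, and in a matrix-free setting this quadratic form can be evaluated using Hessian-vector products only, with no Hessian storage. The sketch below uses a finite-difference Hessian-vector product to illustrate the matrix-free idea; it is not the paper's planar Conjugate Gradient recurrence:

```python
def hess_vec(grad, x, d, h=1e-6):
    """Finite-difference approximation of H(x) @ d using only gradient calls."""
    gp = grad([xi + h * di for xi, di in zip(x, d)])
    gm = grad([xi - h * di for xi, di in zip(x, d)])
    return [(a - b) / (2 * h) for a, b in zip(gp, gm)]

def curvature(grad, x, d):
    """Quadratic form d^T H(x) d; a negative value certifies that d is a
    negative curvature direction at x."""
    Hd = hess_vec(grad, x, d)
    return sum(di * hdi for di, hdi in zip(d, Hd))

# Hypothetical example: f(x, y) = x^2 - y^2, with gradient (2x, -2y);
# the y-axis is a negative curvature direction at the saddle point (0, 0).
grad = lambda x: [2.0 * x[0], -2.0 * x[1]]
print(curvature(grad, [0.0, 0.0], [0.0, 1.0]))  # approximately -2.0
```

The saddle point (0, 0) is first-order stationary but not second-order critical, which is precisely the situation where such directions let an optimization method make further progress.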