2011
DOI: 10.1145/2049662.2049669
Remark on “algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization”

Abstract: This remark describes an improvement and a correction to Algorithm 778. It is shown that the performance of the algorithm can be improved significantly by making a relatively simple modification to the subspace minimization phase. The correction concerns an error caused by the use of routine dpmeps to estimate machine precision.
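Purely as an illustration of the dpmeps issue mentioned in the abstract (this Python sketch is ours, not part of Algorithm 778): hand-rolled estimates of machine precision are typically built on a halving loop like the one below, and such loops can return a wrong value under certain compilers and floating-point settings (for example, extended-precision intermediates), whereas querying the environment directly is reliable.

```python
# Illustrative sketch (not part of Algorithm 778): why estimating machine
# precision at run time is fragile compared with querying the environment.
import numpy as np

def estimate_eps():
    """Classic iterative estimate of machine epsilon: halve a candidate
    until adding it to 1.0 no longer changes the result."""
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

if __name__ == "__main__":
    # On hardware/compilers that keep intermediates in extended precision,
    # loops like the one above can report a value smaller than the true
    # double-precision epsilon; querying the runtime avoids the problem.
    print("iterative estimate:", estimate_eps())
    print("reported by numpy :", np.finfo(np.float64).eps)
```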

Cited by 369 publications (224 citation statements)
References 5 publications
“…In cases where the analyst wishes to impose prior constraints on the parameter space, the objective function may be maximized using constrained non-linear optimization methods, such as the L-BFGS-B algorithm (Byrd et al., 1995; Zhu et al., 1997; and Morales and Nocedal, 2011). For example, when estimating travel mode choice models, taste coefficients denoting sensitivities to travel times and costs are frequently constrained to be non-positive.…”
Section: An Expectation Maximization (EM) Algorithm for Model Estimation (mentioning)
confidence: 99%
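The bound-constraint usage described in the excerpt above maps directly onto modern wrappers of the L-BFGS-B code. The following sketch uses SciPy's minimize interface (which wraps the Fortran routines) on an invented quadratic stand-in for a log-likelihood; the objective, starting point, and the choice of which coefficients to constrain non-positive are illustrative assumptions, not details of the cited work.

```python
# Hypothetical example: fitting coefficients while forcing selected ones
# to be non-positive, via bounds passed to L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(beta):
    # Stand-in objective (a simple quadratic); a real application would
    # evaluate the model's log-likelihood here.
    target = np.array([-0.5, -1.0, 2.0])
    return np.sum((beta - target) ** 2)

# Constrain the first two coefficients (e.g. time and cost sensitivities)
# to be non-positive; leave the third unconstrained.
bounds = [(None, 0.0), (None, 0.0), (None, None)]

result = minimize(
    negative_log_likelihood,
    x0=np.zeros(3),
    method="L-BFGS-B",
    bounds=bounds,
)
print(result.x)  # constrained coefficients stay <= 0
```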
“…With M being a hyperrectangle of R^{d_M}, we use L-BFGS-B (Limited-memory Broyden-Fletcher-Goldfarb-Shanno, Bound constrained (Byrd et al., 1995; Zhu et al., 1997), version 3.0 (Morales and Nocedal, 2011)), a quasi-Newton method for bound-constrained optimization, to minimize the error. L-BFGS-B uses an approximation of the Hessian matrix to direct the optimization (because the Hessian cannot be directly computed, it is approximated using finite differences).…”
Section: Inverse Model (mentioning)
confidence: 99%
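In the same spirit, here is a minimal sketch of a box-constrained (hyperrectangle) fit with an invented forward model and data; no analytic gradient is supplied, so SciPy's wrapper falls back on finite-difference gradient estimates while curvature information is accumulated internally via limited-memory BFGS updates.

```python
# Hypothetical box-constrained fit: minimize a model-vs-data error over a
# hyperrectangle without supplying an analytic gradient.
import numpy as np
from scipy.optimize import minimize

observed = np.array([1.2, 0.7])  # made-up "measurements"

def error(params):
    # Stand-in forward model; a real inverse problem would run a simulation here.
    a, b = params
    predicted = np.array([a * np.exp(-b), a * b])
    return np.sum((predicted - observed) ** 2)

# The feasible set M as a hyperrectangle: (lower, upper) bounds per parameter.
box = [(0.1, 5.0), (0.0, 2.0)]

result = minimize(error, x0=np.array([1.0, 1.0]), method="L-BFGS-B", bounds=box)
print(result.x, result.fun)
```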
“…Among all possible algorithms and software, quasi-Newton methods are known to be efficient [5, 22]. In our numerical experiments, we use the FORTRAN code L-BFGS-B [8, 30], which has recently been upgraded to version 3.0 [21]. We use the default parameters of the code.…”
Section: Computing the Bounds (mentioning)
confidence: 99%