2022 · Preprint
DOI: 10.48550/arxiv.2201.00241

Batched Second-Order Adjoint Sensitivity for Reduced Space Methods

Abstract: This paper presents an efficient method for extracting second-order sensitivities from a system of implicit nonlinear equations on upcoming GPU-dominated (graphical processing unit) computer systems. We design a custom automatic differentiation (AutoDiff) backend that targets highly parallel architectures by extracting the second-order information in batch. When the nonlinear equations are associated with a reduced-space optimization problem, we leverage the parallel reverse-mode accumulation in a batched a…
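The core idea the abstract describes, extracting second-order information in batch with reverse-mode accumulation, can be illustrated with a small sketch. This is not the paper's backend; it uses JAX's `vmap` to batch Hessian-vector products over a set of directions, and every function name below (`residual_energy`, `hvp`) is a hypothetical stand-in.

```python
# Illustrative sketch (not the paper's implementation): batching
# second-order information via vmapped Hessian-vector products.
import jax
import jax.numpy as jnp

def residual_energy(x):
    # Hypothetical scalar objective standing in for the reduced-space problem.
    return jnp.sum(jnp.sin(x) ** 2) + 0.5 * jnp.dot(x, x)

def hvp(x, v):
    # Forward-over-reverse Hessian-vector product: differentiate the
    # reverse-mode gradient along direction v with forward mode.
    return jax.jvp(jax.grad(residual_energy), (x,), (v,))[1]

x = jnp.arange(4.0)
V = jnp.eye(4)  # batch of probing directions; the identity recovers the full Hessian

# vmap evaluates all Hessian-vector products in one batched pass,
# which is what makes the extraction GPU-friendly.
H = jax.vmap(lambda v: hvp(x, v))(V)
```

Probing with a full identity batch is only for demonstration; in a reduced-space setting one would batch over far fewer directions than the problem dimension.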

Cited by 1 publication (1 citation statement) · References 15 publications
“…Hence, we avoid computing the full sensitivity matrix S and rely instead on a batched variant of the adjoint-adjoint algorithm [28]. First, we compute an LU factorization of G_x, as P G_x Q = L U, with P and Q two permutation matrices and L and U respectively a lower and an upper triangular matrix (using SpRF, the factorization can be updated entirely on the GPU if the sparsity pattern of G_x is the same across iterations).…”
Section: Porting the Reduction Algorithm to the GPU
confidence: 99%
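The citation statement's strategy, factorizing G_x once and reusing the factors for many adjoint solves, can be sketched as follows. This is an illustrative CPU sketch with SciPy, not the GPU implementation the statement describes; the matrix `G_x` and all sizes below are hypothetical.

```python
# Illustrative sketch: one sparse LU factorization P G_x Q = L U,
# reused to solve a whole batch of adjoint systems G_x^T lambda_i = b_i.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, batch = 6, 4
rng = np.random.default_rng(0)

# Hypothetical sparse stand-in for the Jacobian G_x (diagonally dominant,
# so the factorization is well defined).
G_x = sp.eye(n, format="csc") * 4.0 + sp.random(
    n, n, density=0.3, random_state=0, format="csc"
)

# Factorize once: splu computes P G_x Q = L U with permutations P, Q.
lu = spla.splu(G_x)

# Batched adjoint solves: the transpose solve reuses the same L, U factors,
# so the factorization cost is amortized over the whole batch.
B = rng.standard_normal((n, batch))
Lam = lu.solve(B, trans="T")
```

On a GPU, the analogue is refactorizing in place when the sparsity pattern of G_x is unchanged between iterations, so only the numerical values are updated.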