2009
DOI: 10.1098/rspa.2009.0096
An adjoint for likelihood maximization

Abstract: The process of likelihood maximization can be found in many different areas of computational modelling. However, the construction of such models via likelihood maximization requires the solution of a difficult multi-modal optimization problem involving an expensive O(n^3) factorization. The optimization techniques used to solve this problem may require many such factorizations and can result in a significant bottleneck. This article derives an adjoint formulation of the likelihood employed in the construction…

Cited by 23 publications (18 citation statements); References 30 publications
“…The likelihood gradients can either be calculated analytically or using reverse algorithmic differentiation of the likelihood. The latter is much faster and less dependent on the number of inputs [33]. In this work, the likelihood gradients for the GEK are calculated analytically by estimating the derivative of the concentrated likelihood function with respect to the hyper-parameters as [33],…”
Section: Correlation Functions
confidence: 99%
“…In this instance both the CPU and GPU implementations of the Kriging functions are developed using Matlab and its inbuilt GPU toolbox. It should also be noted at this point that all of the surrogate modelling and optimization processes presented within this paper, including the GPU acceleration are implemented within the proprietary Rolls-Royce optimization suite OPTIMATv2 [9,19,20,31,32], itself written in Matlab. Figure 12 presents a comparison of the cost associated with the calculation of the log-likelihood function as both the problem dimensionality and the number of sample points, n, increases when the function is evaluated using a desktop CPU operating in single and multi-threaded modes and two different GPUs.…”
Section: GPU Accelerated Surrogate Model Construction
confidence: 99%
“…In the following paper the hybrid particle swarm algorithm of Toal et al [19,20] which utilizes an adjoint of the likelihood function within a local search is employed.…”
Section: Kriging
confidence: 99%
“…It should be noted that for high dimensional data sets, d > 10, a substantial increase in computational time can occur and techniques by [36] will be required to efficiently circumvent this cost. Further reductions in time can be achieved by keeping the hyperparameter θ_c constant over several iterations.…”
Section: Co-Kriging
confidence: 99%