2018
DOI: 10.1553/etna_vol50s71

Computation of induced orthogonal polynomial distributions

Abstract: We provide a robust and general algorithm for computing distribution functions associated to induced orthogonal polynomial measures. We leverage several tools for orthogonal polynomials to provide a spectrally accurate method for a broad class of measures, which is stable for polynomial degrees up to at least 1000. Paired with other standard tools, such as a numerical root-finding algorithm and inverse transform sampling, this provides a methodology for generating random samples from an induced orthogonal polynomial measure.
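The pipeline sketched in the abstract (evaluate the induced distribution function, then invert it with a root finder to draw samples) can be illustrated with a small example. This is not the paper's spectrally accurate algorithm: here the induced CDF is evaluated by brute-force quadrature for the degree-n induced distribution of the uniform (Legendre) measure on [-1, 1], and all function names are hypothetical.

```python
# A minimal sketch (not the paper's algorithm): inverse transform sampling
# from the degree-n induced distribution of the uniform (Legendre) measure
# on [-1, 1].  The induced density is p_n(x)^2 dmu(x), where p_n is the
# orthonormal Legendre polynomial and dmu = dx / 2.
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad
from scipy.optimize import brentq

def induced_density(x, n):
    # Orthonormal Legendre polynomial: p_n = sqrt(2n + 1) * P_n.
    Pn = Legendre.basis(n)(x)
    return 0.5 * (2 * n + 1) * Pn ** 2   # integrates to 1 over [-1, 1]

def induced_cdf(x, n):
    # Brute-force quadrature stands in for the spectrally accurate
    # evaluation developed in the paper.
    val, _ = quad(induced_density, -1.0, x, args=(n,), limit=200)
    return min(max(val, 0.0), 1.0)

def sample_induced(n, size, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    # Invert F_n(x) = u with a bracketing root finder.
    return np.array([brentq(lambda x: induced_cdf(x, n) - ui, -1.0, 1.0)
                     for ui in u])

print(sample_induced(n=8, size=5, seed=0))
```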

Cited by 16 publications (15 citation statements). References 18 publications (56 reference statements).
“…On the other hand, using a biased measure defined by multiplying an existing underlying measure by κ (which is how µ is defined) has become popular in recent years, and was first investigated for least-squares problems in [16,5]. A measure constructed in this way is called an induced measure; computational algorithms for sampling from several multivariate induced distributions exist [22], and sampling from a multivariate induced measure for gPC approximations is considered in [19] using a different strategy to construct the basis Φ_j. Notice that in all of the above-mentioned references, the input density is known a priori.…”
Section: 3
Mentioning confidence: 99%
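As a concrete reading of the quoted construction, a minimal sketch follows. It assumes κ denotes the averaged sum of squared orthonormal polynomials (the normalized inverse Christoffel function), so that the biased measure is dν(x) = κ(x) dµ(x); the underlying measure µ is taken to be the uniform measure on [-1, 1] with orthonormal Legendre polynomials, and the check only confirms that ν is again a probability measure. The names are illustrative, not taken from the cited works.

```python
# A minimal sketch, assuming kappa is the normalized inverse Christoffel
# function: kappa(x) = (1 / (N + 1)) * sum_{j=0}^{N} p_j(x)^2, with {p_j}
# orthonormal under dmu.  The biased ("induced") measure is then
# dnu(x) = kappa(x) dmu(x).  Here dmu = dx / 2 on [-1, 1] and the p_j are
# orthonormal Legendre polynomials.
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def kappa(x, N):
    # Average of squared orthonormal Legendre polynomials up to degree N.
    return sum((2 * j + 1) * Legendre.basis(j)(x) ** 2
               for j in range(N + 1)) / (N + 1)

# dnu = kappa * dmu integrates to 1, so it is again a probability measure.
total, _ = quad(lambda x: kappa(x, N=6) * 0.5, -1.0, 1.0)
print(total)   # close to 1.0
```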
“…However, an explicit form for the equilibrium measure is not known in general; alternatively, the induced distribution introduced in [5] is an attractive sampler for its optimal stability properties. Algorithms for generating samples from fairly general classes of induced distributions are also available [22]. However, such algorithms are less helpful when the underlying distribution is not known.…”
Section: Introduction
Mentioning confidence: 99%
“…In this section we present methods for discretizing the domain based on the support of the parameters and their distributions. The survey builds on a recent review of sampling techniques for polynomial least squares [34] and on more recent texts, including Narayan [48,49] and Jakeman and Narayan [37]. Intuitively, the simplest sampling strategy for polynomial least squares is to generate random Monte Carlo-type samples based on the joint density ρ(ζ).…”
Section: Selecting a Sampling Strategy
Mentioning confidence: 99%
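A minimal sketch of the "simplest sampling strategy" described in this excerpt: draw i.i.d. Monte Carlo samples from the joint density ρ (assumed uniform on [-1, 1] for illustration) and solve an ordinary least-squares problem in a Legendre basis. The target function is a hypothetical stand-in.

```python
# A minimal sketch of plain Monte Carlo sampling for polynomial least
# squares: draw i.i.d. samples from rho (here uniform on [-1, 1]) and fit
# a Legendre expansion by ordinary least squares.
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(42)
f = lambda z: np.exp(z) * np.sin(3 * z)        # hypothetical model output

degree, n_samples = 10, 200                    # oversampled: n_samples >> degree + 1
z = rng.uniform(-1.0, 1.0, size=n_samples)     # Monte Carlo samples from rho
A = legvander(z, degree)                       # least-squares design matrix
coeffs, *_ = np.linalg.lstsq(A, f(z), rcond=None)

# Evaluate the surrogate on a few test points.
z_test = np.linspace(-1.0, 1.0, 5)
print(legvander(z_test, degree) @ coeffs - f(z_test))   # small residuals
```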
“…In [35], the authors devise an MCMC strategy which seeks to find µ, thereby explicitly minimizing their coherence parameter (a weighted form of K_∞). In [48], formulations for computing µ via induced distributions are detailed. In our numerical experiments investigating the condition numbers of matrices obtained via such induced samples (in a similar vein to Figure 3), the condition numbers were found to be comparable to those from Christoffel sampling.…”
Section: Coherence and Induced Sampling
Mentioning confidence: 99%
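The conditioning comparison mentioned in this excerpt can be sketched as follows. In place of exact induced samples, the sketch uses a common surrogate: Chebyshev (arcsine) samples on [-1, 1] combined with Christoffel-style weights w_i = sqrt((N + 1) / Σ_j p_j(x_i)^2) for orthonormal Legendre polynomials p_j, and it compares the condition number of the weighted design matrix with that of an unweighted matrix built from i.i.d. uniform Monte Carlo samples. This is an illustrative experiment, not the exact setup of the cited work.

```python
# A sketch of the conditioning comparison: Chebyshev (arcsine) samples with
# Christoffel-style weights versus unweighted i.i.d. uniform Monte Carlo
# samples, both for an orthonormal Legendre basis on [-1, 1].
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
degree, n_samples = 30, 300
norm = np.sqrt(2 * np.arange(degree + 1) + 1)   # orthonormalization w.r.t. dx / 2

# Plain Monte Carlo: i.i.d. uniform samples, unweighted design matrix.
x_mc = rng.uniform(-1.0, 1.0, n_samples)
A_mc = legvander(x_mc, degree) * norm

# Chebyshev (arcsine) samples plus Christoffel weights
#   w_i = sqrt((N + 1) / sum_j p_j(x_i)^2).
x_ch = np.cos(np.pi * rng.uniform(size=n_samples))
V = legvander(x_ch, degree) * norm
w = np.sqrt((degree + 1) / np.sum(V ** 2, axis=1))
A_ch = w[:, None] * V

print("unweighted uniform MC condition number:", np.linalg.cond(A_mc))
print("Christoffel-weighted condition number: ", np.linalg.cond(A_ch))
```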
“…The results in [11] suggest an exact sampling method and show optimal convergence estimates in the non-asymptotic case. The work in [27] provides efficient computational methods for exact sampling in the non-asymptotic case.…”
Section: Introduction
Mentioning confidence: 99%