Our focus is on efficient estimation of tail probabilities of sums of correlated lognormals, a problem motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose three procedures that can be rigorously shown to be asymptotically optimal as the tail probability of interest decreases to zero. The first algorithm is based on importance sampling and is as easy to implement as crude Monte Carlo; the second is based on an elegant conditional Monte Carlo strategy involving polar coordinates; and the third is an importance sampling algorithm that can be shown to be strongly efficient.
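As a minimal illustration of the importance sampling idea (not the paper's specific algorithm), the sketch below estimates a tail probability of a sum of correlated lognormals by applying a mean shift in the underlying Gaussian space and reweighting with the exact likelihood ratio. The model parameters, threshold, and shift are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: estimate P(S > b) where S = sum_i exp(X_i)
# and X ~ N(mu, Sigma) is a correlated Gaussian vector.
d = 3
mu = np.zeros(d)
Sigma = 0.25 * (np.eye(d) + 0.5 * (np.ones((d, d)) - np.eye(d)))
L = np.linalg.cholesky(Sigma)
b = 8.0  # tail threshold (illustrative)

def crude_mc(n):
    # Crude Monte Carlo: average the indicator of the rare event.
    Z = rng.standard_normal((n, d))
    S = np.exp(mu + Z @ L.T).sum(axis=1)
    return (S > b).mean()

def importance_sampling(n, shift):
    # Mean-shift IS: sample Z + shift (i.e. Z' ~ N(shift, I)) and
    # reweight by the Gaussian likelihood ratio
    #   phi(Z') / phi(Z' - shift) = exp(-Z'.shift + |shift|^2 / 2).
    Z = rng.standard_normal((n, d)) + shift
    S = np.exp(mu + Z @ L.T).sum(axis=1)
    lr = np.exp(-Z @ shift + 0.5 * shift @ shift)
    return np.mean((S > b) * lr)

# A crude choice of shift pushing all components toward the rare event.
shift = np.full(d, 1.5)
print(crude_mc(100_000), importance_sampling(100_000, shift))
```

The variance reduction comes from concentrating samples near the rare event while the likelihood ratio keeps the estimator unbiased; the asymptotically optimal choice of the shift is the substance of the analysis the abstract refers to.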
Consider the problem of estimating the expectation of a nonlinear function of a conditional expectation. This function is allowed to be non-differentiable and discontinuous at a finite set of points, to capture practical settings. We develop a nested simulation strategy to estimate this quantity, identify its bias, and optimize the allocation of simulation effort to minimize mean square error. We show that this mean square error converges to zero at the rate Γ^{-2/3} as Γ → ∞, where Γ denotes the available computational budget. We also consider combining the nested simulation technique with kernel-based estimation methods. We note that while the kernel-based methods have a better convergence rate when the underlying random process has dimension less than or equal to three, pure nested simulation may be preferred when this dimension is above four.
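A toy sketch of the nested simulation strategy, under assumptions of my own choosing (the model, the kinked function f, and the exact allocation exponents are illustrative, not taken from the paper): outer scenarios Y are sampled, the inner conditional expectation E[X | Y] is estimated by averaging inner replications, and the budget Γ is split as n_outer ≈ Γ^{2/3}, n_inner ≈ Γ^{1/3}, the split that balances inner-level bias against outer-level variance and yields the Γ^{-2/3} mean square error rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: estimate E[f(E[X | Y])] with f(x) = max(x, 0)
# (nonlinear with a kink), Y ~ N(0, 1) the outer scenario,
# and X | Y ~ N(Y, 1) the inner payoff, so E[X | Y] = Y.
def nested_estimate(budget):
    # Budget split: n_outer * n_inner ≈ budget, with
    # n_outer ~ budget^{2/3} and n_inner ~ budget^{1/3}.
    n_outer = max(1, int(budget ** (2 / 3)))
    n_inner = max(1, budget // n_outer)
    y = rng.standard_normal(n_outer)
    inner = y[:, None] + rng.standard_normal((n_outer, n_inner))
    cond_mean_hat = inner.mean(axis=1)   # noisy estimate of E[X | Y]
    return np.maximum(cond_mean_hat, 0.0).mean()

est = nested_estimate(1_000_000)
# True value here is E[max(Y, 0)] = 1/sqrt(2*pi) ≈ 0.3989.
print(est)
```

The bias arises because f is applied to a noisy estimate of the conditional expectation; at a kink or discontinuity the noise does not average out, which is why the allocation favors many outer scenarios over inner replications.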
Coping with inter-speaker variability (i.e., differences in the vocal tract characteristics of speakers) is still a major challenge for automatic speech recognizers. In this paper, we discuss a method that compensates for differences in speaker characteristics. In particular, we demonstrate that when a continuous-density hidden Markov model based system is used as the back-end, a Knowledge-Based Front End (KBFE) can outperform the traditional Mel-Frequency Cepstral Coefficients (MFCCs), particularly when there is a mismatch between the genders and ages of the subjects used to train and test the recognizer.
Simulation-based ordinal optimization has frequently relied on large deviations analysis as a theoretical device for arguing that it is computationally easier to identify the best system out of d alternatives than to estimate the actual performance of a given design. In this paper, we argue that any practical implementation of these large deviations-based methods needs to estimate the underlying large deviations rate functions of the competing designs from the generated samples. Because such rate functions are difficult to estimate accurately (owing to the heavy tails that naturally arise in this setting), the probability of mis-estimation will generally dominate the underlying large deviations probability, making it difficult to build reliable algorithms that are supported theoretically through large deviations analysis. However, when ordinal optimization algorithms are instead justified on the basis of guaranteed finite-sample bounds (as can be done when the associated random variables are bounded), we show that satisfactory and practically implementable algorithms can be designed.
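The finite-sample route can be sketched as follows (an illustrative toy, not the paper's procedure): when system outputs are bounded in [0, 1], selecting the system with the best sample mean admits a Hoeffding-type guarantee that requires no estimated rate functions. The three system means and the Bernoulli outputs below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ordinal optimization with bounded outputs: pick the
# best of d systems by sample mean and certify the selection with a
# Hoeffding finite-sample bound instead of an estimated large
# deviations rate.
means = [0.5, 0.6, 0.8]   # true means, unknown to the algorithm
d = len(means)

def sample(i, n):
    # Bounded outputs in [0, 1]: Bernoulli(means[i]) for illustration.
    return (rng.random(n) < means[i]).astype(float)

def select_best(n_per_system):
    xbar = np.array([sample(i, n_per_system).mean() for i in range(d)])
    # Hoeffding plus a union bound over the d-1 rivals gives
    #   P(misselection) <= (d - 1) * exp(-n * delta**2 / 2),
    # where delta is the true gap between the best and second-best
    # means -- a bound that holds for any bounded distributions.
    return int(np.argmax(xbar)), xbar

best, xbar = select_best(2000)
print(best, xbar)
```

The bound depends only on the boundedness of the outputs and the sample size, which is exactly what makes it implementable: nothing about the tail behavior of the systems has to be estimated before the guarantee applies.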