Together with the fundamentals of probability, random processes, and statistical analysis, this insightful book presents a broad range of advanced topics and applications. There is extensive coverage of Bayesian versus frequentist statistics, time series and spectral representation, inequalities, bounds and approximations, maximum-likelihood estimation and the expectation-maximization (EM) algorithm, geometric Brownian motion, and Itô processes. Applications such as hidden Markov models (HMMs), the Viterbi, BCJR, and Baum–Welch algorithms, algorithms for machine learning, Wiener and Kalman filters, and queueing and loss networks are treated in detail. The book will be useful to students and researchers in areas such as communications, signal processing, networks, machine learning, bioinformatics, econometrics, and mathematical finance. With a solutions manual, lecture slides, supplementary materials, and MATLAB programs all available online, it is ideal for classroom teaching as well as a valuable reference for professionals.
Abstract—Hidden Markov models (HMMs) are a powerful tool for modeling random processes. They are general enough to model a large variety of processes with high accuracy, yet simple enough to allow analytical computation of many important parameters of the process that are very difficult to calculate for other models (such as complex Gaussian processes). Another advantage of HMMs is the existence of powerful algorithms for fitting them to experimental data and for approximating other processes. In this paper, we demonstrate that communication channel fading can be accurately modeled by HMMs, and we find closed-form solutions for the probability distribution of fade duration and for the number of level crossings.
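The idea of modeling fading with a Markov structure can be illustrated with a minimal sketch. The two-state Gilbert–Elliott chain below (a standard textbook simplification, not the specific model of the abstract, and with illustrative transition probabilities `p` and `q`) shows why closed-form fade-duration statistics are tractable: each fade is a run in the "bad" state, whose length is geometric with mean 1/q.

```python
import random

# Hypothetical two-state Gilbert-Elliott channel: state 0 = "good", state 1 = "fade".
# p = P(good -> fade), q = P(fade -> good); the values are illustrative only.
p, q = 0.05, 0.2

def simulate_fade_durations(n_steps, seed=1):
    """Simulate the chain and record the length of every completed fade."""
    rng = random.Random(seed)
    state, run, durations = 0, 0, []
    for _ in range(n_steps):
        if state == 0:
            state = 1 if rng.random() < p else 0
        else:
            run += 1                      # one more time step spent in the fade
            if rng.random() < q:          # fade ends
                durations.append(run)
                state, run = 0, 0
    return durations

durations = simulate_fade_durations(200_000)
# Fade duration is geometric with success probability q, so the empirical
# mean should be close to 1/q = 5.
print(sum(durations) / len(durations))
```

Comparing the empirical mean against 1/q is a simple sanity check of the closed-form result that this kind of Markov model makes possible.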
Abstract—This paper addresses the problem of training sequence design for multiple-antenna transmissions over quasi-static frequency-selective channels. To achieve the minimum mean square error in channel estimation, the training sequences transmitted from the multiple antennas must have impulse-like autocorrelation and zero cross-correlation. We reduce the problem of designing multiple training sequences to the much easier and well-understood problem of designing a single training sequence with impulse-like autocorrelation. To this end, we propose to encode the training symbols with a space-time code, which may be the same as or different from the space-time code that encodes the information symbols. Optimal sequences do not exist for all training sequence lengths and constellation alphabets. We also propose a method to easily identify training sequences that belong to a standard 2-PSK constellation for an arbitrary training sequence length and an arbitrary number of unknown channel taps. Performance bounds derived here indicate that these sequences achieve near-optimum performance.
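The impulse-like autocorrelation property can be checked numerically. The sketch below (not the paper's construction; the length-4 binary sequence is an illustrative choice, the classic perfect binary sequence) computes the periodic autocorrelation of a candidate training sequence and shows the ideal shape: the sequence energy at zero shift and zero at every other shift.

```python
import numpy as np

def periodic_autocorrelation(s):
    """Periodic (cyclic) autocorrelation of sequence s at all shifts 0..N-1."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    return np.array([np.sum(s * np.conj(np.roll(s, -k)))
                     for k in range(N)]).real

# A 2-PSK (binary +/-1) sequence with perfectly impulse-like periodic
# autocorrelation: value N = 4 at shift 0, value 0 at shifts 1..3.
s = [1, 1, 1, -1]
print(periodic_autocorrelation(s))   # [4. 0. 0. 0.]
```

The same function can be used to screen longer candidate sequences: the closer the off-peak values are to zero, the closer the resulting channel estimator comes to the minimum mean square error.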