Some of the most widely used sampling mechanisms that implicitly leverage a social network depend on tuning parameters; for instance, Respondent-Driven Sampling (RDS) is specified by the number of seeds and the maximum number of referrals per respondent. We are interested in optimizing these sampling mechanisms with respect to their tuning parameters so as to optimize inference on a population quantity, where that quantity is a function of the network and of measurements taken at the nodes. We do so by formulating the problem in decision-theoretic and then information-theoretic terms. The optimization procedure for different network sampling mechanisms is illustrated via simulations in the manner of those used for Bayesian clinical trials.
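The simulation-based comparison of tuning parameters can be sketched as follows. This is a minimal toy illustration, not the paper's procedure: the referral process, the network model and the mean-squared-error criterion are all simplifying assumptions, and every name below is hypothetical.

```python
import random

def simulate_rds(adj, node_values, n_seeds, max_referrals, sample_size, rng):
    """Toy Respondent-Driven Sampling: seeds recruit up to `max_referrals`
    unsampled neighbours each, breadth-first, until `sample_size` nodes
    are reached or the referral chains die out."""
    sampled = set(rng.sample(list(adj), n_seeds))
    frontier = list(sampled)
    while frontier and len(sampled) < sample_size:
        current = frontier.pop(0)
        candidates = [v for v in adj[current] if v not in sampled]
        for v in rng.sample(candidates, min(max_referrals, len(candidates))):
            if len(sampled) >= sample_size:
                break
            sampled.add(v)
            frontier.append(v)
    return [node_values[v] for v in sampled]

rng = random.Random(0)

# Hypothetical undirected Erdos-Renyi-style network on 200 nodes,
# with a Gaussian node-level measurement at each node.
n = 200
adj = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.05:
            adj[i].append(j)
            adj[j].append(i)
values = {i: rng.gauss(0.0, 1.0) for i in range(n)}

# Compare two (seeds, max referrals) settings by the Monte Carlo MSE of
# the sample mean against the population mean.
pop_mean = sum(values.values()) / n
for n_seeds, max_ref in [(2, 3), (10, 1)]:
    errs = []
    for _ in range(100):
        s = simulate_rds(adj, values, n_seeds, max_ref, 50, rng)
        errs.append((sum(s) / len(s) - pop_mean) ** 2)
    print(n_seeds, max_ref, sum(errs) / len(errs))
```

A full treatment would replace the MSE criterion with the decision-theoretic or information-theoretic objective described above.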
We provide a general framework for constructing probability distributions on Riemannian manifolds by taking advantage of area-preserving maps and isometries. Control over the distributions' properties, such as their parameters, symmetry and modality, yields a family of flexible distributions that are straightforward to sample from and suitable for use within Monte Carlo algorithms and latent variable models such as autoencoders. As an illustration, we empirically validate our approach by using the proposed distributions within a variational autoencoder and a latent space network model. Finally, we take advantage of the generality of this framework to pose questions for future work.
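A classical special case of the area-preserving-map construction is sampling on the sphere: the inverse Lambert cylindrical equal-area projection sends the uniform distribution on the cylinder [-1, 1] x [0, 2&#960;) to the uniform distribution on S&#178;, and reweighting the cylinder coordinates changes the modality of the pushforward. The sketch below uses this one map only; the paper's framework and notation are more general.

```python
import numpy as np

def sphere_from_cylinder(z, theta):
    """Inverse Lambert cylindrical equal-area map: (z, theta) on the
    cylinder [-1, 1] x [0, 2pi) -> a point on the unit sphere S^2.
    Because the map is area-preserving, uniform (z, theta) pushes
    forward to the uniform distribution on the sphere."""
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=-1)

rng = np.random.default_rng(0)
n = 10_000

# Uniform distribution on S^2.
x_unif = sphere_from_cylinder(rng.uniform(-1, 1, n),
                              rng.uniform(0, 2 * np.pi, n))

# A non-uniform, unimodal variant (an illustrative choice, not the
# paper's parameterization): concentrate z near the north pole.
x_modal = sphere_from_cylinder(rng.beta(5, 2, n) * 2 - 1,
                               rng.uniform(0, 2 * np.pi, n))

print(np.allclose(np.linalg.norm(x_unif, axis=1), 1.0))  # True: on the sphere
```

Composing such maps with isometries (e.g. rotations of the sphere) relocates the mode without changing the shape of the distribution.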
A rich class of network models associates each node with a low-dimensional latent coordinate that controls the propensity for connections to form. Models of this type are well established in the literature, where it is typical to assume that the underlying geometry is Euclidean. Recent work has explored the consequences of this choice and has motivated the study of models that rely on non-Euclidean latent geometries, with a primary focus on spherical and hyperbolic geometry. In this paper, we examine to what extent latent features can be inferred from the observable links in the network, considering network models that rely on spherical, hyperbolic and lattice geometries. For each geometry, we describe a latent network model, detail constraints on the latent coordinates that remove the well-known identifiability issues, and present schemes for Bayesian estimation. We thus develop computational procedures to perform inference for network models in which the properties of the underlying geometry play a vital role. Furthermore, we assess the validity of these models in real data applications.
Probabilistic machine learning models are often insufficient to support decisions on interventions because such models capture correlations, not causal relationships. When only observational data are available and experimentation is infeasible, the correct approach to studying the impact of an intervention is to invoke Pearl's causality framework. Even this framework, however, assumes that the underlying causal graph is known, which is seldom the case in practice. When the causal structure is not known, one may use off-the-shelf algorithms to discover causal dependencies from observational data; however, no existing method also accounts for the decision-maker's prior knowledge when developing the causal structure. The objective of this paper is to develop rational approaches for making decisions from observational data in the presence of causal graph uncertainty and prior knowledge from the decision-maker. We use ensemble methods such as Bayesian Model Averaging (BMA) to infer a set of causal graphs that can represent the data-generating process. We then provide decisions by explicitly computing the expected value and risk of potential interventions. We demonstrate our approach by applying it in several example contexts.
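The BMA decision step described above can be sketched in a few lines. Everything below is a hypothetical placeholder: in practice the weights would come from a posterior over causal structures (informed by the decision-maker's prior) and the per-graph effects from do-calculus on each graph.

```python
# Hypothetical posterior probabilities of three candidate causal graphs.
weights = {"G1": 0.5, "G2": 0.3, "G3": 0.2}

# Hypothetical estimated effect of an intervention do(X = 1) on the
# outcome under each candidate graph.
effects = {"G1": 2.0, "G2": 1.0, "G3": -0.5}

# BMA expected value of the intervention: average the per-graph effects
# under the posterior over graphs.
expected = sum(weights[g] * effects[g] for g in weights)

# Risk of the intervention: variance of the effect across graphs,
# again weighted by the posterior.
risk = sum(weights[g] * (effects[g] - expected) ** 2 for g in weights)

print(expected, risk)
```

Ranking candidate interventions by `expected` while penalizing large `risk` gives a decision rule that explicitly reflects causal graph uncertainty.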
Disruption management during the airline scheduling process can be compartmentalized into proactive and reactive processes depending upon the time of schedule execution. The state of the art for decision-making in airline disruption management is a heuristic, human-centric approach that does not categorically study uncertainty in the proactive and reactive processes for managing airline schedule disruptions. Hence, this paper introduces an uncertainty transfer function model (UTFM) framework that characterizes uncertainty for proactive airline disruption management before schedule execution, reactive airline disruption management during schedule execution, and proactive airline disruption management after schedule execution, enabling the construction of quantitative tools that allow an intelligent agent to rationalize complex interactions and procedures for robust airline disruption management. Specifically, we use historical scheduling and operations data from a major U.S. airline to facilitate the development and assessment of the UTFM, defined by hidden Markov models (a special class of probabilistic graphical models) that can efficiently perform pattern learning and inference on portions of large data sets. We employ the UTFM to assess two independent and separately disrupted

This article represents sections of a chapter from the corresponding author's completed doctoral dissertation.
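The hidden-Markov-model inference underlying the UTFM can be illustrated with the standard forward algorithm. The two-state model below ("on-schedule" vs. "disrupted" emitting coarse delay categories) and all its probabilities are invented placeholders, not the airline data or the paper's fitted model.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed by the scaled forward algorithm to avoid underflow.
    pi: initial state probs (K,); A: transition matrix (K, K);
    B: emission matrix (K, M); obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    loglik = np.log(s)
    alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Hypothetical two-state model: state 0 = "on-schedule", state 1 = "disrupted";
# emissions are delay categories 0 (none), 1 (minor), 2 (major).
pi = np.array([0.9, 0.1])
A = np.array([[0.95, 0.05],
              [0.30, 0.70]])
B = np.array([[0.8, 0.15, 0.05],
              [0.1, 0.40, 0.50]])

obs = [0, 0, 1, 2, 2, 1, 0]
print(forward_loglik(pi, A, B, obs))
```

In a full UTFM, such models would be fitted to historical operations data (e.g. via Baum-Welch) and chained to transfer uncertainty between the proactive and reactive phases.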