Influence maximization is the task of finding k seed nodes in a social network such that the expected number of activated nodes in the network (under a given influence propagation model), referred to as the influence spread, is maximized. Lattice influence maximization (LIM) generalizes influence maximization such that, instead of selecting k seed nodes, one selects a vector x = (x_1, ..., x_d) from a discrete space X called a lattice, where x_j corresponds to the j-th marketing strategy and x represents a marketing strategy mix. Each strategy mix x has probability h_u(x) of activating a node u as a seed. LIM is the task of finding a strategy mix under the constraint ∑_j x_j ≤ k such that its influence spread is maximized. We adapt the reverse influence sampling (RIS) approach and design scalable algorithms for LIM. We first design the IMM-PRR algorithm based on partial reverse-reachable sets as a general solution for LIM, and improve IMM-PRR for a large family of models where each strategy independently activates seed nodes. We then propose an alternative algorithm, IMM-VSN, based on virtual strategy nodes, for the family of models with independent strategy activations. We prove that both IMM-PRR and IMM-VSN guarantee a 1 − 1/e − ε approximation for small ε > 0. Empirically, through extensive tests we demonstrate that IMM-VSN runs faster than IMM-PRR and much faster than other baseline algorithms while providing the same level of influence spread. We conclude that IMM-VSN is the best choice for models with independent strategy activations, while IMM-PRR works for general models without this assumption. Finally, we extend LIM to the partitioned budget case, where strategies are partitioned into groups, each of which has a separate budget, and show that a minor variation of our algorithms achieves a 1/2 − ε approximation ratio with the same time complexity.
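The budgeted lattice search described above can be illustrated with a minimal greedy sketch: repeatedly increment the strategy component with the largest marginal gain in estimated spread until the budget ∑_j x_j ≤ k is spent. The `toy_spread` function below is a hypothetical stand-in for the paper's RIS-based spread estimation, chosen only because it is monotone and submodular; the function names and weights are illustrative assumptions, not the authors' implementation.

```python
import math

def greedy_lim(d, k, spread, step=1):
    """Greedy for lattice influence maximization: at each step, raise the
    strategy component x_j with the largest marginal gain in spread(x),
    subject to the budget sum(x) <= k. `spread` is any monotone submodular
    estimate of influence spread (a stand-in for RIS-based estimation)."""
    x = [0] * d
    for _ in range(int(k / step)):
        best_j, best_gain = None, 0.0
        for j in range(d):
            y = x[:]
            y[j] += step  # tentatively invest one more unit in strategy j
            gain = spread(y) - spread(x)
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:  # no strategy yields positive gain
            break
        x[best_j] += step
    return x

# Toy monotone submodular spread with diminishing returns per strategy
# (purely illustrative; real LIM would estimate spread over a network).
weights = [3.0, 1.0, 2.0]

def toy_spread(x):
    return sum(w * (1 - math.exp(-xi)) for w, xi in zip(weights, x))

print(greedy_lim(3, 4, toy_spread))  # → [2, 1, 1]
```

For monotone submodular spread functions, this kind of greedy allocation is what underlies the 1 − 1/e − ε guarantee cited in the abstract; the RIS machinery serves to make each `spread` evaluation cheap and accurate at scale.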
Image-like data from quantum systems promises to offer greater insight into the physics of correlated quantum matter. However, the traditional framework of condensed matter physics lacks principled approaches for analyzing such data. Machine learning models are a powerful theoretical tool for analyzing image-like data, including many-body snapshots from quantum simulators. Recently, they have successfully distinguished between simulated snapshots that are indistinguishable by their one- and two-point correlation functions. Thus far, however, the complexity of these models has inhibited new physical insights from such approaches. Here, we develop a set of nonlinearities for use in a neural network architecture that discovers features in the data which are directly interpretable in terms of physical observables. Applied to simulated snapshots produced by two candidate theories approximating the doped Fermi-Hubbard model, we uncover that the key distinguishing features are fourth-order spin-charge correlators. Our approach lends itself well to the construction of simple, versatile, end-to-end interpretable architectures, thus paving the way for new physical insights from machine learning studies of experimental and numerical data.
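To make the notion of a fourth-order spin-charge correlator concrete, the sketch below estimates a quantity of the form ⟨S(r) S(r+a) n(r+b) n(r+c)⟩ from a stack of simulated snapshots, averaging over snapshots and lattice sites. The snapshot format (separate spin and hole channels on an L×L lattice with periodic boundaries) and the specific displacement pattern are illustrative assumptions, not the architecture or observables from the paper itself.

```python
import numpy as np

def fourth_order_correlator(spin, hole, a, b, c):
    """Estimate <S(r) S(r+a) n(r+b) n(r+c)> from snapshot data.

    spin, hole: arrays of shape (num_snapshots, L, L); spin entries in
    {-1, 0, +1}, hole entries in {0, 1}. a, b, c are (dy, dx) displacement
    vectors; periodic boundary conditions are assumed via np.roll.
    """
    s0 = spin
    s1 = np.roll(spin, shift=a, axis=(1, 2))
    n2 = np.roll(hole, shift=b, axis=(1, 2))
    n3 = np.roll(hole, shift=c, axis=(1, 2))
    # Average the four-point product over all sites and all snapshots.
    return float(np.mean(s0 * s1 * n2 * n3))

# Toy random snapshots, just to exercise the estimator; real input would be
# snapshots from a quantum simulator or numerical simulation.
rng = np.random.default_rng(0)
spin = rng.choice([-1, 0, 1], size=(100, 8, 8))
hole = rng.choice([0, 1], size=(100, 8, 8))
val = fourth_order_correlator(spin, hole, (0, 1), (1, 0), (1, 1))
print(val)
```

On uncorrelated random data such as this, the estimate sits near zero; structured snapshots from competing theories would instead yield distinct values for such correlators, which is the kind of feature the interpretable network is designed to surface.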
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.