Let σ be a first-order signature and let Wn be the set of all σ-structures with domain [n] = {1, . . . , n}. We can think of each structure in Wn as representing a "possible state of the world" (or the context of interest), or simply a "possible world", following common informal terminology in the field of Statistical Relational Artificial Intelligence. By an inference framework we mean a class F of pairs (P, L), where P = (Pn : n = 1, 2, 3, . . .), each Pn is a probability distribution on Wn, and L is a logic with truth values in the unit interval [0, 1].

The inference frameworks that we consider contain pairs (P, L) where P is determined by a so-called probabilistic graphical model, a concept used in AI and machine learning, and L is a logic whose expressive capabilities are of interest when analysing data sets; for example, we consider logics which can express statements about (conditional) probabilities or (arithmetic or geometric) averages.

From the point of view of probabilistic and logical expressivity, one may consider an inference framework optimal if it allows any pair (P, L) where P = (Pn : n = 1, 2, 3, . . .) is a sequence of probability distributions on Wn and L is a logic. But from the point of view of using a pair (P, L) from such an inference framework to make inferences on Wn when n is large, we face the problem of computational complexity. This motivates looking for an "optimal" trade-off between expressivity and computational efficiency. The issue of computational complexity also arises when learning a probabilistic graphical model, as one may want to use a formal language (a logic) for describing events that are relevant for learning the model. Learning a more complex graphical model, which in turn determines a sequence (Pn : n = 1, 2, 3, . . .)
of more complex probability distributions on Wn, generally requires more computational resources.

We define a notion of one inference framework being "asymptotically at least as expressive" as another. This relation is a preorder, and we describe a (strict) partial order on the equivalence classes of some inference frameworks that, in our opinion, are natural in the context of machine learning and artificial intelligence, illustrated by Figure 1. The results have bearing on issues concerning efficient learning and probabilistic inference, but they are also new instances of results in finite model theory about "almost sure elimination" of extra syntactic features (e.g. quantifiers) beyond the connectives. Often such a result has a logical convergence law as a corollary.
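To make the objects Wn and Pn concrete, the following is a minimal toy sketch (our own illustration, not taken from the paper): the signature σ has a single unary relation symbol R, so a structure on [n] is determined by the set of elements satisfying R and Wn has 2^n members. We take Pn to be the uniform distribution, i.e. each atom R(a) holds independently with probability 1/2 (a degenerate "graphical model" with no dependencies). The probability of the first-order sentence "∃x R(x)" then tends to 1 as n grows, a toy instance of the kind of asymptotic (convergence) behaviour discussed above.

```python
from itertools import product

def worlds(n):
    """All sigma-structures on [n], encoded as tuples of truth values
    for the atoms R(1), ..., R(n)."""
    return list(product([False, True], repeat=n))

def P(n, world):
    """Uniform distribution P_n on W_n: every structure has the same
    probability 2^(-n)."""
    return 0.5 ** n

def prob(n, event):
    """Probability under P_n of the set of worlds satisfying `event`."""
    return sum(P(n, w) for w in worlds(n) if event(w))

# The event expressed by the sentence "exists x R(x)".
exists_R = lambda w: any(w)

for n in [1, 3, 6]:
    # Pr_n[exists x R(x)] = 1 - 2^(-n), which converges to 1.
    print(n, prob(n, exists_R))
```

Replacing the uniform measure by a sequence (Pn) defined from a nontrivial graphical model, and the event `exists_R` by a formula of a richer logic L, gives exactly the kind of pair (P, L) the inference frameworks above consist of; the brute-force summation over Wn also makes the computational-complexity concern tangible, since |Wn| grows exponentially in n.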