“…Mérouane Debbah and Ralf Müller correctly describe these joint probabilities as a model with as many degrees of freedom as possible, which leaves degrees of freedom free for correlations to exist or not [4] (p.1674). This avoids the introduction of unjustified information [4] (p.1672), corresponding to the simple intuition behind PME: when updating your probabilities, waste no useful information and do not gain information unless the evidence compels you to gain it (see [4] (p.1685f), [5] (p.376), [6,7], [8] (p.186)). The principle comes with its own formal apparatus, not unlike probability theory itself: Shannon's information entropy [9], the Kullback-Leibler divergence (see [10,11], [12] (p.308ff), [13] (p.262ff)), the use of Lagrange multipliers (see [3] (p.409ff), [12] (p.327f), [13] (p.281)), and the log-inverse relationship between information and probability (see [14,15,16,17]).…”
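As a minimal sketch of how two items of this apparatus fit together (an illustration of my own, not drawn from the cited sources): maximizing Shannon's entropy under only the normalization constraint, using a Lagrange multiplier, recovers the uniform distribution, the assignment that gains no information beyond what the constraint compels.

```latex
\[
  \max_{p}\; H(p) = -\sum_{i=1}^{n} p_i \ln p_i
  \quad \text{subject to} \quad \sum_{i=1}^{n} p_i = 1 .
\]
\[
  \mathcal{L}(p,\lambda) = -\sum_{i} p_i \ln p_i
    + \lambda\Big(\sum_{i} p_i - 1\Big),
  \qquad
  \frac{\partial \mathcal{L}}{\partial p_i}
    = -\ln p_i - 1 + \lambda = 0
  \;\Longrightarrow\; p_i = e^{\lambda - 1}.
\]
```

Since every $p_i$ equals the same constant, normalization forces $p_i = 1/n$: absent any evidence beyond normalization, PME assigns the uniform distribution, and each further constraint added to the Lagrangian reshapes the solution only as far as that evidence compels.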