In recent years, there has been a growing interest in considering the quantitative aspects of Information Flow, partly because the a priori knowledge of the secret information can often be represented by a probability distribution, and partly because the mechanisms to protect the information may use randomization to obfuscate the relation between the secrets and the observables.

Several works in the literature use an information-theoretic approach to model the problem and define the leakage in a quantitative way, see for example [17,4,9,10,13,12,2]. The idea is that the system is seen as a channel: the input represents the secret, the output represents the observable, and the correlation between input and output (mutual information) represents the information leakage. The worst-case leakage then corresponds to the capacity of the channel, which is by definition the maximum mutual information obtainable by varying the input distribution.

In the works mentioned above, the notion of mutual information is based on Shannon entropy, which (because of its mathematical properties) is the most established measure of uncertainty. From the security point of view, this measure corresponds to a particular model of attack and a particular way of estimating the security threat (the vulnerability of the secret). Other notions have been considered, and argued to be more appropriate for security in certain scenarios. These include: Rényi min-entropy [1,16], Bayes risk [3], guessing entropy [11], and marginal guesswork [14]. Köpf and Basin discuss the relation between brute-force guessing attacks and entropy in [8], in the context of information flow induced by a deterministic program, and define the information leakage as the difference between the input entropy and the conditional one, namely the entropy of the a priori input distribution and the entropy of the a posteriori distribution (i.e., after observing the output), respectively. One of their main results is that, in their framework, the leakage under the various notions of attack considered in their paper is always non-negative.

In this talk, we extend the analysis of Köpf and Basin to the probabilistic scenario, and we also consider other notions of entropy, including the family of entropies proposed by Rényi [15]. We argue that in the probabilistic case the notion of information leakage needs to be revised. In fact, when the same secret can give different observables (according to a probability distribution),
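For reference, the Shannon-based quantities underlying the channel view sketched above can be stated compactly; the symbols below ($X$ for the secret, $Y$ for the observable, $\pi$ for the input distribution) are our notation, not the abstract's:

\[
  I(X;Y) \;=\; H(X) - H(X \mid Y),
  \qquad
  C \;=\; \max_{\pi}\, I(X;Y),
\]

where $H(X) = -\sum_{x} \pi(x) \log_2 \pi(x)$ is the Shannon entropy of the secret and $H(X \mid Y)$ is the conditional (a posteriori) entropy after observing the output. The leakage of [8] is then $H(X) - H(X \mid Y)$, i.e., exactly the mutual information $I(X;Y)$.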
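The alternative uncertainty measures cited above likewise admit standard closed forms; as a brief recap in the same (our) notation, with $x_1, x_2, \ldots$ the values of the secret ordered by decreasing probability:

\[
  H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\log_2 \sum_{x} \pi(x)^{\alpha}
  \quad (\alpha \geq 0,\ \alpha \neq 1),
  \qquad
  H_\infty(X) \;=\; -\log_2 \max_{x} \pi(x),
  \qquad
  G(X) \;=\; \sum_{i} i\,\pi(x_i),
\]

where $H_\alpha$ is the Rényi family [15] (Shannon entropy is recovered in the limit $\alpha \to 1$, and min-entropy in the limit $\alpha \to \infty$) and $G$ is the guessing entropy [11].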