A growing number of works deal with statistical systems far from equilibrium, dominated by unidirectional stochastic processes augmented by rare resets. We analyze the construction of the entropic distance measure appropriate for such dynamics. We demonstrate that a power-like nonlinearity in the state probability in the master equation naturally leads to the Tsallis (Havrda-Charvát, Aczél-Daróczy) q-entropy formula when seeking the maximal-entropy state at stationarity. A few possible applications of a simple, linear master equation to phenomena studied in statistical physics are listed at the end.

Keywords: q-entropy; entropic distance; Matthew principle
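For reference, the Tsallis (Havrda-Charvát, Aczél-Daróczy) q-entropy named above is conventionally written, for a discrete distribution $P_n$, as
\[
S_q[P] \,=\, \frac{1}{1-q}\left(\sum_n P_n^q - 1\right),
\]
which recovers the Boltzmann-Shannon entropy $S[P] = -\sum_n P_n \ln P_n$ in the limit $q \to 1$.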
Definition and Properties of Entropic Distance

Dealing with the dynamics of classical probabilities, we propose a general recipe for defining the corresponding formula for the entropic divergence between two probability distributions. Our goal is to handle complex systems with a stochastic dynamics generalized to a nonlinear dependence on the probabilities. For the study of quantum state probabilities and their distance measures we refer to a recent paper [1] and references therein.

Entropic distance, more properly called "entropic divergence", is traditionally interpreted as a relative entropy: a difference between entropies with and without a prior condition [2]. It is also the Boltzmann-Shannon entropy of a distribution relative to another [3]. Viewed from the standpoint of a generalized entropy [4], however, the simple difference, or the logarithm of a ratio, can no longer be held as a definition.

Instead, in this paper we explore a reverse-engineering concept: seeking an entropic divergence formula subject to certain desired properties, we treat entropy as a derived quantity. More precisely, we seek entropic divergence formulas appropriate for a given stochastic dynamics, shrinking during the approach to a stationary distribution whenever one exists, and we establish the entropy formula from this distance to the uniform distribution. In doing so we serve two goals: (i) having constructed a non-negative entropic distance, we derive an entropy formula which is maximal for the uniform distribution; and (ii) we come as near as possible to the classical difference formula for the relative entropy.

Starting from a given master equation, it is far from trivial which entropic divergence formula is the most suitable for analyzing the stability of a stationary solution. In the present paper we provide a general procedure for obtaining an entropic divergence formula in such atypical cases. Although we exemplify only the well-known logarithmic formula of the Kullback-Leibler divergence and that of the Rényi divergence, our result readily generalizes to an infinite number of cases, distinguished by the dependence on the initial state probability in each transition term.
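As a concrete anchor for the classical construction referred to above, we recall the standard formulas (a sketch of well-known definitions, not a result of the present paper). The Kullback-Leibler divergence between distributions $P_n$ and $Q_n$ is
\[
D[P\|Q] \,=\, \sum_n P_n \ln\frac{P_n}{Q_n} \,\ge\, 0,
\]
and against the uniform distribution $U_n = 1/N$ on $N$ states it reduces to
\[
D[P\|U] \,=\, \ln N - S[P], \qquad S[P] = -\sum_n P_n \ln P_n,
\]
so that maximizing the entropy is equivalent to minimizing the entropic divergence from the uniform distribution; this is the construction we generalize in what follows.

The shrinking property demanded above can also be illustrated numerically. The following minimal Python sketch, with hypothetical, randomly chosen transition rates (not the specific dynamics studied in this paper), evolves a linear master equation $\dot{P} = W P$ and checks that the Kullback-Leibler divergence from the stationary distribution decreases monotonically:

    import numpy as np

    # Hypothetical rate matrix for a linear master equation dP/dt = W P:
    # non-negative off-diagonal rates, columns summing to zero so that
    # total probability is conserved. For illustration only.
    rng = np.random.default_rng(seed=1)
    N = 5
    W = rng.random((N, N))
    np.fill_diagonal(W, 0.0)
    W -= np.diag(W.sum(axis=0))

    def kl_divergence(p, q):
        """Kullback-Leibler divergence D[P||Q] = sum_n P_n ln(P_n/Q_n)."""
        return float(np.sum(p * np.log(p / q)))

    # Stationary distribution: the normalized null eigenvector of W.
    eigvals, eigvecs = np.linalg.eig(W)
    stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
    stationary /= stationary.sum()

    # Euler time stepping from the uniform initial state; the divergence
    # from the stationary state shrinks monotonically (the H-theorem for
    # linear master equations).
    p = np.ones(N) / N
    dt, steps = 1e-3, 5000
    divergences = []
    for _ in range(steps):
        p = p + dt * (W @ p)
        divergences.append(kl_divergence(p, stationary))

    assert all(b <= a + 1e-12 for a, b in zip(divergences, divergences[1:]))
    print(f"D[P||P_st]: initial {divergences[0]:.4f} -> final {divergences[-1]:.2e}")

Such a monotonically shrinking divergence is exactly the property we require of the generalized formulas constructed below.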