Feature selection problems arise in a variety of applications, such as microarray analysis, clinical prediction, text categorization, image classification and face recognition, multi-label learning, and classification of Internet traffic. Among the various classes of methods, forward feature selection methods based on mutual information (MI) have become very popular and are widely used in practice. However, comparative evaluations of these methods have been limited by their reliance on specific datasets and classifiers. In this paper, we develop a theoretical framework that allows the methods to be evaluated based on their theoretical properties. Our framework is grounded on the properties of the target objective function that the methods try to approximate, and on a novel categorization of features according to their contribution to the explanation of the class; we derive upper and lower bounds for the target objective function and relate these bounds to the feature types. Then, we characterize the types of approximations made by the methods and analyze how these approximations preserve the good properties of the target objective function. Additionally, we develop a distributional setting designed to illustrate the various deficiencies of the methods, and provide several examples of wrong feature selections. Based on our work, we clearly identify the methods that should be avoided and the methods that currently have the best performance.

Recently, there have been several attempts to undertake a theoretical evaluation of forward feature selection methods based on MI. [17] and [18] provide an interpretation of the objective functions of actual methods as approximations of a target objective function, similar to ours. However, they do not study the consequences of these approximations from a theoretical point of view, i.e., how the various types of approximations affect the good properties of the target objective function, which is the main contribution of our work.
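As background, the greedy forward scheme shared by these methods can be sketched as follows. This is a minimal illustration, not taken from any specific method in the paper: it scores each candidate feature by its marginal MI with the class, I(X_i; Y) (the simplest possible criterion), whereas the methods analyzed below differ precisely in how they approximate the joint MI between the selected set and the class. The plug-in MI estimator and the function names are our own for illustration.

```python
# Minimal sketch of greedy forward feature selection based on MI.
# Assumption: features and class are discrete; MI is estimated by the
# plug-in (empirical-frequency) estimator.
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in bits for two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def forward_select(features, y, k):
    """Greedily pick k feature indices; each step adds the candidate with
    the highest score (here, simply its marginal MI with the class y)."""
    selected, remaining = [], list(range(len(features)))
    for _ in range(k):
        best = max(remaining, key=lambda i: mutual_information(features[i], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Actual methods replace the per-step score with different approximations of I(S ∪ {X_i}; Y), where S is the set already selected; these approximations, and how they preserve or lose the properties of the target objective function, are the subject of the analysis below.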
Moreover, they do not cover all types of feature selection methods currently proposed. [26] evaluated methods based on a distributional setting similar to ours, but the analysis is restricted to the group of methods that ignore complementarity and, again, does not address the theoretical properties of the methods.

The rest of the paper is organized as follows. We introduce some background on entropy and MI in Section 2. This is followed, in Section 3, by the presentation of the main concepts associated with conditional MI and MI between three random vectors. In Section 4, we explain the general context of forward feature selection methods based on MI, namely the target objective function, the categorization of features, and the relation between the feature types and the bounds of the target objective function. In Section 5, we introduce representative feature selection methods based on MI, along with their properties and drawbacks. In Section 6, we present a distribution-based setting where some of the main drawbacks of the representative...