Class-imbalance extent metrics measure how imbalanced a dataset is. In pattern classification, a higher imbalance extent is usually expected to yield worse classification performance, so an appropriate imbalance extent metric should correlate negatively with classification performance. Existing metrics, such as the popular imbalance ratio (IR), consider only the effect of the sample sizes of the different classes. However, we note that the dimensionality of imbalanced data also affects classification performance: datasets with the same IR can exhibit distinct classification performance when their dimensionalities differ, making the IR suboptimal for reflecting the imbalance extent for classification. We also observe that classification performance improves with more discriminative features. Motivated by these observations, we propose a new imbalance extent metric, the adjusted IR, which adds a penalty term based on the number of discriminative features, effectively determined by the Pearson correlation test. The adjusted IR adaptively revises the IR as the number of discriminative features varies. Empirical studies demonstrate the effectiveness of the adjusted IR in terms of its stronger negative correlation with classification performance.
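The two ingredients of the proposed metric can be sketched in code: the classical IR (majority-class size over minority-class size) and a count of discriminative features obtained from a Pearson correlation test. This is a minimal illustration only; the significance level `alpha` and the way the count enters the penalty term are assumptions here, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import pearsonr


def imbalance_ratio(y):
    """Classical IR: size of the largest class over size of the smallest class."""
    _, counts = np.unique(y, return_counts=True)
    return counts.max() / counts.min()


def count_discriminative_features(X, y, alpha=0.05):
    """Count features whose Pearson correlation with the class label is
    statistically significant; alpha=0.05 is an illustrative threshold."""
    n_discriminative = 0
    for j in range(X.shape[1]):
        r, p_value = pearsonr(X[:, j], y)
        if p_value < alpha:
            n_discriminative += 1
    return n_discriminative


# Synthetic example: 90 vs. 10 samples, one informative and one noise feature.
rng = np.random.default_rng(0)
y = np.array([0] * 90 + [1] * 10)
X = np.column_stack([
    y + 0.01 * rng.standard_normal(100),  # strongly label-correlated feature
    rng.standard_normal(100),             # pure noise feature
])

print(imbalance_ratio(y))                  # 9.0
print(count_discriminative_features(X, y))
```

The adjusted IR then revises the IR with a penalty term based on this feature count, so that two datasets with identical class sizes but different numbers of discriminative features receive different imbalance extents; the exact penalty form follows the paper.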