Feature transformation is an essential step in data preprocessing for improving the predictive performance of machine learning (ML) algorithms. The Box-Cox transformation, applied with separability as the goal, has been shown to improve ML performance. However, Box-Cox is a monotonic mapping that preserves the order of the data, so it is ineffective at improving the separability of multimodally distributed features. This research proposes a feature transformation method based on quadratic functions that improves class separability and can adaptively reorder the data when necessary. The Fisher score (Fs) measures the level of separability, and the parameters of the quadratic function are chosen to maximize Fisher's criterion. Beyond increasing the Fs of each feature, the method also makes features more informative, as evidenced by increases in information gain, information gain ratio, Gini decrease, ANOVA, Chi-square, ReliefF, and FCBF scores. The increase in Fs is particularly significant for bimodally distributed features. To validate the method, experiments were conducted on 11 public datasets with two statistics-based ML algorithms, LDA and QDA, representing linear and nonlinear classifiers, respectively. The experimental results show improved accuracy on almost all datasets and algorithms, with the largest accuracy gains of 0.268 for LDA and 0.188 for QDA.
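The intuition behind the abstract can be illustrated with a minimal sketch: when one class is bimodal and straddles the other, no monotonic (order-preserving) transform such as Box-Cox can separate them, while a non-monotonic quadratic map can. The `fisher_score` helper and the fixed coefficients (a=1, b=0, c=0) below are hand-picked illustrations, not the paper's fitted optimum.

```python
import numpy as np

def fisher_score(x, y):
    """Fisher score of one feature: between-class variance over
    within-class variance, weighted by class sizes."""
    mu = x.mean()
    num = den = 0.0
    for c in np.unique(y):
        xc = x[y == c]
        num += len(xc) * (xc.mean() - mu) ** 2
        den += len(xc) * xc.var()
    return num / den

rng = np.random.default_rng(0)
# Class 0 is bimodal (modes at -2 and +2); class 1 sits between them at 0.
x0 = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
x1 = rng.normal(0, 0.3, 400)
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(400), np.ones(400)])

# Quadratic map z = a*x^2 + b*x + c is non-monotonic, so it can reorder
# the data; a=1, b=0, c=0 is an assumed illustrative choice, whereas the
# paper fits the coefficients by maximizing Fisher's criterion.
z = x ** 2

print(f"Fs before: {fisher_score(x, y):.4f}")  # near zero: class means coincide
print(f"Fs after:  {fisher_score(z, y):.4f}")  # large: classes now separated
```

Because the class means of `x` coincide, the raw feature scores near zero; mapping both modes of class 0 to the same region pushes the Fisher score up by orders of magnitude, which is the effect the proposed method exploits.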