Automated feature engineering has gained considerable attention in academia and industry, yet existing systems often lack practical scalability and efficiency. This paper introduces BigFeat, a scalable and interpretable framework that streamlines critical phases of the machine learning pipeline: feature engineering, model selection, and hyperparameter tuning. BigFeat offers two execution modes: a standalone feature engineering framework, denoted BigFeat-FE, and an AutoML framework, denoted BigFeat-AutoML. BigFeat-FE optimizes input feature quality with the aim of maximizing predictive performance according to a user-defined metric. It employs a dynamic feature generation and selection mechanism that systematically creates a set of expressive features, which not only enhance prediction performance but also prioritize interpretability. BigFeat-FE further applies a meta-learning technique to warm-start the optimization process, yielding significant overall performance gains. BigFeat-AutoML, tailored for algorithm selection and hyperparameter tuning, performs a random search over the space of interpretable models. Extensive experiments demonstrate that BigFeat-FE consistently outperforms state-of-the-art automated feature engineering frameworks across a wide range of datasets, achieving average performance improvements of 8.65% over AutoFeat and 4.71% over SAFE. Additionally, BigFeat-AutoML delivers substantial gains over existing AutoML systems, with average improvements of 0.74% over TPOT and 2.25% over Autosklearn.
Furthermore, BigFeat's scalability is affirmed by its linear complexity and its execution times, which average 20 times faster than AutoFeat and 14 times faster than SAFE.

Impact Statement: The emergence of automated, scalable, and interpretable feature engineering is reshaping the landscape of data science and machine learning. It automates feature creation and selection, providing interpretable features while eliminating the time-consuming burden of manual feature engineering. By streamlining the feature engineering process, this advancement empowers data scientists to concentrate on higher-level tasks and model development. Its scalability ensures it can handle vast and complex datasets across industries, from healthcare to finance. Interpretability is central to this innovation, enhancing model trustworthiness and facilitating regulatory compliance. Automating the stages of the machine learning pipeline, including feature engineering, model selection, and hyperparameter tuning, promises to improve predictive modeling, decision-making, and overall efficiency in data-driven applications, heralding a new era of transformative advancements in data science and machine learning.