Fractional calculus has gained considerable attention in recent years. Researchers discovered that processes in various fields follow fractional rather than ordinary integer-order dynamics, meaning the corresponding differential equations feature non-integer-order derivatives. There are several arguments for why this is the case, one being fractional derivatives' inherent spatiotemporal memory and their ability to express complex naturally occurring phenomena. Another popular topic nowadays is machine learning, i.e., learning behavior and patterns from historical data. In our ever-changing world with ever-increasing amounts of data, machine learning is a powerful tool for data analysis, problem-solving, modeling, and prediction, and it provides many insights and discoveries in various scientific disciplines. As these two modern topics offer great potential for combined approaches to describing complex dynamics, this article reviews past combined approaches of fractional derivatives and machine learning, puts them into context, and thus provides a list of possible combined approaches and the corresponding techniques. Note, however, that this article does not deal with neural networks, as there is already extensive literature on neural networks and fractional calculus. We sorted past combined approaches from the literature into three categories: preprocessing, machine learning & fractional dynamics, and optimization. The contributions of fractional derivatives to machine learning are manifold: they provide powerful preprocessing and feature-augmentation techniques, can improve physics-informed machine learning, and are capable of improving hyperparameter optimization. This article thus serves to motivate researchers dealing with data-based problems, specifically machine learning practitioners, to adopt new tools and enhance their existing approaches.
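To make the preprocessing role of fractional derivatives concrete, the sketch below applies the Grünwald–Letnikov fractional difference to a time series as a feature-augmentation step. This is a minimal NumPy illustration of the general technique, not code from the reviewed article; the function name and the same-length output convention are our own choices:

```python
import numpy as np

def gl_fractional_diff(x, alpha, h=1.0):
    """Grünwald-Letnikov fractional difference of order alpha.

    Approximates the order-alpha derivative of the samples x taken with
    step h. Each output y[n] is a weighted sum over ALL past samples,
    which is exactly the "memory" property of fractional derivatives:
        y[n] = h**(-alpha) * sum_k w_k * x[n - k],
    with w_k = (-1)**k * binom(alpha, k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Binomial weights via the stable recurrence w_0 = 1,
    # w_k = w_{k-1} * (k - 1 - alpha) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # y[i] = sum_{k=0..i} w_k * x[i-k]  (x[i::-1] is x[i], x[i-1], ..., x[0])
    y = np.array([np.dot(w[: i + 1], x[i::-1]) for i in range(n)])
    return y / h**alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, ...) and the result is the ordinary first difference; for alpha = 0 it is the identity. Intermediate alpha values interpolate between the raw signal and its derivative, which is what makes them useful as augmented features.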
In this article, we investigate the applicability of quantum machine learning to classification tasks using two quantum classifiers from the Qiskit Python environment: the Variational Quantum Classifier (VQC) and the Quantum Kernel Estimator (QKE). We evaluate the performance of these classifiers on six widely known, publicly available benchmark datasets and analyze how their performance varies with the number of samples on two artificially generated test classification datasets. Because quantum machine learning is based on unitary transformations, this paper explores data structures and application fields that could be particularly suitable for quantum advantages. To this end, we developed a dataset based on concepts from quantum mechanics using the exponential map of a Lie algebra. This dataset will be made publicly available and constitutes a novel contribution to the empirical evaluation of quantum supremacy. We further compared the performance of VQC and QKE on six widely applicable datasets to contextualize our results.

Our results demonstrate that the VQC and QKE outperform basic machine learning algorithms such as advanced linear regression models (Ridge and Lasso), but do not match the accuracy and runtime performance of sophisticated modern boosting classifiers such as XGBoost, LightGBM, or CatBoost. We therefore conclude that while quantum machine learning algorithms have the potential to surpass classical machine learning methods in the future, especially once physical quantum infrastructure becomes widely available, they currently lag behind classical approaches.
Our investigations also show that classical machine learning approaches outperform quantum approaches, which rely particularly on unitary processes, when classifying datasets based on group structures.

Furthermore, our findings highlight the significant impact of different quantum simulators, feature maps, and quantum circuits on the performance of the employed quantum estimators. This observation emphasizes the need for researchers to explain their hyperparameter choices for quantum machine learning algorithms in detail, an aspect currently overlooked in many studies in the field.

To facilitate further research in this area and ensure the transparency of our study, we have made the complete code available in a linked GitHub repository.
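To illustrate the principle behind the quantum kernel estimator, the sketch below simulates, in plain NumPy, the fidelity kernel |⟨φ(x)|φ(x′)⟩|² for a single-qubit angle-encoding feature map. This is a conceptual toy, not the Qiskit API the study uses, and all names and the choice of feature map are illustrative assumptions:

```python
import numpy as np

def feature_state(x):
    """Angle-encoding feature map on one qubit: |phi(x)> = RY(x)|0>.

    RY(x)|0> has real amplitudes [cos(x/2), sin(x/2)], so a 2-vector
    suffices to represent the encoded state classically.
    """
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

def quantum_kernel_matrix(X):
    """Gram matrix of state fidelities K[i, j] = |<phi(x_i)|phi(x_j)>|^2.

    A quantum kernel estimator would estimate these entries on hardware
    (e.g., via a swap test); here we compute them exactly. The resulting
    positive semidefinite matrix can be fed to any kernel classifier,
    e.g., an SVM with a precomputed kernel.
    """
    states = np.array([feature_state(x) for x in X])
    overlaps = states @ states.T  # inner products <phi(x_i)|phi(x_j)>
    return overlaps ** 2
```

Because each kernel entry is a fidelity, the diagonal is exactly 1 and the matrix is symmetric; for this feature map the entries reduce to cos²((x − x′)/2), so states encoded a half-rotation apart (x − x′ = π) are orthogonal and yield a kernel value of 0.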