design drugs to turn proteins off (or on)? How can we design proteins to perform new functions? Hence, there is a sense in which AlphaFold2's remarkable prediction comes without an explanation. This is an important sense, to be sure: AlphaFold2 does not explain how protein folding works. It seems to have somehow learned to bypass the step of explicitly modeling the biological mechanisms that lead to the folded protein. Or perhaps an image of this mechanism is somehow contained in the activation patterns of the nonlinear functions making up AlphaFold2. But this brings us to yet another sense in which the prediction comes without an explanation: AlphaFold2's own functioning in many ways needs an explanation itself.

The implications of this want of explanation in the face of successful predictions are far-reaching. For instance, the trustworthiness of ML algorithms in society is a pressing issue, and it depends, for the most part, on the ability to explain their functioning. Ethical issues like these generally profit from philosophers' input, as evidenced by the numerous projects and niches on the ethics and societal impact of Artificial Intelligence that we see originate today.1 However, for the sake of using ML in science, the other need for explanation, that concerned with an understanding of the mechanisms whose outcome is so successfully predicted by the ML algorithm, certainly obtains a special relevance as well. For, assuming that it remains difficult to understand and explain ML predictions, but that the scientific use of these methods keeps increasing over time, the question arises whether this changes the aims of science from explanation to 'mere' prediction.

These and further issues were first explored, by the editors of this issue, in a workshop organized by P. Grünke and R. Hillerbrand at the Karlsruhe Institute of Technology.
This was done as part of a project called The impact of computer simulations and machine learning on the epistemic status of LHC Data, in which F. J. Boge is also involved as a postdoctoral researcher. Said project, in turn, is part of an interdisciplinary research unit between physics, philosophy, history, and social science, called The Epistemology of the Large Hadron Collider, which is co-funded by the German Research Foundation (DFG) and the Austrian Science Fund (FWF). Much in the spirit of the research unit, the resulting workshop was an interdisciplinary effort, as it involved, next to philosophers, also scholars from the earth sciences (see Boge & Poznic, 2021). Given the fruitfulness of this workshop, the present Special Issue was created as a follow-up publication, even though the contributions to both largely differ.

The essays collected in this Special Issue represent a broad spectrum of perspectives on the issue of explanation in the context of ML, as used in science and beyond. Below, we offer a brief summary of their core theses for the reader's orientation.

1 Two of us (P. Grünke & R. Hillerbrand) have, for instance, as members of the AI Ethics Impact Group (AIEIG), particip...