Explainable artificial intelligence (XAI) is concerned with creating artificial intelligence that is intelligible and interpretable by humans. Many AI techniques build classifiers, some of which yield intelligible models and some of which do not. Rule extraction from classifiers treated as black boxes is an important topic in XAI that aims to find rule sets which describe classifiers and are understandable to humans. Neural networks provide one type of classifier for which it is difficult to explain why the inputs map to the decision; support vector machines provide a second example of this kind. A third type of classifier, k-nearest neighbour (k-NN), gives more interpretable classifiers but suffers from performance problems, as the model is little more than a representation of the training data. This work investigates a technique to extract rules from classifiers whose underlying feature space is Boolean, without looking at the inner structure of the classifier. For such a classifier with a small feature space, a Boolean function describing it can be directly calculated, whilst for a classifier with a larger feature space, a sampling method is investigated to produce rule-based approximations to the behaviour of the underlying classifier, with varying granularity, leading to XAI. The behaviour of the technique with neural network, support vector machine, and k-NN classifiers is experimentally assessed on a dataset of cross-site scripting (XSS) attacks, and proves to give very high accuracy and precision, often comparable to that of the classifier being approximated.
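The direct-calculation idea for small Boolean feature spaces can be illustrated with a minimal sketch: enumerate every input vector, query the classifier as a black box, and keep the positive assignments as a disjunctive rule set. The function names, the toy black-box classifier, and the textual rule format below are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

def extract_rules(classifier, n_features):
    """Exhaustively query a black-box Boolean classifier and return the
    positively classified assignments as a DNF-style rule set (one rule
    per input labelled 1). Feasible only for small n_features, since the
    loop visits all 2**n_features input vectors."""
    rules = []
    for bits in product([0, 1], repeat=n_features):
        if classifier(bits) == 1:
            # Render the minterm as a human-readable conjunction.
            literals = [f"x{i}" if b else f"not x{i}" for i, b in enumerate(bits)]
            rules.append(" and ".join(literals))
    return rules

# Hypothetical black box: fires when x0 and x1 agree (XNOR). This stands
# in for a trained neural network, SVM, or k-NN model over Boolean inputs.
black_box = lambda bits: int(bits[0] == bits[1])

for rule in extract_rules(black_box, 2):
    print(rule)
```

For larger feature spaces, where exhaustive enumeration is intractable, the abstract's sampling method would replace the full `product` loop with a sample of input vectors, trading completeness of the rule set for tractability.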