Discovering patterns in data that best describe the differences between classes allows us to
hypothesize and reason about class-specific mechanisms. In molecular biology, for example,
these bear the promise of advancing the understanding of cellular processes differing between
tissues or diseases, which could lead to novel treatments. To be useful in practice, methods
that tackle the problem of finding such differential patterns have to be readily interpretable by
domain experts, and scalable to extremely high-dimensional data.
In this work, we propose Diffnaps, a novel, inherently interpretable binary neural network architecture that extracts differential patterns from data. Diffnaps scales to hundreds of thousands of features and is robust to noise, thus overcoming the limitations of current
state-of-the-art methods in large-scale applications such as biology. We show on synthetic and real-world data, including three biological applications, that unlike its competitors,
Diffnaps consistently yields accurate, succinct, and interpretable class descriptions.