Deep learning is increasingly adopted across almost all business sectors owing to its ability to transform large quantities of data into high-performing models. Its power lies in capturing non-trivial, non-linear phenomena in data that even humans can find hard to interpret. Despite this, such models are generally regarded as black boxes, which, regardless of their performance, can hinder their adoption in some industries. Without insight into their reasoning, it is difficult to trust these models in critical applications such as automotive, energy, healthcare, and finance. In this context, the field of eXplainable AI aims to develop techniques that temper the impenetrable nature of these models and promote a level of understanding of their behavior. Here, we present SpecXAI, a novel framework based on a spectral characterization of the entire network. We show how this framework can be used not only to understand the model but also to transform it into a more interpretable, linear symbolic representation that sheds light on the model's behavior.