Fuzzy systems are widely acknowledged as valuable tools for modeling complex phenomena while preserving a readable form of knowledge representation. The use of natural language to express the terms involved in fuzzy rules is, in fact, a key factor in combining mathematical formalism and logical inference with human-centered interpretability. This makes fuzzy systems particularly suitable in any real-world context where people are in charge of crucial decisions, because the self-explanatory nature of fuzzy rules profitably supports expert assessments. Moreover, when interpretability is investigated, two issues emerge: (a) the mere adoption of fuzzy sets in modeling is not enough to ensure interpretability; (b) fuzzy knowledge representation must confront the problem of preserving overall system accuracy, thus yielding a trade-off that is frequently debated. These issues have attracted growing interest in the research community and have come to assume a central role in the current literature of Computational Intelligence. This chapter gives an overview of the topics related to fuzzy system interpretability, pursuing the ambitious goal of proposing answers to a number of open and challenging questions: What is interpretability? Why is interpretability worth considering? How can interpretability be ensured, and how can it be assessed (quantified)? Finally, how can interpretable fuzzy models be designed?